Quantifying the Hyperparameter Sensitivity of Neural Networks for Character-level Sequence-to-Sequence Tasks

Adam Wiemerslage, Kyle Gorman, Katharina von der Wense

Main: Multilinguality and Language Diversity 2 Oral Paper

Session 8: Multilinguality and Language Diversity 2 (Oral)
Conference Room: Marie Louise 1
Conference Time: March 19, 16:00-17:30 CET (Europe/Malta)
Abstract: Hyperparameter tuning, the process of searching for suitable hyperparameters, becomes more difficult as the computing resources required to train neural networks continue to grow. Despite its obvious importance, the topic receives little systematic attention, and much of the discussion around it is hearsay. We attempt to formalize hyperparameter sensitivity using two metrics: similarity-based sensitivity and performance-based sensitivity. We then use these metrics to quantify two frequently repeated claims: (1) transformers are more sensitive to hyperparameter choices than LSTMs, and (2) transformers are particularly sensitive to batch size. We conduct experiments on two character-level sequence-to-sequence tasks and find that the transformer is indeed slightly more sensitive to hyperparameters according to both of our metrics. However, we do not find that it is more sensitive to batch size in particular.
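
The abstract does not spell out how the two sensitivity metrics are computed. As a rough, hypothetical sketch of what a performance-based notion of sensitivity could look like (the function names, configurations, and scores below are illustrative assumptions, not the paper's definitions), one could compare how much the average hyperparameter configuration loses relative to the best one:

```python
"""Illustrative sketch only: the abstract does not define the paper's two
sensitivity metrics, so the proxies below are assumptions for exposition."""

from statistics import mean, pstdev
from typing import Dict, Tuple

# Hypothetical dev-set accuracies for a few hyperparameter configurations
# of the same architecture (made-up numbers, not from the paper).
Config = Tuple[str, ...]  # e.g. ("lr=1e-3", "batch=128", "layers=4")

RESULTS: Dict[Config, float] = {
    ("lr=1e-3", "batch=128", "layers=4"): 0.82,
    ("lr=3e-4", "batch=128", "layers=4"): 0.85,
    ("lr=1e-3", "batch=32",  "layers=4"): 0.79,
    ("lr=3e-4", "batch=32",  "layers=6"): 0.84,
}


def best_minus_mean_gap(results: Dict[Config, float]) -> float:
    """Crude proxy for performance-based sensitivity: how far the average
    configuration falls below the best one. A large gap means careless
    tuning carries a large performance penalty."""
    scores = list(results.values())
    return max(scores) - mean(scores)


def score_spread(results: Dict[Config, float]) -> float:
    """Alternative proxy: dispersion of scores across configurations."""
    return pstdev(list(results.values()))


if __name__ == "__main__":
    print(f"best-minus-mean gap: {best_minus_mean_gap(RESULTS):.3f}")
    print(f"std dev of scores:   {score_spread(RESULTS):.3f}")
```

A similarity-based analogue would presumably compare model predictions across configurations rather than their raw scores; the paper should be consulted for the metrics actually used.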