Extreme Fine-tuning: A Novel and Fast Fine-tuning Approach for Text Classification
Boonnithi Jiaramaneepinit, Thodsaporn Chay-intr, Kotaro Funakoshi, Manabu Okumura
Main: Machine Learning for NLP Oral Paper
Session 2: Machine Learning for NLP (Oral)
Conference Room: Carlson
Conference Time: March 18, 11:00-12:30 CET (Europe/Malta)
Abstract:
Although fine-tuning a pre-trained model with a conventional approach has been shown to be effective in various downstream tasks, previous work has relied solely on backpropagation to fine-tune the model, which consumes a massive amount of computational resources and time. We propose Extreme Fine-Tuning (EFT), a novel approach for fine-tuning a pre-trained model effectively and efficiently. EFT uses backpropagation for a brief fine-tuning phase and an iterative extreme learning machine (ELM) for training a classifier. We applied EFT to four text classification datasets, MELD, IEMOCAP, IMDb, and AG News, and compared its performance with state-of-the-art (SOTA) approaches. The results indicate that EFT noticeably outperformed the other approaches in training time while achieving comparable model performance. We will release our code at https://github.com/up-33/extreme-fine-tuning.
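As a rough illustration of the abstract's two-stage recipe, the sketch below pairs a briefly fine-tuned transformer encoder with a closed-form extreme-learning-machine classifier head. It is not the authors' released implementation (see the repository above): the backbone checkpoint, the random hidden-layer width, the ridge coefficient `lam`, and the helper names (`encode`, `elm_hidden`, `elm_solve`) are all illustrative assumptions, and the paper's iterative ELM variant is simplified here to a single closed-form solve.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def encode(texts, model, tokenizer, device="cpu"):
    """Encode texts with the (briefly fine-tuned) backbone; take the [CLS] state."""
    model.eval()
    feats = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True).to(device)
            feats.append(model(**enc).last_hidden_state[:, 0])
    return torch.cat(feats)

def elm_hidden(X, W, b):
    """Classic ELM random nonlinear hidden layer: H = tanh(XW + b)."""
    return torch.tanh(X @ W + b)

def elm_solve(H, Y, lam=1e-3):
    """Closed-form ridge solution for the ELM output weights,
    beta = (H^T H + lam I)^{-1} H^T Y -- no gradient steps needed."""
    A = H.T @ H + lam * torch.eye(H.shape[1])
    return torch.linalg.solve(A, H.T @ Y)

# Hypothetical usage: after a brief backprop fine-tuning of `model`,
# fit the classifier head in closed form instead of further gradient descent.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
X = encode(["a great movie", "a dull movie"], model, tok)   # (N, 768) features
W, b = torch.randn(768, 256), torch.randn(256)              # random ELM layer
H = elm_hidden(X, W, b)
Y = torch.eye(2)[[1, 0]].float()                            # one-hot labels
beta = elm_solve(H, Y)
pred = (elm_hidden(encode(["what a great film"], model, tok), W, b) @ beta).argmax(-1)
```

The closed-form solve replaces many epochs of gradient updates on the classifier with a single regularized least-squares problem, which is where the abstract's training-time savings plausibly come from; the backbone still receives a short phase of conventional backpropagation beforehand.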