Over the last few years, quick fine-tuning of pretrained transformers for downstream tasks has become increasingly important. The fact that such pretrained models can act as few-shot learners makes them especially attractive for low-resource domain tasks.
The earlier announced awesome list of sentiment analysis papers has been extended with recent advances in more time-efficient tuning techniques:
- template-based
- p-tuning
In a nutshell, we may tune extra token embeddings (p-tuning) or design input prompts (AutoPrompt) in order to adapt a model to downstream tasks while keeping the model state frozen.
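To make the frozen-model idea concrete, here is a minimal PyTorch sketch of p-tuning. The backbone below is a toy stand-in for a pretrained transformer (the class and its sizes are invented for illustration); the point is that only the extra prompt embeddings receive gradient updates while all pretrained weights stay frozen.

```python
import torch
import torch.nn as nn

class PTunedClassifier(nn.Module):
    """Toy p-tuning setup: frozen backbone + trainable prompt embeddings."""

    def __init__(self, vocab_size=100, dim=16, n_prompt=4, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # stands in for pretrained embeddings
        self.backbone = nn.Linear(dim, n_classes)    # stands in for the pretrained model
        # Extra "virtual token" embeddings -- the only trainable parameters.
        self.prompt = nn.Parameter(torch.randn(n_prompt, dim))

        # Keep the pretrained state frozen.
        for p in self.embed.parameters():
            p.requires_grad = False
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, token_ids):
        tok = self.embed(token_ids)                              # (B, T, D)
        prm = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        x = torch.cat([prm, tok], dim=1).mean(dim=1)             # pool prompt + tokens
        return self.backbone(x)

model = PTunedClassifier()
# Only the prompt parameters are handed to the optimizer.
optimizer = torch.optim.Adam([model.prompt], lr=1e-2)
```

Because gradients flow only into `model.prompt`, the tuned state is a tiny tensor per task rather than a full model copy, which is what makes the approach time- and storage-efficient.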
Check out the Awesome Sentiment Attitude Extraction List for further details.