Domain shift occurs when the test distribution differs from the training distribution on the same task, typically degrading predictive performance. We study meta-learning approaches to few-shot domain adaptation for sentiment classification, comparing two representative meta-learning methods, Prototypical Networks and model-agnostic meta-learning (MAML), against a multitask learning baseline. The multitask baseline proves surprisingly strong, outperforming both meta-learning methods; however, MAML approaches multitask performance when the domain shift is large. We also find that careful support-set selection yields slight performance gains across all models studied.
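For reference, the few-shot inference step of Prototypical Networks can be sketched as follows: each class's support embeddings are averaged into a prototype, and queries are assigned to the nearest prototype. This is a minimal illustrative sketch (the function name, toy embeddings, and dimensions are ours, not from the paper):

```python
import numpy as np

def prototype_classify(support_emb, support_labels, query_emb):
    """Prototypical Networks inference: average each class's support
    embeddings into a prototype, then assign each query example to the
    nearest prototype by squared Euclidean distance."""
    classes = np.unique(support_labels)
    # One prototype per class: the mean embedding of its support examples.
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    # Squared Euclidean distance from each query to each prototype.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 3-dimensional embeddings.
support = np.array([[0., 0, 0], [0, 1, 0], [5, 5, 5], [5, 4, 5]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0., 0, 1], [5, 5, 4]])
print(prototype_classify(support, labels, queries))  # → [0 1]
```

In the few-shot domain adaptation setting, the support set would consist of labeled examples from the new domain, which is why how those examples are selected can affect performance.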