Computational lyric analysis has become an important area in music information retrieval (MIR) as large-scale lyric repositories and streaming platforms continue to grow. An especially relevant domain is songs that engage with social and political issues, as reflected in initiatives such as the Recording Academy’s Best Song for Social Change category. However, identifying socially engaged songs remains a largely manual process. This paper presents a comparative study of nine models for automatically classifying commercial song lyrics into eight social-issue categories plus N/A. We construct a Social Change Lyrics dataset of 641 expert-annotated songs and augment it with GPT-based paraphrasing, back-translation, and theme-consistent continuations, and we leverage a cleaned, topic-relevant subset of the WASABI corpus for unsupervised and weakly supervised pretraining. All Transformer-based systems are built on RoBERTa-large and use either full fine-tuning or Low-Rank Adaptation (LoRA), and they are evaluated alongside traditional text baselines under a shared bootstrap protocol with 1,000 resamples that yields confidence intervals for accuracy and macro-F1. The best-performing model is a RoBERTa classifier trained only on the augmented supervised data; none of the curriculum-style pipelines that incorporate WASABI-based pretraining surpasses this augmented baseline. These results indicate that, in this low-resource setting, supervised data augmentation provides larger and more reliable gains than additional unsupervised or weakly supervised pretraining, and that parameter-efficient fine-tuning can match full fine-tuning at lower cost. We discuss confusion patterns and qualitative errors, highlighting both the promise and the inherent limits of single-label lyric classification in culturally sensitive domains.
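The shared evaluation protocol mentioned above can be sketched as a percentile bootstrap: resample the test set with replacement 1,000 times and take the empirical 2.5th and 97.5th percentiles of each metric. The function and variable names below are illustrative, not the paper's actual code, and this stdlib-only sketch stands in for library implementations such as scikit-learn's metrics.

```python
import random

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores (0.0 for classes with no support)."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

def bootstrap_ci(y_true, y_pred, labels, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence intervals for accuracy and macro-F1."""
    rng = random.Random(seed)
    n = len(y_true)
    accs, f1s = [], []
    for _ in range(n_resamples):
        # Draw a resample of the test set with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        accs.append(sum(ti == pi for ti, pi in zip(t, p)) / n)
        f1s.append(macro_f1(t, p, labels))
    def ci(xs):
        xs = sorted(xs)
        lo = xs[int(alpha / 2 * n_resamples)]
        hi = xs[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi
    return ci(accs), ci(f1s)
```

Because every model is scored on the same fixed set of resampled indices (via the shared seed), the resulting intervals are directly comparable across systems.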