About Me
Hi, I'm Katy! I'm a PhD candidate at the USC Information Sciences Institute, where I'm a member of CUTELABNAME, advised by Jon May. My research focuses on developing high-quality, well-grounded benchmark datasets for fairness in LLMs, with particular emphasis on participatory, community-engaged methods.
I'm passionate about LLM fairness 📝⚖️, ethical AI 🤖❤️, science communication 👩🏻‍🔬📢, and LGBTQ+ rights 🏳️‍🌈🏳️‍⚧️. I also love Star Trek 🖖 and strategy games 🎮.
I am actively seeking research internships in responsible AI, collaborators for new papers, USC undergrads to work with me in the 2024-2025 academic year, and invited speakers for the ISI Natural Language Seminar, so please reach out if you would like to be any of these things!
Research Interests
- LLM Fairness Benchmark Development: I work on developing targeted benchmark datasets to measure social biases in large language models.
- Community-Engaged Methods for AI Fairness: I also focus on participatory methods for fairness benchmark development, ensuring that affected communities have a voice in how LLMs represent and treat them.
- LLM Evaluation: Recently, I've been exploring how to evaluate the fairness of closed-source models. I'm also interested in benchmark validation and in LLM evaluation more generally.
Publications
These are my peer-reviewed conference and workshop publications.
- Virginia K. Felkner, Jennifer A. Thompson, Jonathan May. GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction. Association for Computational Linguistics, Bangkok, Thailand, Aug. 2024.
- Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May. WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models. Association for Computational Linguistics, Toronto, Canada, Jul. 2023.
- Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May. Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models. QueerInAI Affinity Group Workshop, North American Chapter of the Association for Computational Linguistics, Seattle, WA, Jul. 2022.
- Nicolaas Weideman, Virginia K. Felkner, Wei-Cheng Wu, Jonathan May, Christophe Hauser, Luis Garcia. PERFUME: Programmatic Extraction and Refinement For Usability of Mathematical Expressions. CheckMATE, ACM SIGSAC Conference on Computer and Communications Security, New York, NY, Nov. 2021.
- Virginia K. Felkner and Elisabeth Moore. Investigating the Efficacy of Unstructured Text Analysis for Node Failure Detection in Syslog. Second Workshop on Machine Learning for Computing Systems, Supercomputing 2020, Nov. 2020.
Talks, Panels, and Posters
- Invited Speaker, John Brooks Slaughter Leadership in Engineering DEI Summit. Feb. 2024.
- Roundtable Discussion: Defining and Disrupting Antisemitism: Perspectives from Artificial Intelligence, Sociology, History, Law, and Ethics. Association for Jewish Studies, Dec. 2023.
- Invited Panelist on Generative AI and Biases, QueerInAI Workshop at NeurIPS. Dec. 2023.
- Guest Lecturer on Ethics and Power in NLP, USC CSCI 662 Advanced NLP, Sep. 2023.
- Poster Presentation, Anti-Queer Bias in LLMs. CRA-WP Grad Cohort for Women, Apr. 2023.
- Poster Presentation, Anti-Queer Bias in LLMs. Southern California NLP Symposium, Nov. 2022.
- Invited Speaker, Anti-Queer Bias in LLMs. USC ISI Natural Language Seminar, Jun. 2022.
Media Coverage of My Work
- USC ISI AI/nsiders Podcast, Episode 6
- Los Angeles Blade, "Busting Anti-Queer Bias in Text Prediction"
- dotLA, "Homophobia Is Easy To Encode in AI. One Researcher Built a Program To Change That."
- Cosmos Magazine, "Busting homophobic, anti-queer bias in AI language models"
- USC Viterbi News, "Lost in Translation at the Border"
- USC Viterbi News, "Busting Anti-Queer Bias in Text Prediction"
- MultiLingual, "Developing machine translation to help Indigenous refugees navigate immigration courts"