Research Profile

Louisa Yang

Research Fellow in the Sociology of AI

Short Bio

I study how AI knowledge is produced, evaluated, and governed across corporate, open-source, and civic contexts. My work combines ethnography, interview-based methods, and document analysis to trace the social life of benchmarks, datasets, and model releases. At the Future Histories Institute I co-lead a project on the institutional dynamics of evaluation cultures in machine learning.

Research Interests

  • Benchmarking cultures in AI
  • Data governance and documentation
  • Labor and organization in AI research
  • Open-source and corporate research dynamics
  • Algorithmic accountability and auditing
  • Sociotechnical imaginaries and policy

Short CV

  • 2023–present: Research Fellow, Program on Algorithmic Societies, Future Histories Institute
  • 2019–2023: Doctoral Researcher, Department of Sociology, Meridian University of the Commons
  • 2017–2019: Research Associate, Civic Infrastructures Lab, North Coast School of Social Research
  • 2015–2017: Project Assistant, Data Commons Workshop, New Harbor College

Affiliations

  • Future Histories Institute — Program on Algorithmic Societies
  • Civic Infrastructures Lab, North Coast School of Social Research

Education

  • PhD, Sociology, Meridian University of the Commons, 2023
  • MA, Science and Technology Studies, North Coast School of Social Research, 2018
  • BA, Anthropology, New Harbor College, 2015

Teaching

  • Sociology of AI and Data
  • Qualitative Methods for Machine Learning Communities
  • Ethics, Policy, and Governance of Algorithms

Awards

  • Seed Grant, Networked Knowledge Initiative, 2024
  • Early Career Paper Prize, Society for Critical AI Studies, 2022
  • Teaching Excellence Award, North Coast School of Social Research, 2021

Publications

  • Yang L., Benchmarking as Social Infrastructure: Coordinating Research Through Metrics, Journal of Algorithmic Societies, 2024.
  • Yang L.; Alvarado T., Data Work and Invisible Labor in Model Evaluation, Platform & Culture Review, 2023.
  • Yang L.; Noor S.; Kim J., Openness, Secrecy, and the Political Economy of Compute, Proceedings of the Symposium on Critical AI Studies, 2022.
  • Yang L., Fieldnotes from a Hackathon: Rituals of Acceleration in Machine Learning, Technoscience Quarterly, 2021.

Abstract

This project examines how evaluation practices in machine learning shape research trajectories, institutional priorities, and the distribution of labor. Using a mixed-methods design—ethnographic observation of research sprints and workshops, 72 semi-structured interviews with practitioners across industry, academia, and open-source communities, and document analysis of model reports and dataset cards—we map the emergence of composite benchmarking, the consolidation of leaderboard governance, and the rise of third-party auditing collectives. Findings indicate that shifting from single-task metrics to composite scores reconfigures incentives, often producing metric drift (where indicators evolve faster than interpretive norms) and transferring documentation burdens to precarious contributors. We identify three governance levers that mitigate these effects: participatory evaluation protocols, stewardship roles for maintaining benchmark registries, and transparent versioning practices for model and dataset disclosures. The study contributes a sociological account of how evaluation infrastructures both coordinate and constrain AI research futures.