Publications and Projects
2026
- Auditing LLM Responses in a Complex Policy Landscape: Abortion Law in the United States. Ro Encarnación, Christen Hammock Jones, and Danaé Metaxa. 2026.
- Everyday Auditing of TikTok’s Generative AI Manga Filter. Ro Encarnación, Luis Morales-Navarro, Hita Kambhamettu, and 1 more author. In review, 2026.
2025
- Can an LLM Tell Me If I Can Legally Get an Abortion? Ro Encarnación and Danaé Metaxa. Workshop paper, HEAL @ CHI 2025 Workshop – Human-centered Evaluation and Auditing of Language Models, 2025.
- Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144’s Bias Audits. Marissa Kumar Gerchick, Ro Encarnación, Cole Tanigawa-Lau, and 3 more authors. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, New York, NY, USA, 2025.
In this work, we “audit the audits,” analyzing the documents produced pursuant to one of the United States’ first enacted laws regulating the use of artificial intelligence in employment: New York City’s Local Law 144. This law requires employers and employment agencies using certain types of automated tools to publish “bias audits” with statistics about how different sex and racial groups fare in the hiring process when the tools are used. We collect and conduct a comprehensive analysis of all Local Law 144 bias audits (N=116) made publicly available, to our knowledge, from the time the law took effect in July 2023 until early November 2024, and describe the extensive challenges we faced in identifying, archiving, extracting information from, and ultimately analyzing these bias audits. We identify several ways that bias audits produced in accordance with Local Law 144 are incomplete evaluations of algorithmic bias, despite news coverage and characterizations by employers and vendors suggesting otherwise. We show that Local Law 144 bias audits are significantly hampered by several issues, including missing demographic data, opaque data aggregation, problematic uses of “test data,” and reliance on metrics that do not represent how automated hiring tools are used in practice. We analyze the reported results in Local Law 144 bias audits alongside the four-fifths rule often used as a measure for assessing adverse impact in employment contexts. Most audits report results that do not suggest violations of the four-fifths rule. Crucially, however, we show that these tools could often be in violation of the four-fifths rule when considering potential impacts of missing demographic data. We offer ten practical recommendations to strengthen future legislative efforts that mandate algorithm auditing in hiring and other areas, and contribute an open dataset and codebase for extracting and combining bias audit results to support future auditing efforts.
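As background for the analysis above, the four-fifths rule compares each group’s selection rate to that of the most-selected group and flags ratios below 0.8 as potential adverse impact. A minimal illustrative sketch (not the paper’s code; the function name and the sample rates are made up):

```python
def impact_ratios(selection_rates):
    """Return each group's selection rate as a ratio of the highest group's.

    Under the four-fifths rule, a ratio below 0.8 is conventionally
    flagged as potential adverse impact.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical selection rates, for illustration only.
rates = {"group_a": 0.60, "group_b": 0.45}
ratios = impact_ratios(rates)                      # group_b -> 0.75
flagged = {g: r for g, r in ratios.items() if r < 0.8}
```

As the abstract notes, a tool can pass this check on reported data yet still violate the rule once missing demographic data is accounted for.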
2023
- Representation, Self-Determination, and Refusal: Queer People’s Experiences with Targeted Advertising. Princess Sampson, Ro Encarnación, and Danaé Metaxa. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 2023.
Targeted online advertising systems increasingly draw scrutiny for the surveillance underpinning their collection of people’s private data, and subsequent automated categorization and inference. The experiences of LGBTQ+ people, whose identities call into question dominant assumptions about who is seen as “normal,” and deserving of privacy, autonomy, and the right to self-determination, are a fruitful site for exploring the impacts of ad targeting. We conducted semi-structured interviews with LGBTQ+ individuals (N=18) to understand their experiences with online advertising, their perceptions of ad targeting, and the interplay of these systems with their queerness and other identities. Our results reflect participants’ overall negative experiences with online ad content—they described it as stereotypical and tokenizing in its lack of diversity and nuance. But their desires for better ad content also clashed with their more fundamental distrust and rejection of the non-consensual and extractive nature of ad targeting. They voiced privacy concerns about continuous data aggregation and behavior tracking, a desire for greater control over their data and attention, and even the right to opt-out entirely. Drawing on scholarship from queer and feminist theory, we explore targeted ads’ homonormativity in their failure to represent multiply-marginalized queer people, the harms of automated inference and categorization to identity formation and self-determination, and the theory of refusal underlying participants’ queer visions for a better online experience.
2022
- Adaptive Sampling Strategies to Construct Equitable Training Datasets. William Cai, Ro Encarnación, Bobbie Chern, and 4 more authors. 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, 2022.
In domains ranging from computer vision to natural language processing, machine learning models have been shown to exhibit stark disparities, often performing worse for members of traditionally underserved groups. One factor contributing to these performance gaps is a lack of representation in the data the models are trained on. It is often unclear, however, how to operationalize representativeness in specific applications. Here we formalize the problem of creating equitable training datasets, and propose a statistical framework for addressing this problem. We consider a setting where a model builder must decide how to allocate a fixed data collection budget to gather training data from different subgroups. We then frame dataset creation as a constrained optimization problem, in which one maximizes a function of group-specific performance metrics based on (estimated) group-specific learning rates and costs per sample. This flexible approach incorporates preferences of model-builders and other stakeholders, as well as the statistical properties of the learning task. When data collection decisions are made sequentially, we show that under certain conditions this optimization problem can be efficiently solved even without prior knowledge of the learning rates. To illustrate our approach, we conduct a simulation study of polygenic risk scores on synthetic genomic data—an application domain that often suffers from non-representative data collection. When optimizing policies for overall or group-specific average health, we find that our adaptive approach outperforms heuristic strategies, including equal and representative sampling. In this sense, equal treatment with respect to sampling decisions does not guarantee equal or equitable outcomes.
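The sequential allocation idea in this abstract can be sketched as a greedy loop: at each step, spend the next batch of the budget on the group with the largest estimated error reduction per unit cost. This is a simplified illustration, not the paper’s algorithm; the power-law error curve, the parameter names (`n`, `cost`, `a`, `b`), and the batch size are all assumptions made for the sketch:

```python
def allocate_budget(groups, budget, batch=10):
    """Greedy sequential allocation sketch.

    `groups` maps a group name to a dict with:
      n    - samples collected so far
      cost - cost per sample
      a, b - parameters of an assumed power-law error curve a * n**-b

    Each iteration gives one batch to the group whose estimated error
    reduction per unit cost is largest, until the budget is exhausted.
    """
    alloc = {g: 0 for g in groups}
    while budget >= min(p["cost"] for p in groups.values()) * batch:
        def gain_per_cost(p):
            err = lambda n: p["a"] * (n + 1) ** -p["b"]  # +1 avoids n=0 blowup
            return (err(p["n"]) - err(p["n"] + batch)) / (p["cost"] * batch)
        best = max(groups, key=lambda g: gain_per_cost(groups[g]))
        groups[best]["n"] += batch
        alloc[best] += batch
        budget -= groups[best]["cost"] * batch
    return alloc
```

With a well-represented and an underrepresented group at equal cost, the loop sends the budget to the underrepresented group first, since its (estimated) error curve is steepest, which mirrors the abstract’s point that equal or representative sampling is not generally optimal.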