UX Researcher + Responsible AI Research Scientist

I am a third-year Ph.D. student and GEM Fellow in the Department of Human-Centered Design and Engineering at the University of Washington, specializing in Human-Computer Interaction with a focus on AI/ML fairness. I am a member of the Tactile and Tactical (TAT) Design Lab, under the guidance of Dr. Daniela Rosner, and the Wildlab, where I am also advised by Dr. Katharina Reinecke. My work across these labs uses interdisciplinary research to explore the intersections of design, technology, and social impact.

I am deeply invested in promoting responsible AI/ML systems that prioritize fairness, inclusivity, and equity. My research investigates the sociotechnical implications of race, culture, identity, and power, especially in how intelligent systems and automated language technologies interact with underrepresented communities. Leveraging inclusive design and justice-driven frameworks, I aim to enhance user experiences with a particular emphasis on Black American communities.

In addition to my academic pursuits, I serve as a board member of Acquiring Knowledge for Transcendence and design team lead at A Vision for Engineering Literacy and Access, both nonprofit organizations dedicated to supporting underrepresented youth in education. My career goal is to establish a research lab that addresses social disparities in tech design, ultimately contributing to data-driven solutions that mitigate racial biases and amplify diverse voices within intelligent systems.

View my resume here!

Research Interests

  • Sociotechnical Implications of AI Technology
  • Human-Computer Interaction
  • Responsible AI
  • Inclusive Design
  • Data-Driven Solutions to Mitigate Racial Biases

Ongoing Research Projects

Bridging Sociolects and AI: Assessing User Interaction with Sociolectally Adapted Large Language Models

As large language models (LLMs) become increasingly integrated into daily communication, it is crucial that they understand and use sociolects appropriately. This study presents a quantitative experiment in which LLMs are fine-tuned to generate text in specific sociolects, including African American Vernacular English (AAVE) and Queer Slang. Participants engage with these LLMs through tasks such as video summarization and topic discussion, allowing us to measure key factors such as user trust, cultural sensitivity, and confidence. Our research examines the impact of sociolectal adaptation on perceptions of AI, focusing on user satisfaction, frustration, trust, perceived social proximity, and reliance.

Collaborators: Daniel Chechelnitsky, Tao Long, Kaitlyn Zhou, Dr. Mark Díaz, Dr. Maarten Sap

In preparation for submission.