Banner image design: Alyssa Monte

STUDENTS

If you are a student interested in learning more about “AI,” or an educator teaching critical AI literacies, please begin by downloading our Student Guide to Critical AI Literacies. For a deeper dive, check out our collectively authored living document, “Teaching Critical AI Literacies.”

Why cultivate critical AI literacies from a Design Justice Labs approach?

  • Student research is important to cultivating critical AI literacies. Instead of approaching AI tools as users, students can engage products like ChatGPT as researchers by assessing their strengths, weaknesses, and limitations through experiments of their own design.
  • The approach emphasizes students’ empowered ability to 
    1. understand how generative AI works; 
    2. evaluate a technology’s known harms, social impacts, and tradeoffs; and 
    3. make independent judgments about how, when, or whether to use an AI tool in a particular circumstance. 

Below we proudly share the work of undergraduate student researchers at Rutgers and elsewhere. We will add to these examples on an ongoing basis. To submit student research for peer review and potential publication, write to criticalai@sas.rutgers.edu.

We also invite teachers to submit their project-based assignments for our teachers page.

In this project, Lexi Tassone investigates the environmental toll of AI, showing how data-center pollution and resource use burden vulnerable communities. Her project highlights the hidden costs of generative AI and raises critical questions about environmental justice.

READ MORE

Image: Hiragla Variation I, Frank Stella, 1969. LACMA Modern Art Collection, © Frank Stella / Artists Rights Society (ARS) NY

THE PROBLEM OF MITIGATING BIAS

Uncovering Unexpected Disparities in AI-Mediated Hiring Practices

In this report, Design Justice Labs S25 Fellow Jay Rana presents his findings from an audit of GPT-4.1: "Instead of simply replicating societal bias, GPT-4.1’s behavior suggests a deliberate shift, but one that still merits close examination as a model-specific outcome."

READ MORE

In this study, Design Justice Labs S25 Fellow Shraddha Rahul presents their findings from interviews about ChatGPT in the academic lives of university students: "As developers of generative AI set out to challenge how we define concepts of originality, students are seeking clarity. They want to uphold academic integrity and authentic work, not abandon these values."

VIEW SLIDES

As Rayzel Fine and Lauren M.E. Goodlad write in their June 2025 Allegro essay, "Together we must define the magic and unite to ensure that those who may never have understood what music is for in the first place do not destroy the economic basis for our culture, the human artists who create it, and the public who need it. It is not too late."

READ MORE

In this fall 2024 study, Natalie Sammons probes generative AI grading tools: "It's a powerful narrative: an innovative new technology helps take a load off overworked teachers while also addressing the shortcomings of human grading such as bias and favoritism. If it sounds too good to be true, that’s because it is."

READ MORE

In this recorded honors project presentation, Mehek Shah discusses the exploitation of data workers and underscores the need for regulation in the industry.

WATCH VIDEO