Brooklyn Art Exhibit Fights AI Bias With Black Stories
In the heart of downtown Brooklyn, at the Plaza at 300 Ashland Place, an unusual structure has appeared: a large yellow shipping container. Its sides are adorned with black triangles, a design reminiscent of the flying geese quilt pattern that may have guided enslaved people toward freedom. This container is more than just an art piece; it's a bridge connecting the African diaspora's past with a more inclusive technological future.
This is the site of If We Don’t, Who Will?, an interactive AI laboratory by transmedia artist Stephanie Dinkins. Commissioned by the art non-profit More Art and on display until September 28, the project is a direct challenge to an overwhelmingly white generative-AI field, seeking to infuse it with Black cultural cornerstones.
Confronting the Bias in Artificial Intelligence
As society becomes more reliant on AI, the data these systems are trained on matters more than ever. Dinkins’s work addresses the critical problem of biased data, which often results in an AI worldview that fails to reflect the global majority. Black workers make up just 7.4% of the high-tech workforce, and this underrepresentation has tangible, harmful consequences: discriminatory outcomes like predictive policing tools that unfairly target Black communities and tenant screening programs that reject renters of color.
Dinkins seeks to transform this landscape from the inside out. “What stories can we tell machines that will help them know us better from the inside of the community out, instead of the way that we’re often described, from outside in, which is often incorrect or misses a mark?” she asks. “I have this question: ‘Can we make systems of care and generosity?’”
A person examines the If We Don’t, Who Will? AI laboratory in downtown Brooklyn, New York City. Photograph: Driely Carter
How Community Stories are Training a New AI
The exhibit is a living laboratory. QR codes around the installation invite the public to use an app to submit their personal stories or answer prompts like “what privileges do you have in society?” These responses, which can be submitted by anyone in the world, are then used to train the AI. Inside the container, a large screen displays a generated image reflecting the newly submitted information.
To ensure the AI prioritizes Black and brown perspectives, Dinkins and her team intentionally fine-tuned the models. They fed the system images by the renowned Black photographer Roy DeCarava and trained it using African American Vernacular English (AAVE) to better understand its nuances. She also included imagery of okra, a staple food with deep ties to enslaved Africans and their descendants, which appears in the generated portraits as a talisman connecting past and present.
“We’re in this AI technological landscape that is changing our world. I don’t have a clue how it can do well by us if it does not know us,” Dinkins explained. She encourages sharing information as a way to “nurture the technology that we are living under.”
An image from the If We Don’t, Who Will? AI laboratory in downtown Brooklyn, New York City. Photograph: Driely Carter
The Artist Behind the Revolution
Stephanie Dinkins, recognized by Time magazine as one of the 100 most influential people in AI in 2023, is a self-described “tinkerer” without formal tech training. Her fascination with AI began over a decade ago with Bina48, a Black woman AI robot. This led to her ongoing project, Conversations With Bina48, and later to the creation of her own AI systems, such as Not the Only One, a voice-interactive AI trained on her family’s stories.
Redefining the Future of AI
Experts see Dinkins’s work as a crucial step toward democratizing technology. Boston University professor Louis Chude-Sokei explains that she is posing a vital question: “‘What if we can start to train different algorithms to respond to different datasets that have liberating content or socially just content?’”
He adds that her work embodies a philosophy she calls “Afro-now-ism”, which she defines as taking joyful, creative, and positive action today to build a better future with technology, while staying aware of its dangers. By putting these tools directly into the hands of the community, her work helps reorient the cultural and political landscape of AI.
For Beth Coleman, a professor at the University of Toronto, this approach is essential for ensuring AI produces an accurate representation of the world. “There’s a good spirit of ‘how can we build a better world together?’ in Stephanie’s work,” Coleman said, “and at this moment in time that feels pretty revolutionary.”