AI for Community Launches at Howard University
Exploring trust, consent, language preservation, and the power of cultural memory in the age of AI
As we stepped into the Howard University bookstore, we were met by a large, striking poster honoring the university’s legacy, featuring iconic images of Toni Morrison and Chadwick Boseman, among others. Their presence loomed large, not just on the wall but in spirit, setting the tone for a gathering that was both timely and deeply rooted. To launch AI for Community in that space was a grounding moment, a reminder that the work we’re doing is part of a much larger story.
Joined by three of my co-authors, Dr. Lucretia Williams, Dr. Reza Moradinezhad, and Myles Ingram, I engaged in a powerful dialogue that reflected the heart of our book: that artificial intelligence must be shaped with communities, not merely deployed at them.
The event was made possible through the leadership of Dr. Williams, who shared her role in spearheading a landmark partnership between Google and Howard University. That collaboration resulted in the recording of over 600 hours of African American Vernacular English (AAVE), in which contributors were compensated, gave informed consent, and, most importantly, the data is owned by Howard. The resulting project, Elevate Black Voices, is a living, breathing example of community-centered AI design.
The room was filled with thoughtful engagement. Trust was one of the first and most pressing themes to emerge. Dr. Reza Moradinezhad emphasized that to build trustworthy AI systems, we must radically rethink our outdated notions of consent. In this AI era, consent can’t be buried in 20-page PDFs full of legal jargon.
Instead, Dr. Moradinezhad called for bite-sized, clear, and culturally relevant explanations of how data is used, empowering individuals to make informed choices about their digital footprint. True trust, he reminded us, starts with transparency.
Dr. Moradinezhad’s research focuses on two key areas: using generative AI to restore video and audio, and examining the ethical implications of generative AI in video. His work highlights both the power of AI to preserve memory and the urgent need for new frameworks around consent, transparency, and responsible use.
Another critical question arose: How do we even help communities understand what AI is, or what it can do for them? To that, we offered a guiding principle: meet people where they are. AI literacy doesn’t need to start in tech labs. It should happen in churches, barbershops, community centers, and family gatherings, just like generations ago, when the post office was the town’s meeting ground. Trust is already built in those places. That’s where this dialogue belongs.
Responsibility followed closely behind. One audience member asked: If AI is entering our schools, how do we prepare children to use it wisely and ethically? I shared resources from TeachAI, a global initiative creating educational frameworks that teach students not just how AI works, but how to engage with it as responsible digital citizens, with attention to fairness, civic values, and long-term impact.
Dr. Lucretia Williams added a grounded and forward-looking perspective. She reminded the audience that we’re still in the early days of this technological shift. “Give it two to three years,” she said, noting that the broader AI ecosystem will need time to develop the necessary ethics guidelines, and shared language that can be understood and applied across institutions. Her point underscored the importance of patience and collective effort as we build the foundation for responsible AI.
Myles Ingram added another crucial dimension to the discussion, one that often goes overlooked: the difficulty of preserving endangered languages in AI systems. He pointed out that most of these languages suffer from a lack of structured, machine-readable datasets. Without intentional data collection and community-led efforts, many linguistic traditions risk being left out of the digital future altogether.
I also had the opportunity to share more on Laleh AI, a deeply personal and experimental project built from the legacy of my late mother, Dr. Laleh Bakhtiar, a renowned scholar of Islam and psychology best known for her gender-neutral translation of the Quran. Before her passing, we began exploring how her voice, ideas, and intellectual contributions could live on through AI.
Laleh AI invites users into a living dialogue with the legacy of the late Dr. Laleh Bakhtiar, scholar of Islam, Sufism, and psychology, by drawing from her archive of writings, lectures, notes, and reflections. Through a conversational interface, users can explore not only her scholarly insights but also the personal arc behind them: how a young girl growing up in Washington, D.C., reading Little Women, went on to become the first American woman to translate the Quran into gender-inclusive language.
Laleh AI traces her intellectual and spiritual journey, across cultures, disciplines, and decades, offering a space for deep engagement while also raising important questions about authorship, memory, and the ethics of preserving a scholar’s voice through AI.
At the Howard University event, I shared that I’m beginning to engage university professors in journalism and digital humanities to have students explore Laleh AI as part of a broader conversation on AI, authorship, and cultural preservation. This will include a public feedback form, allowing students, researchers, and community members to offer insights on the tool’s tone, accuracy, and experience.
Get your copy here: https://www.routledge.com/AI-for-Community-Preserving-Culture-and-Tradition/Ardalan-Banifatemi-Gonzalez-Ingram-Moradinezhad-Williams/p/book/9781032846620
We closed the event with a reflection on where we are in history. AI today feels much like the early days of the internet: uncertain, overwhelming, filled with both hype and hope. But as we reminded each other: The question isn’t whether AI will be part of our future. The question is: Will our communities be part of AI’s future? If we bring people together, across disciplines, generations, and lived experience, we can shape this next frontier with care, creativity, and cultural depth.
Because at its core, AI for Community is a call to action: to remember that data is not just numbers, it’s memory, language, laughter, migration, story, rhythm, and legacy. And AI, when built with intention, can help carry those truths forward.
Next, AI for Community travels to the Frankfurt Book Fair in Germany, where the global publishing world gathers to explore the future of books, culture, and ideas. Then in November, we’ll continue the conversation at Columbia University in New York, bringing together scholars, students, and practitioners to explore how AI can be shaped by, and for, communities everywhere.