Building America’s AI Future with Community at the Core
The sign on the door read Council of Elders. I was at an engineering summit in Arizona, wandering between sessions, when I noticed it. Curiosity got the better of me, and I stepped inside.
I introduced myself, shared that I’d be speaking about artificial intelligence and community data, and asked if anyone was open to a conversation. A few minutes later, I found myself in a quiet corner with a Navajo elder and his son.
We didn’t talk about AI at first. We talked about water, its scarcity, and its role in shaping land and life. At one point, I asked his son: If there were a local AI tool that could teach you what your father knows in just two minutes, would you use it? He said yes, without pause. His father smiled. Then they both chuckled, acknowledging the generational shift, not resisting it. It wasn’t about trading tradition for technology. It was about access. About using new tools to preserve, share, and understand what matters most.
That moment stayed with me. Because across the United States, similar questions are emerging: Can AI help my community? Will it understand us? Can we trust it?
If we want AI to work for all of America, we have to start with listening.
Communities don’t need abstract promises. They need clarity, relevance, and a voice in the process. We need to meet people where they are, show what’s possible, and build trust by making AI practical and accessible.
That belief shaped my work during my time as Deputy Director of the White House Presidential Innovation Fellowship Program at the U.S. General Services Administration. There, I helped champion an AI Health Sprint, an effort grounded in public-private collaboration. At its core, it wasn’t about the newest tools or fastest algorithms. It was about unlocking data and potential by working alongside companies, federal employees, and the patients who would be most affected by the outcomes.
Today, that same approach is more necessary than ever.
The U.S. Office of Management and Budget recently issued guidance on high-impact AI systems that shape major decisions in health, infrastructure, education, energy, and more. These systems must be not only technically sound but also built to function in the real world. That means AI must work where people live, reflect their needs, and earn their trust.
This shift is starting to take root. OpenAI’s “OpenAI for Countries” initiative is one example: it acknowledges that effective AI must adapt to different governance structures, cultural contexts, and environmental realities. The same principle applies here at home. America is a landscape of regions, each with its own geography, industries, and voices.
That’s why we wrote AI for Community, a book coming out June 16. It’s a collection of real-world stories about people using AI to support their communities. These aren’t just hypothetical case studies. They’re on-the-ground efforts that show what happens when AI is designed to work with local knowledge instead of around it.
Here’s what that looks like:
At Howard University, researchers partnered with Google to create a new dataset of African American English, collecting over 600 hours of community-contributed speech. Through nationwide events and Project Elevate Black Voices, participants recorded natural, culturally relevant dialogue to improve the accuracy of speech recognition tools for Black users. Because speech recognition systems have historically lacked African American English data, Black users face higher error rates, making essential communication technologies less accessible. The resulting dataset, owned by Howard, is a vital step toward AI that recognizes and respects vernacular traditions.
In New Zealand, Māori leaders are working with technologists to integrate Indigenous language and cultural values directly into AI systems. These efforts center community control, cultural sovereignty, and the preservation of te reo Māori in digital spaces — ensuring AI is not just fluent, but respectful and grounded.
And here’s what the future could look like:
In Texas, ranching families might use AI to monitor aquifer levels, while high school students explore careers in water technology.
In Florida, citrus growers could track crop health using precision tools, supported by regional training programs.
In Montana, AI could assist in wildfire modeling while creating new career pathways in climate and land data.
In Alaska, Indigenous fishing communities might balance sustainability with tradition through locally built AI systems.
In Arkansas, freight operators could benefit from AI tuned to agricultural shipping cycles, with community colleges training the next generation in advanced logistics.
In Maryland, Chesapeake Bay watermen might use AI to protect a centuries-old maritime economy, supported by conservation partnerships.
These stories, real and aspirational, aren’t about innovation for innovation’s sake. They’re about ownership. They show what’s possible when AI tools are built to reflect real needs and when the workforce is trained to maintain and improve them locally.
And that brings me back to Arizona. Later that day, the same Navajo elder entered my workshop and quietly took a seat at the back. As we neared the end, he rose and walked to the front. He spoke with quiet conviction, recognizing that someone had come not just to speak but to listen, to ask how these tools might serve the community. That moment mattered. Because the future of AI in America won’t be shaped by scale alone; it will be defined by trust, precision, and meaningful relevance.
AI for Community is an invitation to design with communities, not for them. To treat regional knowledge as infrastructure. To invest in education, training, and public-private partnerships. This is how we build an AI future that includes everyone. This is how we get America ready.
AI for Community publishes June 16. We hope you’ll join the conversation.
AI NOTE: This blog post was co-created with ARC, an AI trained on the authors’ archives.