From Data to Discovery: Charting AI Bias and Paving a Just Path

Davar Ardalan
5 min read · Aug 13, 2023

Since stepping into the AI world in 2018, I’ve been both intrigued and troubled by the persistent problem of bias in AI systems and the data they learn from. These biases risk shaping the dreams of future leaders.

It’s crucial to understand these challenges and prioritize inspiring inclusivity in AI. By doing so, we can ignite passion in students, empowering them to innovate for the generations to come.

Primary Factors and Underlying Causes Leading to AI Biases:

Source: TulipAI

Recent media investigations continue to highlight an unsettling trend: bias within generative AI systems.

Machine learning models, including generative AI, learn from the data they are trained on. If the data contains biases — because it reflects historical, societal, or cultural biases — the model will likely inherit those biases. For instance, if a language model’s training data is drawn from the internet, it can pick up and even amplify the biases present in those online texts.
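To make the inheritance mechanism concrete, here is a minimal sketch in Python. The tiny corpus and professions below are hypothetical stand-ins for web-scraped training text; the point is only that a model trained on skewed co-occurrence counts will reproduce that skew.

```python
from collections import Counter

# Toy corpus standing in for web-scraped training text (hypothetical sentences).
corpus = [
    "he is a great engineer",
    "he became an engineer early",
    "she is a nurse",
    "he works as an engineer",
    "she trained as a nurse",
    "he is a nurse too",
]

def cooccurrence(profession: str) -> Counter:
    """Count which pronouns appear in sentences mentioning a profession."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if profession in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    counts[pronoun] += 1
    return counts

# In this corpus "engineer" co-occurs with "he" 3 times and "she" 0 times;
# a model fit to these statistics would lean toward that association.
print(cooccurrence("engineer"))
print(cooccurrence("nurse"))
```

A generative model is far more complex than a co-occurrence counter, but its associations ultimately come from statistics like these.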

Historically, many fields, roles, and positions have been dominated by one gender, and this historical representation can get reflected in the data. If men dominated a certain profession, the AI might generate outputs that lean towards associating that profession more with men.

These biases can cause real harm: innocent people have been wrongly arrested because an AI tool misidentified them, and others may be denied a job or a loan. To fix this, we need to use data from many different sources, especially those that reflect different viewpoints. We also need tools that understand context and can tell the difference between old views and today’s reality.

From “Data to Destiny” a flowchart on how AI’s gender bias could shape the future for girls:

Source: TulipAI

In July 2020, I led the Women in History Data Ideation Challenge with Topcoder through my former venture, IVOW AI. The results confirmed some of our assumptions and fears:

  1. Historic gender and cultural biases persist throughout the AI ecosystem;
  2. The current classification of gender, ethnicity, and race in Wikipedia is flawed and incomplete;
  3. Improved data ecologies that account for gender, culture, and history are vital to building better algorithms and more widely useful AI products.

Organizations such as the Algorithmic Justice League are implementing comprehensive measures, from transparent documentation to interdisciplinary collaboration, to ensure AI development is ethical and unbiased. These steps range from inclusive data collection and fairness audits to advocating for legislative regulation and open-source collaboration, emphasizing a holistic approach to mitigating bias in AI.

By instilling principles of fairness and transparency in AI development, we can rectify these biases. Moreover, by showcasing the vast potential of AI to young girls and multicultural youth — demonstrating the empowering tools they can create — we can inspire a new generation of female AI pioneers.

“Decoding Fairness” illustrates how principles of fairness and transparency can rectify these biases:

Source: TulipAI

The roadmap: “Fair AI in Ten Steps,” a guide that distills the complex world of AI ethics into tangible actions and relatable examples.

The steps highlight the need for clear and open AI learning, collecting varied data, ensuring AI fairness, listening to people’s opinions, fixing AI mistakes, teaching about AI’s right use, setting AI rules, adjusting AI based on user comments, teaming up with experts from different fields, and sharing AI work for everyone to see and improve. The examples given at each step help make these ideas concrete.
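The “ensuring AI fairness” step can be sketched in code. Below is a minimal demographic-parity audit in Python, one common way a fairness check is run on a model’s decisions; the group names and the hypothetical loan-approval outcomes are made up for illustration.

```python
# Minimal fairness-audit sketch: demographic parity compares the rate of
# positive outcomes (e.g. loan approvals) across demographic groups.

def demographic_parity_gap(outcomes: dict) -> float:
    """Largest difference in positive-outcome rate between any two groups.

    outcomes maps group name -> list of 0/1 model decisions.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions from a model under audit.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(decisions)
# A large gap flags the model for review against a chosen threshold.
print(f"parity gap: {gap:.3f}")
```

Demographic parity is only one of several fairness definitions; an audit in practice would test more than one and weigh them against the application’s context.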

“Ten Steps to Understand Fair AI” including diverse data, user feedback, fixing errors, AI education, rules, expert collaboration, and sharing:

Source: TulipAI

First, it’s about keeping a clear record of AI’s learning, like a diary, and ensuring AI learns from diverse sources. It’s about checking AI’s fairness, much as a teacher grades assignments, and listening to and valuing human feedback. It’s about correcting biases and misconceptions, emphasizing education, advocating for rules that ensure AI’s ethical use, refining AI tools, and encouraging collaboration between experts from different fields for the betterment of AI.

By championing transparency, inclusivity, fairness, and teamwork, we can foster an AI landscape that serves all. Crucially, this will motivate young girls to join the industry, igniting their passion to develop innovations that will inspire forthcoming generations.

Generative AI and voice technology, when harnessed correctly, can be powerful tools for change. “Freedom Speaks” is a shining example of this potential realized. It’s a call to action for all of us to recognize the power of technology in shaping narratives, empowering voices, and inspiring generations.

Amazon Alexa skills, like “Freedom Speaks,” currently in beta, are pioneering the way we think about technology’s role in amplifying underrepresented voices.

By delving into the rich tapestry of Iranian culture, history, and female empowerment, “Freedom Speaks” challenges stereotypes and broadens horizons. It’s not just about education; it’s about inspiration. Young girls, especially those of Iranian descent, can find role models in these stories, reinforcing the belief that their voices, their stories, and their dreams matter.

Steps to Create a Voice App Celebrating Women in AI:

Shaping AI Systems with Cultural Data, Report on Women in History Data Ideation Challenge.
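At its core, a voice app like this maps a recognized intent to spoken text. Here is a toy sketch of that loop in plain Python; a production skill would use the Alexa Skills Kit SDK, and the intent names and story text below are illustrative, not taken from “Freedom Speaks.”

```python
# Toy model of a voice app's core loop: map a spoken intent to a spoken
# response. Intent names and stories are hypothetical examples.

STORIES = {
    "TellStoryIntent": "Fei-Fei Li led the creation of ImageNet, a dataset "
                       "that helped launch the modern deep-learning era.",
    "HelpIntent": "Ask me to tell you a story about a woman in AI.",
}

def handle_request(intent_name: str) -> str:
    """Return the speech text for a recognized intent, with a fallback."""
    return STORIES.get(intent_name, "Sorry, I don't know that one yet.")

print(handle_request("TellStoryIntent"))
print(handle_request("UnknownIntent"))  # exercises the fallback path
```

The real work in a skill like “Freedom Speaks” is in the content: curating the stories, voices, and cultural context the intents deliver.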

Iran Davar Ardalan is an award-winning media executive and the Founder of TulipAI, a media and AI consultancy firm based in Southwest Florida.

As a former Executive Producer at National Geographic, she led groundbreaking media projects and pushed boundaries in spatial audio. Ardalan’s expertise extends to voice AI and its role in safeguarding wisdom, nature, history, and culture. Her pioneering work at IVOW AI, along with her engagements at esteemed events like AI for Good and Voice2022, showcases her thought leadership.

Ardalan’s contributions as the Executive Director of “Freedom Speaks” and as the Deputy Director of the White House Presidential Innovation Fellowship Program further demonstrate her commitment to amplifying voices and driving innovation. With distinguished recognition, including prestigious awards and the Ellis Island Medal of Honor, Ardalan’s visionary leadership and dedication solidify her as a powerhouse in media and technology. Her remarkable journalism career at NPR News adds to her outstanding credentials.
