Is the lack of diversity across the tech industry preventing ethical AI?


AI, Diversity in tech

The development of AI is showing astounding promise across all industries – from helping to diagnose and treat cancer patients faster to speeding up deliveries with driverless delivery vehicles. But for all the good AI promises, no technology has raised quite as many ethical concerns – and it's no surprise, when futurists have predicted that “AI could be billions of times smarter than humans” in just a few years.

In response to the increasing adoption of AI, the European Commission last week released its guidelines calling for ‘trustworthy AI’.

According to the EU, AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability. The guidelines include seven requirements (listed below) and call particular attention to protecting vulnerable groups, like children and people with disabilities. They also state that citizens should have full control over their data.

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to drive positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.


But what the guidelines fail to acknowledge is the lack of diversity across the multi-billion-dollar companies leading the AI race, and the impact this is having on the outcomes AI systems generate.

Artificial intelligence works by searching vast amounts of our personal data for patterns, and AI-infused technologies are being rapidly adopted in critical parts of our daily lives – education, employment, healthcare and manufacturing among them. Increasingly, powerful artificial intelligence tools determine who gets into school, who gets a job and who pays a higher insurance premium. Yet a growing body of research shows that these technologies can be plagued with bias and discrimination, mirroring and amplifying real-world inequalities.

A recent study conducted in America revealed that facial recognition systems frequently misidentified people of colour. AI-infused job-hunting tools tended to favour males, and computer vision systems for self-driving cars had more difficulty spotting pedestrians with darker skin tones.
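Disparities like these are typically surfaced by comparing a system's error rate across demographic groups. A minimal sketch of that comparison – the `error_rates_by_group` helper, the group labels and the toy records below are all invented for illustration, not taken from any real benchmark:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misidentification (error) rate per demographic group.

    `records` is an iterable of (group, predicted, actual) tuples.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical toy data: a system that errs far more often on group_b.
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "match"),
]
print(error_rates_by_group(records))
# {'group_a': 0.0, 'group_b': 0.5}
```

A gap like the one above – 0% errors for one group, 50% for another – is exactly the kind of evidence the studies cited here rely on.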

But the key question is why? A report issued by New York University's AI Now Institute puts it down to the fact that the people responsible for building these technologies are, in the vast majority of cases, white and male. Only by encouraging and recruiting more women, people of colour and under-represented groups can the industry address this bias and create more neutral AI systems.

This will become ever more critical as AI begins to expand human understanding and decision-making in fields like education, healthcare, transportation, energy and manufacturing.

Francesca Lazzeri, AI and ML scientist for Microsoft's cloud developer advocacy team, believes that ‘as a society, we should work together to ensure that AI-based technologies are designed and deployed in a way that not only earns the trust of the users who use it, but of those from whom the data is being collected in order to build those AI solutions. It is vital for the future of our society that we design AI to be both reliable and reflect ethical values which are deeply rooted in important and timeless principles.’

The European Commission is due to meet stakeholders again this summer to identify areas where additional guidance might be necessary and to work out how best to implement and verify its recommendations – an opportunity to address the impact a lack of diversity is having on ethical AI.

To read the full EU guidelines, click here.

Posted by Helen Thomas