The Impact of Scaling in AI Technology
“Scale” has become a buzzword in the AI industry as technology companies rush to harness ever-larger amounts of online data to improve their artificial intelligence systems. It’s also a red flag for Mozilla’s Abeba Birhane, an AI expert who has spent years challenging her field’s values and practices and their impact on the world. Her latest research finds that expanding the online data used to train popular AI image-generating tools disproportionately produces racist output, particularly against Black people.
Challenges Faced by AI Experts
Birhane, a senior advisor for artificial intelligence accountability at the Mozilla Foundation, shares insights on the challenges facing those working in AI. She highlights the importance of paying close attention to data and auditing large-scale datasets, which she sees as essential to building successful models. Whatever one’s view of machine learning’s value, Birhane emphasizes the need for ethical considerations in the development and deployment of AI technologies.
The Ethical Implications of Scaling in AI Systems
While scale is often hailed as the key to success in AI research, Birhane’s research shows how scaling up can lead to harmful outcomes. Expanding data collections can inadvertently increase the prevalence of hateful content and bias in AI systems. Her findings also reveal a troubling pattern: darker-skinned individuals are disproportionately labeled as suspects or criminals in AI-generated outputs. Given these results, Birhane remains skeptical that the AI industry will adopt the necessary changes without strong regulation and public pressure.