The rapid rise of generative AI has captivated the world, but as the technology advances at an unprecedented pace, a crisis has emerged: the erosion of public trust in the AI industry.
The 2024 Edelman Trust Barometer, a comprehensive survey of over 32,000 respondents across 28 countries, has revealed a startling decline in global confidence in AI companies, with trust levels plummeting from 61% to 53% in just five years.
The US has seen an even more dramatic drop, with trust falling from 50% to 35% over the same period. This skepticism cuts across political lines, with Democrats (38%), independents (25%), and Republicans (24%) all expressing deep doubts about the AI industry.
Once well-trusted by the public, the technology sector is losing its luster. Eight years ago, technology reigned as the most trusted industry in 90% of the countries studied by Edelman.
Today, that figure has plummeted to just 50%. In fact, the tech sector has lost its position as the most trusted industry in key markets like the US, UK, Germany, and France.
When it comes to specific technologies, trust levels are even more concerning. While 76% of global respondents trust tech companies overall, only 50% trust AI — a 26-point gap. Similar gaps appear in other emerging fields, such as gene-based medicine (23 points) and genetically modified foods (40 points).
The Edelman study also highlights a stark divide between developed and developing nations in their attitudes toward AI. Respondents in France, Canada, Ireland, the UK, the US, Germany, Australia, the Netherlands, and Sweden reject the growing use of AI by a three-to-one margin.
In contrast, acceptance of AI significantly outpaces resistance in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria, and Thailand.
What drives mistrust of the generative AI industry?
So what’s driving this mistrust?
Globally, privacy concerns (39%), the devaluation of humanity (36%), and inadequate testing (35%) top the list of barriers to AI adoption.
In the US, fears of societal harm (61%) and threats to personal well-being (57%) are particularly acute. Interestingly, job displacement ranks near the bottom of concerns both globally (22%) and in the US (19%).
These findings are reinforced by a recent AI Policy Institute poll conducted by YouGov, which found that a staggering 72% of American voters want AI development to slow down, compared with just 8% who want it to speed up.
The same poll revealed that 62% of Americans feel apprehensive about AI, against only 21% who feel enthusiastic.
Recent controversies, such as the leak of over 16,000 artist names linked to training Midjourney’s image generation models and insider revelations at Microsoft and Google, have only heightened public concerns about the AI industry.
While industry titans like Sam Altman, Brad Smith, and Jensen Huang are eager to advance AI development for the ‘greater good,’ the public doesn’t necessarily share the same fervor.
To rebuild trust, the Edelman report recommends businesses partner with the government to ensure responsible development and earn public trust through thorough testing.
Scientists and experts still hold authority but increasingly need to engage in public dialogue. Above all, people want to feel a sense of agency and control over how emerging innovations will impact their lives.
As Justin Westcott, Edelman’s global technology chair, aptly stated, “Those who prioritize responsible AI, who transparently partner with communities and governments, and who put control back into the hands of the users, will not only lead the industry but will rebuild the bridge of trust that technology has, somewhere along the way, lost.”
Fear of the unknown?
Throughout human history, the emergence of groundbreaking technologies has often been accompanied by a complex interplay of fascination, adoption, and apprehension.
There is no doubt that millions of people now use generative AI regularly, with surveys showing that roughly one in six people in digitally advanced economies use AI tools daily.
Studies from individual industries find that workers save hours each day using generative AI, lowering their risk of burnout and easing administrative burdens.
Generative AI perhaps represents an unknown and potentially unpredictable future. Fear surrounding it is not an entirely new phenomenon but rather an echo of historical patterns that have shaped our relationship with transformative innovations.
Consider, for instance, the advent of the printing press in the 15th century. This revolutionary technology democratized access to knowledge, paved the way for mass communication, and catalyzed profound social, political, and religious shifts.
Amid the rapid proliferation of printed materials, there were fears about the potential for misinformation, the erosion of authority, and the disruption of established power structures.
Similarly, the Industrial Revolution of the 18th and 19th centuries brought about unprecedented advancements in manufacturing, transportation, and communication.
The steam engine, the telegraph, and the factory system transformed the fabric of society, unleashing new possibilities for productivity and progress. However, these innovations also raised concerns about the displacement of workers, the concentration of wealth and power, and the dehumanizing effects of mechanization.
This dissonance surrounding generative AI reflects a deeper tension between our innate desire for progress and our fear of the unknown. Humans are drawn to the novelty and potential of new technologies, yet we also grapple with the uncertainty and risks they bring.
The French philosopher Jean-Paul Sartre, in his magnum opus “Being and Nothingness” (1943), explores the concept of “bad faith,” a form of self-deception in which individuals deny their own freedom and responsibility in the face of existential anxiety.
In the context of generative AI, the widespread adoption of the technology, despite growing mistrust, can be seen as a form of bad faith, a way of embracing the benefits of AI while avoiding the difficult questions and ethical dilemmas it raises.
Moreover, the pace and scale of generative AI development amplify the dissonance between adoption and mistrust.
Unlike previous technological revolutions that unfolded over decades or centuries, the rise of AI is happening at unprecedented speed, outpacing our ability to fully comprehend its implications and develop adequate governance frameworks.
This rapid advancement has left many feeling a sense of vertigo, as if the ground beneath their feet is shifting faster than they can adapt. It has also exposed the limitations of our existing legal, ethical, and social structures, which struggle to keep pace with AI’s transformative power.
We must work to create a future in which the benefits of this technology are realized in a manner that upholds our values, protects our rights, and promotes the greater good.
The challenge is that ‘the greater good’ is deeply subjective and ill-defined.
Guiding generative AI towards it will demand open and honest dialogue, a willingness to confront difficult questions, and a commitment to building bridges of understanding and trust.