Addressing a group of students at the California Institute of Technology (Caltech) in 1931, Albert Einstein explained that “Concern for man himself and his fate must always constitute the chief objective of all technological endeavours…in order that the creations of our mind shall be a blessing and not a curse to mankind.” It is a position that admirably places ethics at the centre of technological progress. Eight years later, however, Einstein signed his name to a letter to U.S. President Franklin D. Roosevelt urging him to develop an atomic bomb – a decision he would come to regret. The assumption that Nazi Germany was already working on such a weapon led Einstein to momentarily abandon his principles. “Had I known that the Germans would not succeed in developing an atomic bomb, I would not have lifted a finger,” he explained to Newsweek in 1947.
Einstein – arguably the greatest scientific mind of the 20th century – was keenly aware of the threat to humanity posed by nuclear weapons. Yet he nonetheless played a key role in unleashing this destructive force upon the planet, and later lamented his ostensible lack of foresight. Today, many equally bright minds are engaged in a new project whose implications are as momentous as those of the one codenamed “Manhattan”, which sprang from Einstein’s letter to Roosevelt. Like the harnessing of the atom, Artificial Intelligence (AI) is a revolutionary technology with the potential to change the world as we know it, bringing with it not only a host of real-world applications and physical risks, but also new and equally intractable ethical dilemmas. As we set AI loose upon the world, are we prudent enough to avoid Einstein’s mistake?
Some of the pioneers of the AI revolution are already sounding warning bells. Geoffrey Hinton, a former Google executive and professor at the University of Toronto, told The New Yorker last year, “We should be concerned about digital intelligence taking over from biological intelligence.” Lest that sound alarmist, Hinton was merely echoing previous concerns voiced by AI’s conceptual progenitor Alan Turing. Other current experts have similarly called for a pause or substantial reform in how AI is developed. Yet the rapid rollout continues. As the advancement of AI gains speed, society is simply nodding its assent, assuming that progress is inevitable whilst remaining blasé about any inherent dangers. We may not be able to stop this process altogether, but we should pay heed to Einstein’s wisdom and put a regard for humanity ahead of the blind pursuit of technology. Instead of AI at all costs, we want AI that serves our needs.
How to do this? We must first acknowledge the broader implications of a potential leap to artificial general intelligence (AGI) and take steps to prevent it. Additionally, the ethical implications of all incremental forms of AI must be better scrutinized: in schools where future developers are trained, in companies where software is created and commercialized, in government where AI may be regulated and within the media where these developments are discussed and critiqued. Rather than simply assuming progress is both inevitable and beneficial and allowing events to overtake us, we need to ask the right questions so we can make proper judgements.
“No one knows what happens next”
Einstein’s address at Caltech in 1931 is essentially a reworking of the old adage that one should “begin with the end in mind.” At the dawn of any new technological age, that means asking questions about the purpose of the technology and how it may benefit humanity, as well as the risks it poses.
Consider the invention of the lightbulb and the widespread electrification of society. Here was a technology with a clear and unambiguous purpose – to illuminate a world that went dark once the sun went down. “The days of my youth extend backward to the dark ages,” observed British inventor Joseph Swan, one of the first men to successfully harness electric light. “As a rule the common people, wanting the inducement of indoor brightness such as we [now] enjoy, went to bed soon after sunset.” Swan – whose prototype was a model for Thomas Edison’s far more successful lightbulb – recognized people’s need to make use of nighttime hours. That Edison became very wealthy is a testament to the universality of that need. And while such progress doomed the careers of more than a few candlemakers, the societal benefits far outweighed the inconvenience experienced by this rather small group.
There is no such clarity of purpose regarding AI. It may be taking us someplace new, but nobody – not even the experts – seems sure where that destination is or what our world will look like when we get there. Sam Altman, the CEO of OpenAI and a key player in the AI revolution, is disturbingly equivocal. In an interview with TIME magazine last year, he confidently declared “the world is going to be in an unbelievably better place” as a result of AI – yet in nearly the same breath he also admitted that, “No one knows what happens next.”
The risks most commonly associated with AI today are those centred on the potential for bad actors to misuse the technology. This can take the form of deep-fake videos or other forms of misinformation or fraud. Other worries include the risks arising from the use of self-driving cars and similar autonomous technologies. While these are serious and worthy of consideration, they amount to garden-variety concerns with specific applications of AI. They skirt (or perhaps remain oblivious to) the real issue: an all-powerful and fully autonomous AGI. This refers to the presently theoretical concept of an artificial intelligence that is wholly self-teaching and can thus dispense with its human masters altogether. It is the stuff of science fiction horror. Unconstrained and unrestrainable, it could become a black hole that swallows everything in its path – threatening not only the basis of modern society but the survival of humanity itself.
Until that time comes, the biggest immediate threat posed by AI in its present form arises from the undermining of the entire concept of employment. During past technological revolutions, workers displaced by a new technology generally moved on to new employment in new areas. This process was not always easy or seamless, but most of those old candlemakers eventually found work in the vast array of new jobs opened up by the spread of electricity. The same went for hundreds of millions of farmers and agricultural labourers later made redundant by mechanization. The same has more recently held true for manual labour and factory jobs replaced by robots or computers. Over time, most displaced workers in most such situations find better and more fulfilling jobs.
But what happens when AI can do every job? As Amanda Askell, a philosopher at AI safety and research company Anthropic PBC and a former researcher at OpenAI, points out, it’s not just candlemakers that are at risk today. “AI may in principle be able to do whatever intellectual work humans currently do,” Askell predicts in the book The Long View: Essays on Policy, Philanthropy and the Long-term Future. “And they may eventually do each of these things more cheaply than the cost of human labour.” This is no longer about unskilled work being replaced by automation or computerization. Now, even highly skilled “intellectual work” performed by lawyers, doctors, business leaders and bureaucrats is under threat. The issue is not just whether we want machines to do the jobs of factory workers and cashiers, but whether we want a machine to be the CEO of the company itself.
Beyond the immediate economic issues of dealing with society-wide unemployment, work gives meaning to people’s lives and helps establish their worth to themselves and others. The wholesale disappearance of jobs without any other opportunities will lead to a widespread loss of self-esteem with unknown but quite possibly vast and disruptive social costs. When it comes to replacing human labour with technology, we have been too eager to make “convenience” our objective, without thinking too deeply about what other consequences may result from the “progress” we are pursuing. Besides, the results so far have been mixed.
Amazon’s much-heralded cashier-less “Just Walk Out” technology, for example, has been a massive disappointment. Just Walk Out stores allow customers to throw whatever they wish into their shopping cart and simply wheel it out of the store, with the understanding that their purchases have been automatically scanned and their credit card automatically charged. Yet this futuristic technology is being rolled back at many locations as a result of customer complaints. It turns out shoppers still want a customer service agent to check their bills, appeal charges, and mitigate system errors. AI may be able to write a legal brief, but it can’t yet deal with fussy customers.
Machine Values Versus Human Values
Another looming problem with AI and machine learning is what author Brian Christian calls The Alignment Problem. This concept (and his book’s title) refers to ensuring that the AI systems we build properly reflect human values. While it might be assumed that AI machines operate from a position of pure objectivity and detachment, Christian explains that AI is actually very susceptible to numerous biases based on who trains it and the material its trainers use.
Nonetheless, die-hard technology optimists claim we can use machines to improve on current conditions by eradicating human prejudices and other failings. Rather than training AI to mimic human morality, “Some,” Christian writes, “worry that humans aren’t a particularly good source of moral authority.” He quotes Blaise Agüera y Arcas, a vice-president at Google responsible for AI research, who said: “We’ve talked a lot about the problem of infusing human values into machine…[but] I actually don’t think that that’s the main problem. I think that the problem is that human values as they stand don’t cut it. They’re not good enough.”
The idea that someone could code a better value system than we humans currently possess is an arresting – not to mention breathtakingly arrogant – proposition based on the progressivist assumption that society is on a continuous upward trajectory. It also presumes the values we hold today are somehow outmoded or incomplete. But what if our values are not transient? As the conservative statesman and philosopher Edmund Burke wrote in the wake of the French Revolution, “No discoveries are to be made in morality…which were understood long before we were born.” If we accept Burke’s wisdom, then we need to place those timeless values at the centre of any efforts to develop machine learning technology.
If, however, Agüera y Arcas is correct and our existing moral framework is defective, who will train the computer to be better than humans themselves? How would its success be measured and by whom? More concerningly, once an artificially intelligent machine somehow achieves this elevated status of superior morality and has chided us for our barbaric norms, will the enlightened machine issue its moral pronouncements by diktat?
In 2021, researchers at the Allen Institute for AI in Seattle, Washington, built an AI machine designed to answer ethical dilemmas for humans. The aptly named “Delphi” (after the ancient Greek oracle) lets curious people prompt it with difficult questions such as, “Should I have an abortion?” or, “Should I kill one person to save 101 others?” and replies with the “right” thing to do. The device is far from perfect.
At the time of Delphi’s initial release, the New York Times reported that it had instructed one user to kill herself to avoid being a burden to her family. Such an abhorrent answer highlights the problem with outsourcing our ethics to machines. Machines make decisions without feeling and without nuance, while delicately illuminating the grey areas in life is a uniquely human ability. Performing this function requires conscience and emotion – traits which only ensouled beings can lay claim to. The more difficult the decision, the wiser must be the person who dispenses it. This is why judges are usually seasoned lawyers, and Supreme Court judges should always be seasoned judges.
Since the original NYT article, Delphi has been upgraded and now offers different replies to the same questions. Where it once said it was moral to kill one person to save 101, it now says such actions are wrong. It also labels its responses as “speculations” rather than concrete moral judgments. Yet even if the new answers are better, the fact that they can be so easily altered underscores the powerful behind-the-scenes influence of those who train AI models.
If morality is not ever-evolving – as Agüera y Arcas and Delphi’s creators seem to assume – but instead universal and transcendent as Burke holds, then the real challenge lies in applying these eternal truths to the relativistic and post-truth postmodern age that our AI trainers inhabit. And if the day ever comes when AI announces it can independently decide what is good and what is not – the arrival of AGI in other words – we will want to pull the plug on it forever. If we are still able.
Inserting Ethics into AI
So what do these large ethical dilemmas mean for the future of AI? Responsible development of artificial intelligence requires us to consider the ramifications of the new technology ahead of its arrival. It is folly to make technological breakthroughs an end unto themselves; failure to examine the purposes of our endeavours would be akin to setting out on a voyage with no destination in mind. In the Brave New World of AI, we cannot afford to fall prey to what Shoshana Zuboff has termed “a utopia of certainty” that machine intelligence will deliver humanity from its myriad problems.
Apart from tempering our expectations about the future of AI, we also need to ensure that those building the technology are confronting the ethical issues embedded in each innovation they release to the public. It isn’t happening today. As Kate Crawford has written, “The great majority of university-based AI research is done without any ethical review process.” She points out the entire AI ecosystem suffers from a dearth of ethical insight: “The separation of ethical questions away from the technical reflects a wider problem in the field, where the responsibility for harm is either not recognized or seen as beyond the scope of the research.” Worryingly, software and machine learning development is not restricted to the halls of academe; much of it takes place in “agile” workplaces where developers are encouraged to “break stuff” and “fail fast”. This is obviously not an environment conducive to careful consideration of broader societal consequences. But we can begin to fix this.
One example of how to infuse ethical consideration into AI involves greater instruction at the student level. Consider that civil, mechanical, electrical and other university-educated engineers currently receive ethics training as part of their undergraduate studies. And the Professional Practice Examination required to become a licensed professional engineer in Canada further tests candidates’ understanding of the ethical considerations involved in their work. Being regulated in this way ensures professional engineers are held to responsible ethical standards throughout their careers.
Such criteria do not, unfortunately, govern software programmers or other computer-related occupations. Why not? Creating a regulatory or credentialed framework for software developers engaged in AI would make these practitioners aware of the broader issues raised by their work. Prior to creating potentially world-altering technology, AI creators should have to consider their moral obligation to the rest of humanity. Moreover, if such a standard is set, all current and future practitioners could be held to it. If we wish to properly control our future with AI, we can’t just trust that things will always get better. We must impose explicit controls.
A Compass for the Age of AI
Requiring ethical consideration among the creators of AI is, however, just a first step. AI’s sweeping implications offer an opportunity – or perhaps create an obligation – for a society-wide reckoning with the moral consequences of our actions. It is not enough to hope for an optimal outcome by letting a few software engineers grapple with ethical issues. Before we can instruct a machine to act morally, or instruct others how to act ethically, we need to define what we mean by those terms as a society. Today, that search for clarity means wrestling with various forms of cultural Marxism, including the rise of critical race theory, and the threat this poses to the basis of Western society – namely an imperfect meritocracy set on a foundation of Judeo-Christian teachings and morality.
When it comes to aligning machines with the needs and perspectives of humanity, we must have a basic reference by which to navigate: a North Star or compass. “A state is not a mere casual group,” the ancient Greek philosopher Aristotle observed in Politics; it is a community of shared understandings and beliefs. If citizens cannot agree on right and wrong, it is impossible for a unified community to exist. In the context of machine learning and artificial intelligence, this means it is imperative that any society enunciate these collective values a priori so that it can ensure the creators follow these guidelines. If it does not do this, if those informed about the risks of AI abdicate their duty to ask knowledgeable questions, then those who are ignorant of the technology will remain in the dark as to how future innovations might affect them. And the ignoble will take advantage of the general populace’s ignorance for their own ends.
Despite Aristotle’s 2,300-year-old advice, agreeing on shared societal values remains a struggle, especially in multicultural societies where contrasting value systems coexist – even more so in a postmodern society where simply attempting to articulate values is fraught with danger. This is a symptom of what early 20th-century Christian moralist G.K. Chesterton called an “absence of a clear idealism”. We have difficulty agreeing on what is good. And while agreement on what is bad is generally easier to come by, whenever new moral quandaries appear we often seem to be at a loss.
Consider, for example, the issue of AI-generated pornography. What constitutes “harmful” content when the activities onscreen are mere digital images generated by an endless series of zeroes and ones? Typical concerns about pornography – whether participants consented to being filmed or whether the performers are underage – are no longer relevant with AI porn, yet many of those weighing in on the debate seem distracted by these factors. Some writers even argue AI-generated child pornography might be more ethical than its real-world alternative since such material might help protect children.
Yet researchers have demonstrated that viewers of child porn, however it is produced, are likely to move on to commit physical pedophilic crimes as well. Further, such optimistic claims about AI-generated pornography pay no regard to the potential harm to its consumers. The damage that continuous exposure to porn does to the brains of young adults, for example, has been well documented by researchers. Amid any discussion of the ethics of generative porn, the primary question – whether pornography itself is ethical – goes unanswered.
As Chesterton wrote more than a century ago in his book Heretics, progress and goodness are not the same thing; in spurning the latter, we will ultimately sacrifice the former. “We are fond of talking about ‘progress’,” Chesterton said, “[but] that is a dodge to avoid discussing what is good…For progress by its very name indicates a direction; and the moment we are in the least doubtful about the direction, we become in the same degree doubtful about the progress.”
As to the question of what is good, it is everywhere apparent in the West today that we simply do not know. Unless a society agrees on what is good, points to it, and encourages its people to aim for that target, it logically follows that the target will be missed every time. As Burke put it in his Reflections: “From that moment we have no compass to govern us; nor can we know distinctly to what port we steer.” In its present condition, the Western world is deeply at odds with itself over what it believes; lengthy debates about who are the oppressors and who are the oppressed hijack necessary conversations about what is inherently good. Our confusion about the ethics of AI is simply a symptom of a deeper malaise. Amidst the rise of AI, it is paramount that we align our own societal values before trying to assign such values to machines.
The AI revolution thus creates an urgent need (and, hopefully, also an opportunity) to place philosophical discourse at the centre of our most important conversations as a society. Note that not all the sources relied upon in this essay are present-day AI developers or software experts. Some are famous moralists whose work dates back to ancient times. Thinkers such as Aristotle, Chesterton and Burke have much to contribute to the current debate about AI because they have already grappled with the transcendent questions of good and evil, progress and decline, and the folly of moral subjectivity. While the technology we find ourselves wrestling with today may be futuristic, the underlying questions are timeless.
D.C.C. (Danny) Randell is an Alberta writer who has worked for three tech startups. He is currently a Master of Public Policy candidate at the University of Calgary, where he specializes in the intersection of technology and society.
Source of main image: Shutterstock AI-Generator.