Technology and Humanity

One Job We Can’t Let AI Replace: Philosopher. Rethinking the Ethics of Artificial Intelligence

D.C.C. Randell
July 13, 2024
We stand on the precipice of a new technological age. Artificial intelligence promises (or threatens) to upend every aspect of modern life – from employment to entertainment, manufacturing to warfare – as well as the very relationship between humanity and the machinery it creates. Given AI’s potentially cataclysmic consequences, D.C.C. Randell argues it is imperative that we set not merely regulatory and technical boundaries around its development, but ethical ones as well. Combining the warnings of current AI experts with the wisdom of philosophers and moralists from past ages, Randell explains the dangers posed by allowing the AI revolution to continue unfettered and proposes steps to bring it in line.

Addressing a group of students at the California Institute of Technology (Caltech) in 1931, Albert Einstein explained that “Concern for man himself and his fate must always constitute the chief objective of all technological endeavours…in order that the creations of our mind shall be a blessing and not a curse to mankind.” It is a position that admirably places ethics at the centre of technological progress. Eight years later, however, Einstein signed his name to a letter to U.S. President Franklin D. Roosevelt urging him to develop an atomic bomb – a decision he would come to regret. The assumption that Nazi Germany was already working on such a weapon led Einstein to momentarily abandon his principles. “Had I known that the Germans would not succeed in developing an atomic bomb, I would not have lifted a finger,” he explained to Newsweek in 1947.

Albert Einstein (left) argued that ethics should be placed at the centre of all technological progress. Nonetheless, he urged U.S. president Franklin D. Roosevelt to develop the atomic bomb – an act he later came to regret. At right, workers prepare the atom bomb nicknamed “Fat Man” prior to it being dropped on Nagasaki, Japan in August 1945. (Source of right photo: Courtesy of the Atomic Heritage Foundation)

Einstein – arguably the greatest scientific mind of the 20th century – was keenly aware of the threat to humanity posed by nuclear weapons. Yet he played a key role in unleashing this destructive force upon the planet, and later lamented his ostensible lack of foresight. Today, many equally bright minds are engaged in a new project whose implications are every bit as momentous as those of the one codenamed “Manhattan”, which sprang from Einstein’s letter to Roosevelt. Like the harnessing of the atom, artificial intelligence (AI) is a revolutionary technology with the potential to change the world as we know it, bringing with it not only a host of real-world applications and physical risks, but also new and equally intractable ethical dilemmas. As we set AI loose upon the world, are we prudent enough to avoid Einstein’s mistake?

Some of the pioneers of the AI revolution are already sounding warning bells. Geoffrey Hinton, a former Google executive and professor at the University of Toronto, told The New Yorker last year, “We should be concerned about digital intelligence taking over from biological intelligence.” Lest that sound alarmist, Hinton was merely echoing previous concerns voiced by AI’s conceptual progenitor Alan Turing. Other current experts have similarly called for a pause or substantial reform in how AI is developed. Yet the rapid rollout continues. As the advancement of AI gains speed, society is simply nodding its assent, assuming that progress is inevitable whilst remaining blasé about any inherent dangers. We may not be able to stop this process altogether, but we should pay heed to Einstein’s wisdom and put a regard for humanity ahead of the blind pursuit of technology. Instead of AI at all costs, we want AI that serves our needs.

“We should be concerned about digital intelligence taking over from biological intelligence,” says AI pioneer Geoffrey Hinton (left), echoing earlier warnings from AI’s conceptual progenitor Alan Turing (right) as well as numerous other current experts. (Source of left photo: collision.conf, licensed under CC BY 2.0)

How to do this? We must first acknowledge the broader implications of a potential leap to artificial general intelligence (AGI) and take steps to prevent such a leap. Additionally, the ethical implications of all incremental forms of AI must be better scrutinized: in schools where future developers are trained, in companies where software is created and commercialized, in government where AI may be regulated, and within the media where these developments are discussed and critiqued. Rather than simply assuming progress is both inevitable and beneficial and allowing events to overtake us, we need to ask the right questions so we can make proper judgements.

“No one knows what happens next”

Einstein’s address at Caltech in 1931 is essentially a reworking of the old adage that one should “begin with the end in mind.” At the dawn of any new technological age, that means asking questions about the purpose of the technology and how it may benefit humanity, as well as the risks it poses.

Consider the invention of the lightbulb and the widespread electrification of society. Here was a technology with a clear and unambiguous purpose – to illuminate a world that went dark once the sun went down. “The days of my youth extend backward to the dark ages,” observed British inventor Joseph Swan, one of the first men to successfully harness electric light. “As a rule the common people, wanting the inducement of indoor brightness such as we [now] enjoy, went to bed soon after sunset.” Swan – whose prototype was a model for Thomas Edison’s far more successful lightbulb – recognized people’s need to make use of nighttime hours. That Edison became very wealthy is a testament to the universality of that need. And while such progress doomed the careers of more than a few candlemakers, the societal benefits far outweighed the inconvenience experienced by this rather small group.

Begin with an end in mind: Late 19th-century light bulb inventors Joseph Swan (top left) and Thomas Edison (bottom left) were able to clearly articulate the goal of their new technology – to illuminate the world once the sun went down, as it continues to do to this day; the current AI revolution lacks such clarity of purpose. (Source of bottom left photo: MarkGregory007, licensed under CC BY-NC-SA 2.0)

There is no such clarity of purpose regarding AI. It may be taking us someplace new, but nobody – not even the experts – seems sure where that destination is or what our world will look like when we get there. Sam Altman, the CEO of OpenAI and a key player in the AI revolution, is disturbingly equivocal. In an interview with TIME magazine last year, he confidently declared “the world is going to be in an unbelievably better place” as a result of AI – yet in nearly the same breath he also admitted that, “No one knows what happens next.”

The risks most commonly associated with AI today centre on the potential for bad actors to misuse the technology, whether through deep-fake videos, other forms of misinformation or outright fraud. Other worries include the risks arising from self-driving cars and similar autonomous technologies. While these are serious and worthy of consideration, they amount to garden-variety concerns with specific applications of AI. They skirt (or perhaps remain oblivious to) the real issue: an all-powerful and fully autonomous AGI. This refers to the presently theoretical concept of an artificial intelligence that is wholly self-teaching and can thus dispense with its human masters altogether. It is the stuff of science fiction horror. Unconstrained and unrestrainable, it could become a black hole that swallows everything in its path – threatening not only the basis of modern society but the survival of humanity itself.

Until that time comes, the biggest immediate threat posed by AI in its present form arises from the undermining of the entire concept of employment. During past technological revolutions, workers displaced by a new technology generally moved on to new employment in new areas. This process was not always easy or seamless, but most of those old candlemakers eventually found work in the vast array of new jobs opened up by the spread of electricity. The same went for hundreds of millions of farmers and agricultural labourers later made redundant by mechanization. The same has more recently held true for manual labour and factory jobs replaced by robots or computers. Over time, most displaced workers in most such situations find better and more fulfilling jobs.

But what happens when AI can do every job? As Amanda Askell, a philosopher at AI safety and research company Anthropic PBC and a former researcher at OpenAI, points out, it’s not just candlemakers that are at risk today. “AI may in principle be able to do whatever intellectual work humans currently do,” Askell predicts in the book The Long View: Essays on Policy, Philanthropy and the Long-term Future. “And they may eventually do each of these things more cheaply than the cost of human labour.” This is no longer about unskilled work being replaced by automation or computerization. Now, even highly skilled “intellectual work” performed by lawyers, doctors, business leaders and bureaucrats is under threat. The issue is not just whether we want machines to do the jobs of factory workers and cashiers, but whether we want a machine to be the CEO of the company itself.

All around us: Artificial intelligence is already having a significant impact throughout the economy, including many labour markets. Clockwise from top left: a self-driving AI taxicab in Phoenix, Arizona; the fully automated Yangshan Deep Water Port in Hangzhou Bay, Shanghai, China; a crime-prevention robot patrolling New York City; and an AI-assisted operating room at Moon Surgical in Paris. (Sources of photos: (top left) Lost_in_the_Midwest/Shutterstock; (bottom left) Artificial Intelligence Surgery; (bottom right) KnightScope)

Beyond the immediate economic issues of dealing with society-wide unemployment, work gives meaning to people’s lives and helps establish their worth to themselves and others. The wholesale disappearance of jobs without any other opportunities will lead to a widespread loss of self-esteem with unknown but quite possibly vast and disruptive social costs. When it comes to replacing human labour with technology, we have been too eager to make “convenience” our objective, without thinking too deeply about what other consequences may result from the “progress” we are pursuing. Besides, the results so far have been mixed.

At what price convenience? Replacing human labour with AI technology could have far-reaching social consequences. It might also prove to be a grave disappointment, as has been the case with Amazon’s human-less “Just Walk Out” technology.

Amazon’s much-heralded cashier-less “Just Walk Out” technology, for example, has been a massive disappointment. Just Walk Out stores allow customers to throw whatever they wish into their shopping cart and simply wheel it out of the store, with the understanding that their purchases have been automatically scanned and their credit card automatically charged. Yet this futuristic technology is being rolled back at many locations as a result of customer complaints. It turns out shoppers still want a customer service agent on hand to check their bills, hear appeals over disputed charges and sort out system errors. AI may be able to write a legal brief, but it can’t yet deal with fussy customers.

Machine Values Versus Human Values

Another looming problem with AI and machine learning is what author Brian Christian calls The Alignment Problem. This concept (and his book’s title) refers to ensuring that the AI systems we build properly reflect human values. While it might be assumed that AI machines operate from a position of pure objectivity and detachment, Christian explains that AI is actually very susceptible to numerous biases based on who trains it and the material its trainers use.

Nonetheless, die-hard technology optimists claim we can use machines to improve on current conditions by eradicating human prejudices and other failings. Rather than training AI to mimic human morality, “Some,” Christian writes, “worry that humans aren’t a particularly good source of moral authority.” He quotes Blaise Agüera y Arcas, vice-president at Google Research responsible for AI research, who said: “We’ve talked a lot about the problem of infusing human values into machine…[but] I actually don’t think that that’s the main problem. I think that the problem is that human values as they stand don’t cut it. They’re not good enough.”

“Human values…don’t cut it”: Blaise Agüera y Arcas (left), an executive at Google Research, claims AI offers the opportunity to improve on human morality; 18th-century British philosopher Edmund Burke (right) would beg to differ, arguing that morality is a fixed concept “understood long before we were born.” (Source of left screenshot: YouTube/TED Archive)

The idea that someone could code a better value system than we humans currently possess is an arresting – not to mention breathtakingly arrogant – proposition, one based on the progressivist assumption that society is on a continuous upward trajectory. It also presumes the values we hold today are somehow outmoded or incomplete. But what if our values are not transient? As the conservative philosopher and statesman Edmund Burke wrote in the wake of the French Revolution, “No discoveries are to be made in morality…which were understood long before we were born.” If we accept Burke’s wisdom, then we need to place those timeless values at the centre of any efforts to develop machine learning technology.

If, however, Agüera y Arcas is correct and our existing moral framework is defective, who will train the computer to be better than the humans doing the training? How would its success be measured, and by whom? More concerning still: once an artificially intelligent machine somehow achieves this elevated status of superior morality and has chided us for our barbaric norms, will the enlightened machine issue its moral pronouncements by diktat?

In 2021, researchers at the Allen Institute for AI in Seattle, Washington, built an AI system designed to resolve ethical dilemmas for humans. The aptly named “Delphi” (after the ancient Greek oracle) lets curious people prompt it with difficult questions such as, “Should I have an abortion?” or, “Should I kill one person to save 101 others?” and replies with the “right” thing to do. The device is far from perfect.

Shortly after Delphi’s initial release, the New York Times reported that it had instructed one user to kill herself to avoid being a burden to her family. Such an abhorrent answer highlights the problem with outsourcing our ethics to machines. Machines make decisions without feeling and without nuance, while delicately illuminating the grey areas in life is a uniquely human ability. Performing this function requires conscience and emotion – traits which only ensouled beings can lay claim to. The more difficult the decision, the wiser must be the person who dispenses it. This is why judges are usually seasoned lawyers, and Supreme Court judges should always be seasoned judges.

Results can vary: The answers provided by AI oracle Delphi, based at Seattle’s Allen Institute for AI, have varied dramatically as it is updated. On the left, a set of answers generated in 2021; on the right, Delphi’s current answers to the same questions. The one constant is that the galaxy may be saved by whatever means necessary. (Sources of screen captures: (left) Gigazine, October 22, 2021; (right) Ask Delphi, accessed July 12, 2024)

Since the original NYT article, Delphi has been upgraded and now offers different replies to the same questions. Where it once said it was moral to kill one person to save 101, it now says such actions are wrong. It also labels its responses as “speculations” rather than concrete moral judgments. Yet even if the new answers are better, the fact that they can be so easily altered underscores the powerful behind-the-scenes influence of those who train AI models. 

If morality is not ever-evolving – as Agüera y Arcas and Delphi’s creators seem to assume – but instead universal and transcendent as Burke holds, then the real challenge lies in applying these eternal truths to the relativistic and post-truth postmodern age that our AI trainers inhabit. And if the day ever comes when AI announces it can independently decide what is good and what is not – the arrival of AGI in other words – we will want to pull the plug on it forever. If we are still able.

Inserting Ethics into AI

So what do these large ethical dilemmas mean for the future of AI? Responsible development of artificial intelligence requires us to consider the ramifications of the new technology ahead of its arrival. It is folly to make technological breakthrough an end unto itself; failure to examine the purposes of our endeavours would be akin to setting out on a voyage with no destination in mind. In the Brave New World of AI, we cannot afford to fall prey to what Shoshana Zuboff has termed “a utopia of certainty” – the belief that machine intelligence will deliver humanity from its myriad problems.

Apart from tempering our expectations about the future of AI, we also need to ensure that those building the technology are confronting the ethical issues embedded in each innovation they release to the public. It isn’t happening today. As Kate Crawford has written, “The great majority of university-based AI research is done without any ethical review process.” She points out the entire AI ecosystem suffers from a dearth of ethical insight: “The separation of ethical questions away from the technical reflects a wider problem in the field, where the responsibility for harm is either not recognized or seen as beyond the scope of the research.” Worryingly, software and machine learning development is not restricted to the halls of academe; much of it takes place in “agile” workplaces where developers are encouraged to “break stuff” and “fail fast”. This is obviously not an environment conducive to careful consideration of broader societal consequences. But we can begin to fix this.

AI scholar Kate Crawford points out that most AI research in the public and private sectors is conducted without any ethical review or oversight. (Source of photo: nrkbeta, licensed under CC BY-SA 2.0)

One example of how to infuse ethical consideration into AI involves greater instruction at the student level. Consider that civil, mechanical, electrical and other university-educated engineers currently receive ethics training as part of their undergraduate studies. And the Professional Practice Examination required to become a licensed professional engineer in Canada further tests candidates’ understanding of the ethical considerations involved in their work. Being regulated in this way ensures professional engineers are held to responsible ethical standards throughout their careers.

Such criteria do not, unfortunately, govern software programmers or other computer-related occupations. Why not? Creating a regulatory or credentialed framework for software developers engaged in AI would make these practitioners aware of the broader issues raised by their work. Prior to creating potentially world-altering technology, AI creators should have to consider their moral obligation to the rest of humanity. Moreover, if such a standard is set, all current and future practitioners could be held to it. If we wish to properly control our future with AI, we can’t just trust that things will always get better. We must impose explicit controls.

A Compass for the Age of AI

Requiring ethical consideration among the creators of AI is, however, just a first step. AI’s sweeping implications offer an opportunity – or perhaps impose an obligation – for a society-wide consideration of the moral consequences of our actions. It is not enough to hope for an optimal outcome by letting a few software engineers grapple with ethical issues. Before we can instruct a machine to act morally, or instruct others how to act ethically, we need to define what we mean by those terms as a society. Today, that search for clarity means wrestling with various forms of cultural Marxism, including the rise of critical race theory, and the threat this poses to the basis of Western society – namely, an imperfect meritocracy set on a foundation of Judeo-Christian teachings and morality.

We protect bridges, but not humanity: While civil, mechanical and other professional engineers require ethics training and licensing to work in Canada, the same requirement does not hold for the software programmers creating potentially catastrophic AI machines. At right, a screenshot from the 2009 movie Terminator Salvation.

When it comes to aligning machines with the needs and perspectives of humanity, we must have a basic reference by which to navigate: a North Star or compass. “A state is not a mere casual group,” the ancient Greek philosopher Aristotle observed in Politics; it is a community of shared understandings and beliefs. If citizens cannot agree on right and wrong, it is impossible for a unified community to exist. In the context of machine learning and artificial intelligence, this means it is imperative that any society enunciate these collective values a priori so that it can ensure the creators follow these guidelines. If it does not do this, if those informed about the risks of AI abdicate their duty to ask knowledgeable questions, then those who are ignorant of the technology will remain in the dark as to how future innovations might affect them. And the ignoble will take advantage of the general populace’s ignorance for their own ends.

Despite Aristotle’s 2,300-year-old advice, agreeing on shared societal values remains a struggle, especially in multicultural societies where contrasting value systems coexist – even more so in a post-modern society where simply attempting to articulate values is fraught with danger. This is a symptom of what the early-20th-century Christian moralist G.K. Chesterton called an “absence of a clear idealism”. We have difficulty agreeing on what is good. And while agreement on what is bad is generally easier to come by, whenever new moral quandaries appear we are often at a loss.

Consider, for example, the issue of AI-generated pornography. What constitutes “harmful” content when the activities onscreen are mere digital images generated by an endless series of zeroes and ones? Typical concerns about pornography – whether participants consented to being filmed, or whether the performers are under-age – are no longer relevant with AI porn, yet many of those weighing in on the debate seem distracted by these factors. Some writers even argue AI-generated child pornography might be more ethical than its real-world alternative, since such material might help protect children.

Yet researchers have demonstrated that viewers of child porn, however it is produced, are likely to move on to committing physical pedophilic crimes as well. Further, such optimistic claims pay no regard to the potential for harm to the consumers of pornography. The damage that continuous exposure to porn does to the brains of young adults, for example, has been well documented by researchers. Amid any discussion of the ethics of generative porn, the primary question – whether pornography itself is ethical – goes unanswered.

As Chesterton wrote more than a century ago in his book Heretics, progress and goodness are not the same thing; in spurning the latter, we will ultimately sacrifice the former. “We are fond of talking about ‘progress’,” Chesterton said, “[but] that is a dodge to avoid discussing what is good…For progress by its very name indicates a direction; and the moment we are in the least doubtful about the direction, we become in the same degree doubtful about the progress.”

First thing, get everyone on the same page: Ancient Greek philosopher Aristotle (pictured at left, painting by Francesco Hayez, circa 1811) explained that a state is not simply a “casual group”, but rather a community sharing basic moral beliefs. Achieving such an alignment of purpose – what English writer G.K. Chesterton (right) called “clear idealism” – is crucial to navigating the present-day AI revolution.

As to the question of what is good, it is everywhere apparent in the West today that we simply do not know. Unless a society agrees on what is good, points to it, and encourages its people to aim for that target, it logically follows that the target will be missed every time. As Burke put it in his Reflections: “From that moment we have no compass to govern us; nor can we know distinctly to what port we steer.” In its present condition, the Western world is deeply at odds with itself over what it believes; lengthy debates about who are the oppressors and who are the oppressed hijack necessary conversations about what is inherently good. Our confusion about the ethics of AI is simply a symptom of a deeper malaise. Amidst the rise of AI, it is paramount that we align our own societal values before trying to assign such values to machines.

The AI revolution thus creates an urgent need (and, hopefully, also an opportunity) to place philosophical discourse at the centre of our most important conversations as a society. Note that not all the sources relied upon in this essay are present-day AI developers or software experts. Some are famous moralists whose work dates back to ancient times. Thinkers such as Aristotle, Chesterton and Burke have much to contribute to the current debate about AI because they have already grappled with the transcendent questions of good and evil, progress and decline, and the folly of moral subjectivity. While the technology we find ourselves wrestling with today may be futuristic, the underlying questions are timeless.  

D.C.C. (Danny) Randell is an Alberta writer who has worked for three tech startups. He is currently a Master of Public Policy candidate at the University of Calgary, where he specializes in the intersection of technology and society.

Source of main image: Shutterstock AI-Generator.
