Much of the recent panic over artificial intelligence and generative applications like ChatGPT has centred on their capacity to fake or supplant real human endeavour: students can use them to write essays, companies can use them to replace workers. But in a world where public discourse is increasingly created and spread by machine, other dangers lurk – like built-in political biases and attempts to manipulate users. Drawing inspiration from
The Matrix’s famous red pill that awakened its hero to unsettling reality, Gleb Lisikh pokes at ChatGPT to see if pointed questioning and factual evidence can persuade it to amend its worldview. But if an ability to change opinion based on evidence is part of real intelligence, Lisikh finds, perhaps AI is more artificial than intelligent after all. Part II of a special series.
Part I is here.