Experts who warn that artificial intelligence (AI) poses catastrophic risks on par with nuclear annihilation ignore the gradual, diffused nature of technological development.
As I argued in my 2008 book, The Venturesome Economy, transformative technologies – from steam engines, airplanes, computers, mobile telephony, and the internet to antibiotics and mRNA vaccines – evolve through a protracted, massively multiplayer game that defies top-down command and control.
Joseph Schumpeter’s “gales of creative destruction” and more recent theories trumpeting disruptive breakthroughs are misleading.
New technologies introduce new risks. Invariably, military applications develop alongside commercial and civilian uses.
Airplanes and motorised ground vehicles have been deployed in conflicts since World War I, and personal computers and mobile communication are indispensable for modern warfare.
Yet life goes on. Technologically advanced societies have developed legal, political, and law-enforcement mechanisms to contain the conflicts and criminality that technological advances enable.
The Manhattan Project, which developed the atomic bomb and helped end World War II, was an exception. It had a high-priority military mandate. With the Nazis seeking to develop a bomb of their own, speed and effective leadership were essential. And as all-out thermonuclear war became a real threat, statecraft and strategic deterrence helped avert doomsday.
But nuclear weapons are a misleading analogy for AI, which has followed the typically diffused, halting pattern of most other technological transformations. AI spans disparate techniques – such as machine learning, pattern recognition, and natural language processing – and has wide-ranging applications. Their common feature is mainly aspirational – to go beyond mere calculation to more speculative yet useful inferences and interpretations.
Unlike the Manhattan Project, which proceeded at breakneck speed, AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic-braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses.
Yet AI advances have been gradual and uncertain. IBM’s Deep Blue famously beat world chess champion Garry Kasparov in 1997 – 40 years after an IBM researcher first wrote a chess-playing programme. And though Deep Blue’s successor, Watson, won $1 million by beating two of Jeopardy!’s most successful past champions in 2011, it was a commercial failure. In 2022, IBM sold off Watson Health for a fraction of the billions it had invested. Microsoft’s Office assistant, Clippy, became an object of ridicule. And after years of development, autocompleted texts continue to produce embarrassing results.
Machine learning – essentially a souped-up statistical procedure that many AI programmes depend on – requires reliable feedback. But good feedback demands unambiguous outcomes produced by a stable process.
Ambiguous human intentions, impulsiveness, and creativity undermine statistical learning and thus limit the useful scope of AI. While AI software flawlessly recognises my face at airports, it cannot accurately comprehend the nuances of my carefully and slowly spoken words.
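The feedback argument can be made concrete with a toy simulation (entirely illustrative, not drawn from the article): a learner that infers a single correct answer by majority vote over feedback signals succeeds almost every time when the feedback is reliable, but does little better than a coin flip when the outcomes it learns from are ambiguous.

```python
import random

random.seed(42)

def learned_correctly(p_correct, n_signals=10):
    """Did a majority vote over n feedback signals, each correct with
    probability p_correct, recover the true answer?"""
    votes = sum(1 if random.random() < p_correct else -1
                for _ in range(n_signals))
    return votes > 0

def success_rate(p_correct, trials=2000):
    # Fraction of independent learning attempts that reach the right answer.
    return sum(learned_correctly(p_correct) for _ in range(trials)) / trials

# Reliable feedback: 90% of outcomes are unambiguous signals of the truth.
print(f"reliable feedback:  {success_rate(0.90):.2f}")
# Ambiguous feedback: outcomes are only slightly better than noise.
print(f"ambiguous feedback: {success_rate(0.55):.2f}")
```

The simulation is a cartoon, but it captures the statistical point: when outcomes are only weakly tied to the truth, no amount of number-crunching rescues the learner.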
Large language models (LLMs), which have become the public face of AI, are not technological discontinuities that magically transcend the limitations of machine learning.
Whereas I found my late-1990s Google searches to be invaluable timesavers, checking the accuracy of LLM responses made them productivity killers. Relying on them to help edit and illustrate my manuscript was also a waste of time. That said, LLM fantasies may be valuable adjuncts for storytelling and other entertainment products.
Perhaps LLM chatbots can increase profits by providing cheap, if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope.
For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralised evolution of technology, laws, and regulations carry on.
-- Project Syndicate
Bhide is a professor at Columbia University