Preparing for dawn
In the coming decades, artificial general intelligence could present us with a tightrope to infinity. Will we be prepared for a safe crossing, or do we risk a plunge into oblivion?
Without tip-offs from time travellers, a genuine insight into the future that can—and should—change the way we approach something that does not yet exist is an extremely rare gift. The prediction of artificial general intelligence (AGI)—the kind of AI that can apply itself to any task with superhuman proficiency—is such an insight. We’ve already invented narrow artificial intelligence—the type of AI that can perform only the specific function it was designed for—and AGI would be the natural next step. So, it seems inevitable that if we can invent AGI, we will.
But the same superhuman proficiency that makes AGI so enticing also means that any unintended consequences could be severe enough that a single misstep is more than our species can afford. Some of our greatest minds, such as tech billionaire Elon Musk, neuroscientist and philosopher Sam Harris, and physicist Max Tegmark, are already making worried noises about what such consequences might be. And if we are to prevent those consequences before they occur, such speculation is vital.
What might be
For instance, much debate has been focused on the singularity: the moment at which an AGI becomes capable of self-improvement. Given the superhuman speeds involved, the singularity could theoretically precipitate an intelligence explosion—a runaway effect in which the AGI makes extremely rapid upgrades to itself, achieving the equivalent of thousands of years of human progress in as little as a few hours. Whether this turns out to be the best or worst thing that has ever happened to us could depend on whether we have already solved the alignment problem: the challenge of ensuring that an AGI’s goals remain reliably aligned with our own.
In 2003, philosopher Nick Bostrom put forward the prototypical thought experiment1 used to illustrate the problem. The experiment revolves around a paperclip maximiser—an AGI programmed with the seemingly benign goal of creating as many paperclips as possible. To a human, the caveat that there are some things we’d rather not have turned into paperclips is so obvious as to make articulating it a waste of oxygen. But without such caveats, an AGI paperclip maximiser would convert our atoms into paperclips at the first opportunity. This is, of course, not intended to be a realistic scenario. But to say that we would never give our AGI such an ill-considered goal would be to miss the point. For when dealing with a superintelligence, even a tiny misalignment between its goals and our own could translate into an existential threat.
To mitigate that risk, it has been suggested that we might, so to speak, hermetically seal the AGI off from the outside world by confining it to a computer with no Internet access. But this might not entirely solve the puzzle that has become known as the containment problem. For instance, just as we can exploit an infant’s inability to understand object permanence in a game of peekaboo, an artificial superintelligence could use psychological techniques far beyond our comprehension to manipulate us into voluntarily doing its bidding.
These are just a selection of the theoretical consequences that AI researchers are debating. What should we make of such worries?
The carrot and the stick
Opinion is divided, even at the highest levels, as to which, if any, of these concerns will prove justified. Right now, everything is hypothetical, so alarmism would be premature. That said, we don’t need to know that our house will one day burn to the ground for us to buy insurance. These concerns might be theoretical, but they do not seem impossible, and just one of them could bring about a catastrophe. It would, therefore, be reckless to take a cavalier approach to AGI development.
The immediate problem is that the incentives are almost perfectly aligned to encourage just such an approach. The first business or government to bring an AGI online would have superhuman levels of intelligence at its disposal, so it’s not difficult to imagine how such a prospect might raise one’s appetite for risk. Thus, if we want to shift the incentives so that they drive a “safety-first” approach, we need to pull the lever of investment.
Investors will therefore increasingly want to see that developers are sponsoring non-profit AGI safety research organisations such as OpenAI and creating internal safety and ethics committees to implement their recommendations.
But in the long term, this may not be enough. Insufficient safety measures would represent such a risk to the public that some sort of industry regulation seems likely, if not inevitable. This could well include measures that allow investors to be penalised for safety failures and public incidents.
This doesn’t mean that profits will be hard to come by. Putting a man on the moon might itself have been an “all or nothing” goal, but the innovation and industry required to make a serious attempt would have been economically valuable whether or not anyone ever made one small step. Similarly, if AGI is to be realised, companies will need to make significant advances in narrow AI along the way. And as we have seen, narrow AI is nothing if not profitable.
Spreading the wealth
If we succeed in developing AGI safely, regulation will also be required to ensure that it’s used for the good of the many, not the few. For if AGI is developed by a single company that somehow retains exclusive control over it, we would, to say the least, have a problem. Worst case, this company could, quite literally, control the world and everything in it, with its relatively few investors as the sole beneficiaries. And even if the company has no interest in absolute control, its model for selling AGI access could still take inequality to new extremes. It could name its price, and the potential economic benefits would be large enough to ensure that some would, quite rationally, pay it.
On the other hand, ubiquitous AGI access could render inequality a non-issue.
That could mean new food production methods that solve world hunger, innovative building methods that create easy access to housing, and vastly more effective drugs. AGI would also make production itself far more efficient, creating the kind of abundance that could drive the price of individual products down to the point where everyone can afford more than they need. Although inequality will always exist to some degree, such a world could see the economic pie become so large that even the poorest people would be rich by today’s standards. Indeed, the reduced need for human labour could drive us toward a new economic system, in which the very idea of “affording things” no longer applies. With that kind of potential, an investment in open-access AGI could be the ultimate investment in the future.
Arthur C. Clarke said that “any sufficiently advanced technology is indistinguishable from magic”2, and AGI would be the ultimate demonstration of that insight. Nevertheless, superintelligence isn’t sorcery, and the laws of physics are clear. As such, any increase in economic growth will necessitate a corresponding increase in the consumption of energy and material resources. That means sustainability will be non-negotiable.
Finding ourselves
Finally, there’s the profound question of how we want superintelligence to change us. In principle, AGI could remove any need for humans to carry out any kind of physical or intellectual labour, leaving us to pursue lives of leisure. After all, the only way to perfectly level the economic playing field is to remove it. But are we psychologically capable of living in a world in which meaningful achievement is a contradiction in terms? If not, we need to think carefully about how much of the economy we ought to place in digital hands—enough to create a world we really want to live in, but not so much that we die of comfort. AGI could give us everything we want. But only we can decide what we truly need.