A Touch on The Creative Side: Why It’s Too Soon to Fear AI Capabilities

July 5, 2018

In one of his last public appearances, Prof. Stephen Hawking expressed real fear that advances in AI will lead to world chaos. But as AI capabilities are still very much limited, it’s up to mankind to start harnessing its benefits in our favor. Are we capable of doing so?

In one of his last public appearances, Prof. Stephen Hawking, who passed away in March, expressed real fear that AI could bring disaster upon mankind. In a speech at the Lisbon Web Summit last November, Hawking said that well-harnessed AI could end poverty, eradicate disease and even reverse the damage we’ve inflicted on Planet Earth. But in his mind, that outcome depended too heavily on people who hadn’t exactly proven themselves in the past. “AI could develop a will of its own,” Hawking said in his signature computer-generated voice. “The rise of AI could be the worst or the best thing that has happened to humanity.”

It’s not the first time that scientists have tried to protect the universe from itself. But Hawking wasn’t talking about risks a generation or two away – in his telling, it’s a matter of decades. And despite the popular notion that AI is on a fast track of rapid evolution, this is not the case: we’re talking about a much slower, more premeditated pace of development. Is it a breakthrough? By all means, yes, but not the kind that is going to change our lives on a large scale any time soon.

Unlike environmental issues, where the impact of near-term solutions on future generations is more concrete – slowing the pace of climate change, say, or curbing our over-consumption of red meat and fossil fuels – horror scenarios involving AI originate where imagination runs high: in sci-fi movies and books. A brilliant New Yorker cover portrayed a homeless man lying in a New York street surrounded by robots and a cute robo-dog. High impact, definitely, but still science fiction. No more, no less.

The even spookier story around AI – and don’t confuse AI with automation, or even with ‘computers’ – concerns its effect on the labor market. It’s important to stress that this automation began ages ago and will continue on a much larger scale as a natural part of the technological revolution. Two hundred years ago there were cotton pickers, fifty years ago there were milkmen, and today everyone is a computer engineer. The ‘robots’ are already here. AI and its derivatives will open up new job opportunities while other professions vanish altogether – perhaps cashiers and taxi drivers will be the first to go. But we are not talking about a large-scale unemployment crisis for mankind.

The main issue surrounding the development of AI is the fact that computers still don’t have the capability to imagine, to guess, to create. They simply don’t address the creative side of things. Computers can’t build a narrative. Computers don’t even have the intuition of a three-year-old.

Imagination and intuition are at the core of our unique power as humans. In this context – when machine learning scientists grapple with these generalization abilities – there are mathematical proofs of hard limits. The most famous is Rice’s Theorem, which shows that no algorithm can decide any non-trivial question about what another program will do – an absolute limit on what a machine can determine, let alone learn.
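The flavor of that limit can be made concrete with the classic diagonalization trick that underlies results like Rice’s Theorem. The sketch below is a hypothetical illustration (not from the article): any claimed “does this function halt?” oracle can be defeated by a program built to contradict the oracle’s own verdict about itself.

```python
def counterexample(claimed_halts):
    """Given any claimed halting-oracle, build a function that defeats it.

    claimed_halts(f) is supposed to return True iff f() eventually returns.
    """
    def g():
        if claimed_halts(g):   # oracle says "g halts"...
            while True:        # ...so g loops forever, proving it wrong
                pass
        # oracle says "g loops", so g returns immediately, proving it wrong
    return g

# A naive oracle that claims every function runs forever:
always_loops = lambda f: False

g = counterexample(always_loops)
g()  # returns at once -- g halts, contradicting the oracle's verdict
```

Swap in any other total predicate for `always_loops` and the constructed `g` still contradicts it on itself: if the oracle says “halts,” `g` loops; if it says “loops,” `g` halts (we only execute the halting branch here). This one-line diagonal argument is the core of the halting problem, from which Rice’s Theorem follows.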

Humanity has disappointed itself many times throughout history. Do the atomic bombs kept in undisclosed shelters around the world – which many hold responsible for the relative peace of recent decades – say otherwise? Will we know how to channel these massive capabilities to our benefit? We can already see evil forces trying to exploit AI’s many shortcomings, and as more decisions and calls are handed to machines, the appetite of these forces will only grow. Another heavy load on the shoulders of mankind. As if we hadn’t had enough of that already.