Months before renowned physicist Stephen Hawking passed away, he issued a warning about artificial intelligence (AI).
Speaking at the Web Summit technology conference in Lisbon, Portugal, that November, Hawking noted that he was on record saying there is no difference between what can be achieved by a biological brain and by a computer.
Therefore, if the human mind has “unlimited potential” in its ability to develop, “computers can, in theory, emulate human intelligence and exceed it.”
Hawking admitted there's no way to predict what can be achieved with AI, but he reasoned that it's possible some of the damage that's been done to the natural world can be undone through its use.
He called the creation of effective AI potentially the “biggest event in the history of our civilization,” but said its uncertain future raises the question of whether it will be the best or the worst.
“So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it,” Hawking explained.
Whether AI proves beneficial or harmful to the future of humanity, the physicist had a warning for everyone involved in its development.
“Unless we learn how to prepare for — and avoid — the potential risks, AI could be the worst event in the history of our civilization,” he said. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
Hawking explained that there were already concerns that machines could take over the work done by humans and “swiftly destroy millions of jobs.” AI could also develop a “will of its own” in conflict with that of humans.
“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity,” he said.
Hawking called for employing best practices and effective management in AI development, and for preparing for its potential consequences well before they arrive.