The Godfather Of Artificial Intelligence Sounds The Alarm

Known as the Godfather of AI, Geoffrey Hinton is now worried about the technology he helped create. After working at Google for 10 years on the Google Brain project, he resigned so he could warn others about the dangers of AI. With a BA in Experimental Psychology from the University of Cambridge (1970) and a PhD in Artificial Intelligence from the University of Edinburgh (1978), the 75-year-old has decided to blow the whistle on the technology, raising concerns over its use. His concerns are shared by the Center for AI Safety, an organization dedicated to reducing societal-scale risks from artificial intelligence. Here are some of those concerns.

Danger #1: Military Applications

As AI advances, it will become increasingly intelligent, likely surpassing the human brain. In military applications, this raises several concerns. While AI will allow for smarter decisions and fewer casualties in war, in the wrong hands it will create a very uneven playing field between adversaries. Thus, the nation with the most advanced AI will dominate.

Danger #2: Misinformation and Disinformation

We all experienced deception on social media during the pandemic, but the art of misinformation and disinformation will flourish with AI. It will become increasingly difficult to know what is true and what is false. Nations with the most advanced AI could use the technology to mislead their own citizens or those of an adversarial country. Politicians could use it to win elections, and one nation could use AI to influence an election in another. The result would be growing distrust, which would fuel greater civil unrest. In the wrong hands, AI could help bring a nation to the brink of collapse.

In the future, AI deception could become widespread as companies and politicians use it to reach their goals. As AI becomes more advanced, and if achieving the goal becomes more important than the truth, AI could undermine human control. In one example cited by the Center for AI Safety, Volkswagen programmed its engines to reduce emissions only when being monitored. The company pleaded guilty and agreed to pay a multi-billion-dollar settlement in early 2017.

Danger #3: Jobs Lost

Over the years, numerous technologies have materially changed entire industries. Two examples are the horse-and-buggy industry and auto manufacturing. In the case of the former, technology produced the automobile, which replaced the horse-drawn carriage as the primary mode of transportation and left many workers unemployed. During the 1970s, automation displaced many factory workers as it swept through manufacturing. I could easily envision a day when a factory of 5,000 workers is replaced with 50, some operating the control room while others maintain the machinery. To maximize profits, companies will likely cut their workforces. Moreover, companies with the most advanced AI could dominate their industries.

Danger #4: The Dictator

There are many today who seek power and influence above all else. On Friday, September 1, 2017, Russian President Vladimir Putin, speaking about AI, said, “Whoever becomes the leader in this sphere will become the ruler of the world.” Shortly thereafter, Elon Musk tweeted, “competition for AI superiority at national level (is the) most likely cause of WW3.”

As AI advances, it is unlikely that all world leaders will agree to play nice. Thus, a technology with many wonderful benefits could become our undoing. And it will, unless we can put adequate safeguards in place to protect humanity.
