J J Zavada
Aug 21, 2020


The real question is not whether humanoids will assist humans in the future. The real question is whether the collective artificial intelligence of these future devices will be our servant or our master.

"Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us."

--Max Tegmark, President of the Future of Life Institute



Written by J J Zavada

Global Village Observer: I journal the disruption of socio-economic systems caused by our transition from the Industrial Park to the Global Village.
