In the last few years, artificial intelligence and machine learning have taken a big leap in sophistication. AI-powered technologies already run many things we take for granted, from GPS navigation apps to financial trading. Many of the largest tech companies, like Google and IBM, are pouring billions of dollars into AI and machine learning research. The International Data Corporation (IDC) projects that worldwide spending on AI systems will hit $12.5 billion in 2017, an increase of nearly 60% from 2016, and that this global spending will grow at a compound annual growth rate of 54.4% through 2020, when total revenues will reach $46 billion.
What was once in the realm of science fiction has become a near-certain reality. At present, many AI applications are designed for specific tasks, which keeps the technology dependent on humans. But companies are also trying to create artificial intelligence that is closer to human intelligence, and it is doubtful they will stop there. Harnessed properly, this exponential change could hold the answers to many of our big problems.
But the rate of technological change we have experienced in the last 10 years now exceeds our ability to adapt, as individuals and especially as governments. So much so that billionaire techies Elon Musk and Bill Gates have expressed fear of AI advancing to the point where artificial minds exceed the abilities of man. And physicist extraordinaire Stephen Hawking has famously stated that the rise of artificial intelligence “could spell the end of the human race.” Legendary science fiction author Isaac Asimov foresaw these ethical dilemmas and introduced The Three Laws of Robotics in his 1942 short story “Runaround”. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Now, as we see major advancements in AI and machine learning every day, visionaries are growing more concerned about putting safeguard guidelines in place to ensure the safe and beneficial use of direct brain-machine interaction. A recent paper from the Wyss Center, “Help, hope and hype: ethical dimensions of neuroprosthetics,” published in Science, argues that a semi-autonomous robot must have a reliable control or override mechanism. Without such an override, a person might be considered negligent if, for example, a robot they were controlling dropped a baby it had picked up. The authors propose that any semi-autonomous system should include a form of veto control – an emergency stop – to help overcome some of the inherent weaknesses of direct brain-machine interaction. Professor John Donoghue, Director of the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, comments in the paper: “We don’t want to overstate the risks nor build false hope for those who could benefit from neurotechnology. Our aim is to ensure that appropriate legislation keeps pace with this rapidly progressing field.”
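To make the veto-control idea concrete, here is a minimal sketch of how an emergency-stop gate might sit between an autonomous command source and its actuators. The paper proposes the principle, not an implementation; the class and method names below are purely illustrative assumptions.

```python
# Illustrative sketch (not from the paper): a semi-autonomous controller
# whose commands pass through a human-triggered override gate.

class VetoController:
    """Gates autonomous commands behind an emergency-stop override."""

    def __init__(self):
        self.stopped = False   # set True once the human veto is engaged
        self.log = []          # record of executed and vetoed commands

    def emergency_stop(self):
        # The reliable override the authors call for: once engaged,
        # no further autonomous commands are executed.
        self.stopped = True

    def execute(self, command):
        # Every command checks the veto gate before acting.
        if self.stopped:
            self.log.append(("vetoed", command))
            return False
        self.log.append(("executed", command))
        return True


controller = VetoController()
controller.execute("grip_object")   # runs normally
controller.emergency_stop()         # human veto engaged
controller.execute("lift_object")   # blocked by the override
```

The key design point is that the veto is checked on every command, so a single human action halts all subsequent autonomous behavior rather than relying on the system to interpret a countermanding instruction.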