Understanding the fear of AI

September 10, 2020 | Artificial intelligence, Blog, Data analytics, Smart spaces


AI has become increasingly prominent in the public eye. People are starting to take an interest in both its inspiring capabilities and the ever-present fears that surround it. Because we use AI technology in the illumenai platform, we wanted to examine and address some of the fears that surround this emerging technology.

If you weren’t already aware, Bill Gates, Elon Musk, Stephen Hawking, and the band Double Experience have all voiced concerns about AI technology. The major points they raise against AI are reasonable, and we’ll cover some of them shortly. First, though, it’s crucial that we all share an understanding of what AI is and how it works.

AI is, in essence, our attempt to make machines think the way we do. In practice, AI is software capable of making decisions that depend on the parameters it is given and on its source code. This definition is broad because AI can be applied broadly. For example, at illumenai we use AI to analyze data, control lighting and temperature, and actively disinfect areas. A popular use for simple AI is in chatbots like Replika, which analyze user input and generate a response. Today’s AI is a Narrow or Weak form of AI, able to perform only a limited set of predetermined functions. Eventually we may see AI surpass humans in intelligence, but that’s going to take a while.
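To make that definition concrete, here is a minimal sketch of narrow AI in Python. The keyword rules are hypothetical and invented for this post; they are not how Replika or the illumenai platform actually works, but they show the basic loop of analyzing input and producing an output determined entirely by parameters and source code.

```python
# A minimal sketch of "narrow" AI: a rule-based chatbot whose every
# decision is fully determined by its parameters and its source code.
# The keyword rules below are hypothetical, made up for illustration.

RULES = {
    "lights": "Dimming the lights to your preferred level.",
    "temperature": "Setting the thermostat to 21 °C.",
    "hello": "Hi there! How can I help?",
}

def respond(user_input: str) -> str:
    """Analyze the input and produce an output based on fixed rules."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # Narrow AI can only do what it was programmed to do.
    return "Sorry, that's outside my predetermined set of functions."

print(respond("Can you adjust the temperature?"))
# -> Setting the thermostat to 21 °C.
```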

Finally, there are four categories that AI falls under: Reactive, Limited Memory, Theory of Mind, and Self-Aware. Techopedia explains this through a helpful visualization: “…an AI-driven poker player. A reactive machine would base decisions only on the current hand in play, while a limited memory version would consider past decisions and player profiles. Using Theory of Mind, however, the program would pick up on speech and facial cues, and a self-aware AI might start to consider if there is something more worthwhile to do than play poker.”
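The difference between the first two categories is easy to see in code. The sketch below is a toy, not a real poker engine: the hand strengths and the bluff-tracking heuristic are invented for illustration, but they capture the distinction between a stateless reactive machine and one with limited memory.

```python
# A toy illustration of the first two categories, using the poker
# analogy. Hand strengths (0.0 to 1.0) and the bluff heuristic are
# invented purely for illustration.

def reactive_bet(hand_strength: float) -> str:
    """Reactive: decides from the current hand only; keeps no state."""
    return "raise" if hand_strength > 0.7 else "fold"

class LimitedMemoryPlayer:
    """Limited memory: also considers the opponent's past behavior."""
    def __init__(self):
        self.opponent_bluffs = 0
        self.hands_seen = 0

    def bet(self, hand_strength: float, opponent_bluffed: bool) -> str:
        self.hands_seen += 1
        if opponent_bluffed:
            self.opponent_bluffs += 1
        bluff_rate = self.opponent_bluffs / self.hands_seen
        # A frequent bluffer makes betting a weaker hand worthwhile.
        threshold = 0.7 - 0.3 * bluff_rate
        return "raise" if hand_strength > threshold else "fold"

player = LimitedMemoryPlayer()
print(reactive_bet(0.6))                       # -> fold
print(player.bet(0.6, opponent_bluffed=True))  # -> raise, after a bluff
```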

The first fear is that AI will make poor decisions, and it has two main parts, both equally important. First, no system is perfect, and AI is no exception. There are bound to be examples of poor decision making by AI, so how can we trust it to make the right call? Second, as AI becomes more complex, it also becomes less capable of explaining its decisions. This is especially problematic when the machine makes a poor decision, because it obscures the line of reasoning that led to the failure and makes troubleshooting more difficult.

The best way to address this is to ensure that the process is overseen by experts. In fields like healthcare, where mistakes can be fatal, AI should suggest a course of action rather than dictate one. The expert receives advice from the machine but still makes the final decision. This way, if something does go wrong, it can be corrected before any action is taken.

The second fear, that AI will take our jobs, is a very real possibility, but things may not be as dire as they seem. By its nature, technology is created to automate systems and reduce human input and effort. When computers first became commercially available, people worried about losing their jobs then, too. Certain parts of the workforce were replaced, but over time the technology created significantly more jobs than it eliminated.

Even so, it isn’t helpful to view AI as a “replacement” for humans. Instead, we need to think of it as a tool, like a forklift or a dishwasher, that helps us work smarter. The work done by humans will change, but the need for work done by humans is not going away.

The final fear, that AI will turn against us, owes a great deal to science fiction, where AI is a common concept in everything from The Matrix to Blade Runner to 2001: A Space Odyssey. Setting aside works like Frank Herbert’s Dune, where thinking machines have been outlawed entirely, a plot involving AI almost always has the AI turn evil (like Skynet).

To be blunt, the current forms of AI we use are simply not capable of going against their programming or being malicious. Looking to the future, when AI could match or surpass our intelligence, we can still avoid being overpowered by giving AI positive traits. One approach to developing advanced AI is neuroevolution: essentially, you mimic the process of evolution observed in biological life, using virtual environments and tasks to push the AI to evolve cognitive abilities. Considering that our own ethics and morality are products of evolution, the same process could plausibly evolve AI to the point of sharing our ethical and moral codes. AI researcher Arend Hintze explains the process in simple terms: “We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty, and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.”
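Hintze’s idea can be illustrated with a heavily stripped-down sketch. Real neuroevolution evolves the weights of neural networks; here each agent is reduced to a single “cooperation” parameter, and the fitness function is invented to reward kind behavior, as he suggests.

```python
# A heavily simplified sketch of the neuroevolution idea: a population
# of agents (each just a single "cooperation" parameter here) is
# selected and mutated so that kind behavior earns a reproductive
# advantage. The fitness function is invented for illustration.

import random

def fitness(cooperation: float) -> float:
    """Toy environment in which cooperative behavior pays off."""
    return cooperation * (2.0 - cooperation)  # increasing on [0, 1]

population = [random.random() for _ in range(20)]

for generation in range(50):
    # Selection: the kinder half of the population survives...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and reproduces with small random mutations, clamped to [0, 1].
    children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                for p in survivors]
    population = survivors + children

print(f"Average cooperation after evolution: "
      f"{sum(population) / len(population):.2f}")
# Tends toward 1.0, because this environment rewards cooperative agents.
```

The design choice is the whole point: whatever trait the environment rewards is the trait that survives, so an environment built to reward kindness breeds kind agents.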

To conclude, we shouldn’t be quite so scared of AI. We should always ask questions and voice our concerns, but we should not hold back the evolution of artificial intelligence because of those fears. Most likely, the AI you work with will be simple and easy to use, and you won’t even associate it with the evil AI of sci-fi infamy.

If you don’t believe me, read this article in The Guardian, written by an AI, which assures us that it comes in peace.

However, it can’t hurt to be nice to your computer…
