Artificial Intelligence is no longer the stuff of science fiction. In fact, it’s already in your home.
Have you ever thought about Skynet from the Terminator movies or HAL from 2001: A Space Odyssey becoming real? We might soon be on the verge of developing artificially intelligent software which could be just as powerful as these villainous, intelligent machines or could even surpass them. Should we be worried about intelligent machines taking over the world? Or are the concerns being exaggerated?
What is Artificial Intelligence?
Artificial Intelligence is a term used to describe systems that can mimic human behaviour and perform tasks we normally associate with people. Some of these tasks include recognising speech, speaking, identifying faces, and so on. The most common way to build AI today is with a technique called Machine Learning, often using software structures loosely inspired by the brain, known as Artificial Neural Networks.
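To make the idea of machine learning a little more concrete, here is a minimal, illustrative sketch of the simplest artificial neuron, a perceptron, learning the logical AND function from examples. The function names, learning rate, and training data are all invented for this sketch, not taken from any particular system described above.

```python
# A minimal sketch of machine learning: a single artificial neuron
# (a perceptron) that learns the logical AND function from examples.
# All names and numbers here are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Prediction: fire (1) if the weighted sum crosses zero.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Nudge the weights towards the correct answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: the truth table for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    print(x1, x2, "->", predict(w, b, x1, x2))  # matches AND: 0, 0, 0, 1
```

The point of the sketch is that nobody writes the rule "output 1 only when both inputs are 1" by hand; the system infers it from examples. Real AI systems work on the same principle, just with millions of neurons and far larger datasets.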
This type of AI already exists! You don’t need to have human-level intelligence in a system to have it classified as artificially intelligent. Weak AI is AI that can perform specific tasks. These systems are only good at the tasks they are designed to perform and nothing else. Digital personal assistants and automated customer service systems are examples of weak AI in use today and are very effective implementations of AI. We often underestimate weak AI, but Google’s DeepMind has developed an AI system called AlphaGo which managed to beat Lee Sedol, one of the best Go players on earth. This shows that even weak AI is capable of surpassing human ability in some domains.
When thinking about AI, we usually think of strong AI. These are systems that would be able to perform several complex tasks rather than just one or two. They would be able to mimic human behaviour, or even surpass humans, while performing several general tasks such as speaking, controlling a spaceship or recognising people and their voices. Fundamentally, to be in this category, the system must be able to think and behave like us or better than us.
This type of AI does not yet exist, but it may well in the future. Skynet and HAL are examples of strong AI.
If AI does indeed reach strong levels in the future, we could face some problems. Some influential people, such as Stephen Hawking and Elon Musk, have warned against the use of AI in weapons and other dangerous tools. They are right to do so, because we cannot let human lives be endangered by artificial software. That would be unethical and risky, to say the least. These concerns have cast a shadow over artificial intelligence, and some think that its development should be halted for our safety. This approach, in my opinion, is completely wrong. Stopping the development of AI does not stop its misuse. Because of the amount of data collected online, even the AI systems available today could be used in threatening ways. The collective voice of the scientific community supporting the further development of AI will only benefit us as time passes. What should be carefully regulated, however, is the way we intend to use AI.
Scientists strive to invent something new and powerful, but it is up to us to decide if we use or misuse it. Nuclear power is a prime example of this, as we can use it for electricity supply or for bombs. I believe the future of AI is bright and hope that we get to see strong AI in our lifetimes, something which is highly likely. All we must do is keep it on a leash in terms of its capacity to cause harm. If we can direct the capabilities of strong AI towards inventions and problem solving, we might even make a huge leap in technological progress.
Image: Mathew Hurst/flickr