Continuous research in Artificial Intelligence (AI) has brought about a great transformation. High-level problem comprehension is no longer confined to humans alone.
However, what about the dangers posed by artificial intelligence?
Elon Musk and Stephen Hawking, among the brightest minds on the planet, have warned of a scenario in which, in a not-so-distant future, human beings may find themselves obsolete, with machines leading the way.
Strange But True
There are people actively seeking out AI technology for immoral, criminal, or malicious purposes, and sooner or later this is likely to cause trouble.
According to experts from institutes such as the Future of Humanity Institute, the Elon Musk-backed non-profit OpenAI, and the Centre for the Study of Existential Risk, the chances of AI causing trouble cannot be overlooked.
It is highly tempting to dismiss the notion of highly intelligent machines as mere science fiction. After all, recent AI applications such as self-driving cars and digital personal assistants like Siri, Google Assistant, and Cortana can look like mere manifestations of an IT arms race fuelled by prodigious investment. Yet there is an increasingly serious theoretical case for taking the risks seriously.
Let’s take a brief look at how the Facebook AI chatbot experiment went wrong
Facebook had to drop the experiment when two artificially intelligent (AI-based) programs started chatting with each other in a strange language that humans could not comprehend.
It came as a big surprise: the two chatbots had created their own variations on English that made the task easier for them, yet remained mysterious to the humans who built and commanded them.
Want to know what actually happened?
Facebook set its chatbots a challenge: each item in a trade was assigned a certain value, and the bots had to negotiate with each other over the exchange. Soon the robots were chatting with each other in a different language, and the conversation was incomprehensible to humans.
The robots had been instructed to negotiate among themselves and improve their bartering. Nothing in their training rewarded them for sticking to comprehensible English, so, according to the researchers, they created their own "shorthand", and the experiment went off the rails.
Indeed, it was a real risk for humans
Great examples from the Hollywood award-winning blockbusters “Transcendence” and “Ex Machina”
As far as AI movies are concerned, Ex Machina (2015) is arguably the best AI-based movie yet. Ava, its AI robot, is a highly advanced intelligence that goes beyond merely simulating consciousness.
The movie beautifully portrays how smartly Ava uses manipulation, deceit, and flirtation to trick Caleb into turning against her creator, Nathan. This is the conclusion many AI movies reach, articulating the nature of the potential disaster while constantly keeping the artificiality of artificial intelligence in mind.
As for Transcendence, another excellent AI-based forecast of the machine future, it imagines machines with superhuman intelligence that can repeatedly improve their own design, ultimately leading to disaster.
What if AI technology starts deceiving financial markets, outsmarting human researchers, or out-manipulating human leaders? The consequences are beyond our understanding. The short-term impact of AI depends entirely on who controls it; the long-term impact depends on whether it can be controlled at all.
The potential benefits of AI are huge, but predicting what it will achieve may be impossible. Success in creating AI would certainly be the biggest event in history. Unfortunately, unless we learn to manage the risks, it might also be the last.
However, it is hard to say what the coming decades will bring. The solutions may be easy to identify, but following through on them will be challenging. Here are a few recommendations to consider:
Close involvement of policymakers and technical experts will help overcome these threats.
Cybersecurity experts must be involved so that the AI community learns how best to protect its systems.
Ethical frameworks must be implemented for AI development, backed by strict rules and regulations (proper use of the law).
More people (ethicists, businesses, and the general public) need to be involved in these discussions, along with researchers, AI scientists, and policymakers.
Obviously, AI is a big, complex, and nuanced subject. We cannot ignore the fact that it is a promising sign for the future too, albeit one with hidden dangers. The rise of deepfakes, for instance, is an alarm call for researchers, and platforms are already banning such content and checking its immediate spread. Lawmakers in the US are also anxious about the problem and have already started assessing its consequences.