AI could potentially overpower humankind.
Artificial intelligence may be a creation of the human mind, but it has the potential to take over civilization. SpaceX founder and Tesla CEO Elon Musk, who has access to some of the most advanced AI technologies, has warned that despite the immense possibilities of the technology, which powers everything from smartphones to smart homes, it is the biggest risk human civilization currently faces, and that the best way to tackle it is through regulation.
Speaking at a meeting of the National Governors Association, Musk struck a cautionary tone on the subject of artificial intelligence. “I have access to the very most cutting edge AI, and I think people should be really concerned about it,” he said, describing the technology as “the biggest risk we face as a civilization.”
At the core of artificial intelligence is machine learning, which essentially takes sample data (fed by humans), develops its own understanding of it, and functions accordingly. Take a smart car powered by artificial intelligence: the future envisions a situation where little or no human intervention is needed for driving or parking on the roads.
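To make that idea concrete, here is a minimal, purely illustrative sketch of the learning loop described above: instead of being programmed with an explicit rule, the program is handed labeled samples and derives its own behavior from them. The sensor readings, labels, and the simple nearest-neighbour rule are all hypothetical, chosen only to show the pattern of learning from examples.

```python
def nearest_neighbour(samples, labels, query):
    """Classify `query` by copying the label of the closest known sample.

    This is one of the simplest forms of machine learning: no rule is
    hand-written; the decision comes entirely from the sample data.
    """
    best = min(range(len(samples)), key=lambda i: abs(samples[i] - query))
    return labels[best]

# Hypothetical training data: an obstacle-distance reading -> action.
readings = [0.1, 0.2, 0.9, 1.0]
actions = ["brake", "brake", "drive", "drive"]

print(nearest_neighbour(readings, actions, 0.15))  # near the "brake" samples
print(nearest_neighbour(readings, actions, 0.95))  # near the "drive" samples
```

Real systems such as self-driving cars use vastly more data and far more sophisticated models, but the principle is the same: behavior is inferred from human-supplied examples rather than spelled out line by line.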
However, it is this very possibility that holds plenty of challenges. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late,” Musk said, adding, “AI is a fundamental risk to the existence of human civilization, in a way that car accidents, airplane crashes, faulty drugs, or bad food were not.”
Robots are already displacing human workers, and according to Musk, AI-enabled robots will eventually “be able to do everything better than us.” As a CNET report quotes Musk, “AI could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information.”
Musk’s comments come at a point when he himself is exploring a rather intimidating form of AI called Neuralink. The technology aims to create an interface between the human mind and the computer. This interface would function as an extension of oneself and might even be a wee bit smarter than the human mind. Meanwhile, tech biggies including Facebook, Google and Apple are also developing artificial intelligence-based products. Recently, Facebook’s AI was discovered to have created its own non-human language, without human help. One can imagine a situation where crucial emails, let’s say one from your boss, are in fact churned out by an AI bot. Scary, isn’t it?
This isn’t the first time someone closely associated with developing a technology has warned about its implications. Stephen Hawking, the British scientist, who himself uses one of the basic forms of AI to communicate with people, warned in 2014, “The development of full artificial intelligence could spell the end of the human race.”
Meanwhile, Bill Gates echoed the sentiments during an AMA in 2015 when he said, “I am in the camp that is concerned about super intelligence,” adding, “First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”
Google and Apple, for instance, are constantly working on improving their voice-based digital assistants, from understanding various accents to suggesting music or places users might like. However, at the moment, there is no cap on the extent to which artificial intelligence can be exploited, nor is there any law that specifies whether to hold the bot responsible or the company building it.
Talking about law, an AI lawyer is already on the payroll at law firm Baker & Hostetler, where it handles bankruptcy cases. Called ROSS, the robot uses the supercomputing power of IBM Watson to scan through huge batches of data and, over time, learn how best to serve its users. It may not be long before these AI lawyers start proceedings on behalf of fellow botkind against the human race.
To avoid such a catastrophe, Musk suggested regulators need to step in on certain AI developments to examine their safety. “You kind of need the regulators to do that for all the teams in the game. Otherwise the shareholders will be saying, why aren’t you developing AI faster? Because your competitor is,” he said.