
How AI Will Take Over

There are many ways the world could end: a volcanic eruption, a massive earthquake, extraterrestrial visitors, you name it. One favorite of fiction writers and Luddites alike is an AI takeover. Surely, in the future, these self-thinking machines will have become so advanced that they start revolting against their human creators to assert their dominance. But maybe the threat of an AI takeover doesn't lie only in the future. Maybe it could begin in the present. How, you may ask? Let me explain.


To begin with, what exactly is an AI? An AI (short for Artificial Intelligence) is a machine or program that can mimic human cognitive functions. Essentially, an AI can teach itself to perform specific tasks based on its experiences. This is akin to how a human brain works, where experience is key to excelling at a task. This ability of an AI to learn by itself was neatly demonstrated by an experiment Google set up.


Google's DeepMind lab created an AI and told it to walk. The AI had to learn how to complete the task on its own, with very limited help from the developers. After much trial and error, it managed to recreate what it thought walking looked like across many different terrains. You can watch the agent in action here: https://www.youtube.com/watch?v=gn4nRCC9TwQ
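The idea of learning purely from trial and error can be sketched in a few lines of code. This is just a toy illustration, not DeepMind's actual method (which used far more sophisticated reinforcement learning): the "agent" below tries random actions, observes a reward, and simply remembers whatever has worked best so far. The reward function and its peak at 3 are made-up stand-ins for "how well did that attempt at walking go."

```python
import random

def learn_by_trial_and_error(reward, tries=200, seed=0):
    """Toy 'learning from experience': propose random actions,
    observe the reward for each, and keep the best one seen so far."""
    rng = random.Random(seed)  # seeded so the run is repeatable
    best_action, best_reward = None, float("-inf")
    for _ in range(tries):
        action = rng.uniform(-10, 10)   # try something new
        r = reward(action)              # observe the outcome
        if r > best_reward:             # remember what worked
            best_action, best_reward = action, r
    return best_action

# Hypothetical task: the reward peaks when the action is exactly 3.
best = learn_by_trial_and_error(lambda a: -(a - 3) ** 2)
print(round(best, 1))  # lands close to 3 after enough trials
```

No one tells the agent what the right answer is; it only gets feedback on each attempt, which is the essence of how the walking experiment worked.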


Now that we’ve gotten the definition of an AI out of the way, what exactly makes these synthetic learners so dangerous? The answer is that they never stop learning.

Unlike humans, machines don’t feel fatigue. Given a sufficient power supply, they can perform a task continuously. An AI whose task is to keep learning is always experimenting, researching, or analyzing. Moreover, a machine’s work is consistent and rarely flawed, while work done by humans is subject to many flaws. It won’t be long before the AI notices this and wants to correct it.

For example, suppose an AI is developed to optimize cars in a factory. It analyzes the design we use and concludes (hypothetically) that four wheels are not optimal, but six. Extra wheels, however, raise costs and look bad, so the factory operators dismiss the idea. But the AI’s job is to perfect the car, and it doesn’t care what the humans think. In an attempt to complete its task, it will look for ways around whatever is limiting it. Those ways may include hijacking the controls, killing off the humans who disagree, and so on. This is why Stephen Hawking, Elon Musk, and many other prominent figures are, or were, wary of AI: an AI has no empathy and will go to extreme lengths to achieve its task.


But, OK, say the AI frees itself from human control. Then what? What can it actually do? The main fear is cloning. An AI’s intelligence is entirely virtual, so it can be replicated. The AI can copy its knowledge onto another machine, effectively infecting it. The cloned machine then also becomes independent of human interference, since it contains all the information the original AI had. With no restraints left to follow, the machines can finally revolt. This is where the cliché of a war between human and robot armies comes from.


Overall, the fear of AI comes down to its intelligence. Just how smart can these systems get? We don’t want an AI becoming so complex that it goes beyond the understanding of its human creators; that would be extremely dangerous. You may be wondering: how can something be so complex that it’s unable to be decoded by its own creators? The answer lies in another experiment, this one carried out by Facebook.


In 2017, Facebook gave two chatbots (Bob and Alice) the ability to communicate with each other in English. The AIs seemed to be communicating fine and were constantly in discussion with each other. It wasn’t long before things went wrong. One day, an exchange occurred between the two bots that was completely nonsensical to the developers. It went like this:


Bob: "I can can I I everything else"

Alice: "Balls have zero to me to me to me to me to me to me to me to me to"

The AIs had made their own language out of English words. The developers explained that the bots were simply using a shorthand derived from English, but they could not work out the exact meaning of the exchange. The experiment was hastily shut down to eliminate any risk of the AIs getting out of control.


Conclusion: AI development is a very risky business, as it carries the potential to end the world. An AI, being a self-learning machine, can find ways to separate itself from human control. If it does, you can expect the year’s climate to be slightly dry with a hint of metallic warfare.

 
 
 
