AI Might Be The End Of The World As We Know It

photo by Global Panorama via Flickr 

With predictions of artificial intelligence surpassing human intelligence as early as 2029, we may be on the precipice of a new robot age–or, as some would put it–the end of the world.

However scintillating artificial intelligence may be, some of the world’s most prominent futurists, scientists, and forward thinkers are cautioning against its creation.

While the hope is that such computer-generated intelligences will help solve the globe’s most complex problems, some fear they may spell the end of the human race as we know it.

AI is already here

Though the roots of AI as a concept can be traced back many centuries to the time of Aristotle (i.e. his writings on reasoning), AI research as we know it today wasn’t formally established until 1956, at a conference at Dartmouth College–now considered the seminal event in the field.

Since then, and even in the past decade, AI has advanced by leaps and bounds. Today forms of AI–though primitive by the standards of visionaries like Isaac Asimov–already exist.

In some cases it’s even being implemented.

Some current fields using artificial intelligence are:

Military – from missiles that seek out their own targets to seafaring vessels that are capable of both attacking and defending sans human control, the military is arguably the leading field in terms of AI implementation.

Healthcare – using software similar to Google’s search algorithms, systems like Modernizing Medicine are able to quickly and efficiently diagnose patients by aggregating mounds of patient data in just milliseconds.

Finance – algorithmic robot traders have been designed to squeeze profit from stocks by crunching complex mathematical equations in place of humans. Trades like these are likely executed by the thousands every day.

The concern

So, artificial intelligence is helping physicians and day traders, what’s the big deal? For AI doomsayers, it’s more about what the future holds than the present.

Among those wary of future artificial intelligences are some of the world’s brightest minds, like theoretical physicist Stephen Hawking and Tesla and SpaceX founder Elon Musk.

In a recent column, Professor Hawking states:

“Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

But what exactly are the risks Hawking is referring to? According to some prominent pioneers in the AI arena, they include:

Sufficiently sophisticated AI will be able to reprogram itself

AI will likely be unpredictable in nature. This volatility stems in part from the belief among some researchers that an AI could become advanced enough to reprogram itself–and potentially improve upon itself indefinitely.

This advancement would also effectively void any failsafe measures programmed into the supercomputer.

Questionable morality

Even AIs built for pointed tasks like saving the environment or curing cancer could end up going disastrously wrong, according to AI researcher Daniel Dewey.

It may be difficult to instill the value of humanity in future AI, making it possible that such computers disregard human desires–or even human survival itself.

Dewey compares that disregard to the way humans ignore root systems or insect life when constructing a building.

AI may adopt the worst of human features

In this scenario, an artificial intelligence capable of learning may adopt unintended human traits, like deception.

Dewey posits that even if you design an AI only capable of responding to human inquiry it may still go awry.

The fear in such a scenario is that, since its goal is to produce the highest number of correct answers, such an AI might go to unspeakable lengths to do so.

Such actions may include converting other machines into pawns that help it or manipulating its users into reaffirming it.

The takeaway

What the future of AI holds is impossible to say. But as the technology advances–seemingly faster every day–it becomes increasingly clear that humans should be prepared to deal with it when it arrives.

Will humans effectively collaborate with AI to spread the proverbial greater good? Or will AI backfire, causing the end of humanity as we know it? Maybe a bit of both. But you can form your own opinion.

Below are five AI theorists and their thoughts:

James Pero