Artificial Intelligence: The Good, The Bad And The Ugly. Part 2: The Ugly


Posted on May 9, 2018 by Yann Tromeur

And you thought the last part was pessimistic. AI has implications not only for jobs and societal matters but also for entire industries, right?

Let’s do some IT history here. Have you heard of the ARPANET? It was a US military project during the Cold War, designed to prevent the loss of communications in case of a Russian nuclear attack. It led to the invention of the Internet.

Do you see where I want to lead you? The weapons and military industry has been a leader in new technologies for decades (centuries?); our appetite for destruction and the industrialization of war has created a colossal business sector: 1.7 trillion USD in global expenditure in 2016.

And we know that “The global Artificial Intelligence & Robotics in the Defense industry market is valued at US $39.22 billion in 2018”, as referenced in this article.

We are talking real-life Terminators here. This is not sci-fi.

Unmanned drones, autonomous warships and AI-driven sentry guns are already used by various armies around the world.

Fewer human flaws and more machine efficiency help improve military defense systems.

And it does not stop there: what is our greatest enemy apart from global warming? Terrorism. I told you this was going to be bleak. Terrorists use all kinds of weapons and all kinds of methods to carry out their attacks. You can use drones instead of planes to bring down buildings, or replace a human-driven car with an autonomous vehicle to drive into a crowded street.

Yes, that is how AI can be used as well. But for now, those attacks are still controlled by humans.

We know about the person killed by an autonomous car, and this has to be balanced against the 1.25 million annual road fatalities worldwide caused by human error.

In theory, a machine makes better decisions: based on facts, using an emotionless algorithm.

Scientifically speaking, it makes sense.

To get back to those weapons used by nations’ military forces: would you agree that they would be more reliable than humans when deciding whether or not to kill? Decision-making is one application of AI. Systems capable of analyzing vast pools of data and providing the best solution to a problem already exist.

They are already used for strategic decisions, and I mean not only business strategy.

In their short essay “Will Artificial Intelligence Undermine Nuclear Stability?”, Andrew J. Lohn and Edward Geist ask about the “strategic rivalries from (our) nine nuclear powers” and their temptation to build machines that could “act as strategists”.

Is it possible that one day a machine could decide on a first nuclear strike to pre-empt a predicted attack, based on analysis provided by another AI?

With governments around the world making investments in AI (for example, China’s 147.8 billion USD plan through 2030), this is not just to produce Alexa-type devices.

But weapons are not the only things that can cause nationwide damage…

The Cambridge Analytica scandal proved AI could help manipulate individuals: the operation used available technologies such as big data and machine learning to influence voters in a modern democracy. Political bots simply spread the “fake news”. AI guilty all the way. And the use of deepfake videos and bot-written articles will only expand; how can we know the truth when reality itself can be so “human-tificially” forged?

Would that even matter at some point in the future?

Behind all those examples of techno-deviance, it is our own nature we should question. We are the creators. We are responsible.