Future Threats Of The Cyber World

The age of Artificial Intelligence is upon us. Artificial Intelligence is already in use, and it will spread into ever broader areas. If you have not noticed before, YouTube tracks what you watch and recommends other videos you might like. If you like a recommended video, you watch it and let the cycle continue.

At some point, you don’t even notice that the topic of the videos has changed. You started with a video about a do-it-yourself project you wanted to tackle, but now you are watching the top 10 goals of the 2018 World Cup.

Thus, you end up losing many minutes, or even hours, in front of your computer, doing whatever YouTube offers you. If you can’t resist watching the recommended videos, it seems like your brain has been hacked, doesn’t it?

Integration of Artificial Intelligence into our lives

In 2016, social media was used in attempts to manipulate public opinion in the United States on contentious issues such as gun control and the presidential election. In these campaigns, autonomous computer programs – bot accounts – were used to tweet and share propaganda.

In 2016, Microsoft created an AI chatbot designed to act like a curious teenage girl and engage in smart conversations with Twitter users. In less than a day, the chatbot, Tay, was displaying extremely racist and sexist behavior.

In 2017, a new technique called Deepfake was introduced for creating fake videos. It combines and superimposes existing images and videos onto source images or videos with the help of deep learning. This led to the creation of fake celebrity pornography and revenge pornography on the internet. Furthermore, it was also used to damage the reputations of well-known politicians.

Comparison of the two studies: the result on the right is from 2017 and the one in the middle from 2018. The background no longer moves. Source: H. Kim et al., 2018/Gizmodo

As of 2018, Deepfake videos are getting harder to differentiate from real videos. This makes the technique highly open to abuse, for instance to spread hoaxes.
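
For a sense of how such face swaps work under the hood, early Deepfake tools were typically built around an autoencoder with one shared encoder and a separate decoder for each identity. Below is a minimal sketch of that architecture in PyTorch; the 64×64 resolution, layer sizes, and training details are illustrative assumptions, not taken from any particular tool.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder behind
# early face-swapping ("Deepfake") tools. Sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a compact latent representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face of ONE specific identity from the latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

# Training pairs encoder+decoder_a on faces of A, and encoder+decoder_b
# on faces of B (reconstruction loss). The swap happens at inference:
face_of_a = torch.rand(1, 3, 64, 64)    # placeholder input
fake_b = decoder_b(encoder(face_of_a))  # A's pose and expression, B's face
```

Because the encoder is shared, it learns identity-independent features such as pose, expression, and lighting, while each decoder learns one person’s appearance; routing A’s latent code through B’s decoder produces the swap.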

All of the examples above have something in common: Artificial Intelligence.

More Malicious Stuff…

Stuxnet – one of the most sophisticated pieces of malware ever written – was released in 2010 to neutralize Iran’s nuclear infrastructure. It was designed to spread like a worm and release its payload only once it determined it was inside the right computer. That is how it stayed unseen while infecting over 200,000 computers.

But how?

As a proof of concept, IBM researchers presented a variation of the WannaCry ransomware that uses deep neural networks to stay hidden and to release its payload only once it detects its target.

This proof of concept, DeepLocker, was integrated into video conferencing software. The malware stayed hidden, showing no malicious behavior, while the software worked normally – so it could plausibly be downloaded and used by millions of users.

Image: Designed by Macrovector, https://www.freepik.com/free-vector/webcam-fixed-on-computer-or-laptop-with-model-data_2874853.htm

Meanwhile, DeepLocker was waiting for its prey. As programmed, it used a facial recognition neural network to scan users through the webcam. Once it recognized the target’s face, it activated the ransomware and encrypted all the files on the computer. A personal ransomware…
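
To illustrate the mechanism, here is a conceptual sketch of this kind of target gating. IBM described concealing the trigger condition inside a neural network and deriving the key that unlocks the payload from the target’s attributes; everything below, including the stub embed_face() function and the key check, is a hypothetical stand-in rather than DeepLocker’s actual code, and the payload itself is omitted.

```python
# Conceptual sketch of DeepLocker-style target gating. NOT IBM's code:
# embed_face() is a dummy stand-in for a real face-recognition network.
import hashlib
import numpy as np

def embed_face(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a deep face-recognition model (hypothetical)."""
    return frame.mean(axis=(0, 1))  # toy 3-dim "embedding" per frame

def derive_key(embedding: np.ndarray) -> bytes:
    # The trigger doubles as the key: the payload's decryption key is
    # derived from the target's features, so an analyst who never shows
    # the network the target's face can never recover the key.
    return hashlib.sha256(np.round(embedding, 1).tobytes()).digest()

def on_webcam_frame(frame: np.ndarray, expected_key_hash: bytes) -> None:
    key = derive_key(embed_face(frame))
    if hashlib.sha256(key).digest() == expected_key_hash:
        # Target recognized: the key would now decrypt and launch the
        # hidden payload. (Omitted; this sketch shows only the gate.)
        print("trigger condition met")

# The host application keeps handling webcam frames as usual, e.g.:
on_webcam_frame(np.zeros((64, 64, 3)), expected_key_hash=b"\x00" * 32)
```

The point of this construction is that static analysis of the binary reveals neither the target nor the payload: both are locked behind a condition that only the neural network can evaluate.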

Tricking the Artificial Intelligence

There are several reports and studies showing how Artificial Intelligence itself can be targeted by malicious attacks. These attacks manipulate the input data to cause neural networks to produce misleading results.

For example, students at MIT tricked computer vision algorithms into flagging a toy turtle as a rifle by making minor tweaks to the turtle’s appearance. While this may seem harmless, a study by the University of Michigan, the University of Washington, and the University of California, Berkeley showed that placing small black and white stickers on stop signs made the signs undetectable to the Artificial Intelligence of self-driving cars.
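
To make the idea concrete, below is a minimal sketch of one well-known way to craft such inputs, the fast gradient sign method (FGSM). The toy logistic-regression classifier is an illustrative assumption; real attacks apply the same gradient step to deep networks and real images.

```python
# Minimal sketch of the fast gradient sign method (FGSM), one common way
# to craft adversarial inputs. A toy logistic regression model stands in
# for a deep network; the attack step is the same in both cases.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights of a toy classifier (assumed known)
b = 0.0
x = rng.normal(size=64)   # a benign input, think of it as image pixels
y = 1.0                   # its true label

def predict(x):
    """Sigmoid score: probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the cross-entropy loss with respect to the INPUT.
# For logistic regression this is simply (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: nudge every feature by epsilon in the direction that
# increases the loss. Each change is tiny, but they all conspire.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("score on original input:   ", predict(x))
print("score on adversarial input:", predict(x_adv))
```

Note that the attacker never touches the model, only the input, and no feature moves by more than epsilon; that is why an adversarial turtle still looks like an ordinary turtle to a human.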

Adversarial Artificial Intelligence Attacks

Neural networks are extremely difficult to reverse engineer and probe for vulnerabilities because of their opaque, black-box nature. Yet if hackers were to find a vulnerability in an Artificial Intelligence system by chance or by trial and error, it would be very easy for them to exploit it secretly.

To put the raised concerns in perspective: Adversarial Artificial Intelligence Attacks are very hard to develop, and even when they are developed, they usually do not work consistently. However, looking at how Artificial Intelligence was used to create near-perfect Deepfake videos, it is only a matter of time before hackers create AI-infused malware or Adversarial Artificial Intelligence Attacks.
