AI NEWS - 09/04/2020

New Risks Of Artificial Intelligence

One article from the Wall Street Journal vividly describes the situation: the CEO of a German parent company calls the CEO of its British office. Speaking with a distinct German accent and an instantly recognizable voice, he instructs his colleague to transfer a large sum of money to another account, and the UK executive complies.

But the money never reached the right place. In fact, the chief executive from Germany never made the call at all. The voice belonged to a hacker who mimicked the executive's voice using a Deepfake, an AI-based voice generator, and diverted the money to an unknown account.

Have We Summoned a Demon?

Elon Musk has warned that with Artificial Intelligence we are "summoning the demon" and should be very careful. How much truth is there to that?

While large tech companies proudly present their advanced AI applications, there is growing concern that these deep-learning tools can just as easily be used for various kinds of fraud.

Deepfake technology can produce a convincing video from a single photo. It can also simulate a person's voice, and even recreate people from the past. With applications across many industries that offer a wide range of products and services, such as bet365, these capabilities represent a real technological milestone.

Still, if Deepfakes are put to malicious use, the question is: how long until someone steals our entire life?

What Risks Does Artificial Intelligence Bring?

We could open a wide-ranging debate about all the benefits and risks of Artificial Intelligence in the real world, but what should concern us most is the following:

1. Synthesized voices can be used to infiltrate military and government facilities, threatening the national security of any country on the planet.

2. Deepfakes impersonate other people, which can violate personal integrity and expose innocent people to defamation, deception, and harassment.

3. Fake videos or audio recordings can be used in suspicious political campaigns, to manufacture scandals, and to incite riots.

4. Synthesized voices are already used in various financial frauds and to defeat security systems.

5. Deepfake tools can be abused to produce scandalous fake news about celebrities, backed by realistic footage of events that never happened.

6. Finally, AI could support various forms of insurance fraud, money laundering, blackmail, and other abusive behavior.

How Much Are We Exposed to AI Cybercrime?

According to industry analyses, cybercrime associated with voice fraud in particular grew by a whopping 350% between 2013 and 2017, and the trend shows no sign of slowing.

Although artificial intelligence can be used to build better protection systems, it can also be used to break through them. Still, with few real-world examples to draw on, it is too early to say whether an automated system could recognize a voice imitated by another AI, or whether Deepfake voice fraud could defeat security based on voice authentication.
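To make the idea of voice-based authentication concrete, here is a minimal sketch of how such a check is commonly structured: a voice sample is turned into a numeric embedding and compared against the embedding enrolled for the legitimate account holder. The embeddings, the threshold, and the verify_speaker helper below are hypothetical illustrations, not any vendor's actual system; a convincing Deepfake would aim to produce a sample whose embedding clears the same threshold.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two voice embeddings.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_speaker(enrolled: np.ndarray,
                       sample: np.ndarray,
                       threshold: float = 0.75) -> bool:
        # Accept the caller only if the new sample's embedding is close
        # enough to the embedding enrolled for the account holder.
        # Threshold and embeddings are illustrative placeholders.
        return cosine_similarity(enrolled, sample) >= threshold

    # Toy demonstration: random vectors stand in for real voice embeddings.
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=256)                        # captured at enrollment
    genuine = enrolled + rng.normal(scale=0.1, size=256)   # same speaker, new call
    impostor = rng.normal(size=256)                        # unrelated speaker
    print(verify_speaker(enrolled, genuine))   # expected: True
    print(verify_speaker(enrolled, impostor))  # expected: False

The weak point this sketch exposes is the threshold itself: any generator that can push a synthetic voice's embedding past it will be accepted just like the real speaker.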

Do We Still Remember the Good Sides of AI?

However, we must keep in mind that the tech companies developing such technology generally have the best intentions. Deepfake tools, for example, can play a massive role in improving the modern world.

Services that use AI bots will deliver higher-quality interaction, and synthesized speech that requires only written text will above all help people with speech or hearing impairments, cerebral palsy, Parkinson's disease, and many other health conditions.

Moreover, the technology can bring the memory of our favorite singers back to life, entertain us, and make the world a more beautiful place.

It is up to society to build an adequate understanding of Artificial Intelligence and to use it in the best possible way.
