
What are the risks of artificial intelligence chats?

Google and Microsoft are racing to make their new artificial intelligence chats (“chatbots”), which will soon reach the public, as popular as their search engines, or more so. But these new technologies come with new cybersecurity risks, such as being used to craft scams or to build malware for cyberattacks.

These problems also affect chatbots like the popular ChatGPT, created by OpenAI, whose technology also powers Microsoft’s Bing search engine.



Satnam Narang, a senior research engineer at the cybersecurity firm Tenable, tells EFE that scammers can be one of the biggest beneficiaries of this type of technology.

Chatbots can generate text in any language in a matter of seconds, with flawless grammar.

According to Narang, one of the ways to identify scammers is through the grammatical mistakes in the messages they send to their victims; if they use AI, those mistakes disappear and the scammers can go unnoticed more easily.


“ChatGPT can help (scammers) create nicely designed templates for emails or create dating profiles when they try to scam users on dating apps. And when they have a conversation (with the victim) in real time, scammers can ask ChatGPT to help them generate the response the person they are impersonating would give,” Narang notes.

In addition, the expert points out that there are other artificial intelligence tools, such as DALL·E 2, also from OpenAI, with which fraudsters can create photographs of people who do not exist.



Another of ChatGPT’s qualities is that it can help hackers create malicious programs (or malware).

“This malware is not going to be the most sophisticated or well designed, but it does give them a basic understanding of how to write malware in specific languages. So it gives them an advantage, since until now anyone who wanted to develop malicious software had to learn to program; now ChatGPT can help them shorten that time,” Narang details.



OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard chatbots are all carefully designed to avoid commenting on a wide range of sensitive topics, such as racism or security, and to avoid offensive responses.

For example, they do not answer questions about Adolf Hitler, they refuse to comment on racial slurs, and they do not give instructions on how to build a bomb.

However, Narang explains that there is already a jailbroken (modified) version of ChatGPT called DAN, short for “Do Anything Now,” in which those barriers do not exist.


“This is more worrying, because now (a user) could ask the unrestricted ChatGPT to help them write ransomware (a program that takes control of the system or device it infects and demands a ransom to return control to its owner), although it is not yet known how effective that ransomware would be,” Narang explains.


The expert believes it will be difficult to implement rules at the national or institutional level to set limits on these new technologies, or to prevent people from using them.


“Once you open Pandora’s box, you can’t put anything back inside. ChatGPT is here, and it’s not going away, because beyond this malicious use there are many legitimate use cases that are valuable to businesses, organizations, and individuals,” Narang concludes.

Source: El Comercio
