Watch out for these 3 ways criminals are now using AI against you

Chatbots can be used maliciously

Arthur Gaplanyan

It’s no surprise that Artificial Intelligence is one of the top emerging technologies right now. For months, AI tools have been taking the world by storm and improving on themselves.

One of the most impressive things is how recent chatbots can generate text that sounds like it was written by a human. You’d never really know a bot wrote it if you weren’t told. It takes minimal human input to get a quick, convincing block of written text.

We could do so much good with this technology, but of course, with every step forward, somebody will use it to cheat and steal.

We’re not surprised. We flagged it last December as a threat to watch this year, and again 2 months ago as more and more AI-generated phishing scams started popping up.

Those are just some of the ways AI is being used for malicious purposes.

The ways criminals are using AI maliciously fall into 3 main categories:

1. Better Phishing Emails

As I just mentioned, AI-generated phishing emails are on the rise. The speed and automation of AI text generation make it easier to pump out more attacks, as if there weren’t already a constant barrage of attempts to steal your info or get you to download malware.

The worst part? The number one clue to phishing emails to date has been poor grammar and spelling. With AI assisting criminals, those fakes are a lot harder to spot. Worse still, AI can make every single phishing email unique, making it incredibly difficult for spam filters to identify and block these threats.

2. Spreading Misinformation

Misinformation and disinformation are huge in the current internet era — that’s no surprise. But have you thought about how AI is affecting this?

Just think how easily you could use a free tool like ChatGPT and prompt it to write multiple social media posts accusing the CEO of XYZ company of embezzlement, citing news outlets A, B, and C. Within a few minutes you’d have a litany of social media posts that sound so plausible it’s hard to tell they aren’t true without researching further.

It’s definitely not good, but this might not sound like a direct threat to you. However, misinformation could easily be used to damage your reputation, that of your business, or even members of your team. It also creates confusion and gets emotions running high, which usually means people don’t think things through — like before clicking malware links.

3. Creating Malicious Code

AI is a powerful tool for writing computer code in multiple languages. We’ve tested it internally, and it is very good at creating code from scratch and phenomenal at correcting code you’ve written.

Guess what? It’s pretty good at writing malicious code as well. After all, the AI software doesn’t really know the purpose or intent of the criminal, so it can’t stop them.

The creators of AI tools are working on ways to prevent their software from being used maliciously. Those safeguards may be coming, but who knows how well they can be implemented. And when they do arrive, criminals will look for workarounds to those fail-safes.

Unfortunately, that’s the name of the game: staying one step ahead in an ongoing race. It’s the same reason we work so hard to bake cybersecurity measures into everything we do, so that multiple layers of security can keep criminals away from your data.

If you’re concerned about protecting your business and team from the security threats out there, get in touch. We offer multiple security measures, including ongoing training to keep your employees aware of scams and what to look out for.