How organized crime is using artificial intelligence to advance its Latin American interests

Aug 31, 2024

By Christopher Newton

With access to artificial intelligence increasing rapidly, some of Latin America’s organized crime groups have begun wielding it for criminal gain.

Though artificial intelligence (AI) has long been of interest in the worlds of science and science fiction, the release of ChatGPT in 2022 made a form of AI known as large language models (LLMs) broadly popular. And as AI becomes more prevalent, organized crime has begun to embrace the technology, with law enforcement struggling to keep up.

Most police agencies in the region “have more intelligence and they have more investigative powers on your traditional analog criminal organizations,” Carlos Solar, a Latin America cybersecurity expert at the Royal United Services Institute for Defence and Security Studies, told InSight Crime.

But as AI’s popularity grows among criminals and investigations reveal more about their techniques, organized crime’s use of new technologies is becoming more apparent by the day. Below are four ways organized crime groups are now using AI in Latin America.

But First, What Is an LLM?
But First, What Is an LLM?
LLM technology is being integrated into more and more programs, which now come with AI assistants to help with writing and automating tasks. But what is an LLM? It is a form of machine learning, popularized by ChatGPT, that lets users communicate with AI models in plain language, making the emerging technology more intuitive and user-friendly.

LLMs allow you to use natural language to interact with the model, asking questions and making requests as you would with a human. The model responds in similarly natural language, making the interaction intuitive even for users with no technical knowledge.

While digitally editing an image or writing code to automate a task has been possible for years, these tasks demanded specialized tools and knowledge, often acquired through years of training or expensive software. Now, AI tools let you produce a realistic photo, write naturally in a foreign language, or send out masses of emails automatically with some free software and a bit of experimenting.
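To make this concrete, here is a minimal sketch of what interacting with an LLM in natural language looks like in code. It assumes the openai Python package and an API key are available; the model name and prompt are purely illustrative.

```python
# A minimal sketch: talking to an LLM in plain language through an API.
# Assumes the openai package is installed and OPENAI_API_KEY is set
# in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user",
         "content": "Rewrite this note in formal Spanish: 'See you at 3.'"}
    ],
)

# The model replies in natural language; no technical knowledge is
# needed beyond typing the request.
print(response.choices[0].message.content)
```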

Deep Fakes
Criminal groups have taken advantage of realistic AI-generated images and fake voices to commit fraud and extortion.

You get a phone call and you recognize the voice. It is your nephew, or granddaughter, or cousin. They have been kidnapped and you must pay a ransom, or they have an emergency and need money.

But, in reality, your loved one is fine. A criminal group used recordings from social media and some AI tools to imitate their voice over the phone. As AI has improved, the realism of the voice imitation, combined with the panic induced by thinking someone you care about is in danger, makes the scam more effective. This tactic has been used by Peruvian organized crime and is spreading around the globe.

Deep fakes are not limited to imitating someone’s voice. AI can also produce increasingly realistic images and videos.

On the US-Mexico border, for example, criminal groups have reportedly used fake images to defraud the families of missing migrants. The groups create websites posing as organizations that help find missing people. They then ask the families for photos of the missing person, ostensibly to identify them. But the groups use the photos to make convincing fake images or videos of the missing person, claiming they have been kidnapped and that the families must pay a ransom for their safe release.

Optimizing Existing Frauds
Organized crime is applying AI to existing financial scams, increasing their efficiency and scale.

In the case of deep fakes, groups are often making old scams more convincing. With other scams, AI is helping criminal groups reach more potential victims more quickly and with fewer resources.

AI can save time. Instead of manually calling each potential victim, you can set up a computer to send out a batch of calls or messages all at once, Solar told InSight Crime. “You spend less resources and you’ll probably get a reward that will be much higher.”

Groups are using AI to automate calls, as well as chatbots and other forms of generative AI to interact with potential victims. Scammers have taken to dating apps with chatbots, including one known as LoveGPT, which can initiate conversations. The bots are used in scams known as “pig butchering,” in which scammers build an emotional connection with a victim through the app. After gaining trust, the scammers ask for money, often making up a story about a tragic emergency or presenting a fake investment opportunity.

Though automation can help smaller groups operate on a larger scale, some of the region’s most powerful groups may be getting involved. An assessment by Interpol warned that Mexico’s Jalisco Cartel New Generation (Cartel Jalisco Nueva Generación – CJNG), and Brazil’s First Capital Command (Primeiro Comando da Capital – PCC) and Red Command (Comando Vermelho – CV) are active in this type of fraud.

Better Phishing
LLMs are helping cybercriminals fine-tune their messaging, imitate bosses, and produce more emails, faster.

You get an email from your boss. It is urgent. They have been locked out of their account but need to finish something now for a client, so they ask for your login to get it done on time. You respond without hesitation, thinking you are helping your boss. Later, when you speak to your boss and they have no idea what you are talking about, you re-read the email, more carefully this time. You notice it was not sent from the boss’s email address, but from one strikingly similar.

Fraudulent emails of this kind, known as phishing, use urgency and imitation to fool victims into giving up their credentials or installing malware. In Brazil, for example, a cybercrime group called PINEAPPLE pretends to be the federal tax service.

They send emails with a link to an imitation of the official government website. When victims try to download tax documents, they instead download malware. In Colombia, the government warned that phishing is now the “most common method used by cybercriminals to defraud and obtain confidential information.”

As AI improves, it can better adjust its tone and imitate people’s writing styles. Criminals can craft personalized emails that victims are more likely to interact with before they even suspect something may be amiss.

“You can easily see, even in your own inbox, how phishing has gone from the ‘Nigerian prince’ asking for money to very sophisticated emails with the correct style, with the correct fonts, with the name of the person,” said Solar.

And as with other frauds, AI can help automate tasks, sending out masses of emails. Since the launch of ChatGPT, phishing emails have surged by over 4,000%, according to a report by cybersecurity firm SlashNext. And though mainstream LLMs have guardrails to try to prevent criminals from abusing the software, the criminal world continues to find workarounds.

Writing Malware
Most popular LLMs will refuse to write malware or assist in criminal activity, but criminal actors have already learned to trick the models, and criminal alternatives have been developed.

One of the issues with guardrails is that malware and normal software often do very similar things; the difference is how they are applied. For example, when you go to a site, you will often see a popup asking you to log in. When you enter your username and password, it sends this information to a server to verify your credentials.

Popular infostealer software, such as the banking malware used by cybercrime groups in Brazil, does almost exactly the same thing. The only difference is that this popup sends your information to a server controlled by the criminal group, which can then log into your account or sell your data.

So a popular AI model may refuse a prompt to write an infostealer, but will help write a login popup. The software itself is not malicious, but when criminal groups trick people into downloading it with a phishing email, it can be used to steal from victims.
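To illustrate that dual-use problem, here is a minimal, benign sketch in Python of what a login popup does behind the scenes, using the requests library; the endpoint URL and function name are hypothetical. Legitimate software and an infostealer run essentially the same code, and what differs is the server on the receiving end.

```python
# A benign sketch of what happens when you submit a login popup:
# the credentials are posted to a server for verification.
# The URL is hypothetical.
import requests

def submit_login(username: str, password: str) -> bool:
    response = requests.post(
        "https://accounts.example-bank.com/api/login",  # hypothetical legitimate endpoint
        json={"username": username, "password": password},
        timeout=10,
    )
    return response.ok  # True if the server accepted the credentials

# An infostealer's fake popup runs nearly identical code. The only
# meaningful difference is that its URL points to a server the criminal
# group controls, which logs the credentials instead of verifying them.
```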

The best LLMs on the market are not yet good enough to write complicated software by themselves, but they are powerful debuggers and can help inexperienced coders develop higher-quality software. And for experienced developers, AI can help write more code, much faster.

Beyond tricking mainstream LLMs into helping develop malware, criminal groups have begun creating their own models. New ChatGPT imitations have been created with no guardrails whatsoever, helping produce malware without the need to creatively tune prompts.

Credit: InSight Crime
