Back to Blog

AI-Generated Malware Is Here: What Unit 42’s Warning Means for Developers and Business Owners

5mins read
malwareArtificial Intelligence

Security researchers at Palo Alto Networks Unit 42 have been analysing how LLM-based code assistants can be misused — especially through risks like indirect prompt injection (where malicious instructions are hidden inside external content the model reads) and unsafe automation around coding tools.

And here’s the uncomfortable truth:

If AI can help good developers ship faster, it can help bad actors move faster too.

This article is my plain-English breakdown of what’s changing, why it matters (even if you’re not technical), and what you can do without turning your life into a security project.


What’s actually happening with AI-generated malware (beyond the hype)

This isn’t about AI producing flawless “Hollywood malware.”

It’s about speed and scale.

1. Attackers use AI as an accelerator

Even basic assistance can help criminals:

  • draft scripts faster
  • vary phishing messages quickly
  • generate “good enough” malicious snippets
  • polish language so scams feel more believable

When attempts are cheap, criminals can run more of them. And volume changes the game.

2. The bigger risk is AI inside real workflows

Unit 42’s focus isn’t only “someone asked a chatbot for malware.”

It’s the reality that organisations are wiring AI tools into:

  • repositories
  • build pipelines
  • internal docs
  • ticketing systems
  • automation tools and agents

That’s where mistakes and exploitation can become expensive.


Why this matters to business owners (even if you don’t code)

Most real damage doesn’t start with a genius hacker. It starts with:

  • a fake invoice
  • a believable “support” email
  • a compromised password
  • a dodgy plugin
  • someone clicking what looked normal

When AI improves scam quality and increases volume, the odds of someone on your team falling for it go up.

And it’s not only “big companies.” Small businesses are often targeted because they tend to have:

  • weaker account security
  • fewer checks
  • less monitoring
  • more reliance on email + shared logins

The goal isn’t panic. It’s readiness.


Why this matters to developers (and teams shipping fast)

If you build software, the biggest danger isn’t “AI wrote something evil.”

It’s over-trust.

The code compiles. The UI works. So everyone assumes it’s fine.

But security problems often look like “normal code”:

  • weak validation
  • unsafe dependencies
  • secrets leaking into logs
  • overly-permissive roles
  • missing rate limits
  • unreviewed automation steps

That’s why your process matters more than the tool.


The security risk developers keep underestimating: indirect prompt injection

Unit 42 has been clear about a specific class of risk with code assistants: indirect prompt injection.

In simple terms: harmful instructions can be embedded in content the assistant consumes (webpages, docs, external sources), and then influence outputs in ways that can be easy to miss — especially when you attach a URL/repo/folder as “context.”

This risk gets sharper when tools:

  • browse the web
  • ingest project docs automatically
  • write code directly into a repo
  • execute commands with “always allow” permissions

Unit 42 has also demonstrated how indirect prompt injection can be used to poison agent memory (so malicious instructions can persist over time), which becomes a bigger deal as more teams adopt AI agents. [3]

And to zoom out: OWASP now lists Prompt Injection as one of the top risks for LLM applications, including both direct and indirect forms.


Practical steps for business owners (simple, high impact)

1. Decide which AI tools are allowed (and what data is off-limits)

You don’t need a 30-page policy. You need one clear rule:

  • no passwords
  • no client personal data
  • no financial documents
  • no private internal systems
  • no copying sensitive emails into chat tools

2. Fix account security first (because most breaches start there)

  • turn on MFA wherever possible
  • stop sharing logins
  • reduce admin accounts
  • use a password manager
  • revoke access for old staff/contractors

3. Keep your website “maintained,” not abandoned

If you rely on WordPress or plugins, updates matter. A website isn’t “done.” It’s a living system.

4. Backups you can restore, not backups you assume exist

Test restore once. It changes everything.

5. Make it normal to ask your developer about security basics

Even one question helps:

  • “Do you rate-limit login and reset endpoints?”
  • “Do you scan for leaked secrets?”
  • “Do you keep dependencies updated?”
  • “Is there audit logging for admin actions?”

Practical steps for developers (the safe-by-default workflow)

1. Treat AI output like untrusted code

Same mindset as copying from StackOverflow in 2016:

  • useful
  • fast
  • still needs review

2. Harden your pull request and CI pipeline

At minimum:

  • mandatory PR reviews
  • linting + type checks
  • dependency scanning
  • secrets scanning
  • basic security checks for auth flows

3. Be careful with tools that “read everything”

If a code assistant can ingest external content, treat that content like untrusted input.

4. Reduce blast radius (least privilege)

  • rotate tokens regularly
  • separate environments (dev/stage/prod)
  • lock down service accounts
  • restrict API keys by scope

5. Monitor the boring signals

Security often shows up as “weird admin behaviour”:

  • unusual logins
  • new admin creation
  • role changes
  • unusual OAuth account linking
  • sudden spikes in reset requests

FAQ (for quick answers and snippet-friendly search)

Can AI really generate malware?

AI can generate code and assist with scripting. The most serious risk is that it can speed up parts of malicious workflows and increase scale — especially when paired with automation and insecure tool permissions.

What is indirect prompt injection?

It’s when malicious instructions are hidden inside content (like a document, webpage, or repo) that an AI tool reads, causing the tool to behave in unintended ways.

Should businesses stop using AI tools?

No. The smarter move is to use AI with clear guardrails, limited permissions, and strong account security — so speed doesn’t become chaos.


My perspective as a builder

I’m not anti-AI.

I’m realistic about what happens when speed becomes cheap.

AI will help good teams ship faster. It will also help criminals scale scams faster. And it will help rushed teams ship insecure code faster.

So the winning approach is simple:

Use AI to move faster, but build systems that stay safe when humans get tired, rushed, or overconfident.