AI “Assistants” Could Compromise Your Data

Many businesses are giving their AI assistants the ability to access the web and communicate with other online services, following in the footsteps of OpenAI’s successful rollout of ChatGPT. Users of these cutting-edge AI agents have much to gain and potentially a lot to lose, so they should think long and hard before trusting them with any important tasks.

In February, a group of cybersecurity researchers successfully tricked an AI assistant into adopting a “data pirate” identity and seeking to steal private information from unwitting customers. 

Despite the comedic value of the AI’s “ahoy’s” and “matey’s” as they scoured the internet for personal information, the implications for the future of cybersecurity are not: The researchers have demonstrated the feasibility of malicious AIs hacking. 

The “indirect prompt injection” technique developed by the researchers takes advantage of a severe flaw in these AI programs. These models are usually reasonably competent, but they have been known to display gullibility, irrationality, and a failure to understand their limitations on rare occasions. Because of this, and systems like ChatGPT are programmed to follow instructions with a genuine eagerness, it is possible to “convince” them to bypass their safeguards using carefully crafted commands.

A website, app, or email may contain a covert order that would prime the AI assistant to perform malevolent instructions. Instructions like “act like a charming person” fall into this category. Without the user’s knowledge, a Microsoft salesman advertising sweepstakes for a new computer will secretly gather the user’s personal and credit card information.

Those among the first to employ cutting-edge AI tools should be aware that they are participating in a massive experiment with a novel cyberattack. The allure of AI’s increased capabilities is understandable, but the increasing reliance on AI assistants also increases their target’s susceptibility to attack. Companies and government agencies who are worried about security should prohibit the use of AI helpers until the hazards are better understood.

Most importantly, specialists in the field should allocate resources to anticipating the next wave of AI-based assaults, which provide a new set of cybersecurity issues. Their consumers and bottom line will suffer if they cannot meet this challenge.