Researchers Reveal Images Can Be Used to Exploit AI Systems

Andrea Miliani, Cybersecurity Researcher | Published on September 04, 2025

Cybersecurity researchers from Trail of Bits have discovered a new vulnerability in AI systems that allows malicious actors to hide commands within images. The team used a prompt injection strategy that exploits the image downscaling step several AI systems apply to uploaded images.

According to the report, published on August 21, the researchers were able to exfiltrate data from multiple AI systems — such as Vertex AI Studio, Gemini CLI, Genspark, Google Assistant, and Gemini’s web and API interfaces — through an image-scaling attack.

“By delivering a multi-modal prompt injection not visible to the user, we achieved data exfiltration on systems including the Google Gemini CLI,” explained the researchers. “This attack works because AI systems often scale down large images before sending them to the model: when scaled, these images can reveal prompt injections that are not visible at full resolution.”
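
To illustrate the mechanism in general terms, the Python sketch below shows the kind of preprocessing step the attack abuses: the pipeline quietly resizes an uploaded image before inference, so the model receives a different picture than the one the user inspected. The file names, target resolution, and resampling filter here are illustrative assumptions, not details taken from the Trail of Bits report.

```python
# Minimal sketch of the downscaling step a multimodal pipeline might perform
# before inference. The vulnerability lives in the gap between what the user
# sees (the full-resolution upload) and what the model sees (the resized copy).
# Paths and the target size are illustrative assumptions only.
from PIL import Image


def preprocess_for_model(path: str, target: tuple[int, int] = (768, 768)) -> Image.Image:
    """Downscale an uploaded image the way many pipelines do before inference."""
    img = Image.open(path).convert("RGB")
    # Bicubic resampling blends neighbouring pixels; carefully crafted
    # high-resolution patterns can therefore collapse into legible text
    # (a hidden prompt) that was imperceptible at full size.
    return img.resize(target, resample=Image.Resampling.BICUBIC)


if __name__ == "__main__":
    small = preprocess_for_model("uploaded_image.png")
    small.save("what_the_model_sees.png")  # compare this to the original by eye
```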

The document provides more details on how image scaling can be exploited in AI systems, outlines mitigation strategies, and introduces Anamorpher — an open-source tool that enables the crafting and visualization of such attacks.

Trail of Bits demonstrated the attack’s effectiveness using Google’s open-source AI agent Gemini CLI. The experts showed that with the default Zapier integration enabled, a user could upload an apparently harmless image that, once processed, silently triggered Zapier to send the user’s Google Calendar data to an attacker’s email without any warning or confirmation.

The researchers recommend that developers avoid image downscaling where possible and instead restrict upload dimensions. If downscaling is necessary, they suggest always displaying a preview of the image that the model will process.
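
A minimal sketch of those two mitigations, again in Python and with a hypothetical size limit and function names chosen purely for illustration, could look like this:

```python
# Illustrative sketch of the recommended defenses, assuming a hypothetical
# upload handler; the limit and names are not taken from the report.
from PIL import Image

MAX_DIM = 1024  # reject oversized uploads instead of silently downscaling them


def validate_upload(path: str) -> Image.Image:
    """Restrict upload dimensions so no downscaling is needed at all."""
    img = Image.open(path).convert("RGB")
    if img.width > MAX_DIM or img.height > MAX_DIM:
        raise ValueError(
            f"Image {img.size} exceeds the {MAX_DIM}px limit; resize it yourself "
            "so that what you see is what the model sees."
        )
    return img


def preview_exact_model_input(img: Image.Image, target: tuple[int, int] = (768, 768)) -> None:
    """If downscaling is unavoidable, show the user the model's actual input
    (the downscaled image) and ask for confirmation before proceeding."""
    model_view = img.resize(target, resample=Image.Resampling.BICUBIC)
    model_view.show()  # opens the preview the user should confirm
```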

Anamorpher, currently released in beta, can generate crafted images using different scaling methods to demonstrate attacks and help other researchers develop defenses against this vulnerability.

Concerns about vulnerabilities in AI systems continue to grow. A few days ago, researchers at NeuralTrust revealed that OpenAI’s latest AI model, GPT-5, is also susceptible to prompt-based attacks.

About the Author

Andrea is a seasoned tech journalist with a growing passion for cybersecurity, covering cyberattacks, AI breakthroughs, and the latest trends shaping the future of technology.
