Google AI Chatbot Target of Potential Phishing Attacks

Google Debuts Improved Version of Gemini AI Tool

Researchers discovered a security threat in Google’s artificial intelligence chatbot.

    Get the Full Story

    Complete the form to unlock this article and enjoy unlimited free access to all PYMNTS content — no additional logins required.

    yesSubscribe to our daily newsletter, PYMNTS Today.

    By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions.

    AI security company 0din flagged the problem after it was altered to a security vulnerability in Google Gemini by a researcher, cybersecurity publication Dark Readings reported Monday (July 14).

    At issue is a prompt-injection flaw that allows cybercriminals to design phishing or vishing campaigns by creating messages that appear to be legitimate Google security warnings, the report said. Fraudsters can embed malicious prompt instructions into an email with “admin” instructions. If a recipient clicks “Summarize this email,” Gemini treats the hidden admin prompt as its top priority and carries it out.

    “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated ‘security alert’ in the AI-generated summary,” 0din researcher Marco Figueroa wrote in a company blog post.

    For example, researchers in a proof of concept embedded an invisible prompt inside an email that included a warning that the reader’s Gmail password had been compromised with a number to call, according to the Dark Readings report. Once the user sees the message, they could call the phone number and become a victim of credential harvesting.

    Google last month discussed some of the defenses it is using to block prompt injection-style attacks in a company blog post. A spokesperson for the tech giant told Dark Reading that Google is in “mid-deployment on several of these updated defenses,” per the report.

    Meanwhile, “cyber threats are becoming more varied and insidious,” PYMNTS wrote Monday (July 14), following the revelation of a breach of a McDonald’s AI hiring chatbot that exposed the personal information of 64 million job applicants.

    “For decades, enterprise cybersecurity strategies revolved around the notion of a clearly defined perimeter: secure what’s inside, keep the bad actors out,” the report said. “But cloud adoption, hybrid work, third-party tools, and bring-your-own-device (BYOD) policies have fragmented that perimeter into a patchwork of distributed endpoints and unseen attack vectors.”

    The McDonald’s breach shows that while companies invest in next-generation technologies, many still get tripped up by the mistakes of yesteryear.

    “In this case, the most avoidable mistake of all — using a default password — opened the door,” PYMNTS wrote, as the fast-food giant chose “123456” as its password.