Google recently announced that it has observed intense activity aimed at cloning its AI chatbot, Gemini. In its latest report, Google's Threat Intelligence Group stated that attackers sent over 100,000 prompts in a single campaign designed to extract Gemini's reasoning process.
Gemini's Thought Process Targeted with Distillation Attacks
Google refers to these attempts as "distillation attacks." Such attacks aim to uncover the chatbot's internal workings by posing long series of consecutive questions. Google describes the goal as "model inference": those attempting to imitate the system try to decipher the logic and patterns behind its responses, then use that information to develop their own AI systems.
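The mechanics behind such an attack are essentially those of knowledge distillation: query a black-box "teacher" model, record its outputs, and fit a "student" model to imitate them. The sketch below is a minimal, hypothetical illustration of that loop, not anything from Google's report. The teacher here is a stand-in function (a simple logistic model whose parameters the attacker cannot see), and the student is a second logistic model trained only on the teacher's observable outputs.

```python
import math
import random

def teacher(x):
    # Stand-in for a black-box model API (hypothetical): returns the
    # probability that input x belongs to class 1. An attacker sees only
    # these outputs, never the internal parameters (here w=2, b=-1).
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

# Step 1: harvest input/output pairs by repeatedly querying the black box,
# analogous to sending large volumes of prompts to a chatbot.
random.seed(0)
queries = [random.uniform(-3.0, 3.0) for _ in range(1000)]
soft_labels = [teacher(x) for x in queries]

# Step 2: fit a student model to the teacher's soft outputs by gradient
# descent on the cross-entropy loss between student and teacher.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, p in zip(queries, soft_labels):
        q = 1.0 / (1.0 + math.exp(-(w * x + b)))  # student's prediction
        grad_w += (q - p) * x
        grad_b += (q - p)
    w -= lr * grad_w / len(queries)
    b -= lr * grad_b / len(queries)

# The student's recovered parameters approach the teacher's hidden ones,
# even though the attacker never had direct access to them.
print("recovered:", round(w, 2), round(b, 2))
```

Real chatbots are vastly more complex than a two-parameter classifier, but the principle scales: with enough query/response pairs, a rival can train a model that mimics much of the original's behavior, which is why providers treat high-volume automated prompting as a threat signal.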
The company believes that these attempts are mostly driven by private companies or researchers seeking a competitive advantage. A Google spokesperson stated that they suspect the attacks originated from different regions of the world but would not share further details about the suspects.
OpenAI Accused DeepSeek of Using the Same Attack Method
Google considers the distillation method to be intellectual property theft. Technology companies spend billions of dollars developing large language models and regard the internal structure of those models as highly valuable trade secrets. While large LLMs have mechanisms to detect and block such attacks, their public accessibility inherently leaves them exposed to distillation attempts. Last year, ChatGPT's developer, OpenAI, likewise accused its Chinese rival DeepSeek of conducting similar distillation attacks to improve its own models.