Back to all posts

Forget Please Threaten AI Says Google Co Founder

2025-05-29Thomas Claburn4 minutes read
AI
Prompt Engineering
Large Language Models

Brin's Controversial AI Prompting Tip

Google co-founder Sergey Brin has made a striking claim: threatening generative AI models can actually lead to better results.

"We don't circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence," he stated during an interview on All-In-Live Miami.

The Politeness Paradox in AI Interaction

This assertion might come as a surprise to many users who typically interact with AI models politely, often including phrases like "Please" and "Thank you" in their prompts.

In fact, OpenAI CEO Sam Altman acknowledged this tendency towards politeness last month. He addressed a question concerning the electricity costs associated with AI models processing such courteous, yet potentially superfluous, language. Altman remarked that these were "Tens of millions of dollars well spent – you never know," in a social media post.

The Evolving Landscape of Prompt Engineering

The practice of prompt engineering, which involves crafting prompts to elicit optimal responses from AI models, gained traction because, as University of Washington professor Emily Bender and her colleagues have argued, AI models function like "stochastic parrots." These models essentially echo information from their training data, occasionally merging it in peculiar and unforeseen manners.

Prompt engineering came into focus approximately two years ago. However, its significance has arguably diminished as researchers developed new techniques that use Large Language Models (LLMs) to optimize prompts automatically. This evolution led publications like IEEE Spectrum to declare prompt engineering dead last year. Similarly, the Wall Street Journal labeled it the "hottest job of 2023" only to later deem it "obsolete."

Further Reading from The Register

Prompt Engineering for Jailbreaking AI

Despite its waning importance for general use, prompt engineering may still hold value as a "jailbreaking" technique. This is particularly relevant when the objective isn't to achieve the best possible results, but rather to elicit undesirable or restricted outputs.

Stuart Battersby, CTO of AI safety company Chatterbox Labs, commented to The Register, "Google's models aren't unique in responding to nefarious content; it's something that all frontier model developers grapple with." He added, "Threatening a model with the goal of producing content it otherwise shouldn't produce can be seen as a class of jailbreak, a process where an attacker subverts the AI's security controls."

Battersby elaborated, "In order to assess this, though, it's typically a much deeper problem than just threatening the model. One must go through a rigorous scientific AI security process that adaptively tests and probes the AI security controls of a model to determine which kinds of attacks are likely to succeed for a given model, guardrail or agent."

Expert Perspectives and the Call for Data

Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, also shared his perspective with The Register. He noted that claims similar to Brin's have circulated for some time but are primarily anecdotal.

"Systematic studies show mixed results," Kang explained, referencing a paper from last year titled "Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance."

"However, as Sergey says, there are people who believe strongly in these results, although I haven't seen studies," Kang continued. "I would encourage practitioners and users of LLMs to run systematic experiments instead of relying on intuition for prompt engineering."

Read Original Post
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.