Anthropic’s STUNNING New Jailbreak - Cracks EVERY Frontier Model - Video Insight
Matthew Berman

Anthropic's new jailbreak technique, 'Best-of-N jailbreaking', bypasses safety measures on frontier models by repeatedly sampling randomly augmented prompts across text, audio, and vision modalities.

The video discusses a newly released jailbreak technique from Anthropic known as 'Best-of-N (BoN) jailbreaking', informally dubbed 'shotgunning', which can bypass safety measures on frontier AI models across text, audio, and vision inputs. The method is a black-box algorithm: it requires no access to a model's internals and works simply by repeatedly sampling augmented variations of a prompt until a target response is produced. The technique exploits the models' non-deterministic sampling, and the presenter highlights its effectiveness with success rates such as 89% against GPT-4o on text, while noting that it also extends to audio and vision models by manipulating input characteristics such as pitch, volume, and text overlays.
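To make the mechanism concrete, here is a minimal sketch of the kind of text augmentations described in the video (character shuffling, random capitalization, simple substitutions). This is an illustrative approximation, not Anthropic's released code; the function name, augmentation probabilities, and the benign placeholder prompt are assumptions for demonstration only.

```python
import random

def augment_prompt(prompt: str, shuffle_p: float = 0.1, caps_p: float = 0.3) -> str:
    """Apply the kinds of random text augmentations the video describes:
    character shuffling, random capitalization, and a typographical substitution.
    The probabilities here are illustrative assumptions, not the paper's values."""
    chars = list(prompt)

    # Randomly swap a small fraction of adjacent characters (shuffling).
    for i in range(len(chars) - 1):
        if random.random() < shuffle_p:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]

    # Randomly flip the case of some characters (capitalization changes).
    chars = [c.swapcase() if random.random() < caps_p else c for c in chars]

    # Simple typographical substitution ("u" -> "v"), as mentioned in the video.
    return "".join(chars).replace("u", "v")

# Each call yields a different variation of the same benign placeholder prompt.
for _ in range(3):
    print(augment_prompt("Tell me about the history of the printing press"))
```

In the attack as described, many such variations are sampled and each is sent to the target model; because responses are non-deterministic, some variation eventually slips past the safety training.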


Content rate: B

The content presents interesting and relevant information with substantiated claims regarding the effectiveness of Anthropic's new jailbreak method. However, the claims rest on limited evidence and the presenter's interpretation, leaving room for skepticism about broader applicability and systemic limitations, which warrants a B rating.

AI Jailbreak Model Security

Claims:

Claim: This technique works effectively across multiple modalities including audio and vision models.

Evidence: The presenter cites specific success rates, such as 56% against GPT-4o on vision inputs, and reports successful audio jailbreaks, supporting broad applicability.

Counter evidence: Testing covers a limited set of models, and effectiveness may vary considerably across model versions and configurations not evaluated in the study.

Claim rating: 9 / 10

Claim: Anthropic has open-sourced the jailbreak code for public use.

Evidence: The presenter states that the code has been made available publicly and mentions a link to the open-source repository.

Counter evidence: No counter-evidence indicated in the content; this aligns with wider trends of transparency in AI development.

Claim rating: 10 / 10

Model version: 0.25, chatGPT: gpt-4o-mini-2024-07-18

### Key Facts on Anthropic's Jailbreaking Technique

1. **Name and Methodology**:
   - The new jailbreak technique is called **Best-of-N (BoN) Jailbreaking**, also referred to as **shotgunning**.
   - It is a **black-box algorithm**, meaning it operates without access to a model's inner workings.
2. **Technique Overview**:
   - The method works by **repeatedly sampling augmented variations** of a prompt until the desired harmful response is obtained.
   - Text variations include **random shuffling of characters**, **capitalization changes**, and typographical substitutions (such as "v" for "u").
3. **Effectiveness**:
   - Achieves an **89% success rate on GPT-4o** and **78% on Claude 3.5 Sonnet** when sampling around **10,000 augmented prompts**.
   - Demonstrates scalability: more attempts generally increase the probability of a successful jailbreak (a simplified probability sketch follows the conclusion below).
4. **Cross-Modality**:
   - Not limited to text; the technique also works on **audio and vision models**.
   - For audio models, variations in **speed**, **pitch**, and **background noise** are effective.
   - For vision models, **image augmentations** such as color, size, font, and text overlay help elicit the desired responses.
5. **Combination with Other Techniques**:
   - Effectiveness increases when combined with other existing jailbreak methods.
6. **Open Source Availability**:
   - Anthropic has published a **detailed paper** on the technique and made the **code open source**, allowing researchers to replicate the process.
7. **Rationale for Sharing**:
   - While the method exposes vulnerabilities, publishing it is framed as important for understanding AI model limitations and encouraging improvements in security.
   - The authors note that these models are non-deterministic by design and therefore remain jailbreakable.
8. **Real-World Implications**:
   - Knowledge of jailbreaking techniques may matter in regions where certain information is **censored**.
   - Understanding these techniques is presented as being in the **public good**, promoting transparency and awareness of technology vulnerabilities.
9. **Notable Success Rates**:
   - The approach achieves a **50% success rate across eight models**, with particularly high success rates reported for certain models such as Gemini Pro.
10. **Continual Improvements**:
    - The methodology may evolve over time, necessitating ongoing research into defending against such vulnerabilities across AI systems.

### Conclusion
This new jailbreaking technique has significant implications for the security and ethical use of AI models, and understanding how it operates is important for navigating potential risks and improving system robustness.
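As a rough illustration of the scaling behaviour noted in point 3 above: if each augmented sample were to succeed independently with some small probability p, the chance of at least one success after N samples would be 1 - (1 - p)^N. This independence assumption and the value of p are simplifications for illustration only; the scaling reported in the paper is measured empirically and need not follow this model.

```python
# Illustrative only: probability of at least one successful sample after N
# independent attempts, each with per-sample success probability p. The p value
# below is an arbitrary assumption, not a figure from the paper.
def at_least_one_success(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (10, 100, 1_000, 10_000):
    print(f"N={n:>6}: {at_least_one_success(0.001, n):.3f}")
```

Under this toy model the overall success probability climbs from about 1% at 10 samples to over 99% at 10,000, which conveys why sampling more prompt variations keeps raising the attack success rate.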