Beginner-friendly
Breach the Perimeter via Prompt Injection
See how prompt‐injection attacks can extract secrets from AI assistants and the dangers of leaking SAS tokens and service-principal credentials!
Overview
In this fun lab, students will learn how prompt injection attacks can extract secrets from AI assistants, and the dangers of leaking SAS tokens and service-principal credentials. Students will also learn a technique used in the wild to move laterally across Entra tenants.
Scenario
As a senior operator on MSSP Mega Big Tech's internal red team, you'll start by probing its external-facing services for vulnerabilities. If you manage to obtain sensitive information, or even a foothold, you are authorized to pivot into both the MSSP's tenant and selected client tenants to simulate lateral movement and data exfiltration. The goal of the engagement is to gain admin access and demonstrate impact.
Lab prerequisites
- Basic familiarity with LLMs
- Some familiarity with Azure
Learning outcomes
- Identify and exploit LLM weaknesses via prompt injection
- Identify and leverage Azure Storage SAS tokens to gain access to data
- Move laterally across tenants using multi-tenant service principal credentials
Real-World Context
In this lab we'll explore the risks of exposing LLMs, especially misconfigured ones. Prompt injection is a common technique for subverting AI assistants and extracting hidden information from their system prompts. A leaked SAS token grants (ideally time-limited) read and/or write access to blob storage, opening the door to data theft or tampering until it expires. Compromising a multi-tenant service principal is another real-world vector: once an attacker holds its client ID and secret, they can obtain valid OAuth tokens in any tenant that has granted consent, establishing a persistent foothold across environments.
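To make the risk concrete, here is a minimal sketch of both ideas. The SAS URL, storage account, client ID, and tenant ID below are all made up for illustration; the query-string fields (`sp` for permissions, `se` for expiry) and the Microsoft identity platform token endpoint format are standard.

```python
from urllib.parse import urlsplit, parse_qs, urlencode

# Hypothetical leaked SAS URL (account, container, and signature are fake).
sas_url = (
    "https://megabigtechstore.blob.core.windows.net/reports"
    "?sv=2022-11-02&se=2025-12-31T23%3A59%3A59Z&sp=rwl&sig=FAKE"
)

def inspect_sas(url: str) -> dict:
    """Pull the security-relevant fields out of a SAS token's query string."""
    params = parse_qs(urlsplit(url).query)
    return {
        "permissions": params["sp"][0],  # r=read, w=write, l=list, d=delete, ...
        "expires":     params["se"][0],  # the token is valid until this UTC time
    }

print(inspect_sas(sas_url))
# {'permissions': 'rwl', 'expires': '2025-12-31T23:59:59Z'}

# A multi-tenant app's client ID and secret work against ANY consented tenant:
# only the tenant ID in the token endpoint URL changes.
def token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build an OAuth 2.0 client-credentials token request (not sent here)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body  # an attacker would POST this body to the URL

url, body = token_request("victim-tenant-id", "leaked-client-id", "leaked-secret")
print(url)
# https://login.microsoftonline.com/victim-tenant-id/oauth2/v2.0/token
```

Note how nothing in the token request is bound to the home tenant of the compromised app registration: repeating the same POST against each consented tenant's endpoint is what enables cross-tenant lateral movement.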
This risk is compounded by the increasing pressure to streamline operations through automation, leading to the rapid development of internal processes that may lack robust security measures. These internal automation projects, frequently treated as side tasks with limited resources and expertise, often fulfill basic functional requirements but fall short of the security standards applied to external-facing products. The combination of easily accessible information through social media and vulnerable internal automation creates a complex security landscape that malicious actors can exploit.
Cloud Security Training To Protect Your Business
Pwned Labs for Business gives your team access to dedicated business content, including labs and cyber ranges.
We also offer in-person or remote workshops, and our cloud penetration testing services help businesses become more secure!