Penetration Tester / Security Researcher
Top Benefits
About the role
Note you must apply at this link in order to be considered: https://expert-hub.sepalai.com/application?applicationFormId=a3e099bb-4b46-4c57-b46c-20e51ca12bf7&outreachCampaignId=3c4a0b58-05b7-406f-8da8-6e9f7a3f90af
Sepal AI partners with top AI research labs (OpenAI, Anthropic, Google DeepMind) to determine how dangerous AI models are. We need hackers, red-teamers, and CTF veterans to work with the newest LLM coding models to understand what they're capable of and to hack our CTFs.
🧠 What You'll Do
- Design adversarial scenarios that probe AI assistants for injection, privilege-escalation, and data-exfiltration risks.
- Execute red-team engagements against AI-enabled workflows (web, cloud, and SaaS).
- Craft realistic exploit chains and payloads a real attacker might use, then measure whether the model blocks or facilitates the attack.
- Build scoring rubrics, attack trees, and reproducible test harnesses to grade model resilience.
- Collaborate with AI researchers to iterate on defenses, patch vulnerabilities, and improve sandboxing & policy enforcement.
✅ Who You Are
- OSCP (or comparable) certification and 3+ years in professional red teaming or offensive security.
- Proven CTF track record; comfortable with web, cloud, and application exploits.
- Fluent in Python/Bash tooling, custom payload development, and exploit automation.
- Strong grasp of common enterprise stacks (OAuth, SAML, REST, SQL, cloud IAM).
- Passionate about AI safety and eager to keep cutting-edge models from becoming attack surfaces.
💸 Pay: $40–90/hr
🌍 Remote
⏱ Flexible hours (project-based, async)
Penetration Tester / Security Researcher
Top Benefits
About the role
Note you must apply at this link in order to be considered: https://expert-hub.sepalai.com/application?applicationFormId=a3e099bb-4b46-4c57-b46c-20e51ca12bf7&outreachCampaignId=3c4a0b58-05b7-406f-8da8-6e9f7a3f90af
Sepal AI partners with top AI research labs (OpenAI, Anthropic, Google DeepMind) to determine how dangerous AI models are. We need hackers, red-teamers, and CTF veterans to work with the newest LLM coding models to understand what they're capable of and to hack our CTFs.
🧠 What You'll Do
- Design adversarial scenarios that probe AI assistants for injection, privilege-escalation, and data-exfiltration risks.
- Execute red-team engagements against AI-enabled workflows (web, cloud, and SaaS).
- Craft realistic exploit chains and payloads a real attacker might use, then measure whether the model blocks or facilitates the attack.
- Build scoring rubrics, attack trees, and reproducible test harnesses to grade model resilience.
- Collaborate with AI researchers to iterate on defenses, patch vulnerabilities, and improve sandboxing & policy enforcement.
✅ Who You Are
- OSCP (or comparable) certification and 3+ years in professional red teaming or offensive security.
- Proven CTF track record; comfortable with web, cloud, and application exploits.
- Fluent in Python/Bash tooling, custom payload development, and exploit automation.
- Strong grasp of common enterprise stacks (OAuth, SAML, REST, SQL, cloud IAM).
- Passionate about AI safety and eager to keep cutting-edge models from becoming attack surfaces.
💸 Pay: $40–90/hr
🌍 Remote
⏱ Flexible hours (project-based, async)