TL;DR
Lead AI Scientist, Safety: Lead the safety team in developing model- and system-level safeguards for frontier AI models, with an emphasis on post-training methods, adversarial robustness, and agentic risk mitigation. The role centers on building evaluation tooling for automated red-teaming, managing research execution, and ensuring the safety of enterprise-grade AI solutions.
Location: Paris (Hybrid)
Company
Mistral AI is a pioneer in high-performance, open-source AI models and solutions designed to democratize intelligence for enterprise and personal use.
What you will do
- Manage the day-to-day execution and technical prioritization of the safety research team.
- Develop post-training methods and system-level safeguards for adversarial scenarios.
- Build and deploy evaluation tooling for red-teaming and monitoring model safety.
- Conduct research on agentic risks, alignment, and adversarial robustness.
- Coach and mentor team members to foster professional and technical growth.
- Contribute directly to the technical stack and research initiatives as an individual contributor.
Requirements
- 8+ years of experience in AI/ML research or engineering.
- Proven leadership experience in building and scaling high-performance AI teams.
- Deep technical mastery of machine learning, deep learning, and AI systems.
- Highly proficient in software engineering using Python.
- Hands-on experience with AI frameworks such as PyTorch or JAX, and with distributed computing tools such as Ray or Kubernetes.
Nice to have
- Hands-on experience training large transformer models in a distributed fashion.
- Prior experience specifically in AI safety or alignment.
- Interdisciplinary expertise in ethics, policy, or governance.
- Strong publication record in relevant scientific domains.
Culture & Benefits
- Competitive cash salary and equity package.
- Full health insurance coverage for employees and their families.
- Generous parental leave policy.
- Daily meal vouchers and fitness access programs.
- Visa sponsorship support.
- Access to professional BetterUp coaching.
