TL;DR

Data Scientist (Integrity Measurement): Develop and improve measurement and metrics for detecting and responding to severe usage harms on AI platforms, with an emphasis on actor- and network-level harm analysis and AI-driven prevalence measurement. Focus on designing robust statistical methods, optimizing LLM prompts for measurement, and collaborating cross-functionally to strengthen safety systems.

Location: San Francisco or New York City office; onsite role with possible urgent escalations outside normal hours

Salary: $293,000–$385,000 + equity

Company

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity by pushing AI capabilities and deploying them safely.

What you will do

  • Own measurement and quantitative analysis for severe, actor- and network-based usage harm verticals.
  • Develop and implement AI-first methods for prevalence measurement and safety metrics, including off-platform data.
  • Build metrics for goal-setting and A/B testing when top-line metrics are unsuitable.
  • Maintain dashboards and reporting for harm verticals.
  • Conduct analyses to inform improvements in review, detection, enforcement, and roadmaps.
  • Optimize LLM prompts for measurement purposes and collaborate with safety teams on policies.

Requirements

  • Must be located in San Francisco or New York City for onsite work.
  • Senior data scientist with trust and safety experience, including setting measurement direction.
  • Strong statistics skills, especially sampling methods and prevalence estimation.
  • Experience with severe and sensitive harm areas like child safety or violence.
  • Proficiency in data programming languages such as Python, R, and SQL.
  • Excellent communication and cross-functional collaboration skills.
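As an illustrative sketch of the sampling-based prevalence estimation named above (not part of the role description): estimate harm prevalence from a simple random sample of reviewed items, with a Wilson score interval, which behaves better than the normal approximation when harms are rare. The function name and the sample figures are hypothetical.

```python
import math

def prevalence_estimate(violations: int, sample_size: int, z: float = 1.96):
    """Point estimate and Wilson score interval for harm prevalence,
    assuming a simple random sample of reviewed items (hypothetical helper)."""
    p = violations / sample_size
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z**2 / (4 * sample_size**2)
    )
    # Clamp to [0, 1] since a prevalence rate cannot leave that range
    return p, (max(0.0, center - margin), min(1.0, center + margin))

# Example (made-up numbers): 12 violating items in a sample of 5,000
point, (low, high) = prevalence_estimate(12, 5000)
```

The Wilson interval is a common choice here because severe harms are typically low-prevalence, where the naive Wald interval can collapse to zero width or go negative.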

Nice to have

  • Experience with AI harms or leveraging AI for measurement.

Culture & Benefits

  • Equal opportunity employer committed to diversity and inclusion.
  • Reasonable accommodations for applicants with disabilities.
  • Competitive compensation, including equity.
  • Work in a mission-driven AI research and deployment environment.