Protection Scientist Engineer, Intelligence and Investigations
OpenAI.com
220k - 425k USD/year
Office
San Francisco
Full Time
About The Team
OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe that achieving this goal requires real-world deployment, iterating based on what we learn.
The Intelligence and Investigations team supports this by identifying and investigating misuses of our products – especially new types of abuse. This enables our partner teams to develop data-backed product policies and build scaled safety mitigations. Precisely understanding abuse allows us to safely enable users to build useful things with our products.
About The Role
Protection Science Engineering is an interdisciplinary role that mixes data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within Intelligence and Investigations, you will be responsible for designing and building systems to proactively identify and enforce on abuse of OpenAI’s products. This includes ensuring we have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against our highest-risk harms. You will also respond to and investigate critical escalations, especially those that are not caught by our existing safety systems. This requires an expert understanding of our products and data, and involves working cross-functionally with product, policy, and engineering teams.
This role can be based in our San Francisco, Washington, DC, or New York office and includes participation in an on-call rotation, which will involve resolving urgent escalations outside of normal work hours. Some investigations may involve sensitive content, including sexual, violent, or otherwise disturbing material.
In This Role, You Will:
- Scope and implement abuse monitoring requirements for new product launches.
- Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks (a minimal sketch of this kind of automation follows this list).
- Prototype detection, review, and enforcement systems for major harms, and mature them into production.
- Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.
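To make the day-to-day concrete, here is a minimal sketch of the kind of monitoring-subtask automation described above: flagging accounts whose share of classifier-flagged requests exceeds a threshold. Everything in it (the event schema, thresholds, and function names) is illustrative, not a description of OpenAI's actual systems.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Event:
    """One logged request; `flagged` means a policy classifier hit it."""
    user_id: str
    flagged: bool


def flag_accounts(events: list[Event],
                  min_requests: int = 20,
                  max_flag_rate: float = 0.3) -> list[str]:
    """Return user_ids whose share of flagged requests exceeds the threshold.

    Accounts with few total requests are skipped so that one-off noise
    is not escalated as a pattern of abuse.
    """
    totals: Counter[str] = Counter()
    flags: Counter[str] = Counter()
    for e in events:
        totals[e.user_id] += 1
        if e.flagged:
            flags[e.user_id] += 1
    return [user for user, n in totals.items()
            if n >= min_requests and flags[user] / n > max_flag_rate]
```

In practice a rule like this would be one small component in a larger pipeline of detection, human review, and enforcement; the thresholds themselves are the kind of data-backed decision this role owns.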
You might thrive in this role if you:
- Have at least 4 years of experience doing technical analysis and detection, especially using SQL and Python.
- Have experience in trust and safety and/or have worked closely with policy, enforcement, and engineering teams. An investigative mindset is key.
- Have experience with basic data engineering, such as building core tables or writing production data pipelines, and with machine learning principles and execution. Basic software development skills are a plus, as this role writes production code.
- Have experience scaling and automating processes, especially with language models (see the sketch after this list).
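As a hedged illustration of the last item, here is a minimal sketch of scaling a manual review queue with a language model: a helper that pre-labels abuse reports for triage. The prompt, label set, and model choice are hypothetical; only the OpenAI Python SDK call shape is real.

```python
# Illustrative only: a tiny triage helper that asks a language model to
# pre-label abuse reports so human reviewers can prioritize. The prompt,
# model name, and labels are hypothetical, not OpenAI's actual tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["benign", "spam", "fraud", "needs_human_review"]


def triage(report_text: str) -> str:
    """Return one coarse label for an abuse report, defaulting to human review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the abuse report into exactly one of: "
                    f"{', '.join(LABELS)}. Reply with the label only."
                ),
            },
            {"role": "user", "content": report_text},
        ],
    )
    label = (resp.choices[0].message.content or "").strip().lower()
    # Fail safe: anything unexpected goes to a human rather than being dropped.
    return label if label in LABELS else "needs_human_review"
```

Defaulting to `needs_human_review` on any unexpected output is a deliberate fail-safe: this kind of automation should narrow, not replace, human review of high-risk cases.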
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
October 8, 2025