Engineering Analyst, Trust and Safety Novel AI Testing

Google.com

174k - 258k USD/year

Office

Mountain View, CA, USA

Full Time

Minimum Qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 7 years of experience in managing projects and defining project scope, goals, and deliverables.
  • 7 years of experience in data analysis or data science, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 5 years of experience in data analysis using languages such as SQL or Python.

Preferred Qualifications:

  • Master's or PhD in a relevant quantitative or engineering field.
  • 5 years of experience working in a Trust and Safety Operations, data analytics, cybersecurity, or other relevant environment.
  • Experience working with Large Language Models, LLM Operations, prompt engineering, pre-training, and fine-tuning.
  • Experience in designing and conducting experiments or quantitative research, in a technology or AI context.
  • Experience in AI systems, machine learning, and their potential risks.
  • Strong technical competency with a data-driven investigative approach to solving complex testing problems, including demonstrable proficiency in data manipulation, analysis, and automation using languages like Python and SQL.

About The Job

As an Engineering Analyst within the Trust and Safety Responsible AI Testing Team, you will be part of a global team that drives One Google solutions for industry-leading responsible AI applications.

Your role will focus on a broad spectrum of safety and neutrality risks, and will prioritize close partnerships with colleagues at Google DeepMind (GDM) to design and develop solutions that further Google’s ability to prevent abuse of GDM base models.

You will blend deep domain expertise in safety policy and GenAI testing processes with strong technical acumen in experimental design, data science, and engineering to drive creative, ambitious solutions to testing challenges, with the ultimate impact of improving user safety in GenAI products. You will demonstrate an ability to thrive in a fluid, dynamic research and product development environment.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $174,000-$258,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Co-located with Google DeepMind, drive structured and unstructured testing of novel model modalities and capabilities. Lead platform and tooling development to bridge constraints and scale adversarial testing. Design engineering solutions, prompt-generation strategies, and evaluation tooling, and leverage LLMs for analysis.
  • Define testing and safety standards, and work with cross-functional colleagues in policy and engineering to ensure they are met. Perform analyses and drive insights to develop model-level and product-level safety mitigations.
  • Lead and influence cross-functional teams to implement safety initiatives. Act as an advisor to executive leadership on complex safety issues.
  • Represent Google's AI safety efforts in external forums and collaborations, contributing to industry-wide best practices. Mentor analysts, fostering a culture of excellence and acting as a subject matter expert on adversarial techniques.
  • Work with graphic, controversial, or upsetting content.
Posted: October 6, 2025