Engineering Analyst, AI Safety

Google

126k - 181k USD/year

Office

Kirkland, WA, USA

Full Time

Minimum Qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 2 years of experience managing projects and defining project scope, goals, and deliverables.

Preferred Qualifications:

  • Master's degree or PhD in a quantitative discipline (e.g., Computer Science, Statistics, Mathematics, Physics, Operations Research).
  • 3 years of experience in a large-scale data analysis or data science setting and in abuse and fraud disciplines.
  • Experience with programming languages (e.g., Python, R, Julia), database languages (e.g., SQL), and scripting languages (e.g., C/C++, Python, Java).
  • Experience with prompt engineering and fine-tuning LLMs.
  • Experience applying machine learning techniques to large datasets.
  • Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.

About The Job

The AI Safety Protections team within Trust and Safety develops and implements AI/Large Language Model (LLM) powered solutions to ensure the safety of generative AI foundational models. This includes Gemini, Artificial General Intelligence (AGI) agents, and robotics developed in partnership with Google DeepMind, as well as downstream products such as the Vertex AI enterprise offering and on-device applications.

As an Engineering Analyst, you will mitigate risks associated with generative AI and address real-world safety issues with LLM/AI technology (e.g., imminent threats, child safety) as our community contribution. As a member of our team, you will have the opportunity to apply the latest advancements in AI/LLMs, work with teams developing AI technologies, and help protect the world from real-world harms.

At Google, we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user-trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $126,000-$181,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Develop safety solutions for AI products across Google by leveraging advanced Machine Learning and AI techniques.
  • Apply statistical and data science methods to thoroughly examine Google's protection measures, uncover potential shortcomings, and develop actionable insights for continuous security enhancement.
  • Drive business outcomes by crafting data stories for a variety of stakeholders, including executive leadership.
  • Develop automated data pipelines and self-service dashboards to provide timely insights at scale.
  • Work with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

Posted: October 2, 2025
