Responsible AI Solution Risk Assessor
Microsoft.com
97k - 206k USD/year
Office
Redmond, Washington, United States
Full Time
Are you passionate about advancing the positive impact of AI on a global scale? In this position, you will play a pivotal role in ensuring that our AI initiatives are implemented responsibly and effectively.
Our Trust and Integrity Protection (TrIP) team works with other parts of Microsoft to ensure we continue to be one of the most trusted companies in the world. We are seeking a detail-oriented and principled Responsible AI Solution Risk Assessor to be part of a team evaluating AI use cases across our organization. This role is critical in ensuring that AI solutions are developed and deployed in alignment with Microsoft’s Responsible AI principles, regulatory requirements, and ethical standards. You will work closely with the Office of Responsible AI and other governance bodies to identify, assess, and mitigate risks associated with AI technologies.
In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment where all employees can positively impact our culture every day.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
- Risk Assessment Ownership: Lead the Responsible AI risk assessment process for AI projects within your purview.
- Use Case Evaluation: Analyze proposed AI solutions for ethical, privacy, and security risks, including identifying sensitive use cases (e.g., facial recognition, biometric analysis, or legally sensitive applications).
- Escalation Management: Determine when use cases require escalation to internal review boards such as the Deployment Safety Board or other governance entities.
- Approval Coordination: Ensure all necessary approvals are obtained before development or deal sign-off, maintaining alignment with internal Responsible AI policies.
- Documentation & Compliance: Maintain thorough documentation of risk assessments, approvals, and mitigation strategies to support audit readiness and compliance.
- Stakeholder Engagement: Collaborate with product teams, legal, compliance, and engineering to ensure risk considerations are addressed early in the development lifecycle.
- Policy Integration: Translate Responsible AI policies into actionable assessment criteria and workflows.
- Continuous Improvement: Contribute to the evolution of risk assessment frameworks and tools based on emerging technologies and regulatory changes.
Qualifications
Required/Minimum Qualifications
- Bachelor's Degree AND 4+ years of experience in risk management, privacy, security, compliance, government intelligence, operations, and/or finance, OR 6+ years of experience in risk management, privacy, security, compliance, government intelligence, operations, and/or finance, OR equivalent experience.
- Working familiarity with Responsible AI frameworks (e.g., NIST AI RMF, ISO/IEC 42001, EU AI Act).
Additional or Preferred Qualifications
- Master's Degree in Risk Management, Engineering, Government Intelligence, Security, or Information Technology, or related field AND 6+ years of experience in risk management in the context of operations, engineering, information technology, business analysis, consulting, auditing, privacy, security, compliance, government intelligence, and/or finance, OR Bachelor's Degree in Risk Management, Engineering, Government Intelligence, Security, Cybersecurity, or Information Technology, or related field AND 8+ years of experience in those same areas, OR equivalent experience.
- Membership in a relevant risk domain association, such as the International Association of Privacy Professionals (IAPP), International Information System Security Certification Consortium (ISC)2, Information Systems Audit and Control Association (ISACA), Society of Corporate Compliance and Ethics (SCCE), Disaster Recovery Institute (DRI), Committee of Sponsoring Organizations of the Treadway Commission (COSO), or Institute of Internal Auditors (IIA), or a related credential such as Certified Internal Auditor (CIA) or Certified Business Continuity Professional (CBCP).
- Strong analytical skills and attention to detail.
- Excellent communication and documentation abilities.
- Experience working with cross-functional teams in a matrixed organization.
- Experience with internal governance processes such as AI review boards or safety panels.
- Knowledge of privacy-preserving technologies and bias mitigation techniques.
- Background in regulated industries (e.g., healthcare, finance, government).
- Certifications in AI ethics, compliance, or risk management.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.
Benefits/perks may vary depending on the nature of your employment with Microsoft and the country where you work.
#AIjobs
August 22, 2025