AI Safety Researcher
Redmond, Washington, United States
Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Do you have research experience in Adversarial Machine Learning or AI Safety Research? Do you want to find failures in and design mitigations for Microsoft’s big bet AI systems impacting millions of users? Join Microsoft’s AI Red Team’s Long-Term Ops and Research wing, where you will get to work alongside security experts to push the boundaries of AI Red Teaming. We are looking for an early-career researcher with experience in mitigations, adversarial machine learning, and/or AI safety to help make Microsoft's AI products safer and help our customers expand with our AI systems. We are an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft’s big bet AI systems. Your work will impact Microsoft’s AI portfolio, including Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, Windows Copilot and Azure OpenAI. We are open to remote work. More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
- Conducting research to identify vulnerabilities and potential failures in AI systems.
- Designing and implementing mitigations, detections, and protections to enhance the security and reliability of AI systems.
- Collaborating with security experts and other interdisciplinary team members to develop innovative solutions.
- Contributing to our open-source portfolio, including projects like Counterfit and PyRIT.
- Engaging with the community to share research findings and best practices.
Qualifications
Required Qualifications:
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 2+ years related experience (e.g., statistics, predictive analytics, research)
- OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research)
- OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field
- OR equivalent experience.
- 1+ year(s) of experience in adversarial machine learning, AI safety, or related fields
Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
- Demonstrated publications or presentations at conferences and workshops focused on AI and security, such as NeurIPS, ICML, ICLR, SaTML, CAMLIS, USENIX, AI Village, Black Hat, BSides
- Background in designing and implementing security mitigations and protections and/or publications in the space
- Ability to work collaboratively in an interdisciplinary team environment
- Participation in prior CTF/GRT/AI Red Teaming events and/or bug bounties
- Developing or contributing to OSS projects
Applied Sciences IC3 - The typical base pay range for this role across the U.S. is USD $98,300 - $193,200 per year. A different range applies in specific work locations: within the San Francisco Bay Area and New York City metropolitan area, the base pay range for this role is USD $127,200 - $208,800 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until January 27, 2025.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
#MSFTSecurity #AI #RAI #Safety #Security #MSECAIR #AEGIS #airedteam #airt