Senior Security Researcher
Redmond, Washington, United States
Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry secure digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as our own internal estate. Our culture is centered on embracing a growth mindset, inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Are you a red teamer looking to break into the AI field? Do you want to find AI failures in Microsoft’s largest AI systems, impacting millions of users? Join Microsoft’s AI Red Team, where you’ll work alongside security experts to emulate adversaries and induce trust and safety failures in Microsoft’s biggest AI systems.
We are looking for a Senior Security Researcher with cybersecurity experience to help improve AI security and help our customers expand their use of our AI systems with confidence. Our team is an interdisciplinary group of red teamers, adversarial machine learning (ML) researchers, Safety & Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft’s big bet AI systems. In this role, you will red team AI models and applications across Microsoft’s AI portfolio, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. The work is sprint based: partnering with AI Safety and Product Development teams, you will run operations that aim to find safety and security risks that inform key internal business decisions. We are open to remote work. More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
- Discover and exploit GenAI vulnerabilities end-to-end to assess the safety of systems
- Manage product group stakeholders as priority recipients and collaborators for operational sprints
- Drive clarity on communication and reporting for red teaming peers when working with product groups, other AI Safety & Security ecosystem leads, and business decision makers
- Develop methodologies and techniques to scale and accelerate AI Red Teaming
- Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems
- Research new and emerging threats to inform the organization
- Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations
- Embody our Culture and Values
Qualifications
Required/Minimum Qualifications
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)
- OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)
- OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research)
- OR equivalent experience.
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
- Penetration testing qualifications, e.g., GPEN/GXPN, GWAPT, OSCP/OSCE, CRT/CCT/CCSAS
- Participation in bug bounties/CTFs
- Experience using common penetration testing tools, e.g., Kali Linux, Burp Suite, Nmap, Nessus
- Familiarity with one or more programming languages such as Python, C#, C/C++, or PowerShell
- Prior knowledge of generative AI is not required, but basic familiarity with AI or a willingness to learn is desired
Applied Sciences IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $153,600 - $250,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until January 10, 2025.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
#MSFTSecurity #airedteam #MSECAIR #AeGIS