Threat AI Investigator
United States, New York, New York
Overview

The Microsoft Threat Analysis Center (MTAC) is where geopolitical insight meets technical rigor. At the core of our mission is investigation: identifying and analyzing the actors, behaviors, and tactics behind AI-powered foreign influence campaigns that target democracies, disrupt public trust, and exploit emerging technologies. Our work is grounded in methodical, evidence-based inquiry, uncovering how these operations unfold and how they evolve.

We are expanding our team and seeking technically capable investigators with a strong foundation in data analysis, automation, and AI systems. Ideal candidates are fluent in Python and SQL, experienced in building scalable analytics workflows, and skilled at distilling complex technical findings into strategic insights. A deep understanding of how generative AI systems function, and how they can be manipulated, is increasingly important, as is the ability to explore new datasets and surface patterns using lightweight modeling and statistical techniques.

MTAC offers a unique opportunity to work at the intersection of national security, emerging technology, and global information integrity. If you are motivated by impact and ready to contribute to high-stakes investigations in the digital domain, we encourage you to explore opportunities with us.

MTAC is looking for a Threat AI Investigator who will focus on identifying and mitigating AI abuses on Microsoft's platforms and beyond. The role involves tracking and investigating sophisticated actors, ranging from nation-state threat groups to influence-for-hire actors, and contributes to MTAC's mission to detect, assess, and disrupt digital threats to Microsoft, its customers, and governments worldwide. MTAC is part of the Customer Security & Trust (CST) organization within Microsoft's Corporate, External, and Legal Affairs (CELA) group.

The ideal candidate will pair geopolitical knowledge with the technical ability to build workflows that reliably surface and track these actors and their influence sets, and to conduct thorough investigations. They will also write and brief on a broader set of analytic findings, integrating open-source information with historical analysis to communicate succinctly and effectively to executives, government officials, and public audiences.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

This role is onsite in the Microsoft New York office.
Responsibilities

- Research and assess cyber and malign influence threats at tactical and strategic levels by drawing on information from social media accounts and websites, foreign policy priorities, and perspectives from open-source reporting.
- Understand the components of generative AI and how technology stacks produce AI outputs.
- Identify and triage AI abuses based on behavioral and technical indicators.
- Write threat intelligence reports for senior audiences on adversary influence actors, networks, and operations powered by artificial intelligence (AI).
- Work closely with the broader Microsoft Threat Intelligence team on its investigations of nation-state cyber, influence, and AI-first actors.
- Develop engaging presentations and brief various stakeholders under tight deadlines.
- Follow innovative, non-intrusive, law-abiding methods for detecting, diagnosing, and deterring the most advanced and prolific threats in the information environment.
- Embody our culture and values.