Research Director
AI Security Institute
About the AI Security Institute
The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, to carry out cyber-attacks, and to enable crimes such as fraud, as well as the possibility of loss of control.
The risks from AI are not science fiction; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Research Director
Shape how the UK responds to advanced AI
Join the team most deeply embedded in the UK's response to AI risks. You'll lead critical research that directly impacts development and policy decisions – working closely with frontier developers, national security partners, and allied governments.
This role is for experienced technical leaders who want their research to drive real-world action, not just understanding. You'll direct one of three technical teams (Safeguards, Cyber & Autonomous Systems, or Control) or propose and build new ones. You'll set your team's vision, ensure high output and rigour, and keep it responsive to emerging developments.
As a senior technical leader, you'll also work with Chief Scientist Geoffrey Irving and CTO/Prime Minister's Advisor Jade Leung to shape org-wide strategy, set the bar for research excellence, and develop researchers across our 100-person technical team.
The timing is critical: we've scaled significantly, our government influence is growing, and this is the moment governments worldwide are deciding how much to prioritise AI security. With access to global decision-makers and AISI’s well-resourced, mission-driven team, you'll be positioned to drive some of the most consequential research of this decade.
What you’ll do:
- Lead a 10-30 person team: set the vision, roadmaps, standards, and talent strategy. Propose a new research area you could lead, or take on one of our current areas (Safeguards, Cyber & Autonomous Systems, or Control).
- Mentor and scale: develop our strongest mid-level and junior researchers; build momentum for more ambitious goals.
- Shape the org-wide technical agenda: help decide which research questions matter most and adapt plans to industry and governance developments.
- Engage externally at senior levels: work with frontier labs, national security partners, allied governments, and the wider research community.
- Raise our quality bar: in methodology, scientific writing, technical hiring, and culture.
What you’ll need
- A proven track record leading impactful research teams, whether in industry (Senior, Staff, or Principal roles) or as an academic lab director.
- A clear vision of what research most advances AI safety and governance and the ability to turn that into teams, roadmaps, and results.
- Focus on impact: the ability to recognise and adapt to frontier research and policy developments.
- Extensive modern ML research experience. Typically 5+ years in areas like evaluations, safeguards, alignment, interpretability, or LLM capabilities.
Logistics requirements
- Spending at least 3 days per week in our London office. We support hybrid work.
- Joining for at least 12 months. We may be able to support secondments.
Salary & benefits
AISI’s senior technical staff are generally paid £105-145k, but we will consider higher salaries for the Research Director role.
Your talent partner will work with you as you move through our assessment process to explain our internal benchmarking process.
This role sits outside of the Civil Service’s DDaT pay framework because it requires in-depth technical expertise in frontier AI safety, machine learning, and empirical research.
Additional Information
Internal Fraud Database
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. Civil servants found to have committed internal fraud are banned from further employment in the Civil Service for 5 years. The Cabinet Office processes this data and discloses a limited dataset back to DLUHC, a participating government organisation, which carries out pre-employment checks to detect known fraudsters attempting to reapply for roles in the Civil Service. In this way the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.
Security
Successful candidates must undergo a criminal record check and get baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.
Nationality requirements
We may be able to offer roles to applicants of any nationality or background. We therefore encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).