Research Scientist/Engineer (Evaluations)

Apollo Research

London, UK
Posted on Feb 14, 2026
Application deadline: We are actively conducting interviews and aim to fill this role as soon as we find a suitable candidate.
ABOUT THE OPPORTUNITY
We develop and run evaluations that help assess the risks posed by scheming AIs. You will get to work with frontier labs such as OpenAI, Anthropic, and Google DeepMind, and be among the first to interact with new models. The ideal candidate loves rigorously testing frontier AI models and enjoys building and automating efficient pipelines.
YOU WILL HAVE THE OPPORTUNITY TO
- Run pre-deployment evaluation campaigns on the most capable AI systems in the world. We partner with multiple labs, giving you access to a breadth of models that no single AI lab could offer, and you'll be among the first people to interact with new models.
- Deep dive into AI cognition. Scan through thousands of model transcripts to surface behavioral patterns that no one has ever observed before. These patterns are often deeply surprising and fascinating to study, e.g. the non-standard language and the reward-seeking reasoning described in our anti-scheming paper.
- Build new evaluations for frontier risks, from designing novel test environments to scaling them across hundreds of distinct scenarios.
- Work directly with frontier AI developers. Share your findings, engage with their feedback, and see your evaluations directly inform deployment decisions for the most capable AI systems in the world.
- Automate and improve the evaluation pipeline. We already use automation across building, running, and analyzing evals. Rapid progress in agent capabilities opens up radically new possibilities, and you'll have the freedom to rethink and reshape the pipeline as they emerge.
KEY REQUIREMENTS
- Software engineering skills: Our entire stack uses Python. We're looking for candidates with strong software engineering experience. Ideally, you have experience shipping and maintaining production Python code, and know how to factor messy problems into clean abstractions that others can use and extend.
- Process optimisation: You are always looking to improve workflows. Pre-deployment evaluations are very fast-paced, so ideally you love shaving friction off your workflows wherever possible.
- Data analysis & pattern recognition: You can extract signal from large, messy datasets. You're comfortable with quantitative analysis and know when qualitative assessment is more appropriate. You can identify anomalies and unexpected model behaviors.
- Writing and communication: You succinctly convey qualitative and quantitative findings to both technical and non-technical audiences.
- AI power-user: You are curious about the capabilities and propensities of frontier AI models. You have experience using different models, know which ones to use for which tasks and when not to use AI, and you always experiment with new AI workflows.
- (Bonus) We use Inspect as our primary evals framework and value experience with it.
We want to emphasize that people who feel they don't fulfill all of these requirements but think they would nonetheless be a good fit for the position are strongly encouraged to apply. We believe that excellent candidates can come from a variety of backgrounds and are excited to give you opportunities to shine. We don't require a formal background or industry experience and welcome self-taught candidates.

BENEFITS

  • This role offers a market-competitive salary, equity, and benefits.
  • Salary: 100k - 200k GBP (~135k - 270k USD)
  • Flexible work hours and schedule
  • Unlimited vacation
  • Unlimited sick leave
  • Lunch, dinner, and snacks are provided for all employees on workdays
  • Paid work trips, including staff retreats, business trips, and relevant conferences
  • A yearly $1,000 (USD) professional development budget

LOGISTICS

  • Time Allocation: Full-time
  • Location: Our office is in London, in a building shared with the London Initiative for Safe AI (LISA).
  • This is an in-person role. In rare situations, we may consider partially remote arrangements on a case-by-case basis.
  • Work Visas: We can sponsor UK visas
ABOUT APOLLO RESEARCH
The rapid rise in AI capabilities offers tremendous opportunities but also presents significant risks. At Apollo Research, we're primarily concerned with risks from Loss of Control, i.e. risks coming from the model itself rather than from, e.g., humans misusing the AI. We're particularly concerned with deceptive alignment / scheming, a phenomenon where a model appears to be aligned but is, in fact, misaligned and capable of evading human oversight.
We work on the detection of scheming (e.g., building evaluations and novel evaluation techniques), the science of scheming (e.g., model organisms and the study of scaling trends), and scheming mitigations (e.g., control). We work closely with multiple frontier AI companies, e.g. to test their models before deployment and to collaborate on fundamental research.
At Apollo, we aim for a culture that emphasizes truth-seeking, being goal-oriented, giving and receiving constructive feedback, and being friendly and helpful. If you're interested in what it's like to work at Apollo, you can find more information here.
ABOUT THE TEAM
The current evals team consists of Jérémy Scheurer, Alex Meinke, Bronson Schoen, Felix Hofstätter, Axel Højmark, Teun van der Weij, Alex Lloyd, and Mia Hopman. Alex Meinke coordinates the research agenda with guidance from Marius Hobbhahn, though team members lead individual projects. You will mostly work with the evals team as well as our team of software engineers, but you will likely sometimes interact with the governance team to translate technical knowledge into concrete recommendations. You can find our full team here.
Equality Statement: Apollo Research is an Equal Opportunity Employer. We value diversity and are committed to providing equal opportunities to all, regardless of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, or sexual orientation.
How to apply: Please complete the application form with your CV. A cover letter is optional. Please also feel free to share links to relevant work samples.
About the interview process: Our multi-stage process includes a screening interview, a take-home test (approx. 2.5 hours), 3 technical interviews, and a final interview with Marius (CEO). The technical interviews will be closely related to tasks the candidate would do on the job. There are no LeetCode-style general coding interviews. If you want to prepare for the interviews, we suggest working on hands-on LLM evals projects (e.g. as suggested in our starter guide), such as building LM agent evaluations in Inspect.
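If you'd like a concrete starting point for that preparation, below is a minimal sketch of an Inspect evaluation, following the publicly documented quickstart pattern of the inspect_ai package. The task name and the toy question are invented for illustration and are not part of Apollo's evaluation suite.

```python
# pip install inspect-ai
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate


@task
def toy_probe():
    # A single-sample toy eval: ask the model a question and check
    # whether the target string appears anywhere in its answer.
    return Task(
        dataset=[
            Sample(
                input="What is 2 + 2? Reply with just the number.",
                target="4",
            )
        ],
        solver=generate(),   # one model turn, no tools
        scorer=includes(),   # pass if the target substring is in the output
    )
```

You could run this with, e.g., `inspect eval toy_probe.py --model openai/gpt-4o` and browse the resulting transcripts with `inspect view`. Agent evaluations of the kind mentioned above extend this pattern with tool use and sandboxed environments; consult the Inspect documentation for the current APIs.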
Your Privacy and Fairness in Our Recruitment Process: We are committed to protecting your data, ensuring fairness, and adhering to workplace fairness principles in our recruitment process. To enhance hiring efficiency, we use AI-powered tools to assist with tasks such as resume screening. These tools are designed and deployed in compliance with internationally recognized AI governance frameworks. Your personal data is handled securely and transparently. We adopt a human-centred approach: all resumes are screened by a human and final hiring decisions are made by our team. If you have questions about how your data is processed or wish to report concerns about fairness, please contact us at info@apolloresearch.ai.