An 8-week entry-level research programme helping talented Oxford students launch high-impact research careers.

Participants work in small teams to explore a research question in one of our key focus areas: AI governance, technical AI safety, policy reform, biosecurity, effective giving, or global health. No prior experience is required: with guidance from experienced mentors, students develop their research skills and contribute to work that addresses pressing global challenges. A £2,000 prize is awarded to the best overall project at the end of the programme.

Apply by 28th January 2026

Our mentors bring experience from organisations at the cutting edge of impactful research.

Focus Areas

Biosecurity

The world remains vulnerable to future pandemics, made increasingly likely by advances in biotechnology and AI. To protect against these risks, researchers and policymakers must collaborate to develop robust systems for rapid deployment of vaccines, treatments, and other interventions. Biosecurity efforts help ensure humanity is better prepared to prevent, contain, and recover from catastrophic biological threats.

Our Biosecurity Project
Learn more

AI Governance

Tackling the gravest risks from AI demands sound decision-making and policy at both the corporate and governmental levels. AI companies must responsibly manage powerful system development, while governments need effective regulation. By addressing these challenges, AI governance ensures that AI systems are developed and deployed in ways that mitigate risks, promote safety, and align with humanity's broader values.

Our AI Governance Projects
Learn more

Policy Reform

Policy reform focuses on improving the rules, institutions, and incentives that shape large-scale social outcomes. Well-designed policies can affect millions of people at low marginal cost by changing how governments tax, spend, regulate, and invest. Compared to direct service delivery, policy reform often has longer causal chains and higher uncertainty, but can generate outsized and persistent benefits when tractable opportunities arise — especially where there is strong empirical evidence, clear legal pathways, and identifiable decision-makers. In effective altruism, policy reform is most compelling when it targets neglected but high-leverage bottlenecks, produces decision-relevant evidence, and directly informs real policy choices rather than abstract advocacy.

Learn more
Our Policy Reform Project

Effective Giving

Effective giving is the practice of using evidence and careful reasoning to direct donations toward the charities and interventions that do the most good per pound or dollar, rather than giving based on emotion, proximity, or tradition. It emphasises comparing causes and organisations by their cost-effectiveness, transparency, and proven impact, often focusing on areas like global health, extreme poverty, animal welfare, or existential risk where resources can achieve outsized benefits. In practice, effective giving encourages donors to treat donations as a tool for maximising positive impact, to remain open to changing their minds as new evidence emerges, and to commit to giving meaningfully over time rather than sporadically.

Our Effective Giving Project
Learn more

Technical AI Safety

Technical AI safety is the field focused on developing and testing concrete technical methods to ensure advanced AI systems behave as intended, remain aligned with human goals, and do not cause catastrophic harm, even as they become more capable and autonomous. It studies failure modes like deception, goal misgeneralisation, reward hacking, and unfaithful reasoning, and builds tools such as interpretability techniques, training objectives, evaluations, and monitoring systems to detect or prevent these problems. Unlike AI ethics or policy, technical AI safety works directly with models, algorithms, and training dynamics, and unlike general machine learning research, it is explicitly motivated by reducing risks from powerful AI systems rather than improving performance or capabilities.

Our Technical AI Safety Project
Learn more

Global Health

Global health is concerned with improving health outcomes and health equity for populations worldwide, particularly in contexts where disease risk, vulnerability, and access to care are shaped by transnational, environmental, economic, and structural factors. It focuses on health issues that cross borders, disproportionately affect low- and middle-income regions, and require coordinated international responses, integrating disciplines such as epidemiology, health systems research, environmental science, and policy. Central to global health is not only understanding disease dynamics, but translating that understanding into practical interventions, including prevention, surveillance, preparedness, and equitable access to technologies such as vaccines, diagnostics, and treatments, in settings where resources and infrastructure are constrained.

Our Global Health Project
Learn more

Programme structure

Application Deadline

28th January 2026, 23:59

Decisions Announced

30th January

Kick-Off Day

1st February

Research Phase

2nd February to 22nd March

Submission Deadline

23rd March, 23:59

Awards Ceremony

TBC, during Trinity Term

FAQs

  • The programme is primarily designed for Oxford University students, but we also welcome applications from non-students (such as recent graduates or university offer-holders) and from students at other top universities.

    If you're unsure about your eligibility, we recommend getting in touch with us before applying.

  • While fellows are expected to attend the programme's kick-off weekend in person, the rest of the programme can be completed remotely. We do, however, encourage in-person participation where possible.

  • The programme is a project of Effective Altruism Oxford, which supports Oxford students in pursuing impactful careers and is funded by the Centre for Effective Altruism.

Become a mentor

We rely on great mentors to supervise our projects. If you have relevant research experience and are interested in mentoring in a future programme, we’d love to hear from you.