Full-Time

Program Associate/Senior Program Associate

Abundance and Growth, Generalist Track

Posted on 6/27/2025

Open Philanthropy

Philanthropic funder directing high-impact, transparent grants

Compensation Overview

$126.2k - $172.4k/yr

Company Does Not Provide H1B Sponsorship

Washington, DC, USA

In Person

Strong preference for candidates based in Washington, D.C.

Category
Business & Strategy
Requirements
  • Bring 2-3 years of experience in relevant fields for the Program Associate position
  • Bring 5 years of experience in relevant fields for the Senior Program Associate position
  • Have academic experience that is directly relevant to the role – this is not required, but would count in part toward the experience requirements for both versions of this role.
  • Bring experience working with the federal government, and have a solid understanding of federal policymaking (at the Congressional, executive branch, or agency level). You are based in, or would be willing to move to, Washington, D.C.
  • Have a high level of comfort in quantifying concepts and theories of change, and grounding quantitative estimates in the best evidence available, even if imperfect.
  • Write clearly about technical topics. You are experienced at writing for external audiences and can communicate our theories of change in a compelling way.
  • Bring strong research skills. You are able to engage with academic research, modeling, and creative problem-solving around difficult research questions.
  • Are excited about the potential impact this new cause area might have. We recommend reading this post before applying for the role to get a better sense of the kind of work you’d be focusing on.
  • Are able to overlap with US Central Time working hours for at least 5 hours per day.
Responsibilities
  • Soliciting and reviewing proposals from new potential grantees or potential renewals to existing grantees
  • Doing back-of-the-envelope calculations on grants
  • Running investigations on particular grant opportunities and writing reports with your recommendations on whether to fund a grant
  • Speaking to existing grantees to check in on their progress
  • Having conversations with peer funders or experts in the space to stay abreast of new developments, look for opportunities to collaborate, and help solicit information about potential grantees
  • Doing research on a new sub-cause or area for discussion with AGF leadership
  • Writing for external consumption about our approach or to encourage further investment in AGF areas

Open Philanthropy is a philanthropic funder and advisor that directs money toward high-impact, cost-effective causes with the goal of doing the most good. It uses a research-driven approach to find and evaluate opportunities across areas like global health and development, scientific research, and long-term risks to humanity. Its grantmaking process is built on in-depth analysis and transparency: it publishes its reasoning and research so others can learn from it, and it collaborates with other funders. The organization balances a portfolio of established causes with high-risk, high-reward opportunities, funding both immediate needs and long-term challenges. Its aim is to improve lives and help build a better future by sharing knowledge and encouraging collaboration within the philanthropy community.

Company Size

N/A

Company Stage

N/A

Total Funding

$614.6M

Headquarters

San Francisco, California

Founded

2014

Simplify's Take

What believers are saying

  • Hires Managing Director to expand advisory for major donors.
  • Awards grants like $1M to Cosmik and GHIT Fund.
  • Attracts talent as Howie Lempel joins from 80,000 Hours.

What critics are saying

  • Congress scrutinizes $15M AI grants to RAND in 2023.
  • Coefficient Giving poaches EA talent with $5,000 rewards.
  • GCR team vacancies delay high-stakes grant decisions.

What makes Open Philanthropy unique

  • Open Philanthropy publishes detailed grant reasoning transparently.
  • Employs hits-based giving across global health and GCRs.
  • Advises philanthropists with bespoke research-driven services.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

Life Insurance

Paid Vacation

Parental Leave

Professional Development Budget

Relocation Assistance

Company News

Nucleic Acid Observatory
Aug 25th, 2025
The NAO is Hiring for Partnerships, Response, Virology, and Wet Lab Management

The Nucleic Acid Observatory has recently received funding from Open Philanthropy to expand its scope: it is now aiming to take on the problem of early warning from end to end.

Cosmik Network
Aug 15th, 2025
Cosmik awarded $1M grant from Open Philanthropy and Astera Institute for new social knowledge network for researchers

We're proud to announce that Cosmik has been awarded a total of $1M in grant funding from Open Philanthropy and the Astera Institute!

VentureBeat
Jun 13th, 2025
Do Reasoning Models Really “Think” Or Not? Apple Research Sparks Lively Debate, Response

Apple’s machine-learning group set off a rhetorical firestorm earlier this month with its release of “The Illusion of Thinking,” a 53-page research paper arguing that so-called large reasoning models (LRMs), or reasoning large language models (reasoning LLMs), such as OpenAI’s “o” series and Google’s Gemini-2.5 Pro and Flash Thinking, don’t actually engage in independent “thinking” or “reasoning” from generalized first principles learned from their training data.

Instead, the authors contend, these reasoning LLMs are actually performing a kind of “pattern matching,” and their apparent reasoning ability seems to fall apart once a task becomes too complex. This suggests, they argue, that their architecture and performance are not a viable path to improving generative AI to the point that it becomes artificial general intelligence (AGI), which OpenAI defines as a model that outperforms humans at most economically valuable work, or superintelligence, AI even smarter than human beings can comprehend.

Unsurprisingly, the paper immediately circulated widely among the machine learning community on X, and many readers’ initial reactions were to declare that Apple had effectively disproven much of the hype around this class of AI. “Apple just proved AI ‘reasoning’ models like Claude, DeepSeek-R1, and o3-mini don’t actually reason at all,” declared Ruben Hassid, creator of EasyGen, an LLM-driven LinkedIn post auto-writing tool. “They just memorize patterns really well.”

But now a new paper has emerged, the cheekily titled “The Illusion of The Illusion of Thinking” (notably co-authored by a reasoning LLM itself, Claude Opus 4, together with Alex Lawsen, a human independent AI researcher and technical writer), which compiles many criticisms from the larger ML community and argues that the methodologies and experimental designs the Apple research team used in its initial work are fundamentally flawed.

While we at VentureBeat are not ML researchers ourselves and are not prepared to say the Apple researchers are wrong, the debate has certainly been lively, and the question of how the capabilities of LRMs or reasoning LLMs compare to human thinking seems far from settled.

How the Apple Research study was designed, and what it found: Using four classic planning problems (Tower of Hanoi, Blocks World, River Crossing, and Checkers Jumping), Apple’s researchers designed a battery of tasks that forced reasoning models to plan multiple moves ahead and generate complete solutions. These games were chosen for their long history in cognitive science and AI research and for their ability to scale in complexity as more steps or constraints are added.

PR Newswire
Jun 3rd, 2025
Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-By-Design AI

MONTRÉAL, June 3, 2025 /PRNewswire/ - Yoshua Bengio, the most-cited artificial intelligence (AI) researcher in the world and A.M. Turing Award winner, today announced the launch of LawZero, a new nonprofit organization committed to advancing research and developing technical solutions for safe-by-design AI systems.

LawZero is assembling a world-class team of AI researchers who are building the next generation of AI systems in an environment dedicated to prioritizing safety over commercial imperatives. The organization was founded in response to evidence that today’s frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment. LawZero’s work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today’s systems, including algorithmic bias, intentional misuse, and loss of human control.

LawZero is structured as a nonprofit organization to ensure it is insulated from market and government pressures, which risk compromising AI safety. The organization is also pulling together a seasoned leadership team to drive this ambitious mission forward.

“LawZero is the result of the new scientific direction I undertook in 2023, after recognizing the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity,” said Yoshua Bengio, President and Scientific Director at LawZero. “Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase.”

Open Philanthropy
Dec 30th, 2024
Future of Life Institute — General Support | Open Philanthropy

The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support. FLI is a research and outreach organization that works to mitigate global catastrophic risks (GCRs). We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence. FLI is now seeking general…

INACTIVE