Full-Time
Posted on 9/23/2025
Philanthropic funder directing high-impact, transparent grants
$134.8k - $184.1k/yr
Company Does Not Provide H-1B Sponsorship
Washington, DC, USA
In Person
In-person preference in DC; remote work possible; occasional travel to US and international locations.
Open Philanthropy is a philanthropic funder and advisor that directs money toward high-impact, cost-effective causes with the goal of doing the most good. It uses a research-driven approach to find and evaluate opportunities across areas like global health and development, scientific research, and long-term risks to humanity. Its grantmaking process is based on in-depth analysis and transparency: it publishes its reasoning and research so that others can learn from it, and it collaborates with other funders. The organization balances a portfolio of established causes with high-risk, high-reward opportunities, funding both immediate needs and long-term challenges. Its aim is to improve lives and help build a better future by sharing knowledge and encouraging collaboration within the philanthropy community.
Company Size: N/A
Company Stage: N/A
Total Funding: $614.6M
Headquarters: San Francisco, California
Founded: 2014
Health Insurance
Dental Insurance
Vision Insurance
Life Insurance
Paid Vacation
Parental Leave
Professional Development Budget
Relocation Assistance
The Nucleic Acid Observatory has recently received funding from Open Philanthropy to expand its scope: it is now aiming to take on the problem of early warning end to end.
We're proud to announce that Cosmik has been awarded a total of $1M in grant funding from Open Philanthropy and the Astera Institute!
Apple's machine-learning group set off a rhetorical firestorm earlier this month with its release of "The Illusion of Thinking," a 53-page research paper arguing that so-called large reasoning models (LRMs), or reasoning large language models (reasoning LLMs), such as OpenAI's "o" series and Google's Gemini-2.5 Pro and Flash Thinking, don't actually engage in independent "thinking" or "reasoning" from generalized first principles learned from their training data.

Instead, the authors contend, these reasoning LLMs are actually performing a kind of "pattern matching," and their apparent reasoning ability seems to fall apart once a task becomes too complex, suggesting that their architecture and performance do not offer a viable path to improving generative AI to the point of artificial general intelligence (AGI), which OpenAI defines as a model that outperforms humans at most economically valuable work, or superintelligence, AI even smarter than human beings can comprehend.

Unsurprisingly, the paper immediately circulated widely among the machine-learning community on X, and many readers' initial reaction was to declare that Apple had effectively disproven much of the hype around this class of AI. "Apple just proved AI 'reasoning' models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all," declared Ruben Hassid, creator of EasyGen, an LLM-driven LinkedIn post auto-writing tool. "They just memorize patterns really well."

Now a new paper has emerged, cheekily titled "The Illusion of The Illusion of Thinking" and, notably, co-authored by a reasoning LLM itself, Claude Opus 4, alongside Alex Lawsen, a human independent AI researcher and technical writer. It compiles many of the criticisms raised by the larger ML community and argues that the methodologies and experimental designs the Apple research team used in its initial work are fundamentally flawed.

While we here at VentureBeat are not ML researchers ourselves and are not prepared to say the Apple researchers are wrong, the debate has certainly been a lively one, and the question of how the capabilities of LRMs, or reasoner LLMs, compare to human thinking seems far from settled.

How the Apple Research study was designed, and what it found

Using four classic planning problems (Tower of Hanoi, Blocks World, River Crossing, and Checker Jumping), Apple's researchers designed a battery of tasks that forced reasoning models to plan multiple moves ahead and generate complete solutions. These games were chosen for their long history in cognitive science and AI research and for their ability to scale in complexity as more steps or constraints are added.
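To see why puzzles like these scale so cleanly, consider the Tower of Hanoi: the optimal solution for n disks takes 2^n - 1 moves, so each added disk roughly doubles the length of the complete plan a model must emit. Below is a minimal Python sketch of the classic recursive solver; this is our own illustration of the puzzle's complexity growth, not code from the Apple paper.

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move list for transferring n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # park the top n-1 disks on the spare peg
    moves.append((src, dst))            # move the largest disk directly
    hanoi(n - 1, aux, src, dst, moves)  # re-stack the n-1 disks on top of it
    return moves

for n in range(1, 11):
    print(n, len(hanoi(n)))  # prints 1, 3, 7, 15, ... i.e. 2**n - 1 moves
```

Running the loop makes the exponential growth explicit: by ten disks the required solution is already 1,023 moves long, which is why a single puzzle family can probe model behavior across a wide range of task complexity.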
MONTRÉAL, June 3, 2025 /PRNewswire/ - Yoshua Bengio, the most-cited artificial intelligence (AI) researcher in the world and A.M. Turing Award winner, today announced the launch of LawZero, a new nonprofit organization committed to advancing research and developing technical solutions for safe-by-design AI systems.

LawZero is assembling a world-class team of AI researchers who are building the next generation of AI systems in an environment dedicated to prioritizing safety over commercial imperatives. The organization was founded in response to evidence that today's frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment. LawZero's work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today's systems, including algorithmic bias, intentional misuse, and loss of human control.

LawZero is structured as a nonprofit organization to ensure it is insulated from market and government pressures, which risk compromising AI safety. The organization is also pulling together a seasoned leadership team to drive this ambitious mission forward.

"LawZero is the result of the new scientific direction I undertook in 2023, after recognizing the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity," said Yoshua Bengio, President and Scientific Director at LawZero. "Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase."
The Open Philanthropy Project recommended a grant of $100,000 to the Future of Life Institute (FLI) for general support. FLI is a research and outreach organization that works to mitigate global catastrophic risks (GCRs). We have previously collaborated with FLI on issues related to potential risks from advanced artificial intelligence. FLI is now seeking general…