Full-Time
Posted on 5/5/2025
AI interpretability tools and safety infrastructure
No salary listed
No H1B Sponsorship
San Francisco, CA, USA
In Person
In-person only; CPT/OPT accepted; no visa sponsorship for fellowship.
Goodfire builds infrastructure and developer tools that allow users to understand, edit, and debug artificial intelligence models. These tools work by providing a practical interface for inspecting the internal logic of AI, enabling developers to identify and fix errors within complex systems at scale. Unlike many competitors that focus solely on theoretical research, Goodfire operates as a public benefit corporation that bridges the gap between science and practical application through specialized debugging software. The company's goal is to ensure the creation of safer and more reliable AI by making model behavior transparent and manageable for researchers and organizations.
Company Size
51-200
Company Stage
Series B
Total Funding
$207M
Headquarters
San Francisco, California
Founded
2024
Company Equity
Goodfire launches Silico: A game-changer for LLM debugging. Goodfire, a San Francisco-based startup, has unveiled Silico, a groundbreaking tool designed to enhance mechanistic interpretability in AI models. This innovative platform allows researchers and engineers to delve into the inner workings of large language models (LLMs), adjusting the parameters that define their behavior during training. According to Goodfire, Silico represents the first commercially available solution that facilitates debugging at every stage of AI development, from dataset creation to model training. CEO Eric Ho emphasizes the company's mission to transform AI model development from a mysterious process into a scientific discipline, addressing the existing knowledge gap between model deployment and understanding.

Mechanistic interpretability is a cutting-edge approach that seeks to unveil the complexities of AI operations by mapping neural pathways and their interactions. This technique is gaining traction among industry leaders like Anthropic, OpenAI, and Google DeepMind, and has been recognized by MIT Technology Review as one of its Breakthrough Technologies.

Goodfire aims not only to audit existing models but also to streamline the design process, eliminating the trial-and-error nature of model training. With Silico, developers can fine-tune LLM behaviors, such as reducing instances of hallucination, by exposing and manipulating the model's parameters. The tool employs automated agents to handle much of the interpretative work, making it accessible for users without extensive expertise.

While Silico offers promising capabilities, experts like Leonard Bereska from the University of Amsterdam urge caution. He acknowledges the tool's utility but warns that the term 'engineering' might overstate its precision, suggesting it primarily enhances the existing alchemical nature of AI model training.
Silico enables users to examine individual neurons within a trained model, allowing for targeted experiments and deeper understanding of how specific inputs affect outputs. For instance, Goodfire identified a neuron linked to ethical dilemmas within an open-source model, demonstrating how modifications can shift a model's responses. Furthermore, Silico can assist in steering the training process by filtering out undesirable influences from training data, ultimately helping to create more reliable AI systems. By democratizing access to advanced interpretability techniques, Goodfire aims to empower smaller firms and research teams to develop tailored models that meet their unique needs.
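The neuron-level workflow described above (inspect a hidden neuron, then ablate or amplify it to see how outputs shift) can be sketched in a few lines. This is a minimal, hypothetical illustration on a toy two-layer network, not Goodfire's Silico API; the network, weights, and `neuron_scale` parameter are all invented for the example.

```python
import numpy as np

# Toy 2-layer ReLU network: a stand-in for the kind of model whose
# hidden "neurons" an interpretability tool might inspect and edit.
# (Illustrative only -- not Goodfire's actual tooling.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden (8 hidden neurons)
W2 = rng.normal(size=(8, 3))   # hidden -> output

def forward(x, neuron_scale=None):
    """Run the network; optionally rescale one hidden neuron.

    neuron_scale: (index, factor) -- factor 0.0 ablates the neuron,
    factor > 1.0 amplifies it, mimicking the "steer behavior by
    editing a neuron" workflow described above.
    """
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden activations
    if neuron_scale is not None:
        idx, factor = neuron_scale
        h = h.copy()
        h[idx] *= factor
    return h @ W2

x = rng.normal(size=4)
baseline = forward(x)
ablated = forward(x, neuron_scale=(2, 0.0))  # silence hidden neuron 2
# The difference in outputs is exactly neuron 2's contribution,
# which is what makes targeted "what does this neuron do?" experiments possible:
delta = baseline - ablated
```

Comparing `baseline` against `ablated` isolates one neuron's causal effect on the output; the same intervention with a factor greater than 1.0 amplifies the behavior that neuron encodes instead of suppressing it.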
Goodfire AI, a US-based AI research lab, has raised $150 million in Series B funding at a $1.25 billion valuation. The company was founded in 2024 by Eric Ho, Dan Balsam and Tom McGrath. Goodfire describes itself as a research company using interpretability to understand, learn from and design AI systems. The startup's mission is to build the next generation of safe and powerful AI through understanding rather than scaling alone. The company focuses on making AI systems more transparent and controllable by examining how they function internally.
Goodfire raises $150M in Series B funding. Goodfire, a San Francisco, CA-based developer of interpretability tools for AI models, raised $150M in Series B funding. The round was led by B Capital, with participation from Menlo Ventures and Lightspeed Venture Partners. The company intends to use the funds to extend its development efforts, increase computing power, and expand operations. Led by CEO and co-founder Eric Ho, Goodfire is an AI interpretability research lab focused on understanding and intentionally designing advanced AI systems. Its technologies enable organizations to understand internal model functions, debug models, and discover new insights to enhance performance. The company currently serves Microsoft Corp., the Mayo Clinic, and the Arc Institute, among others.
PRESS RELEASE - Goodfire, the leading AI interpretability research company, has announced a $50 million Series A funding round led by Menlo Ventures with participation from Lightspeed Venture Partners, Anthropic, B Capital, Work-Bench, Wing, South Park Commons, and other notable investors.