Full-Time
Posted on 9/5/2025
Open-source vulnerability scanner for developers
No salary listed
Boston, MA, USA
In Person
Snyk helps software-driven teams secure their codebase by scanning for security vulnerabilities and license violations in open source dependencies and container images. Its platform integrates with developers’ existing workflows (CLI, APIs, and popular IDEs/CI tools like GitHub) to automatically detect issues, prioritize risks, and propose fixes without slowing down development. The product targets both small teams and large enterprises that rely on open source software and containers, offering a dependency scanner, remediation guidance, and governance features through tiered subscription plans. Snyk differentiates itself by focusing on developer-friendly integration, proactive remediation, and coverage across code, dependencies, and container images, plus enterprise features for compliance and reporting. Its goal is to help organizations ship software faster while maintaining security and regulatory compliance.
Company Size
1,001-5,000
Company Stage
Late Stage VC
Total Funding
$1.6B
Headquarters
Boston, Massachusetts
Founded
2015
Flexible Work Hours
Unlimited Paid Time Off
Health Insurance
Life Insurance
Disability Insurance
401(k) Retirement Plan
Snyk targets AI agent risks with new security platform
Published March 24, 2026

News summary: Snyk launches its Agent Security platform to govern autonomous AI agents, addressing growing enterprise risks across development, deployment, and runtime environments.

Snyk is using RSA Conference 2026 in San Francisco this week to launch its Agent Security platform and push its Evo AI-SPM product into general availability, targeting a fast-emerging risk: autonomous AI agents writing and deploying code with little oversight. The move reflects growing concern that enterprises are scaling AI faster than they can govern it across development and production environments.

The timing is not accidental. AI coding agents - from tools like Claude Code to emerging autonomous systems - are moving from novelty to infrastructure. They're not just assisting developers; they're increasingly generating, modifying, and shipping code at machine speed. That shift has exposed a gap security teams are struggling to close. Traditional controls - code reviews, static analysis, even cloud security platforms - were not designed for systems that can independently chain actions across environments. Snyk's bet is that the next security battleground sits squarely inside these agentic workflows.

The governance problem enterprises didn't see coming

Snyk's framing is blunt: enterprises think they have AI under control, but often don't. Its internal data suggests that for every AI model deployed, organizations introduce roughly three times as many untracked software components. That's not just a visibility issue - it's a governance failure. Autonomous agents don't operate in isolation; they pull in dependencies, invoke tools, and interact with APIs in ways that can bypass existing controls. The result is a growing layer of "shadow AI" embedded directly in the software supply chain. Early deployments of Evo AI-SPM appear to confirm the problem.
Even organizations with mature cloud security and CNAPP tooling reportedly uncovered unmanaged AI-driven components inside their codebases - components that had slipped through standard security checks. The implication is uncomfortable: enterprises may be securing where AI runs, but not how it gets there.

From visibility to enforcement

Snyk's answer is to move security upstream - into the development lifecycle - and enforce policy before AI-generated code reaches production. Evo AI-SPM acts as the engine behind this approach, mapping AI-related components and translating governance policies into enforceable controls. At a high level, the system builds a continuously updated inventory - an "AI bill of materials" - covering models, dependencies, and agent behaviors. That inventory is then enriched with risk context, including potential vulnerabilities, bias signals, and other indicators.

The more interesting piece is enforcement. Snyk's platform converts governance rules - often written in plain English - into policies that can be executed automatically within CI/CD pipelines. In theory, this removes the need for manual oversight, which simply doesn't scale in an environment where code is generated at machine speed. It's an appealing model, though not without challenges. Translating policy into code is notoriously difficult, and enterprises will need to trust that these automated controls don't introduce friction - or worse, blind spots.

Securing the agent lifecycle

The broader Agent Security platform extends beyond code scanning into what Snyk describes as the full lifecycle of AI agents: environment, artifact, and behavior. The environment layer focuses on the tools and services agents rely on - an often-overlooked part of the attack surface. If an agent pulls from an untrusted source, the rest of the pipeline is already compromised. The artifact layer embeds security checks directly into development workflows.
This is where Snyk leans on its existing footprint, integrating into tools developers already use. More than 300 enterprise customers are reportedly running these capabilities in production environments.

The behavior layer, still in preview, is arguably the most ambitious. It aims to control what agents actually do in real time - blocking risky actions and enforcing boundaries during execution. If it works as advertised, it would shift security from passive validation to active intervention. That's easier said than done. Real-time enforcement introduces latency, complexity, and the risk of false positives - none of which developers tolerate well.

The shift to runtime risk

Beyond development, Snyk is also targeting runtime vulnerabilities, particularly those introduced by AI-generated code. These include business logic flaws - such as broken object-level authorization (BOLA) and insecure direct object references (IDOR) - that are notoriously hard to detect and often slip through traditional testing. The company's approach combines dynamic testing with what it calls "agent red teaming" - using autonomous agents to simulate attacks against AI systems. The idea is to expose weaknesses before they're exploited in production. This aligns with a broader industry trend. As AI systems become more autonomous, testing is moving from static checks to continuous, adversarial validation. Security is no longer a one-time gate; it's an ongoing process.

A crowded, still-forming market

Snyk is not alone in chasing this opportunity. The concept of an "AI security fabric" or control layer is quickly becoming a crowded space, with vendors racing to define standards and capture early adopters. What differentiates Snyk - at least for now - is its positioning inside the developer workflow. Rather than treating AI as an external risk, it frames it as an extension of the software supply chain. That's a logical move, but it also raises questions.
Enterprises are already juggling multiple security platforms, from CNAPP to API security to identity management. Adding another layer - however well integrated - risks further fragmentation. There's also the question of maturity. Much of the agentic ecosystem is still evolving, and security models built today may need to adapt quickly. Early adopters will likely face a period of trial and error as tools, practices, and risks continue to shift.

The bigger picture

What's clear is that AI is forcing a rethink of security fundamentals. The boundary between development and operations is blurring, and the pace of change is accelerating. Snyk's Agent Security platform is an attempt to get ahead of that curve - to impose structure on a rapidly expanding, poorly understood attack surface. It reflects a growing recognition that AI is not just another tool, but a new class of actor within enterprise systems. Whether Snyk's approach becomes a standard - or just another layer in an already complex stack - will depend on how well it balances control with usability. For now, the message is hard to ignore: as AI agents take on more responsibility, the cost of getting governance wrong rises sharply. And in many organizations, that governance is still catching up.

Executive insights FAQ

Why are AI agents creating new security risks? Because they can autonomously generate code, invoke tools, and interact with systems, introducing vulnerabilities faster than traditional security processes can detect or prevent them.

What is Snyk's Agent Security platform trying to solve? It aims to govern AI agents across their lifecycle, enforcing policies during development and runtime rather than relying solely on post-deployment security controls.

Why is "shadow AI" becoming a concern? Organizations often deploy AI tools without full visibility, leading to unmanaged components and dependencies embedded in codebases that bypass existing security frameworks.
How does Evo AI-SPM differ from traditional security tools? It focuses on the software supply chain, mapping AI components and enforcing governance policies directly within CI/CD pipelines before code reaches production environments.

What should enterprises do next? They need to shift toward continuous, lifecycle-based AI security, combining visibility, policy enforcement, and runtime validation to manage increasingly autonomous systems effectively.
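The enforcement step described above - plain-language governance rules turned into checks that run inside CI pipelines - can be sketched generically. Everything in this snippet (the inventory shape, the rule names, the component values) is an illustrative assumption for the sake of the sketch, not Snyk's actual data model or policy language.

```python
# Hypothetical sketch of plain-language governance reduced to a CI gate.
# Inventory shape, rule names, and values are illustrative assumptions,
# not Snyk's actual data model or API.

# What a discovery step might hand the gate: one entry per AI component.
inventory = [
    {"name": "llama-3-8b", "type": "model", "origin": "approved-registry"},
    {"name": "unvetted-mcp-server", "type": "mcp_server", "origin": "unknown"},
]

# "Only use components from the approved registry," reduced to a rule.
policy = {"allowed_origins": {"approved-registry"}}

def evaluate(inventory, policy):
    """Return the components that violate the policy."""
    return [c for c in inventory
            if c["origin"] not in policy["allowed_origins"]]

violations = evaluate(inventory, policy)
for v in violations:
    print(f"POLICY VIOLATION: {v['type']} '{v['name']}' (origin: {v['origin']})")

# A CI job would fail the build when any violation is found.
print("CI gate:", "FAIL" if violations else "PASS")
```

The design point the articles make is visible even in this toy: once the rule is machine-checkable, it runs on every pipeline execution, so a policy that "lives in a Confluence page" becomes one that is verified at machine speed.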
Snyk launches Evo AI-SPM to govern autonomous coding agents
By Tony Bradley, Senior Contributor. Tony Bradley covers the intersection of tech and entertainment. Mar 24, 2026, 07:02pm EDT

AI coding agents are writing and shipping code in enterprise environments right now - often without anyone on the security team knowing exactly what those agents have access to, what tools they're invoking, or what they've already pushed to production. It's not a fringe problem. Snyk's 2026 State of Agentic AI Adoption Report found that for every AI model enterprises deploy, they introduce nearly three times as many untracked software components. Organizations that thought they had a handle on their AI footprint found out they didn't.

At RSAC 2026 this week, Snyk announced the general availability of Evo AI-SPM and a new Agent Security solution built on top of it. I had an opportunity to chat with Manoj Nair, chief innovation officer at Snyk, ahead of the show to get a better picture of what the company is seeing from customers and what they're trying to solve.

Governance policies that nobody enforces

Most organizations have some kind of AI governance board or center of excellence. They've put together a list of approved models. The problem, according to Nair, is that those policies tend to live in a Confluence page or a PDF doc, and there's no real mechanism to verify they're being followed. A new model version ships, a developer upgrades, and whatever guardrails existed on paper no longer reflect what's actually running in the codebase. When an auditor asks what AI tools the organization is using, many companies can't answer that question at a moment's notice - and that's a governance problem.

The code quality issue compounds things. Nair said back-end data from Snyk shows AI-generated code is producing somewhere between two and ten times more security issues than human-written code.
And agents tend to produce more business logic and authorization vulnerabilities specifically - the kind that are harder to catch with static analysis and tend to be more dangerous when they're exploited.

There's also the matter of what models are actually being used. Nair pointed out that there are more than two million models available to download. Developers upgrade automatically when new versions drop, and in some cases organizations have ended up running models from countries that their own governance policies explicitly prohibit.

The MCP and skills problem

Agent skills and MCP servers add another layer to this. Skills are what allow agents to actually do things - move beyond generating text and take action in real systems. Snyk did research across public skill registries and found that roughly a third of what's out there had security issues. Seven percent was actual malware. Developers are pulling in agent skills the same way they've always pulled in open source packages, without necessarily knowing what's inside them.

Traditional security tools mostly miss this. Cloud and runtime security platforms see AI after it's deployed - they can flag misbehavior in production, but they don't catch what's introduced earlier in development, in the code, in the CI/CD pipeline, in the third-party components agents pull in. As Nair put it: "Agentic architectures turn governance into a software supply chain problem." That framing positions this as an extension of something the security industry already understands - knowing what's in your software and whether it can be trusted.

What Evo AI-SPM does

Evo AI-SPM is built around three automated agents. A Discovery Agent scans code repositories to generate a live AI Bill of Materials - an inventory of models, datasets, agent frameworks, MCP servers, and plugins. A Risk Intelligence Agent enriches that inventory with security context, including hallucination and bias metrics and vulnerability signals.
A Policy Agent takes governance rules written in plain language and converts them into machine-enforceable guardrails that run natively in CI pipelines. The goal is to give security teams a real-time picture of what AI components exist in their environment and whether those components are actually complying with policy.

One thing Nair and I got into was the verification problem. When agents produce code - or make architectural decisions nobody explicitly specified - you can end up with outputs that look fine but are difficult to audit. Static checking alone isn't enough. You also have to understand what environment the agent is running in, what skills and MCP servers it has access to, and then dynamically test the result. Snyk's API and Web testing capability, which also hit GA this week, handles that piece - probing deployed applications for authorization flaws like BOLA and IDOR that turn up often in AI-generated code and become more dangerous in agentic contexts.

Early access results

WEX, a global payments and workflow company, was among the early access participants. In Snyk's announcement, Jason Langston, director of product security at WEX, said: "It only took an afternoon to set it up and less time to pull a report and have full visibility. Being able to put our arms around the full breadth of what was actually in place was a super helpful foundation to start from." Basic visibility into what AI components are actually running in your environment sounds like a modest goal, but based on what Snyk is seeing across customers, a lot of organizations are starting from scratch on that question.

What is available now

Evo AI-SPM and API and Web testing are generally available.
Agent Scan and Agent Red Teaming - which runs autonomous agents against AI applications to probe for prompt injection vulnerabilities, data exfiltration paths, and multi-step attack vectors - are in open preview. Agent Guard, which monitors live agent behavior and blocks risky tool calls at runtime, is still in private preview. A fair portion of the full platform is still being built out, which is worth knowing if you're trying to put a comprehensive governance architecture in place today.

Planting a flag in San Francisco

Snyk also opened a San Francisco innovation hub this week - positioned in the same part of downtown as Anthropic, Cursor, Cognition, and other companies building the AI development stack. Nair made the point that when Jensen Huang laid out his vision for the five layers of AI at Davos, security wasn't on the list. Being physically embedded in the ecosystem where the AI stack gets built is part of how Snyk wants to change that. The space is intended to be open to AI engineers generally, with regular technical sessions and hackathons - not just a corporate outpost for Snyk employees.

The AI-SPM category is crowded and getting more so. But the problem Snyk is targeting is real. Autonomous agents that write, modify, and deploy code at machine speed have outpaced the governance models most organizations have in place. I think getting visibility into what your agents are actually doing - and enforcing policies where the code is written rather than after it ships - is the right approach. How well Snyk and the rest of the market execute on it remains to be seen.
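The BOLA/IDOR flaws called out above are easy to show in miniature. The sketch below is a generic, hypothetical illustration of the vulnerability class - an endpoint that trusts a client-supplied object ID - not code from Snyk or any customer; all names are invented.

```python
# Generic illustration of an IDOR / broken object-level authorization (BOLA)
# flaw, the vulnerability class named above. All names are hypothetical;
# this shows only the shape of the bug and its fix.

INVOICES = {
    101: {"owner": "alice", "amount": 40},
    102: {"owner": "bob", "amount": 95},
}

def get_invoice_vulnerable(user, invoice_id):
    # IDOR: trusts the client-supplied ID and never checks that the
    # requesting user actually owns the object.
    return INVOICES[invoice_id]

def get_invoice_fixed(user, invoice_id):
    # Fix: enforce object-level authorization on every access.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("not authorized for this invoice")
    return invoice

# alice can read bob's invoice through the vulnerable handler...
print(get_invoice_vulnerable("alice", 102))  # leaks bob's data
# ...but the fixed handler denies the same request.
try:
    get_invoice_fixed("alice", 102)
except PermissionError as err:
    print("denied:", err)
```

What makes this class hard for static analysis - and a natural target for the dynamic testing described above - is that nothing in the vulnerable function is syntactically wrong; the flaw only appears when a request crosses an ownership boundary at runtime.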
Snyk has launched its Agent Security solution and announced general availability of Snyk Evo AI-SPM to secure autonomous AI coding agents like Claude Code, Cursor and Devin. The platform addresses governance gaps as enterprises deploy agents that write and modify code at machine speed. Snyk's 2026 report found organisations introduce nearly three times as many untracked software components for every AI model deployed. During early access, over 500 Evo scans discovered ungoverned agentic AI components that bypassed existing security controls. The solution secures three phases of agentic development: environment through Agent Scan, artifacts via Snyk Studio (deployed across 300+ enterprise customers), and behaviour through Agent Guard. Evo AI-SPM uses automated agents to map attack surfaces, assess risks and enforce security guardrails in CI pipelines. Snyk serves over 4,800 global customers.
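The "AI bill of materials" idea that recurs across these announcements - an inventory of models, MCP servers, and agent frameworks discovered in a codebase - can be sketched as a toy discovery pass. The file patterns and categories below are illustrative assumptions only, not how Evo's Discovery Agent actually works.

```python
# Toy sketch of a discovery pass that builds an "AI bill of materials":
# walk a source tree and record files that look like AI components.
# File patterns and category names are illustrative assumptions only.
import os
import pathlib
import re
import tempfile

PATTERNS = {
    "model_weights": re.compile(r"\.(gguf|safetensors|onnx)$"),
    "mcp_config": re.compile(r"mcp.*\.json$"),
}

def build_aibom(root):
    """Return {path, category} records for files matching AI-component patterns."""
    records = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            for category, pattern in PATTERNS.items():
                if pattern.search(name):
                    records.append({"path": os.path.join(dirpath, name),
                                    "category": category})
    return records

# Demo against a throwaway directory tree.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "weights.safetensors").touch()
(demo / "mcp_servers.json").touch()
(demo / "README.md").touch()

records = build_aibom(demo)
for r in records:
    print(r["category"], "->", os.path.basename(r["path"]))
```

A real inventory would of course go far beyond filename matching - parsing manifests, lockfiles, and agent configurations - but even this toy shows why the output is useful: it is exactly the kind of component list a policy gate can evaluate in CI.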
Snyk CEO steps aside for someone with more AI chops
CEO makes way for AI visionary after nearly a decade at the helm

Snyk CEO Peter McKay is stepping down as soon as the company can find a more AI-savvy chief executive to replace him. In a LinkedIn post on Thursday, McKay said: "After nearly a decade of building and leading Snyk, I've decided the time is right to find my successor and Snyk's next CEO." McKay said he had the full support of the company's board to find "a leader with deep roots in product innovation and AI."

Snyk started as a developer-focused security company finding and fixing vulnerabilities across open-source code, containers, and cloud environments. McKay joined the board in 2016 and became CEO in 2019. Recently, the company has shifted its focus to AI (cue: gasp), with its AI-native SAST tool Snyk Code surpassing $100 million in annual recurring revenue in late 2024.

CEO at security wartime

McKay told Runtime in 2024 he'd adopted a "war time" CEO mentality to navigate the industry's shift from selling individual tools to platform offerings - or customers wouldn't get out of bed. He steered the security company through two rounds of layoffs in 2022 and 2023 and what he calls a "monumental pivot" to become a forerunner in AI security. (Snyk was the 2025 Gartner Magic Quadrant leader for its application security testing products, which manage AI-powered remediation and AI risk detection.)

However, McKay doesn't feel he is the right person to lead the company through the next "era of hyper-intensive AI innovation." "This next chapter requires a visionary, AI-immersed leader ready to commit their full energy to a multi-year journey of technical disruption," the CEO said. He will help find his replacement and said he remains a big supporter of the company, in what appears to be an amicable parting after a decade at the helm.

Founder UNO reverse
McKay's departure appears to be the complete opposite of Snyk founder Guy Podjarny's exit from the company almost a year ago. In March 2025, Podjarny stepped down from Snyk's board to pursue his new AI startup, Tessl. Podjarny said during the State of OpenCon last February that he was "drawn into the world of AI," adding, "I'm an addict and I wanted to get back to an entrepreneur path on it." Podjarny founded Snyk in 2015 and was the CEO prior to McKay.

Tessl, which markets itself as an agent skills and context management platform, raised $125 million in November 2024. The company didn't launch its first products until September 2025: a Spec Registry and the Tessl Framework. Both products are now live.
Snyk CEO Peter McKay has announced his departure, stating the company needs a leader with deeper AI expertise for its next phase of growth. McKay cited the company's "monumental pivot to become the leader in AI-native security" as requiring "a visionary, AI-immersed leader" for what he calls the company's "Part Two" era. Under McKay's leadership, Snyk grew to 4,800 customers and $325 million in annual revenue. He will remain as CEO until a replacement is found and plans to stay on as a significant shareholder. McKay described the decision as "deeply emotional" but necessary for long-term success, noting the opportunity ahead is "even greater than what's behind us". The departure is unusual as McKay has no new position lined up and initiated the transition himself.