Full-Time
Posted on 9/24/2025
CDN, cybersecurity, and serverless computing platform
No salary listed
Kolkata, West Bengal, India
Hybrid
Company Size
5,001-10,000
Company Stage
IPO
Headquarters
San Francisco, California
Founded
2009
Competitive salaries
Take-what-you-need paid vacation policy
Comprehensive health plans and benefits
Paid maternity and paternity leave
Commuter and ride share options
Returnships
Cloudflare launches Code Mode MCP server to optimize token usage for AI agents.

Cloudflare has introduced a major evolution in how AI agents access complex APIs by launching a new Model Context Protocol (MCP) server powered by Code Mode, dramatically reducing the cost of interacting with its full API platform. The approach points to a new pattern for agent-to-tool integrations in the MCP ecosystem.

At its core, MCP is an emerging standard that lets large language models (LLMs) interface with external tools and APIs by exposing structured tools the model can call during execution. Traditionally, each API endpoint exposed to an agent represented a separate tool definition. While straightforward, this model incurs a significant context-window cost: every tool specification consumes tokens in the model's limited input budget, leaving less room for reasoning about the user's task. Luuk Hofman, Solutions Engineer at Cloudflare, summarized the alternative the team tried: convert MCP tools into a TypeScript API and simply ask the LLM to write code against it.

Cloudflare's Code Mode instead exposes only two tools, search and execute, backed by a type-aware SDK that allows the model to generate and execute JavaScript inside a secure V8 isolate. This compiles an agent's plan into a small code snippet orchestrating multiple operations against the OpenAPI spec, avoiding the need to load all endpoint definitions into context.

[Image: Traditional MCP vs Cloudflare Code Mode (Source: Cloudflare blog post)]

The practical impact is significant: Cloudflare reports that Code Mode reduces the token footprint of interacting with over 2,500 API endpoints from more than 1.17 million tokens to roughly 1,000 tokens, a reduction of around 99.9%. This fixed footprint holds regardless of API surface size, enabling agents to work across large, feature-rich platforms without exhausting the model's context.
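To make the two-tool pattern concrete, here is a minimal, self-contained TypeScript sketch. The names and shapes (searchSpec, executeSnippet, the tiny spec index) are illustrative assumptions, not Cloudflare's actual SDK: search filters a server-side copy of the OpenAPI spec so only matching entries reach the model, and execute runs an agent-written snippet that can chain several calls in one round trip.

```typescript
// Hypothetical, simplified stand-ins for Code Mode's two tools.
// In the real system the spec index lives server-side and snippets run
// inside a V8 isolate with outbound requests gated by explicit
// handlers; here everything is in-process and stubbed.

interface Endpoint {
  method: string;
  path: string;
  summary: string;
}

// Tiny stand-in for the full OpenAPI spec (2,500+ endpoints in reality).
const specIndex: Endpoint[] = [
  { method: "GET", path: "/zones", summary: "List zones" },
  { method: "POST", path: "/zones/{id}/dns_records", summary: "Create a DNS record" },
  { method: "GET", path: "/accounts/{id}/r2/buckets", summary: "List R2 buckets" },
];

// Tool 1: "search" - only matching entries enter the model's context,
// never the whole spec.
function searchSpec(query: string): Endpoint[] {
  const q = query.toLowerCase();
  return specIndex.filter(
    (e) => e.path.toLowerCase().includes(q) || e.summary.toLowerCase().includes(q)
  );
}

// Tool 2: "execute" - run an agent-written snippet that can chain
// several API calls (pagination, conditionals) in one round trip.
// The snippet has no ambient network access; it can only reach the
// API through the `call` handler it is handed.
async function executeSnippet(
  snippet: (call: (e: Endpoint) => Promise<string>) => Promise<string>
): Promise<string> {
  const call = async (e: Endpoint) => `${e.method} ${e.path} -> 200 OK`; // stubbed API
  return snippet(call);
}
```

A usage sketch: `searchSpec("dns")` returns just the DNS-record endpoint, and `executeSnippet(async (call) => call(hits[0]))` performs the call without any endpoint definitions ever occupying the model's input budget.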
In a Reddit post, Cloudflare explained that the team used a specialized encoding strategy to fit expansive API schemas into minimal context windows without losing functional precision. Agents first use search to query the OpenAPI spec by product area, path, or metadata; the spec itself never enters the model's context. Then, execute runs code that handles pagination, conditional logic, and chained API calls in a single cycle, cutting round-trip overhead.

Cloudflare also emphasized the security and sandboxing model during execution. The server runs user-generated code in a Dynamic Worker isolate with no file system, no environment variables exposed, and outbound requests controlled via explicit handlers. This design mitigates the risks of executing untrusted code while preserving agent autonomy.

The new MCP server covers the entire Cloudflare API, already spanning DNS, Zero Trust, Workers, and R2, and is immediately available for developers to integrate. Cloudflare also open-sourced a Code Mode SDK within its broader Agents SDK to enable similar patterns in third-party MCP implementations.

Analysts and practitioners see Code Mode as a key step in scaling agentic workflows beyond simple, single-service interactions toward broad, multi-API automation. The pattern may influence both standard MCP server designs and agent frameworks in the coming year, as industry players grapple with context costs and orchestration complexity in production-grade AI agents.

Leela Kumili is a Lead Software Engineer at Starbucks with deep expertise in building scalable, cloud-native systems and distributed platforms. She drives architecture, delivery, and operational excellence across the Rewards Platform, leading efforts to modernize systems, improve scalability, and enhance reliability.
In addition to her technical leadership, Leela serves as an AI Champion for the organization, identifying opportunities to improve developer productivity and workflows using LLM-based tools and establishing best practices for AI adoption. She is passionate about building production-ready systems, enhancing developer experience, and mentoring engineers to grow in both technical and strategic impact. Her interests include platform engineering, distributed systems, developer productivity, and bridging technical solutions with business and product goals.
Cloudflare has partnered with Wiz, now part of Google Cloud, to help organisations identify and secure AI-powered applications across their infrastructure. The integration combines Cloudflare's AI Security for Apps with Wiz's Security Graph to provide comprehensive visibility into AI deployments. The partnership addresses the challenge of shadow AI, where organisations deploy AI features faster than security teams can track them. By integrating Cloudflare's security rules into Wiz's platform, security teams can discover unprotected AI endpoints, inspect traffic in real time for threats like prompt injection and data leakage, and verify that guardrails are properly configured. The solution is model and host-agnostic, protecting endpoints regardless of LLM or cloud provider. The integration runs on Cloudflare's global network without adding latency or requiring architectural changes.
Cloudflare is rebuilding its Wrangler command-line interface tool primarily for AI agents, which are becoming its major API users. The company aims to make every Cloudflare product available through a consistent, programmable interface that agents can use to build and operate applications. The redesign includes a technical preview available via npx cf, featuring a new TypeScript schema that defines APIs, CLI commands and arguments. Cloudflare is enforcing default CLI commands at the schema layer to ensure AI agents don't fail due to non-standard commands. For human developers, Cloudflare introduced Local Explorer in open beta, allowing users to inspect Cloudflare Workers bindings and stored data. The company is accepting feedback via its developer Discord channel and plans to expand coverage in coming months, though no specific timeline was provided.
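The article does not show what the new TypeScript schema looks like, but a declarative command definition along these lines illustrates the idea of enforcing CLI behavior at the schema layer. Every name here (CommandSpec, the deploy command, its flags, validateArgs) is a hypothetical sketch, not Cloudflare's actual schema:

```typescript
// Hypothetical sketch of a TypeScript-first CLI schema: commands and
// arguments are declared as data, so one definition can drive the
// human-facing CLI, API bindings, and agent-readable metadata.

interface ArgSpec {
  type: "string" | "boolean";
  required?: boolean;
  description: string;
}

interface CommandSpec {
  name: string;
  description: string;
  args: Record<string, ArgSpec>;
}

const deployCommand: CommandSpec = {
  name: "deploy",
  description: "Deploy a Worker to Cloudflare's network",
  args: {
    name: { type: "string", required: true, description: "Worker name" },
    dryRun: { type: "boolean", description: "Validate without deploying" },
  },
};

// Validating against the schema means an AI agent gets a structured
// error for a non-standard invocation instead of silently failing.
function validateArgs(
  spec: CommandSpec,
  given: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  for (const [key, arg] of Object.entries(spec.args)) {
    if (arg.required && !(key in given)) {
      errors.push(`missing required argument: ${key}`);
    }
  }
  for (const key of Object.keys(given)) {
    if (!(key in spec.args)) {
      errors.push(`unknown argument: ${key}`);
    }
  }
  return errors;
}
```

The design point is that the schema, not ad hoc parsing code, is the single source of truth both humans and agents program against.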
Cloudflare introduces new features for building and deploying agents. With Dynamic Workers, Sandboxes, Artifacts, and the Think framework, the company aims to help AI agents evolve from experiments on local laptops to full-fledged workloads on the Cloudflare network.

"The way people build software is fundamentally changing. We are entering a world where agents are the ones writing and executing code," says Matthew Prince, CEO of Cloudflare. "But agents need a home that is secure by default, scales to millions instantly, and persists across long-running tasks."

The new Dynamic Workers system is an isolate-based runtime that executes AI-generated code in a secure environment. Cloudflare claims that Dynamic Workers start up a hundred times faster than traditional containers and incur only a fraction of the cost, scaling to millions of concurrent executions without warm-up. For longer-running tasks, Cloudflare introduces Sandboxes: full Linux environments where agents clone repositories, install Python packages, and build code. In addition, the company is launching Artifacts, a Git-compatible storage layer that enables developers to create tens of millions of agent repositories.

The Think framework within the Agents SDK focuses on persistence: agents use it to support long-running tasks rather than merely responding to individual prompts.

Building on the acquisition of Replicate, which gave Cloudflare access to over 50,000 AI models, the company is expanding its model catalog further. Developers can choose from OpenAI models and open-source alternatives via a single interface. Switching between providers requires changing just one line of code, Cloudflare promises.
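The persistence idea behind a framework like Think can be sketched simply: an agent task checkpoints its state after each step, so a long-running job resumes after eviction instead of restarting from scratch. All names below (Checkpoint, MemoryStore, runTask) are illustrative assumptions, not the Agents SDK's actual API:

```typescript
// Hypothetical sketch of step-level checkpointing for a long-running
// agent task: state is persisted after every step, so execution can
// resume from the last checkpoint rather than from the beginning.

interface Checkpoint {
  step: number;
  notes: string[];
}

interface Store {
  load(id: string): Checkpoint | undefined;
  save(id: string, cp: Checkpoint): void;
}

// In-memory stand-in for a durable store.
class MemoryStore implements Store {
  private data = new Map<string, Checkpoint>();
  load(id: string) { return this.data.get(id); }
  save(id: string, cp: Checkpoint) { this.data.set(id, cp); }
}

// Run a multi-step task, resuming from any previously saved progress.
function runTask(id: string, store: Store, totalSteps: number): Checkpoint {
  const cp = store.load(id) ?? { step: 0, notes: [] };
  while (cp.step < totalSteps) {
    cp.step += 1;
    cp.notes.push(`completed step ${cp.step}`);
    store.save(id, cp); // persist after every step
  }
  return cp;
}
```

If the process dies after step 2 of 4, a later invocation of `runTask` with the same id picks up at step 3, which is the property that lets agent work outlive any single prompt or process.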
Virtru integrates Data Security Platform with Cloudflare R2 to enable object-level access control.

Virtru, the leader in data-centric security, today announced that its Data Security Platform now delivers object-level data governance to Cloudflare R2 cloud storage. The integration enables organizations to enforce cryptographic, attribute-based access policies on individual objects stored in R2, transforming a single storage bucket into a governed repository where different files carry different access rules, enforced by the data itself. As a result, organizations can store, search, analyze, and connect AI tools to their most sensitive data in Cloudflare R2 while retaining persistent, granular control over every object, ensuring only authorized individuals and systems can access sensitive data, regardless of location or application.

Shifting Access Control from the Bucket Level to the Data Level

Like all S3-compatible object storage, Cloudflare R2 governs access at the bucket level: everyone with access to a bucket can see everything inside it. Organizations have historically worked around this limitation by proliferating buckets, creating separate buckets for different sensitivity levels, different departments, and different regulatory regimes. The result is architectural complexity that drives up cost, slows operations, and creates data silos with governance gaps.

The Virtru Data Security Platform eliminates that tradeoff. With Trusted Data Format (TDF) encryption and attribute-based access control (ABAC) applied at the individual object level, a single R2 bucket can hold objects with entirely different governance profiles. A finance analyst and an engineering lead can both access the same bucket, but each can only open the files for which they are authorized.
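The finance-analyst/engineering-lead scenario boils down to attribute-based checks carried by each object. The sketch below is a deliberately simplified illustration of that idea, not Virtru's TDF implementation; the types, attribute strings, and the all-attributes-required rule are assumptions for the example:

```typescript
// Illustrative object-level ABAC: each object in one shared bucket
// carries its own policy, and access is decided by the requester's
// attributes against that policy, not by a bucket-wide ACL.

interface ObjectPolicy {
  // Attributes a requester must hold to open this object.
  requiredAttributes: string[];
}

interface StoredObject {
  key: string;
  policy: ObjectPolicy;
}

// One bucket, many governance profiles.
const bucket: StoredObject[] = [
  { key: "contracts/q3.pdf", policy: { requiredAttributes: ["dept:finance"] } },
  { key: "specs/turbine.cad", policy: { requiredAttributes: ["dept:engineering"] } },
];

// Evaluate a single access in real time against the object's policy.
function canAccess(userAttributes: string[], obj: StoredObject): boolean {
  return obj.policy.requiredAttributes.every((a) => userAttributes.includes(a));
}

// List only the objects this requester is authorized to open.
function listAccessible(userAttributes: string[]): string[] {
  return bucket.filter((o) => canAccess(userAttributes, o)).map((o) => o.key);
}
```

With this shape, a user holding only `dept:finance` sees the contracts file and nothing else, even though both objects live in the same bucket.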
Contracts, engineering specifications, research data, and compliance records coexist in a single repository, each governed by its own policy and enforced cryptographically by the data itself.

From Protected Storage to Governed Operations

"Securing data at rest has never been a hard problem," said John Ackerly, CEO and Co-Founder of Virtru. "The hard problem is governing what happens to sensitive data once it's put to work - searched, analyzed, queried, or accessed by AI tools and automated workflows. Data owners shouldn't have to choose between the operational and economic benefits of modern cloud storage and the ability to govern their most sensitive data. Now, with the Virtru Data Security Platform and Cloudflare R2, they no longer have to."

Because every object in R2 now carries its own cryptographically enforced access policy, the Virtru Data Security Platform enables organizations to move beyond static storage protection into governed operations, where sensitive data can be actively searched, analyzed, and acted upon while policy enforcement remains continuous and granular. Every operation is evaluated in real time against the requesting user's attributes and enforced by the object's own TDF-wrapped policy. Governance doesn't depend on the application, the network, or the storage provider. It travels with the data.

Why Cloudflare R2

R2's zero egress fees make object-level governance especially practical. When data retrieval incurs no transfer costs, real-time policy evaluation adds no compounding overhead. Organizations get the storage economics they chose R2 for, plus the ability to commingle data with different sensitivity levels in the same repository, each object individually protected, revocable at any time, and auditable across every access event. Cloudflare secures the network and infrastructure. The Virtru Data Security Platform secures the data itself.
Virtru + Cloudflare: Complementary Security Architecture

R2 and the Virtru Data Security Platform operate at complementary layers of the security stack:

- Infrastructure layer (Cloudflare): encryption at rest and in transit, DDoS protection, global distribution across 330+ data centers, S3-compatible API, and native Workers integration for edge compute
- Data layer (Virtru): object-level TDF encryption, attribute-based access control, real-time policy enforcement, access revocation, and comprehensive audit logging across every access event

TDF encryption ensures that objects stored in R2 remain cryptographically protected even at rest: Cloudflare infrastructure cannot decrypt the contents. Only users, systems, or applications whose attributes satisfy the object's ABAC policy can access the plaintext. Data sovereignty stays with the data owner, not the storage provider.

Now Available to Early Adopters

The integration is available now through an early adopter program. Organizations interested in deploying object-level data governance across their Cloudflare R2 environments can learn more at virtru.com/data-security-platform or contact their Virtru account representative.

Ray Sharma is an Industry Analyst and Editor at The Fast Mode. He has over 15 years of experience in mobile broadband technologies and solutions, conducting research and analysis on various technology segments and producing articles and write-ups on the latest developments within the sector.
He is also in charge of social media engagement and industry liaisons.