Full-Time

Datacenter Deployments Engineer

Posted on 4/24/2025

Cloudflare

5,001-10,000 employees

CDN, cybersecurity, and serverless computing platform

Compensation Overview

$93k - $135k/yr (base salary)

Company Historically Provides H1B Sponsorship

Austin, TX, USA + 2 more

More locations: Denver, CO, USA | Atlanta, GA, USA

Hybrid

Austin requires 2 days in-office per week; other locations not specified.

Category
DevOps & Infrastructure

Requirements
  • Minimum of 3 years of relevant experience in Data Center Operations, Site Reliability Engineering, Linux systems administration, Network Engineering, and/or DevOps.
  • Familiarity with the day-to-day tasks and projects common in Data Center Operations.
  • Experience with optical transport technologies such as CWDM/DWDM.
  • Experience with configuration management tools such as SaltStack, Chef, Puppet, or Ansible.
  • Network hardware administration.
  • Knowledge of and exposure to network protocols, topologies, and enterprise architecture.
  • Experience writing network configuration and design documentation.
  • Experience solving problems through automation.
  • Ability to write scripts for internal tools.
  • Experience running and improving operational processes in a rapidly changing environment.
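Several of the requirements above center on automation and scripting internal tools. As a purely illustrative sketch (this is not Cloudflare tooling; the inventory format, port choice, and helper names are invented), a pre-deployment reachability check for rack hosts might look like:

```python
"""Hypothetical internal-tool sketch: before a deployment window, check
that hosts in a rack inventory accept TCP connections on a management
port. Everything here is invented for illustration only."""
import socket


def check_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def triage(inventory: list[str]) -> dict[str, list[str]]:
    """Split an inventory into reachable ('up') and unreachable ('down') hosts."""
    result: dict[str, list[str]] = {"up": [], "down": []}
    for host in inventory:
        result["up" if check_reachable(host) else "down"].append(host)
    return result
```

A real tool would pull the inventory from a DCIM system and feed the "down" list into a ticketing workflow; the point is only that routine datacenter checks of this kind are readily scriptable.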
Responsibilities
  • Provisioning, monitoring, and maintaining hardware, software, and networks in new Cloudflare data centers.
  • Creating documentation and managing remote contractors and vendors, including hardware manufacturers, datacenter and network providers, logistics partners, and other service providers, to complete datacenter installations and upgrades (rack and stack) in support of our 335+ and growing datacenter locations.
  • Aggressively seeking opportunities to introduce cutting-edge technology and automation solutions that are effective, efficient, and scalable, in order to improve our ability to deploy and maintain our global infrastructure.
  • Planning and implementing network and server installations, including in the areas of facility power (AC/DC), cooling, security/access, rack layout and cable management.
  • Providing technical leadership and guidance during deployment activities.
  • Creating and maintaining documentation, plans, SOPs, MOPs, etc.
  • Collaborating with internal teams (infrastructure engineering, network engineering, and SRE) on day-to-day activities.
  • Coordinating installation of cross-connects globally in support of physical network expansion.
  • Assisting with the definition, documentation and implementation of consistent processes across all regions.
  • Limited travel
Desired Qualifications
  • Bachelor’s degree with a technical background in engineering, computer science, or MIS a plus.
  • Direct experience executing on datacenter / infrastructure projects with many moving parts.
  • Previous experience installing / maintaining datacenter (and other IT) infrastructure and DCIM tools.
  • Experience running and improving operational processes in a rapidly changing environment.
  • Strong understanding of BGP and anycast routing.
  • Professional-level network certification(s) (JNCIP, CCNP, etc.) or higher.
  • Good working knowledge of Juniper Junos, Cisco IOS, Cisco NX-OS, and Arista EOS.
  • Experience with load balancers and reverse proxies such as Nginx, Varnish, HAProxy, or Apache.
  • Proficient in one or more programming languages and willing to learn new ones when required
  • Strong verbal and written communication skills, problem-solving skills, attention to detail, and interpersonal skills.
  • Must be proactive with proven ability to learn fast and execute on multiple tasks simultaneously.
  • Proficiency with Microsoft Excel and Google Sheets.
  • Comfortable handling basic program management responsibilities (prioritization, planning, scheduling, status reporting) using tools such as Jira.
  • Experience managing remote contractors
  • Must be a team player.
Bonus Points
  • Multi-lingual; experience working with infrastructure in multiple countries.
  • Experience with continuous / rapid deployment
  • Experience working in a 24/7/365 mission-critical service environment
  • Comfortable with remote “lights-out” and out-of-band access to data center resources
  • Linux certifications.
  • Knowledge of the OSI-model and experience isolating network, hardware and software issues.

Company Size

5,001-10,000

Company Stage

IPO

Headquarters

San Francisco, California

Founded

2009

Simplify Jobs

Simplify's Take

What believers are saying

  • Q1 2026 revenue hit $639.8M with 34% growth, adding record $5M+ customers.
  • AI traffic from bots surges, surpassing human traffic by 2027 per management.
  • Guides 30% revenue growth in 2026, targeting Rule of 50 profile next year.

What critics are saying

  • 20% workforce cut of 1,100 employees triggers talent exodus to Zscaler within 3-6 months.
  • Akamai undercuts pricing, wins DoD contract in March 2026, eroding enterprise share.
  • Gross margins drop below 70% from AI GPU costs, forcing price hikes by Q4 2026.

What makes Cloudflare unique

  • Cloudflare consolidates CDN, DDoS protection, and Zero Trust into single connectivity cloud.
  • Workers platform enables serverless edge computing with KV storage and Cron Triggers globally.
  • Proxies 20% of web traffic, delivering built-in threat intelligence across 330 cities.

Benefits

Competitive salaries

Take-what-you-need paid vacation policy

Comprehensive health plans and benefits

Paid maternity and paternity leave

Commuter and ride share options

Returnships

Growth & Insights and Company News

Headcount

6 month growth

-1%

1 year growth

-1%

2 year growth

0%

InfoQ
Apr 16th, 2026
Cloudflare launches Code Mode MCP server to optimize token usage for AI agents.

Cloudflare has introduced a major evolution in how AI agents access complex APIs by launching a new Model Context Protocol (MCP) server powered by Code Mode, dramatically reducing the cost of interacting with its full API platform. The new approach points to a different model for agent-to-tool integrations in the MCP ecosystem.

At its core, MCP is an emerging standard that lets large language models (LLMs) interface with external tools and APIs by exposing structured tools the model can call during execution. Traditionally, each API endpoint exposed to an agent represented a separate tool definition. While straightforward, this model incurs a significant context-window cost: every tool specification consumes tokens from the model's limited input budget, leaving less room for reasoning about the user's task.

Luuk Hofman, Solutions Engineer at Cloudflare, noted that the team tried a different approach: convert MCP tools into a TypeScript API and simply ask the LLM to write code against it. Cloudflare's Code Mode exposes only two tools, search and execute, backed by a type-aware SDK that allows the model to generate and execute JavaScript inside a secure V8 isolate. This compiles an agent's plan into a small code snippet orchestrating multiple operations against the OpenAPI spec, avoiding the need to load all endpoint definitions into context.

[Figure: Traditional MCP vs. Cloudflare Code Mode (source: Cloudflare blog post)]

The practical impact is significant: Cloudflare reports that Code Mode reduces the token footprint of interacting with over 2,500 API endpoints from more than 1.17 million tokens to roughly 1,000 tokens, a reduction of around 99.9%. This fixed footprint holds regardless of API surface size, enabling agents to work across large, feature-rich platforms without exhausting the model context.

In a Reddit post, Cloudflare emphasized that the team used a specialized encoding strategy to fit expansive API schemas into minimal context windows without losing functional precision. Agents first use search to query the OpenAPI spec by product area, path, or metadata; the spec itself never enters the model's context. Then, execute runs code handling pagination, conditional logic, and chained API calls in a single cycle, cutting round-trip overhead.

Cloudflare also emphasized the security and sandboxing model during execution. The server runs user-generated code in a Dynamic Worker isolate with no file system, no environment variables exposed, and outbound requests controlled via explicit handlers. This design mitigates the risks of executing untrusted code while preserving agent autonomy.

The new MCP server already spans the entire Cloudflare API, including DNS, Zero Trust, Workers, and R2 services, and is immediately available for developers to integrate. Cloudflare also open-sourced a Code Mode SDK within its broader Agents SDK to enable similar patterns in third-party MCP implementations.

Analysts and practitioners see Code Mode as a key step in scaling agentic workflows beyond simple, single-service interactions toward broad, multi-API automation. The pattern may influence both standard MCP server designs and agent frameworks in the coming year, as industry players grapple with context costs and orchestration complexity in production-grade AI agents.

By Leela Kumili.
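The two-tool pattern described above can be illustrated generically. The sketch below is not Cloudflare's SDK (the spec, tool signatures, and sandboxing are simplified stand-ins, and a Python exec is used purely to show control flow where the real system uses a V8 isolate); it only shows why exposing search and execute keeps a large API spec out of the model's context:

```python
"""Minimal sketch of the two-tool 'Code Mode' idea: instead of exposing
every endpoint as a separate tool, expose `search` (query the spec) and
`execute` (run model-written code). The spec below is an invented
stand-in, not Cloudflare's API."""

# Stand-in for a large OpenAPI spec: it stays server-side and never
# enters the model's context window.
API_SPEC = {
    "GET /zones": "List zones",
    "POST /zones/{id}/dns_records": "Create a DNS record",
    "GET /workers/scripts": "List Worker scripts",
}


def search(query: str) -> dict[str, str]:
    """Tool 1: return only the endpoints matching the agent's query."""
    q = query.lower()
    return {path: desc for path, desc in API_SPEC.items() if q in desc.lower()}


def execute(snippet: str) -> object:
    """Tool 2: run model-generated code in a restricted namespace.
    (A real system would use a proper sandbox such as a V8 isolate;
    exec() here only demonstrates the control flow.)"""
    scope = {"search": search, "__builtins__": {}}
    exec(compile(snippet, "<agent>", "exec"), scope)
    return scope.get("result")


# A model-written plan: find DNS-related endpoints in one round trip.
print(execute("result = search('dns')"))
# → {'POST /zones/{id}/dns_records': 'Create a DNS record'}
```

The token saving in the article follows directly from this shape: only the two tool definitions are ever loaded into the model, no matter how many endpoints the spec contains.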

Business Wire
Apr 14th, 2026
Cloudflare partners with Wiz to eliminate shadow AI blind spots across global infrastructure

Cloudflare has partnered with Wiz, now part of Google Cloud, to help organisations identify and secure AI-powered applications across their infrastructure. The integration combines Cloudflare's AI Security for Apps with Wiz's Security Graph to provide comprehensive visibility into AI deployments. The partnership addresses the challenge of shadow AI, where organisations deploy AI features faster than security teams can track them. By integrating Cloudflare's security rules into Wiz's platform, security teams can discover unprotected AI endpoints, inspect traffic in real time for threats like prompt injection and data leakage, and verify that guardrails are properly configured. The solution is model and host-agnostic, protecting endpoints regardless of LLM or cloud provider. The integration runs on Cloudflare's global network without adding latency or requiring architectural changes.
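At its core, the shadow-AI discovery the article describes is an inventory-difference problem: endpoints observed serving AI traffic minus endpoints that already have guardrails attached. A toy sketch (all endpoint names invented for illustration):

```python
"""Toy sketch of shadow-AI discovery: compare AI endpoints observed in
traffic against endpoints with security rules configured. Endpoint
names are invented; real discovery would draw on traffic telemetry."""


def find_shadow_endpoints(observed: set[str], protected: set[str]) -> list[str]:
    """Endpoints serving AI traffic with no security rules attached."""
    return sorted(observed - protected)


# Invented example inventories.
observed = {"/api/chat", "/api/summarize", "/internal/agent"}
protected = {"/api/chat"}
print(find_shadow_endpoints(observed, protected))
# → ['/api/summarize', '/internal/agent']
```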

The Register
Apr 13th, 2026
Cloudflare rebuilds Wrangler CLI for AI agents, not human developers

Cloudflare is rebuilding its Wrangler command-line interface tool primarily for AI agents, which are becoming its major API users. The company aims to make every Cloudflare product available through a consistent, programmable interface that agents can use to build and operate applications. The redesign includes a technical preview available via npx cf, featuring a new TypeScript schema that defines APIs, CLI commands and arguments. Cloudflare is enforcing default CLI commands at the schema layer to ensure AI agents don't fail due to non-standard commands. For human developers, Cloudflare introduced Local Explorer in open beta, allowing users to inspect Cloudflare Workers bindings and stored data. The company is accepting feedback via its developer Discord channel and plans to expand coverage in coming months, though no specific timeline was provided.
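The article says default CLI commands are enforced at the schema layer so agents cannot fail on non-standard invocations. A minimal illustration of that idea follows, written in Python rather than Cloudflare's actual TypeScript schema, with invented commands and flags:

```python
"""Sketch of schema-enforced CLI validation: each command declares its
flags up front, and agent-issued invocations are checked against the
schema before anything runs. Commands and flags are invented."""

# Invented schema: required and optional flags per command.
CLI_SCHEMA = {
    "deploy": {"required": {"--name"}, "optional": {"--env"}},
    "tail": {"required": {"--name"}, "optional": set()},
}


def validate(command: str, flags: set[str]) -> list[str]:
    """Return a list of problems; an empty list means the invocation is valid."""
    if command not in CLI_SCHEMA:
        return [f"unknown command: {command}"]
    spec = CLI_SCHEMA[command]
    problems = [f"missing flag: {f}" for f in spec["required"] - flags]
    allowed = spec["required"] | spec["optional"]
    problems += [f"unknown flag: {f}" for f in flags - allowed]
    return problems


print(validate("deploy", {"--name", "--env"}))  # → []
print(validate("deploy", {"--force"}))
# → ['missing flag: --name', 'unknown flag: --force']
```

Rejecting a malformed invocation with a structured error, rather than letting it reach the API, is what lets an agent self-correct instead of failing opaquely.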

Dolphin Publications
Apr 13th, 2026
Cloudflare introduces new features for building and deploying agents.

With Dynamic Workers, Sandboxes, Artifacts, and the Think framework, the company aims to help AI agents evolve from experiments on local laptops to full-fledged workloads on the Cloudflare network.

"The way people build software is fundamentally changing. We are entering a world where agents are the ones writing and executing code," says Matthew Prince, CEO of Cloudflare. "But agents need a home that is secure by default, scales to millions instantly, and persists across long-running tasks."

The new Dynamic Workers system is an isolate-based runtime that executes AI-generated code in a secure environment. Cloudflare claims that Dynamic Workers start up a hundred times faster than traditional containers and incur only a fraction of the cost, scaling to millions of concurrent executions without warm-up.

For longer-running tasks, Cloudflare introduces Sandboxes: full Linux environments where agents clone repositories, install Python packages, and build code. In addition, the company is launching Artifacts, a Git-compatible storage layer that enables developers to create tens of millions of agent repositories. The Think framework within the Agents SDK focuses on persistence: agents use it to sustain long-running tasks rather than merely responding to individual prompts.

Building on the acquisition of Replicate, which gave Cloudflare access to over 50,000 AI models, the company is expanding its model catalog further. Developers can choose from OpenAI models and open-source alternatives via a single interface. Switching between providers requires changing just one line of code, Cloudflare promises.

The Fast Mode
Apr 13th, 2026
Virtru integrates Data Security Platform with Cloudflare R2 to enable object-level access control.

Virtru, the leader in data-centric security, today announced that its Data Security Platform now delivers object-level data governance to Cloudflare R2 cloud storage. The integration enables organizations to enforce cryptographic, attribute-based access policies on individual objects stored in R2, transforming a single storage bucket into a governed repository where different files carry different access rules, enforced by the data itself. As a result, organizations can store, search, analyze, and connect AI tools to their most sensitive data in Cloudflare R2 while retaining persistent, granular control over every object, ensuring that only authorized individuals and systems can access sensitive data, regardless of location or application.

Shifting Access Control from the Bucket Level to the Data Level

Like all S3-compatible object storage, Cloudflare R2 governs access at the bucket level: everyone with access to a bucket can see everything inside it. Organizations have historically worked around this limitation by proliferating buckets, creating separate buckets for different sensitivity levels, different departments, and different regulatory regimes. The result is architectural complexity that drives up cost, slows operations, and creates data silos with governance gaps.

The Virtru Data Security Platform eliminates that tradeoff. With Trusted Data Format (TDF) encryption and attribute-based access control (ABAC) applied at the individual object level, a single R2 bucket can hold objects with entirely different governance profiles. A finance analyst and an engineering lead can both access the same bucket, but each can only open the files for which they are authorized. Contracts, engineering specifications, research data, and compliance records coexist in a single repository, each governed by its own policy and enforced cryptographically by the data itself.

From Protected Storage to Governed Operations

"Securing data at rest has never been a hard problem," said John Ackerly, CEO and Co-Founder of Virtru. "The hard problem is governing what happens to sensitive data once it's put to work - searched, analyzed, queried, or accessed by AI tools and automated workflows. Data owners shouldn't have to choose between the operational and economic benefits of modern cloud storage and the ability to govern their most sensitive data. Now, with the Virtru Data Security Platform and Cloudflare R2, they no longer have to."

Because every object in R2 now carries its own cryptographically enforced access policy, the Virtru Data Security Platform enables organizations to move beyond static storage protection into governed operations, where sensitive data can be actively searched, analyzed, and acted upon while policy enforcement remains continuous and granular. Every operation is evaluated in real time against the requesting user's attributes and enforced by the object's own TDF-wrapped policy. Governance doesn't depend on the application, the network, or the storage provider; it travels with the data.

Why Cloudflare R2

R2's zero egress fees make object-level governance especially practical. When data retrieval incurs no transfer costs, real-time policy evaluation adds no compounding overhead. Organizations get the storage economics they chose R2 for, plus the ability to commingle data with different sensitivity levels in the same repository, each object individually protected, revocable at any time, and auditable across every access event. Cloudflare secures the network and infrastructure; the Virtru Data Security Platform secures the data itself.

Virtru + Cloudflare: Complementary Security Architecture

R2 and the Virtru Data Security Platform operate at complementary layers of the security stack:
  • Infrastructure layer (Cloudflare): encryption at rest and in transit, DDoS protection, global distribution across 330+ data centers, S3-compatible API, and native Workers integration for edge compute.
  • Data layer (Virtru): object-level TDF encryption, attribute-based access control, real-time policy enforcement, access revocation, and comprehensive audit logging across every access event.

TDF encryption ensures that objects stored in R2 remain cryptographically protected even at rest: Cloudflare infrastructure cannot decrypt the contents. Only users, systems, or applications whose attributes satisfy the object's ABAC policy can access the plaintext. Data sovereignty stays with the data owner, not the storage provider.

Now Available to Early Adopters

The integration is available now through an early adopter program. Organizations interested in deploying object-level data governance across their Cloudflare R2 environments can learn more at virtru.com/data-security-platform or contact their Virtru account representative.

By Ray Sharma, The Fast Mode.
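The attribute-based model described above can be sketched in a few lines. This is an invented illustration of ABAC policy evaluation, not Virtru's TDF format; real TDF policies are enforced cryptographically by the object's encryption, not by an in-process check:

```python
"""Illustrative ABAC check in the spirit of the integration described
above: each object carries its own policy, evaluated against the
requesting user's attributes. Attribute names and the policy shape are
invented for illustration."""


def can_access(user_attrs: dict[str, str], policy: dict[str, set[str]]) -> bool:
    """Grant access only if every attribute named in the policy is
    present on the user with an allowed value."""
    return all(
        user_attrs.get(attr) in allowed for attr, allowed in policy.items()
    )


# Two objects in the same (hypothetical) bucket, with different policies.
finance_policy = {"department": {"finance"}, "clearance": {"high"}}
eng_policy = {"department": {"engineering"}}

analyst = {"department": "finance", "clearance": "high"}
print(can_access(analyst, finance_policy))  # → True
print(can_access(analyst, eng_policy))  # → False
```

This is the "same bucket, different rules" property from the article: access is decided per object by its own policy, not by bucket membership.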

Status: Inactive