Full-Time
Posted on 10/7/2025
Cloud infrastructure provider offering SSD VPS
No salary listed
Noida, Uttar Pradesh, India
In Person
Vultr provides cloud infrastructure services that let developers, startups, and enterprises run high-performance cloud servers (SSD VPS), storage, and networking. Its products include cloud compute instances, storage solutions, and networking capabilities, which can be deployed globally in 60 seconds and billed on a pay-for-use basis. Customers manage resources through an online dashboard and API, scaling up or down as needed. Vultr differentiates itself by emphasizing fast global deployment, straightforward management, and strong customer support at a competitive price, supported by a large base of users and deployed servers. The company's goal is to simplify cloud computing and offer scalable, reliable infrastructure that is easy to use and cost-effective for a worldwide audience.
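To make the "manage resources through an API" point concrete, here is a minimal sketch of constructing an instance-creation request against Vultr's public v2 API. The endpoint and Bearer-token scheme follow Vultr's published API; the region, plan, and OS values are illustrative placeholders, and the request is built but deliberately not sent.

```python
import json
import os
import urllib.request

# Base URL of Vultr's public v2 API.
API_BASE = "https://api.vultr.com/v2"

def build_instance_request(api_key: str, region: str, plan: str, os_id: int):
    """Construct (but do not send) an instance-creation request.

    The region/plan/os_id values passed by the caller are example
    placeholders, not recommendations.
    """
    body = {"region": region, "plan": plan, "os_id": os_id}
    return urllib.request.Request(
        f"{API_BASE}/instances",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_instance_request(
    os.environ.get("VULTR_API_KEY", "demo-key"),
    region="ewr", plan="vc2-1c-1gb", os_id=2136,
)
print(req.full_url)                    # https://api.vultr.com/v2/instances
print(json.loads(req.data)["region"])  # ewr
```

Sending the request (for example with `urllib.request.urlopen(req)`) would require a real API key; everything up to that point is plain request construction and runs offline.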
Company Size
201-500
Company Stage
Debt Financing
Total Funding
$662M
Headquarters
West Palm Beach, Florida
Founded
2014
Remote Work Options
401(k) Retirement Plan
401(k) Company Match
Professional Development Budget
Paid Vacation
Paid Sick Leave
Home Office Stipend
Phone/Internet Stipend
Gym Membership
Vultr says its Nvidia-powered AI infrastructure costs 50% to 90% less than comparable offerings from major hyperscalers. The company is using Nvidia GPUs and AI agents such as OpenClaw to automate infrastructure setup for developers. The platform, built for internal developer portals, lets platform engineering teams train AI on their own security policies, networking rules, and compliance requirements, then expose that as a library of preconfigured options developers can deploy with a click. During KubeCon+CloudNativeCon Europe, Vultr's chief marketing officer, Kevin Cochrane, who has acquired a deep technical background over the past two decades, described Nvidia as providing the "fuel" or "electricity" for what Vultr offers. "We want to help platform engineers build a frickin' BMW so that when they get on the freeway, they're actually getting on the autobahn, and they're going 240 kilometers an hour," Cochrane said. "You are going to consume the fuel or electricity, which we offer." Cheaper compute, automated setup. With access to Nvidia's resources, Vultr says it can offer a less expensive experience, providing cheaper "fuel" or "electricity" for the compute that is often prohibitively expensive for organizations. "The challenge is that if you have a BMW, and you're going to go really fast, you're otherwise going to wind up investing a lot in compute," Cochrane added. Vultr is on a mission to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators around the world.
Vultr is trusted by hundreds of thousands of active customers across 185 countries for its flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Vultr has created this alternative to give platform engineering teams the best of both worlds: high-powered, AI-integrated templates for creating and managing internal developer portals at a very competitive cost. According to Vultr, those savings hold across both this new functionality and its existing offerings. Skill files replace manual scripts. In this new approach to AI infrastructure, the platform engineer's role shifts from manual setup to high-level architectural design. Instead of hand-coding every script, they focus on building core skills, Cochrane told The New Stack. These skills are essentially "skill files" that an AI agent, such as OpenClaw, uses to perform specific operational tasks. To create these, the platform engineer develops a corpus or library of artifacts that serve as a training set. These artifacts represent a "known good set of principles" that have been "pre-baked" and "blessed by everybody" on the technical teams, Cochrane said. For example, a network engineering team might create a network skill. This file tells the AI exactly how to "create a VPC," establish a "direct connect" between specific cities, and set up "failover" regions, Cochrane said. Once these skills are exposed through a developer portal, downstream developers can deploy an application without worrying about "networking," "data center locations," or "attaching storage," Cochrane said. Since the platform is "100% API-driven," the AI agent simply uses those skill files to automate the entire configuration, Cochrane said. This ensures that complex requirements, like data privacy and security policies, are handled automatically, preventing developers from "messing that up."
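Vultr has not published a skill-file format, so the following is a purely hypothetical sketch of what the network-skill idea described above might look like: a small declarative document listing pre-approved actions, which an agent validates before acting on. Every field name here is invented for illustration.

```python
# Hypothetical "network skill" an AI agent might consume. The schema
# and field names are invented for illustration; Vultr has not
# published a skill-file format.
NETWORK_SKILL = {
    "skill": "networking",
    "actions": {
        "create_vpc": {"cidr": "10.0.0.0/16"},
        "direct_connect": {"from": "new-jersey", "to": "london"},
        "failover": {"regions": ["tokyo"]},
    },
    # Teams that have "blessed" this pre-baked configuration.
    "blessed_by": ["network-engineering"],
}

REQUIRED_KEYS = {"skill", "actions", "blessed_by"}

def allowed_actions(skill: dict) -> list:
    """Return the actions an agent may run, after checking that the
    skill has the pre-approved structure and at least one approver."""
    missing = REQUIRED_KEYS - skill.keys()
    if missing:
        raise ValueError(f"skill file missing keys: {sorted(missing)}")
    if not skill["blessed_by"]:
        raise ValueError("skill has not been approved by any team")
    return sorted(skill["actions"])

print(allowed_actions(NETWORK_SKILL))
# ['create_vpc', 'direct_connect', 'failover']
```

The design point this illustrates is the one Cochrane describes: the platform team encodes the "known good" configuration once, and the agent is limited to replaying those approved actions rather than improvising infrastructure changes.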
Ultimately, all the infrastructure complexity is "completely obfuscated," so the developer can focus solely on the application itself, Cochrane said. Nvidia's stack powers Vultr. Cochrane described three components of Nvidia's stack that Vultr orchestrates: * Nvidia Dynamo: An "AI operating system" for infrastructure management, both stateful and stateless, for Kubernetes. * Nvidia Vera Rubin Platform: An integrated system combining GPUs, CPUs, networking, and storage to push the "efficient frontier of tokenomics." * Agentic AI & NemoClaw: An emphasis on an open source stack (including OpenClaw/NemoClaw) that provides a secure foundation for autonomous agents through higher-level "skills" and abstractions. Developers click, not configure. Once the platform engineering team has set up the IDP, the developer can click what they want and need (and does not have access to an AI agent that might damage infrastructure or inadvertently run up a $50,000 cloud bill). The developer can use the IDP API to choose the server, by selecting Cloud GPU or Optimized Cloud, for example, and its location, whether in New Jersey, London, or Tokyo. A "Marketplace" tab is accessible via the Nvidia NemoClaw icon. Other configurations on the menu that Vultr's system automates include server size, Nvidia GPU models (such as H100 or A100), RAM, and monthly price. "Any developer that's building some downstream application can use something like OpenClaw and take those skills... they basically just let their claw set up their pipelines, models, pipelines for their codes, and then just go," Cochrane said. "All of that complexity should get handled by the platform engineering team, and everything else should be completely obfuscated to the developer. They shouldn't need to know anything about it."
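The "click, not configure" guardrail described above can be sketched as a pre-approved catalog plus a budget ceiling: developers only ever see options the platform team has blessed for their region and spend limit, which is what prevents the surprise $50,000 bill. The catalog entries, plan names, and prices below are invented for illustration; only the region names and GPU models come from the article.

```python
# Invented catalog of pre-approved deployment options. Plan names and
# monthly prices are illustrative, not real Vultr pricing.
CATALOG = [
    {"name": "cloud-gpu-h100", "gpu": "H100", "region": "new-jersey",
     "monthly_usd": 2200},
    {"name": "cloud-gpu-a100", "gpu": "A100", "region": "london",
     "monthly_usd": 1400},
    {"name": "optimized-cloud", "gpu": None, "region": "tokyo",
     "monthly_usd": 96},
]

def deployable(region: str, monthly_budget_usd: float) -> list:
    """Options a developer may click: right region, within budget."""
    return [entry["name"] for entry in CATALOG
            if entry["region"] == region
            and entry["monthly_usd"] <= monthly_budget_usd]

print(deployable("london", 1500))     # ['cloud-gpu-a100']
print(deployable("new-jersey", 500))  # [] -- over budget, nothing offered
```

Anything outside the catalog or over budget simply never appears as a clickable option, which is the sense in which the complexity (and the risk) is "completely obfuscated" from the developer.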
Vultr sponsored this post.
Vultr seeks at least $1bn in funding - report. Could be debt or equity, or a combination of the two. March 25, 2026 Privately owned cloud company Vultr is reportedly looking to raise at least $1 billion in new capital. Vultr is said to be working with Goldman Sachs on the funding effort, which could potentially reach "billions of dollars" of debt and equity. According to The Information, citing people "with direct knowledge of the process," the capital will be used to help Vultr compete in the AI compute race. DCD has contacted Vultr for comment. The cloud company previously completed a funding round in December 2024 that raised $333 million, led by LuminArx Capital Management and AMD Ventures. Goldman Sachs & Co. LLC also served as financial advisor to Vultr for that transaction. That funding round valued Vultr at $3.5 billion, though a new deal would likely push that valuation up. In 2025, DCD exclusively reported that the company was looking to do an initial public offering at "some time in the coming years," with chief marketing officer Kevin Cochrane explaining that the December 2024 round was done, in part, to "start setting us upon the path." Vultr, founded in 2014, currently operates out of 32 cloud locations globally. AMD is an investor in the company, and Vultr has previously announced plans for a 50MW GPU cluster using AMD chips in Springfield, Ohio. The company has also confirmed plans to offer an "optimized inference stack" based on the Nvidia Rubin platform, which can be deployed on public, private, or sovereign clouds by customers.
Vultr adopts NVIDIA Rubin Platform, NVIDIA Dynamo, and NVIDIA Nemotron to reinvent enterprise AI inference. Mar 17, 2026 Vultr, the world's largest privately-held cloud infrastructure company, announced it is delivering an optimized inference stack on the NVIDIA Rubin platform. This latest milestone in NVIDIA and Vultr's long-standing collaboration yields industry-leading tokenomics to support enterprises with ready-to-deploy composable cloud infrastructure leveraging NVIDIA's optimized open-source model and inference frameworks. Vultr announced immediate availability of full-stack NVIDIA AI Enterprise Inference solutions through partner NetApp, with planned NVIDIA Vera Rubin support in Q4 2026. The new optimized stack unlocks the full power of NVIDIA Vera Rubin for agentic AI and inference in the enterprise. As part of this industry-first solution, Vultr is adopting the NVIDIA Dynamo inference framework and NVIDIA Nemotron model family to accelerate industry-specific AI outcomes and targeted use cases. These powerful open-source resources enable higher throughput and seamless scaling of inference workloads. Combined with Vultr's high-performance infrastructure, NVIDIA Dynamo and Nemotron accelerate the path to deployment while reducing the cost of inference - a critical barrier to scaling enterprise AI initiatives. "The rise of agentic AI demands powerful, reliable AI infrastructure and a production-ready full stack to accelerate innovation," said J.J. Kardwell, CEO of Vultr. "With NVIDIA and our software partners, we are delivering an integrated AI environment that enables enterprises to deploy next-generation models efficiently and at scale on NVIDIA's Rubin Platform." As a Preferred NVIDIA Cloud Partner, Vultr deploys NVIDIA AI infrastructure at any scale, globally. Customers can build once and deploy widely, driving scale and reducing time-to-value for AI applications. 
The new enterprise inference stack can be deployed on public, private, or sovereign clouds, making it suitable for a broad spectrum of enterprise use cases, including those involving highly sensitive data. Vultr and NVIDIA are also working together on NVIDIA NemoClaw - an open source stack that simplifies running OpenClaw always-on assistants more safely, with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell runtime - a secure environment for running autonomous agents - and open source models like NVIDIA Nemotron. "Vultr's global reach and hyperscaler-level capacity make them a powerful partner in this next evolution of the AI era," said Dave Salvator, Director of Accelerated Computing Products at NVIDIA. "Innovating with Vultr allows us to optimize our robust open-source portfolio for enterprise AI workloads, propelling advancements in agentic AI and reinventing the economics of inference. Unlocking NVIDIA Vera Rubin systems means unlocking the future of the enterprise, where AI takes productivity, efficiency, and quality of service to new heights." Further enhancing the stack, Vultr has partnered with NetApp to deliver the resilient, high-performance foundation required for an AI-ready data estate. NetApp's AFX, a disaggregated data management platform, delivers the performance and scale needed for building modern AI-driven business solutions. Combined with NetApp's AI Data Engine, built on the NVIDIA AI Data Platform reference design, AI services are accelerated with AI-ready data transformed in place, secured and performant for enterprise-scale inferencing driving agentic AI workflows. "Our collaboration with Vultr was founded on a shared mission to help enterprises navigate today's data management challenges and push the boundaries of AI," said Syam Nair, Chief Product Officer at NetApp. 
"In bringing an enterprise-grade data platform delivering GPU-saturating performance with built-in security to this optimized stack for the next generation of AI infrastructure, we're helping customers leverage AI with agility and deliver business outcomes without compromises." Vultr's expansive global footprint and scaling capabilities have set the standard for enterprise-ready cloud infrastructure solutions. Hundreds of thousands of enterprises and developers across the globe rely on Vultr's flexible, cost-efficient infrastructure to support their most demanding workloads. With 33 cloud data center regions across six continents, Vultr provides GPU-forward, infrastructure-first solutions with the data residency, security, and accessibility needed to support mission-critical AI workloads, including in highly regulated industries.
About Vultr. Vultr is on a mission to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators around the world. Vultr is trusted by hundreds of thousands of active customers across 185 countries for its flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. In December 2024, Vultr announced an equity financing at a $3.5 billion valuation. Founded by David Aninowsky and self-funded for over a decade, Vultr has grown to become the world's largest privately-held cloud infrastructure company.
Vultr AMD AI cluster expands cloud infrastructure. Vultr AMD: a game changer for AI infrastructure. The Vultr AMD initiative marks a major step in the artificial intelligence infrastructure market. Vultr plans to invest nearly $1 billion to build a large AI chip cluster in Ohio: a 50-megawatt data center running advanced processors from Advanced Micro Devices. The project highlights the growing demand for AI computing resources across industries. Many organizations now rely on artificial intelligence to process large amounts of data, so cloud providers are building stronger infrastructure to support these workloads. AI systems require powerful hardware to train models and analyze data quickly, which is driving investment in data centers designed specifically for AI operations. The new facility will expand Vultr's cloud computing capabilities, providing high-performance infrastructure for machine learning, analytics, and automation tools, and giving developers and enterprises access to powerful computing resources without building their own hardware systems. With artificial intelligence transforming industries such as healthcare, finance, manufacturing, and logistics, businesses are searching for reliable cloud platforms that can handle demanding AI workloads, and Vultr aims to support that shift with dedicated infrastructure. Benefits and challenges of the Vultr AMD initiative. The new AI cluster is expected to deliver several advantages for companies developing artificial intelligence solutions. The Vultr AMD infrastructure will let organizations process large datasets more efficiently, so developers can train machine learning models faster and deploy applications sooner. 
For example, data scientists often need powerful hardware to run experiments and simulations. With specialized cloud infrastructure, they can scale workloads quickly and test new algorithms without worrying about hardware limitations, letting companies innovate faster and improve their AI systems. Enterprises will also see better performance in real-time applications: fraud detection platforms, predictive analytics tools, and automated monitoring solutions all depend on fast data processing, and high-performance AI clusters help keep those operations smooth and reliable. Building a large AI facility brings challenges of its own, however. Data centers designed for artificial intelligence consume significant amounts of electricity, so operators must invest in efficient cooling technologies and energy management systems. Workforce expertise is another hurdle: managing complex AI infrastructure requires highly skilled engineers and system specialists, and as demand for AI talent grows, technology companies must focus on training and recruitment or risk struggling to maintain such large computing environments. Even so, the long-term benefits of advanced AI infrastructure remain significant: well-managed computing clusters can drive innovation and improve business productivity. Future outlook for Vultr AMD. The Vultr AMD project reflects a broader trend in the technology industry: many cloud providers are investing heavily in infrastructure designed specifically for artificial intelligence, and as AI adoption increases, demand for powerful computing environments will continue to rise. Organizations now use artificial intelligence for automation, predictive analytics, and intelligent decision-making. 
Cloud platforms must therefore provide scalable environments capable of handling large workloads, and specialized AI clusters make that possible. Cloud infrastructure also lets companies of all sizes experiment with artificial intelligence: small startups and research teams can access powerful computing resources without purchasing expensive hardware, making innovation more accessible across the industry. Looking ahead, investments in AI infrastructure are likely to increase worldwide as technology companies treat advanced computing environments as essential to future innovation, and new facilities similar to this AI cluster may appear in other regions. Ultimately, Vultr's investment demonstrates how cloud providers are adapting to the demands of artificial intelligence: by building scalable, specialized computing environments, the company aims to support developers, enterprises, and researchers working with advanced AI technologies.