Full-Time
Cloud service for GPU-accelerated workloads
$200k - $262k/yr
Expert
Locations: Livingston, NJ, USA | New York, NY, USA | Sunnyvale, CA, USA
Candidates not living within 30 miles of an office may be considered for remote work, but onboarding will occur at one of the hubs within the first month of employment.
CoreWeave provides cloud computing services that focus on GPU-accelerated workloads, which are essential for tasks requiring high computational power like Generative AI, Machine Learning, and Visual Effects rendering. Their services allow clients to access powerful computing resources without needing to invest in expensive hardware, operating on a pay-as-you-go model. This flexibility is particularly beneficial for tech companies, film studios, and enterprises that need scalable solutions for data processing. CoreWeave utilizes a fully managed, bare metal serverless Kubernetes platform, which enhances performance while minimizing operational burdens for clients. By offering a variety of NVIDIA GPUs, they enable clients to optimize performance and costs based on their specific needs. CoreWeave's goal is to provide efficient and scalable cloud computing resources tailored to industries that demand high-performance computing.
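To make the pay-as-you-go model described above concrete, here is a minimal cost-arithmetic sketch. The hourly rates and GPU names below are illustrative placeholders only, not CoreWeave's actual pricing or catalog:

```python
# Hypothetical pay-as-you-go cost sketch. The rates below are assumed
# placeholder values for illustration, NOT real CoreWeave pricing.
HOURLY_RATES_USD = {
    "A100": 2.50,  # assumed rate
    "H100": 4.00,  # assumed rate
}

def estimate_cost(gpu_model: str, gpu_count: int, hours: float) -> float:
    """On-demand cost = hourly rate x number of GPUs x hours used."""
    return HOURLY_RATES_USD[gpu_model] * gpu_count * hours

# e.g. an 8-GPU run for 24 hours at the assumed H100 rate:
print(estimate_cost("H100", 8, 24))  # 768.0
```

The point of the model is visible in the arithmetic: the client pays only for GPU-hours consumed, rather than amortizing the purchase price of the hardware.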
Company Size
501-1,000
Company Stage
IPO
Headquarters
New York City, New York
Founded
2017
Health Insurance
Dental Insurance
Vision Insurance
Life Insurance
Disability Insurance
Health Savings Account/Flexible Spending Account
Tuition Reimbursement
Mental Health Support
Family Planning Benefits
Paid Parental Leave
Hybrid Work Options
401(k) Company Match
Unlimited Paid Time Off
Catered lunch each day in our office and data center locations
A casual work environment
CoreWeave, which provides cloud software to power AI, said Tuesday that Cohere, IBM and Mistral AI were the first customers to gain access to NVIDIA GB200 NVL72 rack-scale systems and CoreWeave's portfolio of cloud services. The combination of these services is intended to advance AI model development and deployment. (Image: NVIDIA GB200 NVL72; credit: NVIDIA)
CoreWeave has launched a cutting-edge cloud service featuring NVIDIA's GB200 NVL72 systems, making it one of the first providers to offer these advanced technologies at scale.
Initial customers include IBM, Mistral AI and Cohere.

LIVINGSTON, N.J., April 15, 2025 /PRNewswire/ -- CoreWeave, the AI Hyperscaler™, today announced Cohere, IBM and Mistral AI are the first customers to gain access to NVIDIA GB200 NVL72 rack-scale systems and CoreWeave's full stack of cloud services, a combination designed to advance AI model development and deployment.

AI innovators across enterprises and other organizations now have access to advanced networking and NVIDIA Grace Blackwell Superchips purpose-built for reasoning and agentic AI, underscoring CoreWeave's consistent record of being among the first to market with advanced AI cloud solutions.

"CoreWeave is built to move faster – and time and again, we've proven it by being first to operationalize the most advanced systems at scale," said Michael Intrator, Co-Founder and Chief Executive Officer of CoreWeave. "Today is a testament to our engineering prowess and velocity, as well as our relentless focus on enabling the next generation of AI. We are thrilled to see visionary companies already achieving new breakthroughs on our platform. By delivering the most advanced compute resources at scale, CoreWeave empowers enterprise and AI lab customers to innovate faster and deploy AI solutions that were once out of reach."

"Enterprises and organizations around the world are racing to turn reasoning models into agentic AI applications that will transform the way people work and play," said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. "CoreWeave's rapid deployment of NVIDIA GB200 systems delivers the AI infrastructure and software that are making AI factories a reality."

CoreWeave offers advanced AI cloud solutions while maximizing efficiency and breaking performance records. The company recently achieved a new industry record in AI inference with NVIDIA GB200 Grace Blackwell Superchips, reported in the latest MLPerf v5.0 results.
CoreWeave is the first cloud service provider to submit MLPerf Inference v5.0 results for NVIDIA GB200 Superchips.

LIVINGSTON, N.J., April 2, 2025 /PRNewswire/ -- CoreWeave, the AI Hyperscaler™, today announced its MLPerf v5.0 results, setting a new industry benchmark in AI inference with NVIDIA GB200 Grace Blackwell Superchips. Using a CoreWeave instance with NVIDIA GB200, featuring two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs, CoreWeave delivered 800 tokens per second (TPS) on the Llama 3.1 405B model, one of the largest open-source models.

"CoreWeave is committed to delivering cutting-edge infrastructure optimized for large-model inference through our purpose-built cloud platform," said Peter Salanki, Chief Technology Officer at CoreWeave. "These benchmark MLPerf results reinforce CoreWeave's position as a preferred cloud provider for leading AI labs and enterprises."
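As a rough sanity check on the benchmark figure above, the per-GPU throughput and end-to-end generation time can be back-calculated. The instance shape (four Blackwell GPUs) and the 800 TPS number come from the press release; the rest is simple arithmetic:

```python
# Back-of-the-envelope math from the cited MLPerf v5.0 figure:
# 800 tokens/s on Llama 3.1 405B, on an instance with 4 Blackwell GPUs.
INSTANCE_TPS = 800
GPUS_PER_INSTANCE = 4

# Implied per-GPU throughput at the instance level:
per_gpu_tps = INSTANCE_TPS / GPUS_PER_INSTANCE

# Wall-clock time to generate a 1,000-token completion at the full rate:
seconds_for_1k_tokens = 1000 / INSTANCE_TPS

print(per_gpu_tps, seconds_for_1k_tokens)  # 200.0 1.25
```

So the headline number works out to roughly 200 tokens per second per GPU, or about 1.25 seconds for a 1,000-token completion at the full instance rate; real latency would also include prompt processing, which the simple division above ignores.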