Full-Time
Cloud service for GPU-accelerated workloads
No salary listed
Junior, Mid
Newark, CA, USA
Must be in-office at least three times a week; candidates should live within a 30-mile radius of the New Jersey, New York, Philadelphia, Sunnyvale, or Bellevue office locations.
CoreWeave provides cloud computing services focused on GPU-accelerated workloads, which power compute-intensive tasks such as generative AI, machine learning, and visual effects rendering. Its services let clients access high-performance computing resources on a pay-as-you-go basis, without investing in expensive hardware of their own. CoreWeave's infrastructure runs on a fully managed, bare-metal serverless Kubernetes platform, which improves performance while minimizing operational complexity for clients. This setup is particularly well suited to tech companies, film studios, and enterprises that need scalable, efficient computing. Unlike many competitors, CoreWeave offers a wide range of NVIDIA GPUs, letting clients tune performance and cost to their specific needs. The company's goal is to provide flexible, scalable computing resources that meet the growing demands of a variety of industries.
Company Size
501-1,000
Company Stage
IPO
Headquarters
New York City, New York
Founded
2017
Health Insurance
Dental Insurance
Vision Insurance
Life Insurance
Disability Insurance
Health Savings Account/Flexible Spending Account
Tuition Reimbursement
Mental Health Support
Family Planning Benefits
Paid Parental Leave
Hybrid Work Options
401(k) Company Match
Unlimited Paid Time Off
Catered lunch each day in our office and data center locations
A casual work environment
CoreWeave is the first cloud service provider to submit MLPerf Inference v5.0 results for NVIDIA GB200 Superchips. LIVINGSTON, N.J., April 2, 2025 /PRNewswire/ -- CoreWeave, the AI Hyperscaler™, today announced its MLPerf v5.0 results, setting a new industry benchmark in AI inference with NVIDIA GB200 Grace Blackwell Superchips. Using a CoreWeave instance with NVIDIA GB200, featuring two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs, CoreWeave delivered 800 tokens per second (TPS) on the Llama 3.1 405B model, one of the largest open-source models. "CoreWeave is committed to delivering cutting-edge infrastructure optimized for large-model inference through our purpose-built cloud platform," said Peter Salanki, Chief Technology Officer at CoreWeave. "These benchmark MLPerf results reinforce CoreWeave's position as a preferred cloud provider for leading AI labs and enterprises."
The AI economy is "currently a closed loop," and that is probably why OpenAI, not Microsoft, invested a whopping $12bn in CoreWeave.
AI infrastructure company CoreWeave has secured $1.5 billion in its initial public offering, achieving a valuation of approximately $23 billion, according to Bloomberg reports on Thursday.
CoreWeave's IPO was priced at $40 per share, lower than the expected $47-$55 range, raising $1.5 billion and valuing the company at $19 billion. The offering was reduced to 37.5 million shares. Despite attracting Nvidia as a major investor, concerns about CoreWeave's reliance on key customers, high debt, and cash burn persist. Analysts are skeptical about its long-term sustainability, with 90% of surveyed investors doubting its "sustainable moat." The IPO is a key test for the tech market's appetite for new offerings.