Industries
AI & Machine Learning
Company Size
201-500
Company Stage
Series D
Total Funding
$372.6M
Headquarters
San Jose, California
Founded
2012
Lambda Labs provides cloud-based services for artificial intelligence (AI) training and inference, focusing on large language models and generative AI. Its main product, the AI Developer Cloud, uses NVIDIA's GH200 Grace Hopper™ Superchip to deliver efficient, cost-effective GPU resources. Customers can access on-demand and reserved cloud GPUs, which are essential for processing large datasets quickly, with pricing starting at $1.99 per hour for NVIDIA H100 instances. Lambda Labs serves AI developers and companies that need extensive GPU deployments, offering competitive pricing and infrastructure-ownership options through its Lambda Echelon service. It also provides Lambda Stack, a software bundle that simplifies installing and managing AI tools, used by more than 50,000 machine learning teams. Lambda Labs' goal is to support AI development by providing accessible and efficient cloud GPU services.
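The on-demand pricing above makes cost estimation straightforward. The sketch below turns the $1.99/hour H100 rate into a monthly estimate; the usage figures (hours per day, days per month) are made-up illustration values, not Lambda pricing terms.

```python
# Hypothetical cost estimate for an on-demand GPU instance.
# The $1.99/hour H100 rate comes from the profile above; the usage
# figures are illustrative assumptions only.

HOURLY_RATE_USD = 1.99   # on-demand NVIDIA H100, per the profile
HOURS_PER_DAY = 8        # assumed daily training schedule
DAYS = 30                # assumed one-month run

def monthly_cost(rate: float, hours_per_day: int, days: int) -> float:
    """Return the total on-demand cost in USD for the given usage."""
    return rate * hours_per_day * days

print(f"${monthly_cost(HOURLY_RATE_USD, HOURS_PER_DAY, DAYS):,.2f}")
```

Reserved-capacity pricing would change the per-hour rate, but the arithmetic is the same.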
Total Funding
$372.6M (above industry average, funded over 6 rounds)
Lambda Labs Inc. has secured $480 million in a Series D funding round, valuing the company at $2.5 billion. The round was led by Andra Capital and SGW, with participation from Nvidia Corp., Super Micro Computer Inc., and Andrej Karpathy. Lambda plans to use the funds to enhance its AI cloud infrastructure and software, including the addition of Nvidia's Blackwell B200 GPUs and the development of Lambda Chat, a service offering free access to open-source large language models.
Lambda Labs (also known as Lambda Cloud, or simply Lambda) is a 12-year-old San Francisco company best known for offering graphics processing units (GPUs) on demand as a service to machine learning researchers and AI model builders and trainers. Today it is taking its offerings a step further with the launch of the Lambda Inference API (application programming interface), which it claims is the lowest-cost service of its kind on the market, letting enterprises deploy AI models and applications into production for end users without worrying about procuring or maintaining compute. The launch complements its existing focus on providing GPU clusters for training and fine-tuning machine learning models. "Our platform is fully verticalized, meaning we can pass dramatic cost savings to end users compared to other providers like OpenAI," said Robert Brooks, Lambda's vice president of revenue, in a video call interview with VentureBeat. "Plus, there are no rate limits inhibiting scaling, and you don't have to talk to a salesperson to get started." In fact, Brooks told VentureBeat, developers can head to Lambda's new Inference API webpage, generate an API key, and get started in less than five minutes. Lambda's Inference API supports leading-edge models such as Meta's Llama 3.1, Nous's Hermes-3, and Alibaba's Qwen 2.5, making it one of the most accessible options for the machine learning community.
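The self-serve flow described above (generate an API key, call the API, no salesperson) can be sketched as follows. This is a minimal illustration assuming an OpenAI-style chat-completions endpoint; the base URL, endpoint path, and model identifier here are placeholders, not confirmed Lambda API details.

```python
import json
import urllib.request

# Assumed values for illustration; consult Lambda's own docs for the
# real base URL and model identifiers.
BASE_URL = "https://api.example-inference.com/v1"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                           # generated via the webpage

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat-completions request (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("llama-3.1-8b-instruct", "Hello!")
print(req.full_url)  # the request object is ready to pass to urlopen()
```

Because the request follows the common bearer-token, JSON-body convention, swapping in the real endpoint and key is the only change needed to send it with `urllib.request.urlopen(req)`.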
Liftr Insights data puts objective numbers to AI growth across the globe. AUSTIN, Texas, Dec. 11, 2024 /PRNewswire/ -- Liftr Insights, a pioneer in market intelligence driven by unique data, has revealed strong growth of AI in the cloud in terms of both AI options and AI costs. The quarter is not over, but Liftr Insights data already shows strong signs for AI in the cloud in Q4: the number of unique AI instance types has increased 11.3% over the last three months, with the bulk of this growth beginning in the second half of September 2024.
Casual observers could be forgiven for wondering where this company had come from, as there had been little in the way of the usual fanfare that surrounds most startups' journey to IPO — no roadshows; no horn tootin'; no confetti-laden ceremonies; nothing, not a peep. That's because Nebius is an unusual beast: a public company, but a startup in just about every sense of the word. The core Nebius business sells GPUs (graphical processing units) "as-a-service" to companies needing "compute" — that is, processing power and resources to carry out computational tasks such as running algorithms and executing machine learning models. Last month, the company debuted a holistic cloud computing platform designed for the "full machine learning lifecycle," spanning data processing, training, fine-tuning, and inference. With the restructuring complete, and Volozh free to run the show from the company's new HQ in the Netherlands, Nasdaq green-lighted Nebius to recommence trading last month. The situation was pretty much unprecedented, though: a public company whose trading was put on pause, only to resume nearly three years later under a new name and entirely different business proposition? In many ways, it would've made sense to have delisted and grown with private capital, the good old-fashioned startup way
Liftr Insights data reveals differences for Oracle Cloud on many factors, from number of cores to price per core, but not always. AUSTIN, Texas, Oct. 30, 2024 /PRNewswire/ -- Liftr Insights, a pioneer in market intelligence driven by unique data, revealed that Oracle offers an average price of 5.1 cents per CPU core per hour, lower than the other cloud providers that together represent 75% of the public cloud market. Prices matter for enterprises running diverse workloads, particularly when they understand the importance of multi-cloud environments deployed across multiple regions of the world.