Full-Time
AI inference technology for scalable solutions
$186.9k - $305.9k/yr
Senior, Expert
San Jose, CA, USA
Remote
Some roles may require being located near or on our primary sites, as indicated in the job description.
Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq ensures that all its products are designed, fabricated, and assembled in North America, which helps maintain high standards of quality and performance. The company targets a wide range of clients who need fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.
Company Size
201-500
Company Stage
Growth Equity (Non-Venture Capital)
Total Funding
$2.8B
Headquarters
Mountain View, California
Founded
2016
Remote Work Options
Company Equity
Groq, a rival to Nvidia, is reportedly seeking a $6 billion valuation amid rising AI chip demand. In August 2024, Groq raised $640 million at a $2.8 billion valuation from investors including Cisco, Samsung, and BlackRock. The company was chosen by Saudi-backed AI firm HUMAIN for inference operations. Groq recently opened its first European data center in Helsinki to support global expansion.
U.S. semiconductor startup Groq is in talks to raise $300 million to $500 million at a $6 billion valuation, according to The Information. The funds are intended to support a deal with Saudi Arabia, which committed $1.5 billion in February for Groq's AI chips. Groq projects $500 million in revenue this year from Saudi contracts. Previously, Groq raised $640 million in a Series D round, reaching a $2.8 billion valuation.
Groq, one of the global pioneers in AI inference, announced on July 7th the continued expansion of its global data centre network, establishing its first European data centre footprint in Helsinki, Finland, to meet the growing demands of European customers.
Groq launches European data center footprint in Helsinki, Finland.
Groq, the artificial intelligence inference startup, is making an aggressive play to challenge established cloud providers like Amazon Web Services and Google with two major announcements that could reshape how developers access high-performance AI models. The company announced Monday that it now supports Alibaba's Qwen3 32B language model with its full 131,000-token context window, a technical capability it claims no other fast inference provider can match. Simultaneously, Groq became an official inference provider on Hugging Face's platform, potentially exposing its technology to millions of developers worldwide.
The move is Groq's boldest attempt yet to carve out market share in the rapidly expanding AI inference market, where companies like AWS Bedrock, Google Vertex AI, and Microsoft Azure have dominated by offering convenient access to leading language models.
"The Hugging Face integration extends the Groq ecosystem providing developers choice and further reduces barriers to entry in adopting Groq's fast and efficient AI inference," a Groq spokesperson told VentureBeat. "Groq is the only inference provider to enable the full 131K context window, allowing developers to build applications at scale."
How Groq's 131K context window claims stack up against AI inference competitors
Groq's assertion about context windows, the amount of text an AI model can process at once, strikes at a core limitation that has plagued practical AI applications.
Most inference providers struggle to maintain speed and cost-effectiveness when handling large context windows, which are essential for tasks like analyzing entire documents or maintaining long conversations. Independent benchmarking firm Artificial Analysis measured Groq's Qwen3 32B deployment running at approximately 535 tokens per second, a speed that would allow real-time processing of lengthy documents or complex reasoning tasks.
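To put the benchmark figure in perspective, here is a rough back-of-envelope sketch (not from the article) of what ~535 tokens per second implies for wall-clock latency. The numbers below use only the two figures reported above (the 131,000-token context window and the Artificial Analysis throughput measurement); the 2,000-token "typical response" size is an illustrative assumption.

```python
# Back-of-envelope illustration: wall-clock time to stream a given number
# of tokens at the ~535 tokens/second that Artificial Analysis measured
# for Groq's Qwen3 32B deployment. This ignores queueing and network
# overhead, so it is a rough lower bound, not a benchmark.

MEASURED_TOKENS_PER_SECOND = 535.0  # Artificial Analysis figure cited above

def streaming_time_seconds(num_tokens: int,
                           tps: float = MEASURED_TOKENS_PER_SECOND) -> float:
    """Approximate seconds to emit `num_tokens` tokens at `tps` tokens/sec."""
    return num_tokens / tps

# Processing text spanning the full 131,000-token context window:
full_context = streaming_time_seconds(131_000)
print(f"131K tokens: ~{full_context:.0f} s (~{full_context / 60:.1f} min)")

# A 2,000-token response (assumed typical size, for illustration only):
print(f"2K tokens: ~{streaming_time_seconds(2_000):.1f} s")
```

At that rate, even a document filling the entire 131K window streams through in roughly four minutes, which is what makes the "real-time processing of lengthy documents" claim plausible.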