Internship

AI Application Engineering Intern

Groq

201-500 employees

AI inference technology for scalable solutions

AI & Machine Learning

Compensation Overview

$30 - $50 / hour

Palo Alto, CA, USA

Hybrid position in Palo Alto, CA.

Category
  • FinTech Engineering
  • Full-Stack Engineering
  • Software Engineering
Required Skills
  • JavaScript
  • HTML/CSS
Requirements
  • Excellent written and verbal communication skills
  • Creativity
  • Attention to detail
  • Familiarity with traditional web technologies such as JavaScript, HTML, and CSS. Web development skills and cloud deployment experience are a big plus.
  • Familiarity with building applications that leverage API endpoints such as OpenAI's.
  • Familiarity with use cases for LLMs, on their own or combined with other components such as voice interfaces and databases.
  • Located in the San Francisco Bay Area; being local enough to help support various industry events is a big plus.
  • Must be authorized to work in the United States.
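The API-endpoint requirement above can be pictured with a minimal sketch. Groq exposes an OpenAI-compatible chat-completions endpoint; the model name and API key below are illustrative placeholders, not details from the posting.

```javascript
// Build the JSON body for an OpenAI-style chat-completions request.
function buildChatRequest(model, userPrompt) {
  return {
    model,
    messages: [
      { role: "system", content: "You are a concise assistant." },
      { role: "user", content: userPrompt },
    ],
  };
}

// Send the request with fetch (Node 18+ or any modern browser).
// Endpoint shown is Groq's OpenAI-compatible API; the key is a placeholder.
async function chat(apiKey, body) {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the endpoint follows the OpenAI wire format, the same request builder works against OpenAI's API by swapping the base URL and model name.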
Responsibilities
  • Create real demos showing that the Groq LPU™ AI Inference Technology is the fastest way to run LLMs and other Generative AI applications (here is an example)
  • Create examples of Language User Interfaces (LUIs) enabling HumanPlus, demonstrating how individuals take advantage of advancements in AI technology
  • Collaborate with Brand and Marketing on content for prompt engineering tutorials, social content, customer API documentation, and user experience feedback to Product
  • Attend and support Bay Area industry events including hackathons, developer meetups, and conferences
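Speed demos like the ones described above typically report decode throughput in tokens per second. A tiny illustrative helper for that calculation (not part of the posting or any Groq API):

```javascript
// Compute decode throughput in tokens/second from a completion's
// token count and the wall-clock time it took, in milliseconds.
function tokensPerSecond(completionTokens, elapsedMs) {
  if (elapsedMs <= 0) throw new Error("elapsedMs must be positive");
  return (completionTokens / elapsedMs) * 1000;
}

// e.g. 500 tokens generated in 1000 ms is 500 tokens/s
```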

Groq specializes in AI inference technology, providing the Groq LPU™, which is known for its high compute speed, quality, and energy efficiency. The Groq LPU™ is designed to handle AI processing tasks quickly and effectively, making it suitable for both cloud and on-premises applications. Unlike many competitors, Groq's products are designed, fabricated, and assembled in North America, which helps maintain high quality and performance standards. The company targets a wide range of clients who need fast and efficient AI processing capabilities. Groq's goal is to deliver scalable AI inference solutions that meet the demands of industries requiring rapid data processing.

Company Stage

Series D

Total Funding

$1.3B

Headquarters

Mountain View, California

Founded

2016

Growth & Insights

Headcount
  • 6 month growth: 8%
  • 1 year growth: -1%
  • 2 year growth: -4%
Simplify's Take

What believers are saying

  • Groq secured $640M in Series D funding, boosting its expansion capabilities.
  • Partnership with Aramco Digital aims to build the world's largest inferencing data center.
  • Integration with Touchcast's Cognitive Caching enhances Groq's hardware for hyper-speed inference.

What critics are saying

  • Increased competition from SambaNova Systems and Gradio in high-speed AI inference.
  • Geopolitical risks in the MENA region may affect the Saudi Arabia data center project.
  • Rapid expansion could strain Groq's operational capabilities and supply chain.

What makes Groq unique

  • Groq's LPU offers exceptional compute speed and energy efficiency for AI inference.
  • The company's products are designed and assembled in North America, ensuring high quality.
  • Groq emphasizes deterministic performance, providing predictable outcomes in AI computations.

Benefits
  • Remote Work Options
  • Company Equity