Full-Time

Machine Learning Engineer

Customer Facing

Posted on 11/13/2024

Lamini AI

11-50 employees

Generative AI for enterprise software development

Enterprise Software
AI & Machine Learning

Compensation Overview

$150k - $200k Annually

Senior, Expert

Menlo Park, CA, USA

Hybrid position based in Menlo Park.

Category
Applied Machine Learning
Natural Language Processing (NLP)
AI & Machine Learning
Required Skills
LLM
Machine Learning

Requirements
  • 3+ years of experience with deep learning models in production
  • 2+ years of experience in a customer-facing role, such as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect, or Platform Engineer
  • Strong technical aptitude to partner with engineers and proficiency in software engineering
  • The ability to navigate and execute amid ambiguity, flex into different domains based on the business problem at hand, and find simple, easy-to-understand solutions
  • Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders.
Responsibilities
  • Act as the primary technical advisor for prospective customers evaluating LLM and fine-tuning projects on the Lamini platform.
  • Partner closely with account executives to understand customer requirements.
  • Drive technical decision-making by advising on optimal setup, architecture, and integration of Lamini into the customer's existing infrastructure.
  • Support customer onboarding by working cross-functionally to ensure successful ramp and adoption.
  • Travel occasionally to customer sites for workshops, implementation support, and building relationships.
Desired Qualifications
  • Designed novel and innovative solutions for technical platforms in a developing business area
  • Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
  • A love of teaching, mentoring, and helping others succeed

Lamini AI develops enterprise software that uses generative AI and machine learning to enhance business operations. Its main product is a Large Language Model (LLM) engine that helps companies automate workflows and improve software development efficiency. The engine lets businesses create customized AI applications from their own data that can outperform general-purpose models. Lamini provides a library and an API that let software engineers quickly build and deploy tailored AI models without managing hosting or compute themselves. Unlike competitors, Lamini focuses on fine-tuning models through reinforcement learning on client-specific data, ensuring high performance and specialization. Lamini's goal is to make generative AI accessible and customizable for enterprises, ultimately making software development more productive.
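
The workflow this paragraph describes (fine-tune an LLM on company data through a hosted API, then query the tuned model) can be sketched in a few lines. The sketch below is illustrative only and assumes a generic REST-style client: the base URL, endpoint names, payload fields, and environment variable are hypothetical, not Lamini's actual library or API.

```python
# Illustrative sketch only: base URL, endpoint paths, payload fields, and
# the LLM_API_KEY variable are hypothetical, not Lamini's documented API.
import os

import requests

API_BASE = "https://api.example-llm-platform.com/v1"  # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"}

# 1) Submit company-specific (input, output) pairs to fine-tune a base model.
training_pairs = [
    {
        "input": "How do I reset a customer's SSO token?",
        "output": "Open the admin console: Users -> Security -> Reset SSO token.",
    },
]
tune_job = requests.post(
    f"{API_BASE}/tune",
    json={"base_model": "llama-3-8b", "data": training_pairs},
    headers=HEADERS,
    timeout=30,
).json()

# 2) Query the tuned model; the platform hosts and serves it, so the customer
#    does not need their own GPUs or inference infrastructure.
response = requests.post(
    f"{API_BASE}/generate",
    json={
        "model": tune_job["tuned_model_id"],
        "prompt": "How do I reset a customer's SSO token?",
    },
    headers=HEADERS,
    timeout=30,
).json()
print(response["text"])
```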

Company Stage

Series A

Total Funding

$24.3M

Headquarters

Menlo Park, California

Founded

2022

Growth & Insights

Headcount
  • 6 month growth: 22%
  • 1 year growth: 15%
  • 2 year growth: 22%

Simplify's Take

What believers are saying

  • Lamini secured $25M funding for scaling operations and technical advancements.
  • Collaboration with AMD enhances Lamini's high-performance computing capabilities.
  • Lamini's Memory Tuning achieves 95% accuracy, attracting enterprise clients.

What critics are saying

  • Increased competition from well-funded AI startups like Proxima and Borderless AI.
  • Reliance on external funding may pressure Lamini for quick returns.
  • Supply chain risks from AMD collaboration could affect hardware availability.

What makes Lamini AI unique

  • Lamini offers a unique LLM engine tailored for enterprise-specific AI applications.
  • The company provides an API for seamless model deployment without hosting concerns.
  • Lamini's Memory Tuning significantly reduces hallucinations, enhancing LLM reliability.
