Full-Time

Applied AI Engineer

Mem0

11-50 employees

LLM data platform with community collaboration

Compensation Overview

$150k - $180k/yr

San Francisco, CA, USA

In Person

Office-based in SF Bay Area; in-person collaboration emphasized.

Category
AI & Machine Learning
Required Skills
LLM
FastAPI
Python
JavaScript
React.js
TypeScript
Next.js
REST APIs
Flask
Django
Requirements
  • Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.
  • Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.
  • Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering.
  • Track record of rapid prototyping: moving from idea → demo in days, not months; clear documentation of results and trade-offs.
  • Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence.
  • Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.
Responsibilities
  • Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 in the customer’s flow.
  • Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets.
  • Prototype with Research: Implement paper ideas and new techniques from scratch, compare baselines, and keep what wins.
  • Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry.
  • Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows.
  • Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.
  • Package & hand-off: Write concise docs, scripts, and templates so Engineering can productionize quickly.
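To illustrate the "eval harness" responsibility above, here is a minimal sketch of a gold-set evaluation with quality and latency checks. This is a hypothetical example, not Mem0's actual tooling; the gold set, budget, and `toy_answer` stand-in are all invented for illustration.

```python
import time

# Hypothetical gold set: (query, substring expected in a good answer)
GOLD_SET = [
    ("What is the user's preferred language?", "python"),
    ("Where does the user work?", "mem0"),
]

def evaluate(answer_fn, gold_set, latency_budget_s=1.0):
    """Score an answer function on quality (substring hit rate)
    and latency (fraction of queries under the budget)."""
    hits, fast = 0, 0
    for query, expected in gold_set:
        start = time.perf_counter()
        answer = answer_fn(query)
        elapsed = time.perf_counter() - start
        hits += expected.lower() in answer.lower()
        fast += elapsed <= latency_budget_s
    n = len(gold_set)
    return {"quality": hits / n, "latency_ok": fast / n}

# Toy answer function standing in for a real memory-backed pipeline.
def toy_answer(query):
    memory = {"language": "Python", "work": "Mem0"}
    return " ".join(v for k, v in memory.items() if k in query.lower())

report = evaluate(toy_answer, GOLD_SET)
print(report)
```

A real harness would swap `toy_answer` for the POC's retrieval pipeline and add the basic telemetry the role mentions; the point is that even a dozen gold examples make retrieval experiments comparable.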
Desired Qualifications
  • Model serving/fine-tuning experience (vLLM, LoRA/PEFT) and lightweight batch/async pipelines.
  • Deployments on Vercel/serverless, Docker, basic Kubernetes familiarity; CI for demo apps.
  • Data visualization and UX polish for compelling demos.
  • Prior Forward-Deployed/Solutions/Prototyping role turning customer needs into working software.

Embedchain.ai provides a data platform for large language models (LLMs) that helps developers integrate data and collaborate on improving LLM performance. It supports a community-driven workflow: users contribute via GitHub by filing issues or submitting pull requests, and can even schedule calls with the founder for direct feedback. The platform streamlines data handling for LLMs, offering early access and potential premium features to monetize the service. Compared to competitors, Embedchain.ai emphasizes open, community-led development and direct founder interaction, aiming to continuously improve the platform through user input. Its goal is to grow a strong developer community around LLM data integration and to advance LLM capabilities through collaborative contribution.

Company Size

11-50

Company Stage

Series A

Total Funding

$24M

Headquarters

San Francisco, California

Founded

2023

Simplify's Take

What believers are saying

  • API calls from Fortune 500 teams surged from 35 million in Q1 2025 to 186 million in Q3 2025.
  • Netflix, Lemonade, and Rocket Money adopted Mem0 for persistent AI memory capabilities.
  • Raised $24M from Basis Set Ventures, Y Combinator, and Peak XV in October 2025.

What critics are saying

  • LangChain's memory modules could divert developers within 6-12 months.
  • Open-source forks could commoditize Mem0's 41k-star repo within 12-18 months.
  • OpenAI's GPT-5 native memory could make Mem0 obsolete within 18-24 months.

What makes Mem0 unique

  • Mem0 uses hybrid datastore combining vector, graph, and key-value stores for efficient memory.
  • Mem0 integrates in three lines of code with OpenAI, Anthropic, and LangChain frameworks.
  • Mem0g architecture dynamically extracts and retrieves key info for long AI conversations.
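The hybrid-datastore idea above can be illustrated with a toy sketch in plain Python. This is not Mem0's actual implementation: the class, its API, and the two-dimensional "embeddings" are invented for illustration, and a real system would also use a graph store and an approximate-nearest-neighbor index rather than brute-force search.

```python
import math

class ToyHybridMemory:
    """Illustrative hybrid store: a key-value dict for exact facts,
    plus brute-force cosine similarity for fuzzy vector recall."""

    def __init__(self):
        self.kv = {}        # exact facts: key -> value
        self.vectors = []   # (embedding, text) pairs

    def add_fact(self, key, value):
        self.kv[key] = value

    def add_memory(self, embedding, text):
        self.vectors.append((embedding, text))

    def recall(self, key=None, embedding=None):
        if key is not None and key in self.kv:
            return self.kv[key]  # exact hit wins over fuzzy search
        if embedding is not None and self.vectors:
            def cosine(a, b):
                dot = sum(x * y for x, y in zip(a, b))
                na = math.sqrt(sum(x * x for x in a))
                nb = math.sqrt(sum(x * x for x in b))
                return dot / (na * nb) if na and nb else 0.0
            # Return the text of the most similar stored memory.
            return max(self.vectors, key=lambda v: cosine(v[0], embedding))[1]
        return None

mem = ToyHybridMemory()
mem.add_fact("timezone", "US/Pacific")
mem.add_memory([1.0, 0.0], "prefers short answers")
mem.add_memory([0.0, 1.0], "allergic to peanuts")
print(mem.recall(key="timezone"))        # exact key-value lookup
print(mem.recall(embedding=[0.9, 0.1]))  # nearest-vector recall
```

Structured facts (timezone, name) route to the exact store; vague recollections route to vector search, which is the trade-off a hybrid store is meant to capture.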



Benefits

Health Insurance

Paid Vacation

Remote Work Options

Flexible Work Hours

Wellness Program

Mental Health Support

Stock Options

401(k) Retirement Plan

Conference Attendance Budget

Professional Development Budget

Pet Insurance

Phone/Internet Stipend

Home Office Stipend

Family Planning Benefits

Growth & Insights and Company News

Headcount

6 month growth

-14%

1 year growth

-5%

2 year growth

-5%
Mem0
Nov 3rd, 2025
Mem0 raises $24M to build the memory layer for AI

Mem0 raised $24M across Seed and Series A. Mem0's Seed round was led by Kindred Ventures, and Series A was led by Basis Set Ventures, with participation from Peak XV Partners, GitHub Fund, and Y Combinator.

TechCrunch
Oct 28th, 2025
Mem0 raises $24M for AI memory

Mem0, founded by Taranjeet Singh, has raised $24M to develop a "memory passport" for AI apps, allowing AI memory to persist across platforms. The funding includes $3.9M in seed funding and a $20M Series A led by Basis Set Ventures, with participation from Y Combinator, Peak XV Partners, and others. The startup's open-source API has gained significant traction, with over 41,000 GitHub stars and 13 million Python package downloads.

VentureBeat
May 8th, 2025
Mem0's Scalable Memory Promises More Reliable AI Agents That Remember Context Across Lengthy Conversations

Researchers at Mem0 have introduced two new memory architectures designed to enable large language models (LLMs) to maintain coherent and consistent conversations over extended periods. The architectures, called Mem0 and Mem0g, dynamically extract, consolidate, and retrieve key information from conversations. They are designed to give AI agents a more human-like memory, especially in tasks requiring recall from long interactions. This development is particularly significant for enterprises looking to deploy more reliable AI agents for applications that span very long data streams.

The importance of memory in AI agents

LLMs have shown incredible abilities in generating human-like text. However, their fixed context windows pose a fundamental limitation on their ability to maintain coherence over lengthy or multi-session dialogues. Even context windows that reach millions of tokens aren't a complete solution for two reasons, the researchers behind Mem0 argue. As meaningful human-AI relationships develop over weeks or months, the conversation history will inevitably grow beyond even the most generous context limits.