Full-Time

Senior Research Engineer

Mem0

11-50 employees

LLM data platform with community collaboration

Compensation Overview

$175k - $210k/yr

San Francisco, CA, USA

In Person

Category
AI & Machine Learning
Required Skills
LLM
PyTorch
A/B Testing
Requirements
  • Experience in RAG or information retrieval (retrieval, ranking, query understanding) for real products.
  • Model training/fine-tuning experience (LLMs/encoders) with a strong footing in experimental design and iteration.
  • Strong Python; deep experience with PyTorch and familiarity with vLLM and modern serving frameworks.
  • Built evaluation for complex vision-and-language tasks (gold sets, offline metrics, online tests).
  • Able to orchestrate data pipelines to run these models in production with low-latency SLAs (batch + streaming).
  • Clear, concise communication with stakeholders (engineering, product, GTM, and customers).
Responsibilities
  • Fine-tune and train models for memory extraction, updates, consolidation/forgetting, and conflict resolution; iterate based on data and outcomes.
  • Read, reproduce, and implement research: quickly prototype paper ideas, benchmark against baselines, and productionize what wins.
  • Build evaluation at scale: automated relevance/accuracy/consistency metrics, gold sets, online A/B & interleaving, and clear dashboards.
  • Work closely with customers to uncover pain points, turn them into research hypotheses, and validate solutions through field trials.
  • Partner with Engineering to ship: design APIs and data contracts, plan safe rollouts, and maintain SOTA latency, reliability, and cost at scale.
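
The evaluation responsibility above mentions online A/B tests and interleaving. As one illustration of the latter, team-draft interleaving merges two rankers' result lists into a single list shown to the user, then credits each click to whichever ranker contributed the clicked item. The sketch below is a generic textbook version, not Mem0's tooling; all function and document names are invented.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=0):
    """Merge two result lists (team-draft interleaving), recording which
    ranker ('A' or 'B') contributed each shown item."""
    rng = random.Random(seed)
    interleaved, team_of = [], {}
    queues = {"A": list(ranking_a), "B": list(ranking_b)}
    counts = {"A": 0, "B": 0}
    while queues["A"] or queues["B"]:
        # The side that has placed fewer items picks next; coin flip on ties.
        if counts["A"] < counts["B"]:
            side = "A"
        elif counts["B"] < counts["A"]:
            side = "B"
        else:
            side = rng.choice(["A", "B"])
        if not queues[side]:                 # that side is exhausted
            side = "B" if side == "A" else "A"
        queue = queues[side]
        while queue and queue[0] in team_of:  # skip items already placed
            queue.pop(0)
        if queue:
            doc = queue.pop(0)
            team_of[doc] = side
            interleaved.append(doc)
            counts[side] += 1
    return interleaved, team_of

def interleaving_winner(team_of, clicked_docs):
    """Credit each click to the ranker that placed the clicked item."""
    credit = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team_of:
            credit[team_of[doc]] += 1
    if credit["A"] == credit["B"]:
        return "tie"
    return "A" if credit["A"] > credit["B"] else "B"
```

Compared with a classic A/B split, interleaving shows both rankers' results to every user, which typically detects ranking differences with far less traffic.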
Desired Qualifications
  • Publications at venues like CVPR, NeurIPS, ICML, ACL, etc.
  • Experience with privacy-preserving ML (redaction, differential privacy, data governance).
  • Deep familiarity with memory/retrieval literature or prior work on memory systems.
  • Expertise with embeddings, vector-DB internals, deduplication, and contradiction detection.
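
The deduplication qualification can be made concrete with a small sketch: embed each candidate memory and drop any whose cosine similarity to an already-kept memory exceeds a threshold. The bag-of-words "embedding" below is a stand-in for a real encoder, and the 0.9 threshold is an illustrative choice; contradiction detection would additionally require something like an NLI model, which this sketch omits.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a real encoder."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def dedup(memories, threshold=0.9):
    """Keep a memory only if it is not a near-duplicate of one already kept."""
    kept, vecs = [], []
    for m in memories:
        v = embed(m)
        if all(cosine(v, kv) < threshold for kv in vecs):
            kept.append(m)
            vecs.append(v)
    return kept
```

At production scale the pairwise scan would be replaced by an approximate-nearest-neighbor lookup in a vector index, but the keep/drop decision is the same.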

Mem0 (formerly Embedchain.ai) provides a data platform for large language models (LLMs) that helps developers integrate and collaborate on improving LLM performance. It supports a community-driven workflow in which users contribute via GitHub by filing issues or submitting pull requests, and can even schedule calls with the founder for direct feedback. The platform is designed to streamline data for LLMs, offering early access and potential premium features as a path to monetization. Compared with competitors, it emphasizes open, community-led development and direct interaction with its founder, aiming to continuously improve the platform through user input. Its goal is to grow a strong developer community around LLM data integration and to advance LLM capabilities through collaborative contribution.

Company Size

11-50

Company Stage

Series A

Total Funding

$24M

Headquarters

San Francisco, California

Founded

2023

Simplify Jobs

Simplify's Take

What believers are saying

  • API calls from Fortune 500 teams surged from 35 million in Q1 2025 to 186 million in Q3 2025.
  • Netflix, Lemonade, and Rocket Money have adopted Mem0 for persistent AI memory.
  • Raised $24M from Basis Set Ventures, Y Combinator, and Peak XV in October 2025.

What critics are saying

  • LangChain's built-in memory modules could divert developers within 6-12 months.
  • Open-source forks could commoditize Mem0's 41k-star repository within 12-18 months.
  • Native memory in OpenAI's GPT-5 could make Mem0 obsolete within 18-24 months.

What makes Mem0 unique

  • Mem0 uses a hybrid datastore combining vector, graph, and key-value stores for efficient memory.
  • Mem0 integrates in three lines of code with OpenAI, Anthropic, and LangChain frameworks.
  • The Mem0g architecture dynamically extracts and retrieves key information across long AI conversations.
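
The "hybrid datastore" point can be illustrated generically: a key-value map for exact lookups, a vector index for similarity search, and a graph for entity relations, all populated from the same write. The toy class below shows only the combination; it is not Mem0's implementation, and every name in it is invented.

```python
from collections import defaultdict

class HybridMemory:
    """Toy hybrid memory store: key-value for exact lookup, vectors for
    similarity search, and an adjacency-list graph for entity relations."""

    def __init__(self):
        self.kv = {}                    # fact_id -> text
        self.vectors = {}               # fact_id -> embedding
        self.graph = defaultdict(set)   # entity -> related entities

    def add(self, fact_id, text, embedding, entities=()):
        """One write populates all three stores."""
        self.kv[fact_id] = text
        self.vectors[fact_id] = embedding
        for a in entities:
            for b in entities:
                if a != b:
                    self.graph[a].add(b)

    def get(self, fact_id):
        """Exact key-value lookup."""
        return self.kv.get(fact_id)

    def nearest(self, query_vec):
        """Brute-force similarity search by dot product (a real system
        would use an ANN index here)."""
        def dot(u, v):
            return sum(x * y for x, y in zip(u, v))
        return max(self.vectors, key=lambda fid: dot(self.vectors[fid], query_vec))

    def related(self, entity):
        """Graph hop: entities co-mentioned with this one."""
        return self.graph[entity]
```

The design choice being illustrated is that each store answers a different question cheaply: "what exactly did we save?", "what is similar?", and "what is connected?".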


Benefits

Health Insurance

Paid Vacation

Remote Work Options

Flexible Work Hours

Wellness Program

Mental Health Support

Stock Options

401(k) Retirement Plan

Conference Attendance Budget

Professional Development Budget

Pet Insurance

Phone/Internet Stipend

Home Office Stipend

Family Planning Benefits

Growth & Insights and Company News

Headcount

6 month growth

-14%

1 year growth

-5%

2 year growth

-5%

Mem0
Nov 3rd, 2025
Mem0 raises $24M to build the memory layer for AI

Mem0 raised $24M across its seed and Series A rounds. The seed round was led by Kindred Ventures, and the Series A was led by Basis Set Ventures, with participation from Peak XV Partners, GitHub Fund, and Y Combinator.

TechCrunch
Oct 28th, 2025
Mem0 raises $24M for AI memory

Mem0, founded by Taranjeet Singh, has raised $24M to develop a "memory passport" for AI apps, allowing AI memory to persist across platforms. The funding includes $3.9M in seed funding and a $20M Series A led by Basis Set Ventures, with participation from Y Combinator, Peak XV Partners, and others. The startup's open-source API has gained significant traction, with over 41,000 GitHub stars and 13 million Python package downloads.

VentureBeat
May 8th, 2025
Mem0's Scalable Memory Promises More Reliable AI Agents That Remember Context Across Lengthy Conversations

Researchers at Mem0 have introduced two new memory architectures designed to enable large language models (LLMs) to maintain coherent and consistent conversations over extended periods. The architectures, called Mem0 and Mem0g, dynamically extract, consolidate, and retrieve key information from conversations. They are designed to give AI agents a more human-like memory, especially in tasks requiring recall from long interactions. This development is particularly significant for enterprises looking to deploy more reliable AI agents for applications that span very long data streams.

The importance of memory in AI agents

LLMs have shown incredible abilities in generating human-like text. However, their fixed context windows pose a fundamental limitation on their ability to maintain coherence over lengthy or multi-session dialogues. Even context windows that reach millions of tokens aren't a complete solution for two reasons, the researchers behind Mem0 argue. As meaningful human-AI relationships develop over weeks or months, the conversation history will inevitably grow beyond even the most generous context limits
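
The extract-consolidate-retrieve loop described in the article can be sketched generically. A real system would use an LLM for extraction and an embedding index for retrieval; the keyword heuristics below are stand-ins, and every function name is invented for illustration.

```python
import re

def extract(turn):
    """Heuristically pull candidate 'memories' from one user turn.
    A production system would prompt an LLM; this cue list is a stand-in."""
    cues = ("i like", "i prefer", "my name is", "i live")
    return [s.strip() for s in re.split(r"[.!?]", turn)
            if any(c in s.lower() for c in cues)]

def consolidate(store, facts):
    """Merge new facts into the store, skipping exact duplicates
    (case-insensitive). Real consolidation would also merge paraphrases
    and resolve conflicts with existing memories."""
    seen = {f.lower() for f in store}
    for f in facts:
        if f.lower() not in seen:
            store.append(f)
            seen.add(f.lower())
    return store

def retrieve(store, query, k=2):
    """Rank stored facts by word overlap with the query and return top-k."""
    qwords = set(query.lower().split())
    scored = sorted(store, key=lambda f: -len(qwords & set(f.lower().split())))
    return scored[:k]
```

The point of the loop is that only a small, consolidated set of facts is injected back into the prompt at each turn, so coherence no longer depends on the raw conversation fitting in the context window.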