Full-Time

Member of Technical Staff

Research

Confirmed live in the last 24 hours

Contextual AI

51-200 employees

Develops customized language models for enterprises

Enterprise Software
AI & Machine Learning

Compensation Overview

$150k - $300k Annually

+ Equity + Benefits

Senior, Expert

Company Does Not Provide H-1B Sponsorship

Mountain View, CA, USA

Salary Range for California Based Applicants: $150,000 - $300,000 + equity + benefits.

Category
Backend Engineering
Software Engineering
Required Skills
Machine Learning

Requirements
  • Bachelor's degree in Computer Science, Software Engineering, or a related field. Master's or PhD preferred.
  • Strong software engineering fundamentals and proven track record of building complex systems
  • Strong understanding of fundamental machine learning concepts and practical experience in language modeling, retrieval, large-scale training, or evaluation
  • Research experience (academic or industry) is a strong plus, including familiarity with experimental design, analysis, and research methodology
  • Strong problem-solving skills and ability to thrive in a fast-paced research environment
Responsibilities
  • Drive research projects in areas such as language modeling, retrieval, alignment, retrieval augmented generation, end-to-end training, evaluation, etc.
  • Design and implement scalable infrastructure and tooling to enable efficient and effective research and development of models and systems.
  • Read and stay up to date with the literature, latest advancements, and best practices
  • Contribute to the full research pipeline from ideation to experimentation, analysis, and deployment
  • Publish research papers with novel contributions to state-of-the-art research
Desired Qualifications
  • Master's or PhD preferred

Contextual AI develops customized language models designed specifically for businesses. Its approach combines pre-training, fine-tuning, and integrated AI components to create reliable systems that enhance workflows and decision-making. A key feature is Kahneman-Tversky Optimization (KTO), which aligns large language models with enterprise data efficiently, achieving high performance without requiring preference data and making the company's solutions both effective and cost-efficient. Unlike many competitors, Contextual AI focuses on tailoring AI solutions to the unique needs of industries such as financial research and customer engineering. The company's goal is to address real-world challenges through advanced AI, ensuring that its products continuously evolve to meet customer demands.
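To make the "no preference data" point concrete: KTO, as published (Ethayarajh et al., 2024), scores each example independently as simply desirable or undesirable, rather than requiring paired preference comparisons as DPO-style methods do. Below is a minimal, illustrative sketch of the per-example KTO loss under that published formulation; the function and parameter names are hypothetical and this is not Contextual AI's implementation. `policy_logratio` stands for the implicit reward log π_θ(y|x) − log π_ref(y|x), and `ref_point` for the KL-based reference point z₀ estimated from a batch.

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def kto_loss(policy_logratio: float, ref_point: float, desirable: bool,
             beta: float = 0.1, lambda_d: float = 1.0,
             lambda_u: float = 1.0) -> float:
    """Per-example KTO loss (illustrative sketch of the published formulation).

    policy_logratio: log pi_theta(y|x) - log pi_ref(y|x), the implicit reward.
    ref_point: z_0, a reference point estimated from the batch-level KL
               between the policy and the reference model.
    desirable: binary label for this single example -- no preference pair needed.
    beta: risk-aversion temperature; lambda_d / lambda_u weight the two classes.
    """
    if desirable:
        # Value rises as the policy assigns this output more probability
        # than the reference does; loss is the gap to the max value.
        value = lambda_d * sigmoid(beta * (policy_logratio - ref_point))
        return lambda_d - value
    else:
        # For undesirable outputs the sign flips: the policy is rewarded
        # for assigning them *less* probability than the reference.
        value = lambda_u * sigmoid(beta * (ref_point - policy_logratio))
        return lambda_u - value
```

The key property is visible in the signs: raising the log-ratio lowers the loss for desirable examples and raises it for undesirable ones, so each example trains the model on its own binary label alone.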

Company Size

51-200

Company Stage

Series A

Total Funding

$97.3M

Headquarters

San Francisco, California

Founded

2023

Simplify Jobs

Simplify's Take

What believers are saying

  • Raised $80M in Series A funding to scale production-grade LLMs for enterprises.
  • Partnership with Google Cloud enhances scalable infrastructure for large-scale AI deployments.
  • Rising demand for AI-driven financial research tools expands market opportunities.

What critics are saying

  • Competition from Microsoft's Orca-Math model challenges Contextual AI's market position.
  • Over-reliance on Google Cloud may affect flexibility in cloud services.
  • Rapid advancements by competitors like OpenAI and Google may outpace Contextual AI's innovation.

What makes Contextual AI unique

  • Contextual AI specializes in customized language models for enterprise use.
  • Kahneman Tversky Optimization (KTO) aligns large language models efficiently with enterprise data.
  • Led by veterans from top AI institutions, ensuring strong leadership and innovation.


Benefits

Hybrid Work Options

Growth & Insights and Company News

Headcount

6 month growth

3%

1 year growth

-3%

2 year growth

12%
Datanami
Aug 7th, 2024
WEKA Partners with Contextual AI to Boost Data Infrastructure for Advanced Contextual Language Models


Contextual AI
Aug 3rd, 2024
Contextual AI Raises $80M Series A to Scale Production-Grade LLMs for Enterprises


PaySpace Magazine
Aug 1st, 2024
Contextual AI Raises $80M in Series A

Contextual AI, a Mountain View-based startup, raised $80 million in a Series A funding round led by Greycroft, with participation from Bain Capital Ventures and Lightspeed. The company's valuation is estimated at around $609 million by PitchBook. CEO Douwe Kiela, formerly of Meta, aims to scale the use of retrieval augmented generation (RAG) technology with the new funds.

VentureBeat
Mar 5th, 2024
Microsoft's New Orca-Math AI Outperforms Models 10x Larger

Arindam Mitra, a senior researcher at Microsoft Research and leader of its Orca AI efforts, announced Orca-Math in a thread on X: a new variant of French startup Mistral's Mistral 7B model (itself a variant of Meta's Llama 2) that excels at math word problems while remaining small enough to train and run inference on cheaply. It is part of the Microsoft Orca team's larger quest to supercharge the capabilities of smaller LLMs.

The team appears to have reached a new level of performance at a small size, besting models with 10 times more parameters (the "weights" and "biases," or numerical settings that tell an AI model how to form connections between words, concepts, numbers, and, in this case, mathematical operations during training). Mitra posted a chart showing Orca-Math scoring 86.81% on the GSM8K benchmark, beating most other 7-70-billion-parameter large language models (LLMs) and variants, with the exceptions of Google's Gemini Ultra and OpenAI's GPT-4, and doing so without code, verifiers, or ensembling tricks. GSM8K is a set of 8,500 mathematics word problems, originally released by OpenAI, that take 2-8 steps each to solve and are written by humans to be solvable by a bright middle-school-aged child (up to grade 8). This is especially impressive given that Orca-Math is only a 7-billion-parameter model yet is competitive with, and nearly matches, what are assumed to be much larger models from OpenAI and Google.

TS2 Space
Aug 21st, 2023
Contextual AI Partners with Google Cloud to Scale AI Capabilities for Enterprises

Contextual AI has announced a strategic partnership with Google Cloud as its preferred cloud provider to build, run, and scale its AI capabilities for the enterprise.