Full-Time
Confirmed live in the last 24 hours
Develops customized language models for enterprises
$150k - $300k Annually
Senior, Expert
Company Does Not Provide H1B Sponsorship
Mountain View, CA, USA
Salary Range for California-Based Applicants: $150,000 - $300,000 + equity + benefits.
Contextual.ai develops customized language models for businesses. Its approach combines pre-training, fine-tuning, and integration of AI components to create reliable systems that enhance workflows and decision-making. A key technique is Kahneman-Tversky Optimization (KTO), which efficiently aligns large language models with enterprise data, achieving high performance without needing preference data and keeping solutions both effective and cost-efficient. Unlike many competitors, Contextual.ai tailors its AI solutions to the needs of specific industries, such as financial research and customer engineering. The company's goal is to address real-world challenges through advanced AI, with products that continuously evolve to meet customer demands.
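KTO replaces the pairwise preference data required by methods like RLHF and DPO with simple binary labels marking each output as desirable or undesirable. The snippet below is a minimal sketch of the core KTO loss in PyTorch, based on the published description of the method (Ethayarajh et al., 2024); the batch-level reference-point estimate is a simplification for illustration, not Contextual.ai's production implementation.

```python
import torch

def kto_loss(policy_logps, ref_logps, is_desirable,
             beta=0.1, lambda_d=1.0, lambda_u=1.0):
    """Sketch of the KTO objective (Ethayarajh et al., 2024).

    policy_logps: (B,) sum of token log-probs under the policy model
    ref_logps:    (B,) same sequences scored by the frozen reference model
    is_desirable: (B,) bool tensor; True = labeled "desirable" output
    """
    # Implicit reward: log-ratio of policy to reference likelihood.
    rewards = policy_logps - ref_logps

    # Reference point z0: a detached batch estimate standing in for the
    # KL(policy || reference) term used in the paper (simplified here).
    z0 = rewards.mean().clamp(min=0).detach()

    # Kahneman-Tversky value function: gains and losses are weighted
    # asymmetrically around the reference point.
    desirable_value = lambda_d * torch.sigmoid(beta * (rewards - z0))
    undesirable_value = lambda_u * torch.sigmoid(beta * (z0 - rewards))

    value = torch.where(is_desirable, desirable_value, undesirable_value)
    lam = torch.where(is_desirable,
                      torch.full_like(rewards, lambda_d),
                      torch.full_like(rewards, lambda_u))

    # Minimizing (lambda_y - value) pushes desirable outputs above the
    # reference point and undesirable ones below it.
    return (lam - value).mean()
```

In practice you would score each prompt-completion pair under both the policy and a frozen reference model to obtain the log-probabilities above; Hugging Face's TRL library ships a KTOTrainer that packages this recipe if you would rather not hand-roll the loss.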
Company Size: 51-200
Company Stage: Series A
Total Funding: $97.3M
Headquarters: San Francisco, California
Founded: 2023
Hybrid Work Options
WEKA partners with Contextual AI to boost data infrastructure for advanced Contextual Language Models.
Contextual AI, a Mountain View-based startup, raised $80 million in a Series A funding round led by Greycroft, with participation from Bain Capital Ventures and Lightspeed. The company's valuation is estimated at around $609 million by PitchBook. CEO Douwe Kiela, formerly of Meta, aims to scale the use of retrieval augmented generation (RAG) technology with the new funds.
Students and STEM researchers of the world, rejoice! Particularly if you struggled with math (as I did as a youngster, and still do compared to many of the people I write about) or are just looking to supercharge your abilities, Microsoft has your back.

Yesterday afternoon, Arindam Mitra, a senior researcher at Microsoft Research and leader of its Orca AI efforts, posted a thread on X announcing Orca-Math, a new variant of French startup Mistral's Mistral 7B model that excels "in math word problems" while retaining a small size for training and inference. It's part of the Microsoft Orca team's larger quest to supercharge the capabilities of smaller LLMs.

Orca-Math: doing a lot with a little

In this case, the team seems to have reached a new level of performance at a small size: besting models with 10 times more parameters (the "weights" and "biases," or numerical settings that tell an AI model how to form its "artificial neuron" connections between words, concepts, numbers, and, in this case, mathematical operations, during its training phase).

Mitra posted a chart showing that Orca-Math beats most other large language models (LLMs) and variants in the 7-to-70-billion-parameter range, with the exceptions of Google's Gemini Ultra and OpenAI's GPT-4, on the GSM8K benchmark, a set of 8,500 mathematics word problems originally released by OpenAI. Each problem takes between two and eight steps to solve, and the set was designed by human writers to be solvable by a "bright" middle-school-aged child (up to grade 8).

"Introducing Orca-Math, our Mistral-7B offshoot excelling in math word problems! Impressive 86.81% score on GSM8K; surpasses models 10x larger or with 10x more training data; no code, verifiers, or ensembling tricks needed." (arindam mitra, @Arindam1408, March 4, 2024)

This is especially impressive given that Orca-Math is only a 7-billion-parameter model, yet it is competitive with, and nearly matches, the performance of what are assumed to be much larger models from OpenAI and Google.
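For context on what a GSM8K score measures, here is a minimal sketch of how such a benchmark is typically scored: generate an answer for each word problem and compare the final number against the reference. The dataset is available on the Hugging Face Hub; generate_answer is a hypothetical stand-in for whatever model is being evaluated, not Microsoft's harness.

```python
import re
from datasets import load_dataset  # pip install datasets

def extract_final_number(text):
    """GSM8K reference answers end in '#### <number>'; generated text
    may not, so fall back to the last number that appears."""
    match = re.search(r"####\s*(-?[\d,\.]+)", text)
    if match:
        raw = match.group(1)
    else:
        numbers = re.findall(r"-?\d[\d,]*\.?\d*", text)
        if not numbers:
            return None
        raw = numbers[-1]
    return raw.replace(",", "").rstrip(".")

def gsm8k_accuracy(generate_answer):
    """generate_answer(question: str) -> str is a hypothetical model call."""
    test_set = load_dataset("gsm8k", "main", split="test")
    correct = 0
    for example in test_set:
        predicted = extract_final_number(generate_answer(example["question"]))
        reference = extract_final_number(example["answer"])
        correct += int(predicted is not None and predicted == reference)
    return correct / len(test_set)
```

The exact-match comparison on the final numeric answer is the standard way GSM8K results such as Orca-Math's reported 86.81% are computed, though published evaluations may differ in prompting and answer-extraction details.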
Contextual AI has announced a strategic partnership with Google Cloud as its preferred cloud provider to build, run, and scale its AI capabilities for the enterprise.