Full-Time

Virtual Primary Care Physician

CT Licensure

Galileo

51-200 employees

Platform for improving machine learning models

Data & Analytics
AI & Machine Learning

Compensation Overview

$138/hour

Mid, Senior

Remote in USA

Candidates must have active state licensure in CT.

Category
Physicians & Surgeons
Medical, Clinical & Veterinary

Requirements
  • Have 4+ years of clinical experience post-residency in complex care, chronic care management, or primary care.
  • Are board certified in Family Medicine or Internal Medicine.
  • Hold active state licensure in CT.
  • Are highly comfortable using technology and various applications to deliver care to patients virtually.
Responsibilities
  • Provide best-in-class care for Galileo patients managing complex and chronic illnesses, using our innovative, proprietary app and technology together with your excellent clinical judgment.
  • Provide compassionate and empathetic care to patients of all ages (0-100) across all needs, including complex, chronic medical conditions such as diabetes, hypertension, obesity, and mental health conditions.
  • Work with a diverse patient population.
  • Solve patient problems in an efficient, nimble manner by drawing on resourcefulness, collaboration with team members to leverage their expertise, and a 'can-do' approach.
Desired Qualifications
  • Are experienced in or have an appetite for learning digital-first healthcare (e.g., virtual medicine).
  • Are interested in new, innovative models of care that balance evidence-based approaches with creative ideas.
  • It's a bonus if you are professionally fluent in English and Spanish, including reading, writing, speaking, and understanding the cultural nuances relevant to native Spanish speakers in a medical setting.

Galileo offers a platform that helps machine learning teams enhance their models and reduce annotation costs using data-centric algorithms for Natural Language Processing. It allows teams to quickly identify and fix data issues that affect model performance, while also providing a collaborative space to manage models from raw data to production. Unlike competitors, Galileo integrates easily with existing tools and focuses on actionability, security, and privacy, enabling efficient data labeling and detection of mis-annotated data. The company's goal is to optimize the model development process for machine learning teams.

Company Size

51-200

Company Stage

Series B

Total Funding

$66.2M

Headquarters

San Francisco, California

Founded

2021

Simplify's Take

What believers are saying

  • Galileo raised $45M to enhance AI model accuracy and observability.
  • The Luna EFMs reduce GenAI evaluation costs by 97% and increase speed 11x.
  • Open-source AI models are closing the gap with proprietary models, democratizing AI capabilities.

What critics are saying

  • Rapid AI agent adoption raises concerns about reliability and potential errors.
  • Narrowing performance gap between open-source and proprietary models increases competition.
  • EU regulatory changes could impose new compliance requirements on AI companies like Galileo.

What makes Galileo unique

  • Galileo integrates with existing tools in minutes, enhancing actionability and privacy.
  • It offers a collaborative data bench for tracking models from raw data to production.
  • Galileo's platform auto-detects mis-annotated data and supports bulk labeling in one place.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

Disability Insurance

Parental Leave

Flexible Work Hours

401(k) Retirement Plan

401(k) Company Match

Growth & Insights and Company News

Headcount

6 month growth: -2%
1 year growth: -27%
2 year growth: -8%

PYMNTS
Feb 6th, 2024
AI Firm Galileo Debuts Retrieval Augmented Generation Tool

Generative AI firm Galileo has debuted a tool to help businesses develop trustworthy artificial intelligence (AI) solutions. The San Francisco-based company on Tuesday (Feb. 6) announced the release of a new retrieval augmented generation (RAG) and agent analytics solution.

As Galileo noted in a news release, RAG systems “have become increasingly popular with developers of LLMs,” or large language models. “RAG supplements an LLM’s general knowledge with domain-specific context, so the LLM can provide domain-specific results,” the company said. However, the release added, “the complexity of RAG systems and their many moving parts have required labor-intensive manual evaluation, and their inner workings can be somewhat of a black box for AI builders.”

Galileo said its tool changes this process “by embedding advanced insights and metrics directly into the user’s existing workflow, with easy access through an intuitive Galileo user interface,” offering visibility into each stage of the RAG workflow and enabling rapid evaluation, error detection and iteration. “Galileo’s RAG & Agent Analytics is a game-changer for AI practitioners building RAG-based systems who are eager to accelerate development and refine their RAG pipelines,” said Vikram Chatterji, CEO and co-founder of Galileo. “Streamlining the process is essential for AI leaders aiming to reduce costs and minimize hallucinations in AI responses.”

The launch of Galileo’s new offering comes as many companies are, as PYMNTS wrote Tuesday, “fishing with dynamite” when it comes to AI systems. “That’s because the biggest and most impressive large language models (LLMs), including OpenAI’s GPT-4, are trained on over 1 trillion parameters, and cost hundreds of thousands of dollars a day to run,” that report said. “Using such models for daily tasks with minimal impact or low complexity, or for small-scale personal queries, is, well, a bit overkill.”

While Big Tech’s big AI models have popularized and familiarized the technology across a broad global audience, the future of AI’s commercial applications “likely lies in smaller models that have fewer parameters but perform well on specialized tasks,” PYMNTS wrote. For AI to be “truly democratized,” that report said, it will need to be built atop smaller, more cost-efficient systems, as smaller models are what companies like OpenAI, Google and Apple are hoping to commercialize.
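
The release describes RAG only at a high level. As background, the pattern is: retrieve documents relevant to a query, then pass them to the LLM as added context. The sketch below is a minimal, hypothetical illustration of that flow, not Galileo's product: the toy word-overlap retriever, the sample documents, and the call_llm placeholder are all assumptions standing in for a real vector store and model API.

```python
# Minimal retrieval augmented generation (RAG) sketch: retrieve the most
# relevant documents for a question, then ask the model with that context.
# Hypothetical example; the retriever and call_llm() are placeholders,
# not Galileo's implementation.
from collections import Counter
import math

DOCUMENTS = [
    "Galileo's RAG & Agent Analytics embeds metrics into the developer workflow.",
    "Retrieval augmented generation supplements an LLM with domain-specific context.",
    "Hallucinations remain a key hurdle for production GenAI systems.",
]

def score(query: str, doc: str) -> float:
    """Toy relevance score: cosine similarity over word counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Combine retrieved context with the user question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    """Placeholder for a real model API call."""
    return f"[model response to prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    print(call_llm(build_prompt("What does RAG add to an LLM?")))
```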

PYMNTS
Feb 4th, 2025
Open-Source vs. Proprietary AI: Which Should Businesses Choose?

When deploying generative artificial intelligence (AI), one of the most fundamental decisions businesses face is whether to choose open-source or proprietary AI models, or aim for a hybrid of the two. “This basic choice between the open source ecosystem and a proprietary setting impacts countless business and technical decisions, making it ‘the AI developer’s dilemma,’” according to an Intel Labs blog post. This choice is critical because it affects a company’s AI development, accessibility, security and innovation. Businesses must navigate these options carefully to maximize benefits while mitigating risks.

VentureBeat
Jan 23rd, 2025
Galileo Launches ‘Agentic Evaluations’ to Fix AI Agent Errors Before They Cost You

Galileo, a San Francisco-based startup, is betting that the future of artificial intelligence depends on trust. Today, the company launched a new product, Agentic Evaluations, to address a growing challenge in the world of AI: making sure the increasingly complex systems known as AI agents actually work as intended.

AI agents, autonomous systems that perform multi-step tasks like generating reports or analyzing customer data, are gaining traction across industries. But their rapid adoption raises a crucial question: How can companies verify these systems remain reliable after deployment? Galileo’s CEO, Vikram Chatterji, believes his company has found the answer. “Over the last six to eight months, we started to see some of our customers trying to adopt agentic systems,” said Chatterji in an interview. “Now LLMs can be used as a smart router to pick and choose the right API calls towards actually completing a task.”

PR Newswire
Oct 15th, 2024
Galileo Raises $45M Series B Funding to Bring Evaluation Intelligence to Generative AI Teams Everywhere

Galileo, a leader in generative AI evaluation and observability for enterprises, today announced it raised $45M in Series B funding led by...

PYMNTS
Jul 29th, 2024
Anthropic Named ‘Best Performing’ LLM as AI Arms Race Intensifies

Generative artificial intelligence (AI) firm Galileo has released a new ranking of top large language models (LLMs). The company on Monday (July 29) announced its latest “Hallucination Index,” which ranks the performance of AI LLMs from the likes of OpenAI, Anthropic, Google and Meta. “This year’s Index added 11 models to the framework, representing the rapid growth in both open- and closed-source LLMs in just the past 8 months,” the company said in a news release. “As brands race to create bigger, faster and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.”

VentureBeat
Jul 29th, 2024
Open-Source AI Narrows Gap With Proprietary Leaders, New Benchmark Reveals

Artificial intelligence startup Galileo released a comprehensive benchmark on Monday revealing that open-source language models are rapidly closing the performance gap with their proprietary counterparts. This shift could reshape the AI landscape, potentially democratizing advanced AI capabilities and accelerating innovation across industries.

The second annual Hallucination Index from Galileo evaluated 22 leading large language models on their tendency to generate inaccurate information. While closed-source models still lead overall, the margin has narrowed significantly in just eight months. “The huge improvements in open-source models was absolutely incredible to see,” said Vikram Chatterji, co-founder and CEO of Galileo, in an interview with VentureBeat. “Back then [in October 2023] the first five or six were all closed source API models, mostly OpenAI models.”

PYMNTS
Jun 6th, 2024
Galileo Releases Evaluation Foundation Models to Help Enterprises Develop GenAI

Generative artificial intelligence (GenAI) developer Galileo has released a suite of evaluation foundation models (EFMs) designed to help enterprises bring trustworthy AI into production. The new Galileo Luna EFMs are designed to make GenAI evaluations faster, more cost-effective and more accurate, the company said in a Thursday (June 6) press release. “For GenAI to reach mass adoption, it’s crucial that enterprises can evaluate hundreds of thousands of AI responses for hallucinations, toxicity, security risk and more, in real time,” Vikram Chatterji, co-founder and CEO of Galileo, said in the release. “In speaking with customers, we found that existing approaches, such as human evaluation or LLM-based evaluation, were too expensive and slow, so we set out to solve that.”

Solondais
Oct 16th, 2024
AI observability company Galileo raises $45M to improve AI model accuracy

Galileo Technologies Inc., a provider of enterprise AI observability and assessment platforms, today announced that it has raised $45 million in new funding.

VentureBeat
Nov 15th, 2023
Galileo Hallucination Index Identifies GPT-4 as Best-Performing LLM for Different Use Cases

A new hallucination index developed by the research arm of San Francisco-based Galileo, which helps enterprises build, fine-tune and monitor production-grade large language model (LLM) apps, shows that OpenAI’s GPT-4 model works best and hallucinates the least when challenged with multiple tasks. Published today, the index looked at nearly a dozen open- and closed-source LLMs, including Meta’s Llama series, and assessed each model’s performance on different tasks to see which LLM hallucinates the least.

In the results, all LLMs behaved differently with different tasks, but OpenAI’s offerings remained on top with largely consistent performance across all scenarios. The findings of the index come as the latest way to help enterprises navigate the challenge of hallucinations, which has kept many teams from deploying large language models at scale across critical sectors like healthcare.

PYMNTS
Feb 9th, 2024
This Week in AI: Enterprise Acceleration, EU Rulemaking, Defeating Deepfakes

Artificial intelligence (AI) is changing everyday life, heralding a paradigm shift beyond smarter chatbots. And the technology is showing no signs of slowing down; in fact, it’s speeding up. This, as the European Union’s 27 member states appear to think the AI ecosystem could use some speed bumps, or at least a yield sign or two.

VentureBeat
Jun 6th, 2024
Galileo’s Luna Redefines GenAI Evaluation, Boasting 97% Lower Costs and 11x Faster Speeds

Galileo, a trailblazer in enterprise generative AI, has unveiled Galileo Luna, a groundbreaking suite of Evaluation Foundation Models (EFMs) that promises to transform how enterprises evaluate their GenAI systems. With Luna, Galileo aims to address the critical challenges of speed, cost, and accuracy that have hindered the widespread adoption of generative AI in production environments.

“Galileo created Luna to address the limitations of current GenAI evaluation methods, which were slow, expensive, and often inaccurate,” said Vikram Chatterji, Co-Founder and CEO of Galileo, in an interview with VentureBeat. “The motivation stemmed from the need for ultra-low-latency, cost-effective, and high-accuracy evaluations in production environments.”

The development of Luna marks a significant milestone for Galileo, which has been at the forefront of enterprise GenAI since its inception in early 2021. The company’s dedication to pushing the boundaries of AI evaluation is evident in the nearly year-long intensive R&D process that led to Luna’s creation. Luna outperforms leading AI evaluation methodologies in a benchmark comparison of area under the receiver operating characteristic curve (AUROC) scores.
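
The benchmark comparison above is reported in terms of AUROC. For readers unfamiliar with the metric, AUROC can be read as the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. The sketch below is a generic illustration of that definition with made-up numbers; it is not Galileo's evaluation code.

```python
# Generic AUROC illustration (not Galileo's benchmark code).
# AUROC = probability that a randomly chosen positive example is scored
# higher than a randomly chosen negative one (ties count as 0.5).

def auroc(scores: list[float], labels: list[int]) -> float:
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p, n) for p in positives for n in negatives]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical scores (predicted probability of hallucination) and
# ground-truth labels (1 = response actually hallucinated).
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auroc(scores, labels))  # ~0.89: most positives are ranked above negatives
```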

VentureBeat
Aug 12th, 2024
Apple’s ToolSandbox Reveals Stark Reality: Open-Source AI Still Lags Behind Proprietary Models

Researchers at Apple have introduced ToolSandbox, a novel benchmark designed to assess the real-world capabilities of AI assistants more comprehensively than ever before. The research, published on arXiv, addresses crucial gaps in existing evaluation methods for large language models (LLMs) that use external tools to complete tasks.

ToolSandbox incorporates three key elements often missing from other benchmarks: stateful interactions, conversational abilities, and dynamic evaluation. Lead author Jiarui Lu explains, “ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation and a dynamic evaluation strategy.” This new benchmark aims to mirror real-world scenarios more closely. For instance, it can test whether an AI assistant understands that it needs to enable a device’s cellular service before sending a text message, a task that requires reasoning about the current state of the system and making appropriate changes.

The researchers tested a range of AI models using ToolSandbox, revealing a significant performance gap between proprietary and open-source models. This finding challenges recent reports suggesting that open-source AI is rapidly catching up to proprietary systems.