Full-Time

Population Health RN

Posted on 9/4/2024

Galileo

51-200 employees

Platform for improving machine learning models

Compensation Overview

$46 - $55/hr

Mid

Remote in USA

Category
Nursing & Allied Health Professionals
Medical, Clinical & Veterinary
Required Skills
Communications
Cold Calling
Requirements
  • 2+ years of experience in population health, case management, or primary care with Medicare and Medicaid populations with complex chronic conditions
  • Nurse Licensure Compact (NLC) required
  • Highly culturally competent with diverse populations
  • Tech-savvy and an effective communicator, with experience using EMR systems
  • Comfortable using technology and various applications to deliver care to patients virtually
  • Eager to offer suggestions for improvements and able to problem solve on the go
  • Excellent written and verbal communication skills
  • Thrive in a flexible start-up environment where workflows, systems, and tools may change frequently
Responsibilities
  • Telephonic outreach, including cold calls, to patients for post-discharge assessments of clinical symptoms, barriers to medication adherence, safety concerns, and social needs
  • Educate and coordinate preventative health screenings
  • Perform chronic disease management and medication adherence education
  • Navigate conversations with patients seeking insight on Galileo’s care model
  • Facilitate the coordination of care between health care services, including hospital/ED care, pharmacies and community providers to improve patient outcomes
  • Develop an understanding of various health plan contracts / goals, Galileo markets, and needs of various patient populations
  • Be accountable to performance targets as an individual contributor
  • Collaborate internally with Engagement and Population Health leadership to improve population outcomes

Galileo offers a platform that helps machine learning teams enhance their models and lower annotation costs by using data-centric algorithms for Natural Language Processing. It allows teams to quickly identify and fix data issues that affect model performance and provides a collaborative space to manage models from raw data to production. Unlike competitors, Galileo integrates easily with existing tools and focuses on actionability, security, and privacy, while also streamlining the data labeling process. The company's goal is to optimize the efficiency of machine learning teams in developing and maintaining their models.

Company Size

51-200

Company Stage

Series B

Total Funding

$68.1M

Headquarters

San Francisco, California

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Integration with NVIDIA NeMo enhances continuous improvement of generative AI models.
  • Collaboration with Cisco and LangChain creates AGNTCY for AI agent interoperability.
  • Recent $45M Series B funding boosts AI model accuracy and observability.

What critics are saying

  • AgentSpec framework may reduce demand for Galileo's external evaluation tools.
  • AGNTCY's open-source framework could increase competition for Galileo's solutions.
  • Rapid improvement of open-source AI models may challenge Galileo's business model.

What makes Galileo unique

  • Galileo integrates quickly with existing tools, enhancing actionability, security, and privacy.
  • It offers a collaborative data bench for tracking models from raw data to production.
  • Galileo auto-detects mis-annotated data and enables bulk labeling in one platform.


Benefits

Health Insurance

Dental Insurance

Vision Insurance

Disability Insurance

Parental Leave

Flexible Work Hours

401(k) Retirement Plan

401(k) Company Match

Growth & Insights and Company News

Headcount

6 month growth

2%

1 year growth

-10%

2 year growth

-5%
VentureBeat
Mar 28th, 2025
New Approach To Agent Reliability, AgentSpec, Forces Agents To Follow Rules

AI agents have a safety and reliability problem. Agents would allow enterprises to automate more steps in their workflows, but they can take unintended actions while executing a task, are not very flexible, and are difficult to control. Organizations have already sounded the alarm about unreliable agents, worried that once deployed, agents might forget to follow instructions. OpenAI even admitted that ensuring agent reliability would involve working with outside developers, so it opened up its Agents SDK to help solve this issue.

Researchers from Singapore Management University (SMU) have developed a new approach to agent reliability. AgentSpec is a domain-specific framework that lets users "define structured rules that incorporate triggers, predicates and enforcement mechanisms." The researchers said AgentSpec will make agents work only within the parameters that users want.

Guiding LLM-based agents with a new approach

AgentSpec is not a new LLM but rather an approach to guiding LLM-based AI agents. The researchers believe AgentSpec can be used not only for agents in enterprise settings but also for self-driving applications. The first AgentSpec tests were integrated with the LangChain framework, but the researchers said they designed it to be framework-agnostic, meaning it can also run on ecosystems such as AutoGen and Apollo.

Experiments using AgentSpec showed it prevented "over 90% of unsafe code executions, ensures full compliance in autonomous driving law-violation scenarios, eliminates hazardous actions in embodied agent tasks, and operates with millisecond-level overhead." LLM-generated AgentSpec rules, produced with OpenAI's o1, also performed strongly, enforcing rules on 87% of risky code and preventing "law-breaking in 5 out of 8 scenarios."

Current methods are a little lacking

AgentSpec is not the only method to help developers bring more control and reliability to agents.
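The trigger/predicate/enforcement structure quoted above can be sketched as a minimal rule engine. This is an illustrative assumption of how such rules might look, not AgentSpec's actual API; all class and field names here are hypothetical.

```python
# Hypothetical sketch of trigger/predicate/enforcement rules, loosely
# modeled on the AgentSpec description above. Names are illustrative
# assumptions, not AgentSpec's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                        # event that activates the rule, e.g. "tool_call"
    predicate: Callable[[dict], bool]   # condition that must hold for enforcement
    enforce: Callable[[dict], dict]     # action taken when the predicate matches

def check(rules: list[Rule], event: str, action: dict) -> dict:
    """Run every rule whose trigger matches the event; enforce on a predicate hit."""
    for rule in rules:
        if rule.trigger == event and rule.predicate(action):
            return rule.enforce(action)
    return action

# Example rule: block any shell command that tries to delete files.
block_rm = Rule(
    trigger="tool_call",
    predicate=lambda a: a.get("tool") == "shell" and "rm " in a.get("cmd", ""),
    enforce=lambda a: {**a, "blocked": True, "reason": "destructive command"},
)

result = check([block_rm], "tool_call", {"tool": "shell", "cmd": "rm -rf /tmp/data"})
print(result["blocked"])  # True
```

A safe action that matches no rule passes through unchanged, which mirrors the article's point that the agent keeps operating normally inside the permitted parameters.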

PR Newswire
Mar 20th, 2025
Galileo Announces Integration With NVIDIA NeMo For Rapid GenAI Development

Platform Powers End-to-End Continuous Improvement of Agentic Applications

SAN FRANCISCO, March 18, 2025 /PRNewswire/ -- Galileo, the AI Evaluation company, today announced an integration with NVIDIA NeMo™, enabling customers to continuously improve their custom generative AI models. Now, customers can evaluate models comprehensively across the development lifecycle, curating feedback into datasets that power additional fine-tuning. As a result, customers ship GenAI apps that are more reliable, trusted, and cost-effective.

Data Flywheel for AI

The majority of enterprises are developing GenAI applications – including agents and RAG-based chatbots – but it can be challenging to ship and scale these applications due to the non-deterministic outputs of Large Language Models (LLMs). There's even more complexity when AI teams wish to test new LLMs, which are constantly evolving in capability and price point. The solution is to build an AI data flywheel, enabling continuous testing and refinement, collecting data about user interactions for subsequent improvement. When AI teams use data to improve outcomes (whether by fine-tuning, prompt engineering, or in-context learning), they gain a competitive advantage. Galileo and NVIDIA accelerate a data flywheel by collecting and curating better data about the interactions of an AI application.
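The evaluate-curate-improve "data flywheel" loop described above can be sketched as a toy iteration. Every function here is a deliberately simplified stand-in under stated assumptions; none of these are real Galileo or NVIDIA NeMo APIs.

```python
# Toy sketch of one turn of an AI "data flywheel": evaluate logged
# interactions, curate the failures, and use them to improve the model.
# All functions are illustrative placeholders, not real product APIs.

def evaluate(model, example):
    """Score a model answer against the expected answer (1.0 = exact match)."""
    return 1.0 if model(example["prompt"]) == example["expected"] else 0.0

def fine_tune(model, dataset):
    """Stand-in for fine-tuning: memorize the curated examples."""
    memory = {ex["prompt"]: ex["expected"] for ex in dataset}
    return lambda prompt: memory.get(prompt, model(prompt))

def flywheel_iteration(model, interactions, threshold=0.5):
    """One turn of the flywheel: evaluate, curate low scorers, improve."""
    curated = [ex for ex in interactions if evaluate(model, ex) < threshold]
    return fine_tune(model, curated)

base = lambda prompt: "unknown"
logs = [{"prompt": "capital of France?", "expected": "Paris"}]
improved = flywheel_iteration(base, logs)
print(improved("capital of France?"))  # Paris
```

Each turn of the loop feeds evaluation feedback back into the next round of improvement, which is the continuous-refinement cycle the press release describes.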

VentureBeat
Mar 6th, 2025
A Standard, Open Framework For Building AI Agents Is Coming From Cisco, LangChain And Galileo

One goal for an agentic future is for AI agents from different organizations to freely and seamlessly talk to one another. But getting to that point requires interoperability, and these agents may have been built with different LLMs, data frameworks, and code. To achieve interoperability, developers of these agents must agree on how they can communicate with each other. This is a challenging task.

A group of companies, including Cisco, LangChain, LlamaIndex, Galileo, and Glean, have now created AGNTCY, an open-source collective with the goal of creating an industry-standard agent interoperability language. AGNTCY aims to make it easy for any AI agent to communicate and exchange data with another.

Uniting AI Agents

"Just like when the cloud and the internet came about and accelerated applications and all social interactions at a global scale, we want to build the Internet of Agents that accelerate all of human work at a global scale," said Vijoy Pandey, head of Outshift by Cisco, Cisco's incubation arm, in an interview with VentureBeat. Pandey likened AGNTCY to the advent of the Transmission Control Protocol/Internet Protocol (TCP/IP) and the domain name system (DNS), which helped organize the internet and allowed for interconnections between different computer systems. "The way we are thinking about this problem is that the original internet allowed for humans and servers and web farms to all come together," he said.

PYMNTS
Feb 4th, 2025
Open-Source Vs Proprietary AI: Which Should Businesses Choose?

When deploying generative artificial intelligence (AI), one of the most fundamental decisions businesses face is whether to choose open-source or proprietary AI models — or aim for a hybrid of the two. "This basic choice between the open source ecosystem and a proprietary setting impacts countless business and technical decisions, making it 'the AI developer's dilemma,'" according to an Intel Labs blog post. This choice is critical because it affects a company's AI development, accessibility, security, and innovation. Businesses must navigate these options carefully to maximize benefits while mitigating risks.

VentureBeat
Jan 23rd, 2025
Galileo Launches ‘Agentic Evaluations’ To Fix AI Agent Errors Before They Cost You

Galileo, a San Francisco-based startup, is betting that the future of artificial intelligence depends on trust. Today, the company launched a new product, Agentic Evaluations, to address a growing challenge in the world of AI: making sure the increasingly complex systems known as AI agents actually work as intended.

AI agents — autonomous systems that perform multi-step tasks like generating reports or analyzing customer data — are gaining traction across industries. But their rapid adoption raises a crucial question: how can companies verify these systems remain reliable after deployment? Galileo's CEO, Vikram Chatterji, believes his company has found the answer.

"Over the last six to eight months, we started to see some of our customers trying to adopt agentic systems," said Chatterji in an interview. "Now LLMs can be used as a smart router to pick and choose the right API calls towards actually completing a task.

INACTIVE