Staff Software Engineer
Data Integration
Nuna

201-500 employees

Data & analytics for healthcare
Company Overview
Nuna's mission is to help make high-quality healthcare affordable and accessible for everyone. They do this by building data solutions for healthcare payers and providers to measure and improve their cost and quality outcomes.
Data & Analytics

Company Stage: Series B
Total Funding: $90M
Founded: 2010
Headquarters: San Francisco, California

Growth & Insights (Headcount)
6 month growth: -2%
1 year growth: 15%
2 year growth: 68%
Locations
San Francisco, CA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Data Analysis
Data Science
Docker
Google Cloud Platform
JavaScript
Java
Airflow
Kubernetes
Python
TypeScript
NoSQL
Categories
Software Engineering
Requirements
  • Bachelor's or Master's degree in Computer Science, Engineering, or related field
  • 8 years of experience in software engineering with a focus on backend software development
  • Proficiency in programming languages such as Python, JavaScript, TypeScript, Go, or Java
  • Deep understanding of data storage and retrieval technologies in both relational and NoSQL databases
  • Excellent problem-solving skills and a passion for delivering high-quality data solutions
  • Ability to diagram, articulate, and document data science and engineering concepts
  • Strong communication and collaboration skills
  • Experience with cloud platforms such as AWS or GCP, as well as container technologies such as Docker and Kubernetes
Responsibilities
  • Architect and Develop Data Ingestion Solutions: Design, develop, and maintain robust and scalable data ingestion pipelines that efficiently collect data from various sources, ensuring data quality and reliability
  • Data Curation and Transformation: Transform and curate raw data into structured and usable formats. Implement data validation, cleaning, and enrichment code to maintain data integrity (see the illustrative sketch after this list)
  • Performance Optimization: Continuously optimize data ingestion and curation pipelines for speed, efficiency, and scalability
  • Collaboration: Collaborate with cross-functional teams including data scientists, data analysts, and domain experts to understand their requirements and deliver actionable insights
  • Quality Assurance: Implement best practices for quality monitoring, validation, and error handling to ensure data accuracy and reliability
  • Documentation: Maintain comprehensive documentation for data ingestion and curation processes, making it easy for team members to understand and use the pipelines
  • Mentorship: Provide technical leadership and mentorship to junior engineers, fostering their growth and development
  • Stay Current: Keep abreast of emerging technologies and industry best practices in data engineering and data management
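The ingestion and curation work described above is left abstract in the posting. Below is a minimal, illustrative Python sketch of an ingest → validate → curate step, assuming a hypothetical claims CSV feed and pandas; the file names, columns, and rules are invented for illustration and are not taken from Nuna's actual pipelines.

```python
# Minimal sketch of an ingest -> validate -> curate step of the kind described
# above. The feed name, columns, and thresholds are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = {"member_id", "claim_id", "service_date", "paid_amount"}

def ingest(path: str) -> pd.DataFrame:
    """Collect raw data from a source (here, a CSV file drop)."""
    return pd.read_csv(path, parse_dates=["service_date"])

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic quality checks: required columns present, no null keys."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    if df["claim_id"].isna().any():
        raise ValueError("null claim_id values found")
    return df

def curate(df: pd.DataFrame) -> pd.DataFrame:
    """Clean and enrich the raw feed into a structured, usable format."""
    df = df.drop_duplicates(subset="claim_id")
    df["paid_amount"] = df["paid_amount"].clip(lower=0)   # drop negative payments
    df["service_year"] = df["service_date"].dt.year       # simple enrichment
    return df

if __name__ == "__main__":
    curated = curate(validate(ingest("claims_feed.csv")))
    curated.to_parquet("claims_curated.parquet", index=False)
```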
Desired Qualifications
  • Experience with data processing frameworks and tools such as Apache Spark
  • Experience with data modeling, ETL processes, and data warehousing
  • Familiarity with orchestration tools such as Airflow or Prefect (see the orchestration sketch after this list)
  • Experience with the Great Expectations data validation library
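As a companion to the sketch above, here is a minimal example of how such a pipeline might be scheduled with Airflow (2.4+), since Airflow is called out in the qualifications. The DAG id, paths, and task names are hypothetical, not Nuna's actual configuration.

```python
# Hypothetical Airflow 2.4+ DAG sketching how ingest/validate and curate
# steps might be orchestrated on a daily schedule.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/data/raw/claims_feed.csv"             # hypothetical landing path
STAGING_PATH = "/data/staging/claims_valid.parquet"
CURATED_PATH = "/data/curated/claims.parquet"      # hypothetical output path


def ingest_and_validate() -> None:
    df = pd.read_csv(RAW_PATH, parse_dates=["service_date"])
    if df["claim_id"].isna().any():                # basic validation / error handling
        raise ValueError("null claim_id values in feed")
    df.to_parquet(STAGING_PATH, index=False)


def curate() -> None:
    df = pd.read_parquet(STAGING_PATH)
    df = df.drop_duplicates(subset="claim_id")        # cleaning
    df["service_year"] = df["service_date"].dt.year   # enrichment
    df.to_parquet(CURATED_PATH, index=False)


with DAG(
    dag_id="claims_ingestion",                     # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    validate_task = PythonOperator(
        task_id="ingest_and_validate", python_callable=ingest_and_validate
    )
    curate_task = PythonOperator(task_id="curate", python_callable=curate)

    validate_task >> curate_task                   # curate only after validation passes
```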