Senior Analytics Engineer



201-500 employees

AI-Powered Technology for Value-Based Care

Data & Analytics
AI & Machine Learning

$90,000 - $144,000


San Diego, CA, USA

Required Skills
Apache Spark
Apache Kafka
Data Analysis
  • 5+ years Data Engineering experience
  • 3+ years hands-on experience working with large structured/unstructured datasets in partitioned cloud storage, with query engines such as Spark and Delta Lake
  • 3+ years experience designing, developing, deploying and testing in Databricks
  • 3+ years of hands-on experience in Python/PySpark/Spark SQL
  • 2+ years experience with big data pipeline/DAG tools such as Airflow or dbt is required
  • 2+ years of SQL experience, specifically to write complex, highly optimized queries across large volumes of data
  • Experience in the AWS computing environment and storage services such as S3/Glacier is required
  • Experience with conceptual, logical and/or physical database designs is required
  • Good knowledge of Linux and shell scripting is highly desired
  • Past experience in healthcare data extraction, transformation and normalization is highly desired
  • Hands-on experience with Kafka or other live streaming technology is highly desired
  • Experience with data visualization tools such as Looker or Tableau is desired
  • Ability to lead and complete projects independently once given objective and direction
  • Ability to independently seek out resourceful solutions and enhancements, communicate issues promptly as they arise, and propose solutions
  • Strong communication skills to partner with data integration, engineering, DevOps, customer success, and product teams and collect technical and business requirements
  • High agency, integrity, ownership, and curiosity
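The DAG-based pipeline tooling called for above (Airflow, dbt) centers on declaring task dependencies and letting the scheduler derive execution order. As a minimal sketch of that idea, using only the Python standard library and hypothetical task names (not Airflow's actual API):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# the same dependency-declaration idea Airflow/dbt DAGs are built on.
pipeline = {
    "clean": {"extract"},       # clean runs after extract
    "transform": {"clean"},
    "load": {"transform"},
    "report": {"load"},
}

def run_order(dag):
    """Return tasks in an order that respects every dependency edge."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))  # extract first, report last
```

Real orchestrators add scheduling, retries, and backfills on top of this ordering, but the dependency graph is the core abstraction.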
Responsibilities

  • Conduct thorough data discovery to identify and understand the organization's data sources, adhering to data quality and data governance policies
  • Collaborate with stakeholders across Apixio (Ops, Product, Marketing, G&A) and within the team to understand business requirements and translate them into data models that effectively capture data and transform them into metrics
  • Design and implement data models using Databricks Delta Lake, ensuring data integrity, consistency, and scalability
  • Document data models and data governance processes to maintain knowledge transfer and ensure data quality standards
  • Develop and implement data pipelines using Databricks and dbt to collect, process, and transform data from various sources
  • Utilize dbt to automate and streamline ELT processes, ensuring data quality and consistency
  • Implement data quality checks and data cleansing techniques to ensure data accuracy and consistency
  • Optimize data pipelines for performance and scalability to handle large volumes of data
  • Develop and maintain data visualizations and dashboards using tools like Looker
  • Create interactive dashboards that provide business users with self-service access to data insights
  • Communicate data insights effectively to stakeholders, translating technical concepts into clear and actionable recommendations
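The data quality checks described above can be as simple as row-level null and uniqueness rules enforced before data is loaded. A minimal sketch in plain Python, with hypothetical record fields (production pipelines would express these as dbt tests or Delta Lake constraints):

```python
def check_quality(rows, required_fields, unique_field):
    """Return a list of violations: missing required fields and duplicate keys."""
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        # Null/empty check on every required field
        for field in required_fields:
            if row.get(field) in (None, ""):
                errors.append(f"row {i}: missing {field}")
        # Uniqueness check on the key field
        key = row.get(unique_field)
        if key in seen:
            errors.append(f"row {i}: duplicate {unique_field}={key}")
        seen.add(key)
    return errors

# Hypothetical sample records
records = [
    {"patient_id": "p1", "code": "E11.9"},
    {"patient_id": "p1", "code": ""},    # duplicate id, missing code
    {"patient_id": "p2", "code": "I10"},
]
print(check_quality(records, ["patient_id", "code"], "patient_id"))
```

Surfacing violations as data rather than raising immediately lets a pipeline quarantine bad rows and continue, which suits the accuracy-and-consistency goals above.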

Apixio is using Artificial Intelligence to change the way healthcare is measured, care is delivered, and discoveries are made.

San Mateo, California



Compensation - Competitive salary, 401k match, and stock options

Health insurance - Exceptional medical, dental, and vision coverage

Career development - Annual stipend for professional events and education

Health & Wellness - Annual stipend for health and wellness programs

Time off - Generous vacation policy, paid holidays, and parental leave

Grub - Catered lunches, healthy snacks, and refreshing drinks

Commuter benefits - Free parking, pre-tax transit benefits, and shuttle service

Events - Team parties, outings, employee recognition, and happy hours