Senior Data Engineer
Posted on 7/19/2023
INACTIVE
Transcarent

201-500 employees

Comprehensive health & care platform
Company Overview
Transcarent’s mission is to create a new, different, and better health and care experience that puts health consumers in charge, directly connecting them with high-quality care, transparent information, and trusted guidance on their terms, measurably improving the member experience and health outcomes while reducing costs.
Industry: Consumer Software
Company Stage: Series C
Total Funding: $298M
Founded: 2020
Headquarters: San Francisco, California

Growth & Insights
Headcount growth: 0% (6 months), -1% (1 year), 28% (2 years)
Locations
Remote in USA
Experience Level
Senior
Desired Skills
Redshift
Python
Airflow
NoSQL
Data Structures & Algorithms
Apache Spark
SQL
Apache Kafka
Java
AWS
Go
REST APIs
C/C++
Data Analysis
Snowflake
Categories
Data & Analytics
Requirements
  • Put people first, and make decisions with the Member's best interests in mind
  • Are active learners, constantly looking to improve and grow
  • Are driven by our mission to measurably improve health and care each day
  • Bring the energy needed to transform health and care, and move and adapt rapidly
  • Are laser-focused on delivering results for Members, proactively problem-solving to get there
  • Are data champions who seek to empower others to leverage data to its full potential
  • Create and maintain optimal data pipeline architecture with high observability and robust operational characteristics
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc
  • Build the infrastructure required for optimal data extraction, transformation, and loading from various sources using SQL, Python, and dbt (see the ELT sketch after this list)
  • Work with stakeholders, including the Executive, Product, Clinical, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
  • Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader
  • You are entrepreneurial and mission-driven and can present your ideas with clarity and confidence
  • You are a high-agency person. You refuse to accept undue constraints and the status quo and will not rest until you figure things out
  • Advanced expertise in Python and dbt for data pipelines
  • Advanced working SQL knowledge and experience working with relational databases
  • Experience building and optimizing big data pipelines, architectures, and data sets; healthcare experience is a definite plus
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets
  • Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
  • Healthcare domain experience is a plus
  • Experience with a cloud-based data warehouse: Snowflake
  • Experience with relational SQL and NoSQL databases
  • Experience with object-oriented/functional scripting languages: Golang, Python, Java, C++, Scala, etc
  • Experience with big data tools: Spark, Kafka, etc
  • Experience with data pipeline and workflow management tools like Airflow (see the DAG sketch after this list)
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark Streaming, etc (see the streaming sketch after this list)
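
To make the SQL/Python/dbt ELT work above concrete, here is a minimal sketch of an extract-and-load step that hands transformation off to dbt. It assumes the pandas and snowflake-connector-python packages and a separately configured dbt project; the credentials, the RAW_CLAIMS table, and the staging+ selector are hypothetical placeholders, not Transcarent's actual setup.

```python
# Minimal ELT sketch: pull a CSV extract into a Snowflake staging table,
# then let dbt run the in-warehouse transformations.
# All connection parameters and object names below are hypothetical.
import subprocess

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas


def load_raw_claims(csv_path: str) -> None:
    df = pd.read_csv(csv_path)  # extract: read the source file
    conn = snowflake.connector.connect(
        account="example_account",   # hypothetical credentials
        user="example_user",
        password="example_password",
        database="RAW_DB",
        schema="STAGING",
        warehouse="LOAD_WH",
    )
    try:
        # load: bulk-insert the frame into a staging table
        write_pandas(conn, df, table_name="RAW_CLAIMS", auto_create_table=True)
    finally:
        conn.close()

    # transform: hand off to dbt models defined elsewhere in the project
    subprocess.run(["dbt", "run", "--select", "staging+"], check=True)


if __name__ == "__main__":
    load_raw_claims("claims_extract.csv")
```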
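For the Airflow bullet, a minimal sketch of a daily DAG that runs an extract task and then a dbt build, assuming Airflow 2.x; the DAG id, task names, project path, and the extract_to_staging callable are invented for illustration.

```python
# Minimal Airflow 2.x DAG sketch: extract to staging, then run dbt.
# DAG id, schedule, and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_to_staging() -> None:
    # Placeholder for real extract/load logic (e.g. API -> warehouse).
    print("extracting source data into the staging schema")


with DAG(
    dag_id="daily_elt",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_to_staging",
        python_callable=extract_to_staging,
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",
    )
    extract >> transform  # run the dbt models only after the load succeeds
```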
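And for the Kafka and stream-processing bullets, a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and echoes decoded messages to the console, assuming the spark-sql-kafka connector is on the classpath; the broker address and topic name are hypothetical.

```python
# Minimal Spark Structured Streaming sketch: read a Kafka topic and
# write the decoded messages to the console for inspection.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_console_demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "member-events")                 # hypothetical topic
    .load()
)

# Kafka keys and values arrive as bytes; cast to strings before parsing.
decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

query = (
    decoded.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```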