Software Engineer II
Data Engineering
Posted on 2/6/2024
DoorDash

10,001+ employees

Local food delivery from restaurants
Company Overview
DoorDash is working to empower local communities and, in turn, create new ways for people to earn, work, and thrive. The company operates the largest food delivery platform in the United States.
Consumer Goods

Company Stage: Series H
Total Funding: $2.5B
Founded: 2013
Headquarters: San Francisco, California

Growth & Insights
Headcount growth: 0% (6 months), -2% (1 year), -9% (2 years)
Locations
Seattle, WA, USA • San Francisco, CA, USA • Sunnyvale, CA, USA
Experience Level
Entry • Junior • Mid • Senior • Expert
Desired Skills
Redshift
Python
Airflow
Apache Flink
Apache Spark
SQL
Apache Kafka
Java
Postgres
Quality Assurance (QA)
Tableau
AWS
Apache Hive
Looker
Snowflake
Google Cloud Platform
Categories
Data & Analytics
Software Engineering
Requirements
  • 3+ years of professional experience working in data engineering, business intelligence, or a similar role
  • Proficiency in programming languages such as Python/Java
  • 3+ years of experience with ETL orchestration and workflow management tools such as Airflow, Flink, Oozie, and Azkaban on AWS/GCP
  • Expertise in database fundamentals, SQL, and distributed computing
  • 3+ years of experience with the distributed data ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka and Flink
  • Experience working with Snowflake, Redshift, PostgreSQL and/or other DBMS platforms
  • Excellent communication skills and experience working with technical and non-technical teams
  • Knowledge of reporting tools such as Tableau, Superset, and Looker
  • Comfortable working in a fast-paced environment; a self-starter who is self-organizing
  • Ability to think strategically, analyze and interpret market and consumer information
  • Must be located near one of the engineering hubs
Responsibilities
  • Work with business partners and stakeholders to understand data requirements
  • Work with engineering, product teams, and third parties to collect required data
  • Design, develop, and implement large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Data Warehouse
  • Develop and implement data quality checks, conduct QA, and implement monitoring routines (an illustrative sketch follows this list)
  • Improve the reliability and scalability of ETL processes
  • Manage a portfolio of data products that deliver high-quality, trustworthy data
  • Help onboard and support other engineers as they join the team
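For illustration only (not part of the original posting and not DoorDash's actual tooling or code): a minimal sketch of the kind of Airflow pipeline with a downstream data quality check that the responsibilities above describe. The DAG id, task ids, and the "orders" data are hypothetical placeholders, and the sketch assumes Airflow 2.x.

# Hypothetical sketch: an Airflow DAG with a load step followed by a data quality check.
# All names (dag_id, task ids, the "orders" table) are made up for illustration.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_orders_to_warehouse(**context):
    # Placeholder extract/load step; a real task would run Spark or a warehouse load.
    print("Loading orders partition for", context["ds"])


def check_row_count(**context):
    # Placeholder QA check; a real check would run COUNT(*) against the loaded partition.
    row_count = 1_000  # stand-in value for illustration
    if row_count == 0:
        raise ValueError("Data quality check failed: no rows loaded for " + context["ds"])


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    load = PythonOperator(task_id="load_orders", python_callable=load_orders_to_warehouse)
    quality_check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    load >> quality_check  # the QA check runs only after the load succeeds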
Desired Qualifications
  • Experience with reporting tools such as Tableau, Superset, and Looker
  • Experience with streaming technologies such as Kafka and Flink
  • Experience with Snowflake, Redshift, PostgreSQL, and/or other DBMS platforms