Data Engineer
Enablement
Posted on 10/12/2023
INACTIVE
FanDuel

1,001-5,000 employees

Fantasy sports and online U.S. sportsbook
Company Overview
FanDuel is on a mission to make sports more exciting. The company provides a daily fantasy sports platform with a range of game types and guaranteed prize pools for winners.

Company Stage: N/A
Total Funding: $417.5M
Founded: 2009
Headquarters: New York, New York

Growth & Insights
Headcount
6 month growth: 8%
1 year growth: 24%
2 year growth: 84%
Locations
Atlanta, GA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Apache Kafka
Data Analysis
Data Structures & Algorithms
Hadoop
Airflow
Redshift
SQL
Tableau
Python
Looker
Categories
Data & Analytics
Requirements
  • Working SQL knowledge and experience working with relational databases
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Proficiency with ETL processes
  • Ability to optimize data pipelines
  • Knowledge of data integrity and relational rules
  • Understanding of AWS and Google Cloud
  • Ability to quickly learn new technologies (critical)
  • Comfortable writing Python scripts (see the sketch after this list)
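
To make the SQL and Python requirements above concrete, here is a minimal sketch of the kind of transform script in scope. The table, columns, and in-memory SQLite backend are hypothetical stand-ins for a production warehouse such as Redshift.

```python
import sqlite3

# Hypothetical example: roll raw contest entries up into a daily summary table.
# SQLite keeps the sketch self-contained and runnable.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE contest_entries (
        entry_id   INTEGER PRIMARY KEY,
        user_id    INTEGER NOT NULL,
        entry_fee  REAL    NOT NULL,
        entered_at TEXT    NOT NULL
    );
    INSERT INTO contest_entries VALUES
        (1, 101, 5.0,  '2023-10-01'),
        (2, 102, 10.0, '2023-10-01'),
        (3, 101, 25.0, '2023-10-02');
    """
)

# Simple transform: aggregate raw entries to one row per day.
conn.execute(
    """
    CREATE TABLE daily_entry_summary AS
    SELECT entered_at     AS entry_date,
           COUNT(*)       AS entries,
           SUM(entry_fee) AS total_fees
    FROM contest_entries
    GROUP BY entered_at
    """
)

for row in conn.execute("SELECT * FROM daily_entry_summary ORDER BY entry_date"):
    print(row)
```
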
Responsibilities
  • Handle escalated data- and platform-related issues, providing solutions in a timely manner
  • Work alongside engineering teams and stakeholders to offer expertise on more complex issues
  • Ensure data accuracy and consistency across platforms and systems
  • Maintain and update technical documentation related to data processes and issue resolutions
  • Support ETL/ELT pipelines built using Python and Databricks
  • Leverage SQL skills to troubleshoot query-related issues
  • Oversee and administer data workflows with Apache Airflow and dbt (see the DAG sketch after this list)
  • Support a broad suite of platforms across our data ecosystem including Redshift, Tableau, Athena, Spectrum, Spark, Hadoop, Trino, Kafka
  • Provide leadership and teaching within a cross-functional team, embodying a willingness to grow yourself and those around you, and actively seeking continued learning opportunities
  • Apply experience and intellect as part of an autonomous team with end-to-end ownership of key components of our data infrastructure
  • Serve as a mentor to more junior engineers not only in cultivating craftsmanship but also in achieving operational excellence – system reliability, automation, data quality, and cost-efficiency
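
As one illustration of the Airflow and dbt responsibility above, the following is a minimal DAG sketch, assuming Airflow 2.x. The DAG id, extract script, dbt selector, and validation logic are hypothetical examples, not FanDuel's actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def validate_load(**context):
    # Placeholder data-quality check; a real check would query the warehouse
    # and fail the task if row counts or freshness look wrong.
    print("validating load for", context["ds"])


with DAG(
    dag_id="daily_events_rollup",          # hypothetical pipeline name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_events",
        bash_command="python extract_events.py",   # hypothetical extract script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --select staging+",  # hypothetical dbt selection
    )
    validate = PythonOperator(
        task_id="validate_load",
        python_callable=validate_load,
    )

    # Linear dependency chain: extract, then transform with dbt, then validate.
    extract >> transform >> validate
```

The linear extract-transform-validate chain keeps each step independently retryable, which is typically what operating and troubleshooting such workflows comes down to.
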
Desired Qualifications
  • Experience with big data environments
  • Familiarity with Apache Airflow and dbt
  • Knowledge of AWS Glue or other data cataloging tools
  • Experience with data visualization tools such as Tableau or Looker
  • Experience with streaming data processing frameworks such as Kafka or Kinesis (see the consumer sketch below)
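
For the streaming qualification above, a minimal consumer sketch, assuming the kafka-python client; the topic name, broker address, and consumer group are hypothetical and would normally come from configuration.

```python
import json

from kafka import KafkaConsumer  # assumes the kafka-python package is installed

# Hypothetical topic and broker for illustration only.
consumer = KafkaConsumer(
    "contest-entry-events",
    bootstrap_servers="localhost:9092",
    group_id="data-eng-example",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Consume events and apply a trivial transformation before downstream loading.
for message in consumer:
    event = message.value
    print(message.topic, message.offset, event.get("user_id"))
```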