
Marketing Technology Data Engineer
Posted on 6/28/2022
Novato, CA, USA
Desired Skills
Apache Hive
Apache Spark
Data Science
  • Take ownership of the operational management of several pipelines across the portfolio
  • Learn the inventory of current data pipelines and develop familiarity with the purpose and audience they serve
  • Take full ownership of several projects, driving them toward completion:
  • Implement a standard approach to inbound data pipelines that delivers reliable data to end systems
  • Develop and manage stable, scalable data pipelines that cleanse, structure, and integrate disparate big data sets into a readable, accessible format for end-user analysis and targeting, using stream and batch processing architectures
  • Develop and improve the current data architecture, data quality, monitoring, and data availability
  • Collaborate with the labels to incorporate new data sources into the growing data model
  • Develop a data quality framework to ensure delivery of high-quality data and analyses to stakeholders
  • Take overall responsibility for maintaining the enterprise-wide data dictionary
  • Define and implement monitoring and alerting policies for data solutions
  • Join our team in the Novato, CA office on a hybrid schedule
  • 4+ years of experience with detailed knowledge of data warehouse technical architectures:
  • Infrastructure components
  • ETL/ELT
  • Reporting/analytic tools
  • 4+ years of hands-on experience working with the AWS technology stack, including
  • Redshift, RDS, S3, EMR, Treasure Data, Snowflake, or similar solutions built around Hive/Spark, etc.
  • 4+ years of hands-on experience in using advanced SQL queries (analytical functions)
  • Experience in writing and optimizing highly efficient SQL queries
  • Gaming Industry Experience
  • The ability to develop robust end-to-end data pipelines that turn disparate data into consistent, usable data
  • An understanding of big data pipelines and experience getting them to operate without manual intervention
  • A fast-paced and agile approach, with a strong focus on delivery and enabling change within a creative and rapidly changing environment
  • Ability to design, develop, and automate scalable ETL and reporting solutions that transform data into accurate and actionable business information
  • Experience in testing and monitoring data for anomalies and rectifying them
  • Comfort in working with business customers to gather requirements and gain a deep understanding of varied datasets
  • Experience with large data sets and distributed computing (Hive/Hadoop)
  • Proven track record of delivering big data solutions - batch and real-time
  • Knowledge of software coding practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
  • Presto SQL & Hive SQL
  • Digdag, Embulk
  • Python
  • Hands-on experience with Treasure Data
  • Developing solutions using Docker
  • Experience with CDPs or DMPs
  • GDPR / CCPA compliance experience
  • Experience with CRM platforms such as Salesforce
  • Familiarity with Marketing Technologies
  • Experience working in an agile environment
Take Two
Game publisher