Staff Software Engineer
Data Platform and Products
Confirmed live in the last 24 hours
Locations
Palo Alto, CA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Hive
Apache Spark
Apache Kafka
Data Analysis
Data Structures & Algorithms
Docker
Java
Airflow
Linux/Unix
Postgres
SQL
Apache Flink
Kubernetes
Python
Requirements
  • 5+ years of proven experience building and maintaining big data platforms for streaming and batch data processing
  • 3+ years of experience in data engineering, building backend systems and APIs
  • Solid background in the fundamentals of computer science, distributed systems, concurrency, resiliency, caching, large scale data processing, database schema design and data warehousing
  • Strong hands-on coding experience in Java, Python, and SQL, and comfort diving into any new language or technology
  • Experience with some or all of Spark, Flink, Airflow, Hive, Druid, Presto, PostgreSQL, DBT, and ETL, plus familiarity with key/value databases, Kafka, and Kubernetes
  • Experience working with modern cloud based microservice architectures
  • Experience with Linux and containers using Docker and Kubernetes is a big plus
  • Good understanding of and experience with modern ETL (incremental and one-time), including DAG design patterns, data quality checks, etc.
  • Ability to take significant ownership of features and systems and to pursue pragmatic, results-driven development approaches
  • Ability to build systems that balance scalability, availability, and latency
  • Strong ability to advocate for improving engineering efficiency, continuous deployment and automation tooling, monitoring solutions, and self-healing systems that enhance developer experience
  • Strong communication and mentoring skills, and a force-multiplying track record
  • A desire to learn and grow, push yourself and your team, share lessons with others and provide constructive and continuous feedback, and be receptive to feedback from others
Responsibilities
  • Architect, build, and manage real-time and batch data pipelines and data aggregation systems to empower self-service reporting on our big data platform
  • Lead the design and implementation of complex distributed systems - whether a new service to power new functionality, data pipelines to ingest large volumes of data, or implementations of state-of-the-art algorithms
  • Build APIs backed by complex data systems across a range of technologies to support new and improved product functionality
  • Partner with data scientists, data analysts, fraud specialists, infrastructure engineers and product managers to design, build and deliver big data projects and new data platform capabilities
  • Debug hard problems - that's a given! When things break - and they will - you will find yourself digging into challenging bugs and will be eager to fix them
  • Continuously learn something new, whether it's a new technology or a quirk of a language we didn't know. On occasion, you may find yourself picking up a new language or working with an unfamiliar platform
Branch

501-1,000 employees