Senior Backend Engineer, Data
Madhive

51-200 employees

Automated platform for modern, scalable TV advertising
Company Overview
Madhive is a leading force in modern TV advertising technology, offering a self-service platform that streamlines ad buying and gives advertisers greater simplicity, accountability, and control. Its infrastructure processes 260 billion ad opportunities daily, enabling precise, brand-safe audience targeting at scale, and the platform is trusted by major content owners, creators, and distributors, including FOX and TEGNA's Premion. With a customizable, full-stack platform that integrates with existing data and systems, Madhive is uniquely positioned to help partners capitalize on the shift from linear to digital advertising.
Data & Analytics · B2B

Company Stage: Seed
Total Funding: $306.4M
Founded: 2016
Headquarters: New York, New York

Growth & Insights

Headcount growth:
  • 6 months: 0%
  • 1 year: 24%
  • 2 years: 78%

Locations: Remote in USA
Experience Level: Senior
Desired Skills
Kubernetes
Microsoft Azure
Airflow
NoSQL
BigQuery
SQL
Apache Kafka
Postgres
Docker
AWS
Terraform
Firebase
Google Cloud Platform
Categories: Software Engineering
Requirements
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5+ years of experience in software engineering with a focus on data engineering.
  • Strong proficiency in data pipeline development and data modeling, including experience with tools like Apache Airflow.
  • In-depth knowledge of data streaming technologies, especially Apache Kafka.
  • Expertise in designing and implementing ETL processes.
  • Proficiency in SQL and NoSQL databases, including an understanding of the tradeoffs between different database types.
  • Experience with columnar and row-store databases.
  • Strong problem-solving and debugging skills.
  • Excellent communication and teamwork abilities.
Responsibilities
  • Design and implement data pipelines for ingesting, processing, and transforming large volumes of data.
  • Develop and maintain data models to support analytical and reporting needs.
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement real-time data streaming solutions using technologies such as Apache Kafka.
  • Create and manage ETL processes to extract, transform, and load data from various sources into data stores.
  • Monitor and troubleshoot data pipeline and ETL issues.
  • Evaluate, select, and implement columnar and row-store databases that best fit project requirements.
  • Perform database optimization and tuning for efficient data retrieval and storage.
  • Collaborate closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver solutions.
  • Document data pipelines, models, and ETL processes for knowledge sharing and troubleshooting.
  • Promote and enforce best practices in data engineering and data governance.
  • Participate in an on-call rotation, providing timely response and support for engineering issues outside regular business hours to keep critical systems and infrastructure running continuously.
Desired Qualifications
  • Familiarity with cloud platforms like AWS, GCP, or Azure.
  • Experience with BigQuery, Postgres, Airflow, Bigtable, Docker, Spanner, Firebase, Kubernetes, and Terraform.