Senior BI Data Analytics Engineer
Canada
mParticle

201-500 employees

Multichannel customer data management platform
Company Overview
mParticle's mission is to make customer data more accessible and actionable across the whole company, for companies of all sizes. The company has built a customer data platform that makes it easy to holistically manage customer data along the entire product and customer lifecycle.
AI & Machine Learning

Company Stage: Series E
Total Funding: $304M
Founded: 2012
Headquarters: New York, New York

Growth & Insights
Headcount
6 month growth: -6%
1 year growth: -6%
2 year growth: -8%
Locations
Remote
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Redshift
Sales
Airflow
Apache Spark
SQL
Looker
Data Analysis
Snowflake
Categories
Data Engineering
Data Science
Data Analysis
Data & Analytics
Requirements
  • 7+ years of proven success working on the backend of large-scale software systems
  • Expertise in SQL-like languages and tools is a must
  • Working knowledge of Redis, DynamoDB, Cassandra, Druid, Fivetran, Snowflake, Redshift, Looker, Spark, Luigi/Airflow, etc. (we use Snowflake); see the SQL sketch after this list
  • BS/MS in Computer Science or related field, or equivalent professional experience
  • Ability to learn quickly and display solid analytical/engineering thinking
  • Experience in building scalable data pipelines for analytics processes and/or training machine learning models
  • Able to design and develop quality cloud-based systems and operate them in an automated fashion
  • Demonstrable experience in taking projects from spec to release
  • A passion for understanding business questions and delivering data-driven insights. Excellent analytical skills.
  • Familiarity with monitoring, observability, and log aggregation tooling
  • Ability to present data concisely through written and oral communication; expert at influencing business stakeholders
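
As a rough illustration of the SQL-centric work these requirements describe (the posting notes the team uses Snowflake), here is a minimal Python sketch that runs an analytical query through the snowflake-connector-python client; the table, column, and warehouse names are hypothetical, not taken from the posting.

    # Minimal sketch: running an analytical query against Snowflake from Python.
    # Assumes the snowflake-connector-python package; all object names are hypothetical.
    import os

    import snowflake.connector

    MONTHLY_EVENTS_SQL = """
        SELECT account_id,
               DATE_TRUNC('month', event_ts) AS month,
               COUNT(*)                      AS events
        FROM   analytics.events              -- hypothetical schema and table
        GROUP  BY account_id, month
        ORDER  BY month, events DESC
    """


    def monthly_event_counts():
        """Return per-account monthly event counts (illustrative only)."""
        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="ANALYTICS_WH",  # hypothetical warehouse name
        )
        try:
            cur = conn.cursor()
            cur.execute(MONTHLY_EVENTS_SQL)
            return cur.fetchall()
        finally:
            conn.close()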
Responsibilities
  • Build, maintain, and document automated ETL & Reverse ETL pipelines (see the pipeline sketch after this list)
  • Architect, implement, and support scalable/reliable data pipelines that process up to terabytes of data per day
  • Continuously monitor and optimize the pipelines and data schemas
  • Create best practices for version control, documentation, testing, etc.
  • Proactively improve the efficiency of our data pipelines
  • Build automated alerting to improve the efficiency of our team's operations, using techniques including but not limited to time-series forecasting, anomaly detection, and text classification
  • Write & iterate on Tech Specs to properly plan and execute complex projects
  • Partner with business stakeholders (Sales, Customer Success, Professional Services, etc.) to understand their data needs
  • Ensure that the data we need to understand and serve our stakeholders is available, accurate, and accessible
  • Build complex Looks/Dashboards/Explores in BI tools for analyses that cannot be done in a self-serve way
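
To illustrate the pipeline and alerting responsibilities above, below is a minimal sketch of an Airflow DAG (the posting lists Luigi/Airflow) with a nightly load step followed by a simple row-count anomaly check; the DAG id, schedule, task names, and callables are assumptions, not details from the posting.

    # Illustrative Airflow DAG: nightly ETL load followed by a simple anomaly check.
    # Assumes Airflow 2.4+; DAG id, schedule, and task logic are hypothetical placeholders.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def load_events(**_):
        # Placeholder extract/load step (e.g., trigger a Fivetran sync or COPY INTO Snowflake).
        ...


    def check_row_count(**_):
        # Placeholder anomaly check: raise (and thereby alert) if today's row count
        # deviates sharply from the recent daily average.
        ...


    with DAG(
        dag_id="bi_events_etl",                # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="0 4 * * *",                  # assumed nightly cadence
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        load = PythonOperator(task_id="load_events", python_callable=load_events)
        check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
        load >> check  # run the anomaly check only after the load completes

A failing check task would surface through Airflow's standard failure notifications, which is one simple way to realize the automated alerting described above.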