Sr. Data Engineer
Updated on 3/23/2023
Locations
Quebec City, QC, Canada • Remote • United States
Experience Level
Senior
Desired Skills
Apache Hive
Apache Spark
BigQuery
Apache Kafka
Data Analysis
Data Science
Data Structures & Algorithms
REST APIs
Snowflake
SQL
Python
Requirements
  • Expertise in gathering and cleaning data across multiple sources via various scripting languages; fluency with Python, NoSQL, and SQL environments preferred
  • Experience developing and transforming data structures, metadata, dependencies, and data workflows to support data analytics and data science
  • Experience with data warehousing and ingestion, data modeling and transformation, code development, and data pipelining
  • Proficient in at least one distributed SQL framework (Hive, BigQuery, Spark, Snowflake, etc.)
  • Demonstrated experience with web architecture, scaling, debugging code, performance analysis, and writing highly optimized SQL and Python scripts
  • Superior performance in prior roles with increasing levels of responsibility and independence; detail-oriented, with a demonstrated ability to handle multiple projects and solve complex problems
  • Fluency in project management: leading a project from inception and scoping through execution and postmortem. Flexible, adaptive quick learner who works well in a collaborative, communicative environment
  • Excellent communication skills, both verbal and written; ability to condense complex information into simple language for the appropriate audience
  • Take initiative to drive projects forward; recommend and implement solutions
  • Project-manage data projects from inception to completion, negotiating requirements and deliverables with key stakeholders
Responsibilities
  • Collaborate with engineering and analytics teams to shape and drive the tactical and strategic development of data infrastructure, reporting, and analytical applications
  • Design, develop, and own ETL pipelines, building data infrastructure for scalable warehousing that powers internal analytics so product and business teams can make data-informed decisions
  • Develop and maintain API integrations
  • Create data tools and platforms to help streamline and automate workflows for the data team and other teams across the organization
  • Create data frameworks and processes to support data quality, data integrity, data integration, and self-service
  • Continue to improve the performance and reliability of our data warehouse
  • Build robust, scalable data processing and data integration pipelines (batch and real-time) using Python, Kafka, Spark, REST API endpoints, and microservices to ingest data from a variety of external data sources into Snowflake (see the sketch after this list)
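
As an illustration of the real-time half of that last responsibility, here is a minimal sketch of a Kafka-to-Snowflake micro-batch ingester in Python, using kafka-python and the Snowflake Python connector. The topic, table, and credential names are assumptions for the example, not GRIN's actual configuration.

    import json

    import snowflake.connector
    from kafka import KafkaConsumer

    # Consume JSON events from a hypothetical "events" topic.
    consumer = KafkaConsumer(
        "events",                                   # assumed topic name
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    # Placeholder credentials; a real pipeline would pull these from a secrets manager.
    conn = snowflake.connector.connect(
        user="ETL_USER", password="...", account="my_account",
        warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
    )
    cur = conn.cursor()

    BATCH_SIZE = 500
    batch = []
    for msg in consumer:
        # Buffer events, then write them to Snowflake in micro-batches.
        batch.append((msg.value.get("id"), json.dumps(msg.value)))
        if len(batch) >= BATCH_SIZE:
            cur.executemany(
                "INSERT INTO raw_events (id, payload) VALUES (%s, %s)",
                batch,
            )
            conn.commit()
            batch.clear()

At production volume, row inserts like this would typically give way to staging files and loading with COPY INTO, or to Snowpipe; the batch side of the same responsibility would more likely run through Spark.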
Desired Qualifications
  • 4+ years of experience in high-data-volume environments, preferably in the software or internet industry. Graduate work or a degree in computer science or engineering is a plus
GRIN Technologies

501-1,000 employees

Creator management platform