Senior Software Engineer
Data Infrastructure
Posted on 12/15/2022
INACTIVE
Growth marketing automation platform
Company Overview
Klaviyo's mission is to help companies retain customers and maximize their ROI. Klaviyo's data-driven customer platform allows companies to send relevant, well-timed emails and SMS messages that increase customer lifetime value.
Consumer Software
Company Stage
N/A
Total Funding
$1.5B
Founded
2012
Headquarters
Boston, Massachusetts
Growth & Insights
Headcount
6 month growth: ↑ 23%
1 year growth: ↑ 29%
2 year growth: ↑ 57%
Locations
Dorchester, Boston, MA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Microsoft Azure
Apache Spark
SQL
AWS
Apache Hive
Data Analysis
Categories
DevOps & Infrastructure
Software Engineering
Requirements
- 5+ years of production experience designing, creating, and maintaining data-driven business solutions and solving big data problems using a wide variety of technologies and modern architectures
- Hands-on experience designing reliable, fault-tolerant, and high-performance distributed systems
- Experience designing systems leveraging:
- Apache Spark
- Cloud-based DFS (e.g., AWS S3, Google Cloud Storage, Azure Blob Storage)
- HDFS
- Hive
- Understanding of data modeling, data access, and data storage, along with caching, replication, and optimization techniques
- Experience with automated analysis of data quality and validation
- Hands-on experience with data transformation, SQL queries, and optimization
- High proficiency working with large, heterogeneous datasets and building/optimizing data pipelines using ELT, data replication, API access, data virtualization, stream data integration, and emerging technologies
- Experience partnering with Data Scientists, Engineers, and Product Managers to understand data needs and opportunities
Responsibilities
- You will work on a deep analytics system that provides insight into hundreds of terabytes of data
- You will help design robust, high-performance data processing and storage systems that leverage new data models to serve different internal and external use cases, and contribute to open-source data processing technologies
- You will design and develop data pipelines to move data from disparate sources for consolidation and analysis
- Establish industry-leading design patterns around data ingestion, lifecycle management, governance, and consumption
- Work with other leaders and team members to create a technology roadmap, address technical debt, and put together an execution plan
- You will design and develop data schemas and implementation strategies to provide fast insight turnaround on petabytes of data
- You will develop robust monitoring infrastructure and strategy to return real-time visibility into various stages of data handling
- You will evangelize designs and processes for data handling, CI/CD, security, extensibility, maintainability, etc.
- You will be responsible for coaching engineers, managing/reviewing technical documentation and articulating a phased approach to achieving the team's overall technical vision