Full-Time

Engineering Manager

Posted on 3/22/2024

Tecton

51-200 employees

Machine learning feature platform with data pipelines

Data & Analytics

Mid, Senior

Remote in USA

Required Skills
Kotlin
Kubernetes
Redshift
Python
UI/UX Design
NoSQL
BigQuery
Apache Spark
SQL
Java
Docker
AWS
Snowflake
Requirements
  • 7+ years of software engineering experience for high-scale products or infrastructure
  • 2+ years of people management experience for a group of engineers
  • Experience working in large Python, Java, Kotlin, or Go codebases and running cloud-native production systems using Kubernetes, AWS, and Docker
  • Product and user-first mindset with attention to UX detail
  • Experience with distributed systems, SQL, and NoSQL databases
  • Bias to action and passion for delivering high-quality solutions
  • Strong communication skills and the ability to write detailed technical specifications
  • Excitement about coaching and mentoring junior engineers
Responsibilities
  • Building and scaling Tecton's Rift compute engine across multiple cloud providers
  • Deep integrations with Spark platforms (e.g., Databricks, EMR, Dataproc) and data warehouses (e.g., Snowflake, BigQuery, and Redshift)
  • Performant pipelines that compute training data from offline stores
  • An orchestration platform responsible for reliably and performantly managing compute workloads
  • Data engineering for key-value stores and data lakes
  • Leading and coaching a talented engineering team ranging from junior to staff-level engineers
  • Building product and infrastructure for Tecton's Compute platform and enabling customer adoption of new features
  • Recruiting top-tier engineering talent, building an organized and sustainable team, and partnering with internal and external stakeholders
  • Partnering with cross-functional teams to build product vision and strategy
  • Efficiently executing on an aggressive product roadmap by building new features while managing technical debt

Tecton offers a feature platform for machine learning, enabling teams to create, automate, and centralize production-ready batch, streaming, and real-time data pipelines. Feature logic is defined using SQL, PySpark, Snowpark, or Python, and the platform processes batch, streaming, and real-time data to keep feature values fresh.
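For illustration, here is a minimal sketch of what defining feature logic on such a platform might look like in Python. The entity, data source, S3 path, and parameter values below are hypothetical examples, and exact decorator names and signatures vary across Tecton SDK versions; this is not taken from the posting itself.

```python
from datetime import datetime, timedelta

from tecton import BatchSource, Entity, FileConfig, batch_feature_view

# Hypothetical entity and batch source, for illustration only.
user = Entity(name="user", join_keys=["user_id"])

transactions = BatchSource(
    name="transactions",
    batch_config=FileConfig(
        uri="s3://example-bucket/transactions/",  # placeholder path
        file_format="parquet",
        timestamp_field="timestamp",
    ),
)

# Feature logic expressed as SQL and materialized on a daily batch schedule
# to both the offline and online stores.
@batch_feature_view(
    sources=[transactions],
    entities=[user],
    mode="spark_sql",
    online=True,
    offline=True,
    feature_start_time=datetime(2024, 1, 1),
    batch_schedule=timedelta(days=1),
)
def user_daily_transaction_count(transactions):
    return f"""
        SELECT
            user_id,
            COUNT(*) AS transaction_count,
            timestamp
        FROM {transactions}
        GROUP BY user_id, timestamp
    """
```

In this pattern, the platform runs the defined logic on a schedule and materializes the resulting feature values to offline and online stores, which is the kind of compute and orchestration work described in the responsibilities above.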

Company Stage

Series C

Total Funding

$160M

Headquarters

San Francisco, California

Founded

2019

Growth & Insights
Headcount

6 month growth

9%

1 year growth

18%

2 year growth

124%

Benefits

Comprehensive health plans

Remote-friendly work environment

Parental leave

Competitive salary, equity and 401(k) savings plans

Wellbeing benefits

Flexible PTO
