Principal Engineer
Brand Platform, Fully Remote
LTK

501-1,000 employees

Creator commerce platform
Company Overview
LTK's mission is to empower the world's premium lifestyle creators to be as economically successful as possible. The company is unparalleled in its ability to build influencer marketing campaigns for brands of all sizes, with self-service to full-service products, proven ROI, and full-funnel brand performance measurement from discovery to transaction.
Locations
United States
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Apache Kafka
Data Analysis
Docker
Google Cloud Platform
Hadoop
Microsoft Azure
PyTorch
TensorFlow
Apache Beam
Apache Flink
Kubernetes
NoSQL
Cassandra
Categories
AI & Machine Learning
Software Engineering
Requirements
  • Experience: More than 8 years of experience demonstrating a deep understanding of software development principles, architecture, and best practices
  • Degree: A bachelor's or master's degree in Computer Science, Software Engineering, or a related field is preferred, although relevant work experience can sometimes compensate for formal education
Responsibilities
  • Technical Expertise: Bring a strong command of multiple programming languages, frameworks, and technologies relevant to the role, along with expertise in designing and building complex systems, architectures, and solutions
  • Problem Solving: Demonstrate your ability to solve complex technical challenges, provide innovative solutions, and influence technical direction
  • Innovation: Track record of introducing new technologies, tools, or practices that enhance engineering efficiency, productivity, and quality
  • System Design: Proficiency in system design and architecture, with the ability to create scalable, maintainable, and performant solutions
  • Problem Ownership: Willingness to take ownership of complex issues, drive problem resolution, and act as a technical leader in crisis situations
Desired Qualifications
  • Data Storage: Expertise in working with a range of data storage technologies, including relational databases, distributed file systems such as Hadoop HDFS, NoSQL databases such as Apache Cassandra, and columnar databases
  • Data Processing: Strong understanding of data processing frameworks such as Apache Spark, Apache Flink, Apache Beam, or Hadoop MapReduce. Experience designing and optimizing data processing pipelines for performance and scalability is important (see the batch ETL sketch after this list)
  • Streaming Data: Knowledge of streaming platforms such as Apache Kafka or Apache Pulsar is valuable for handling real-time data streams (see the streaming sketch after this list)
  • Cloud Platforms: Experience with cloud platforms like AWS, Azure, or Google Cloud Platform, and their big data services (e.g., Amazon EMR, Azure HDInsight, Google Cloud Dataproc)
  • Containerization and Orchestration: Familiarity with containerization (Docker) and orchestration tools (Kubernetes) for deploying and managing big data applications
  • Data Modeling and ETL: Understanding of data modeling concepts and experience with Extract, Transform, Load (ETL) processes for moving and reshaping data between systems (the batch ETL sketch after this list walks through a simple ETL flow)
  • Optimization: Proficiency in optimizing big data applications for performance, throughput, and resource utilization
  • Machine Learning and AI: Depending on the use case, familiarity with machine learning frameworks (such as TensorFlow or PyTorch) and AI techniques can be beneficial for building intelligent applications on big data (see the training-loop sketch below)
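
For illustration, here is a minimal PySpark batch ETL sketch in the spirit of the Data Processing and Data Modeling and ETL items above. The bucket paths, column names, and aggregation are hypothetical placeholders, not a description of LTK's actual pipelines.

    # Minimal PySpark batch ETL sketch; all paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("creator-sales-etl").getOrCreate()

    # Extract: read raw events from a hypothetical landing zone.
    events = spark.read.parquet("s3://example-bucket/raw/click_events/")

    # Transform: keep purchase events and roll them up per creator and day.
    daily = (
        events
        .filter(F.col("event_type") == "purchase")
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("creator_id", "event_date")
        .agg(
            F.count("*").alias("purchases"),
            F.sum("order_value").alias("gmv"),
        )
    )

    # Load: write a date-partitioned columnar table for downstream consumers.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/creator_daily_sales/"
    )

Partitioning the output by date is a common optimization: downstream jobs that only need recent days can prune partitions instead of scanning the whole table.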
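
A companion sketch for the Streaming Data item, using Spark Structured Streaming to consume from Kafka. The broker address, topic name, and event schema are assumptions made for illustration only.

    # Minimal Structured Streaming sketch; broker, topic, and schema are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("clickstream-consumer").getOrCreate()

    schema = StructType([
        StructField("creator_id", StringType()),
        StructField("event_type", StringType()),
        StructField("order_value", DoubleType()),
    ])

    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "click-events")
        .load()
    )

    # Kafka delivers raw bytes; parse the JSON value into typed columns.
    parsed = raw.select(
        F.from_json(F.col("value").cast("string"), schema).alias("e")
    ).select("e.*")

    # Continuously count purchases per creator and print results to the console.
    query = (
        parsed.filter(F.col("event_type") == "purchase")
        .groupBy("creator_id")
        .count()
        .writeStream
        .outputMode("complete")
        .format("console")
        .start()
    )
    query.awaitTermination()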
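
Finally, a toy PyTorch training loop for the Machine Learning and AI item. The model, features, and random stand-in data are illustrative only and do not reflect any model described in this posting.

    # Toy PyTorch sketch: regress a target from four made-up features.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in data: 256 rows of 4 features and one target value each.
    X = torch.randn(256, 4)
    y = torch.randn(256, 1)

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()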