Data Engineer
Posted on 10/18/2023
INACTIVE
TheoremOne

201-500 employees

Custom enterprise software & consulting platform
Company Overview
TheoremOne is on a mission to dismantle the traditional consulting ecosystem and replace it with an effective framework for innovation that transforms the way businesses think about and solve problems from the inside out. The company advises clients on product strategy, engineering, design, and culture, then partners with them to build and launch technology-driven solutions to their most complex problems.
Consulting

Company Stage: N/A
Total Funding: N/A
Founded: 2007
Headquarters: Los Angeles, California

Growth & Insights
Headcount
6 month growth: -20%
1 year growth: -6%
2 year growth: -6%
Locations
United States
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Apache Kafka
Data Analysis
Hadoop
Java
Microsoft Azure
SQL
Python
NoSQL
Categories
Data & Analytics
Requirements
  • Proficiency in Python, Java, Scala, or a similar programming language
  • Experience with big data tools such as Spark, Kafka, and Hadoop
  • Strong knowledge of building and maintaining ETL pipelines
  • Demonstrated experience with cloud platforms like AWS, Google Cloud, or Azure, and their data-related services
  • First-hand production experience with stream-processing systems, such as Kafka or Storm
  • Proven experience with relational SQL and NoSQL databases
  • Experience with data warehousing solutions and architectures
  • Strong problem-solving skills and attention to detail
  • Ability to work in a collaborative environment
  • Excellent communication skills for both technical and non-technical audiences
  • Applicants must be located in the United States or a European time zone to ensure alignment with the team's working hours
Responsibilities
  • Designing, constructing, installing, and maintaining large-scale processing systems and infrastructure
  • Managing and optimizing data pipelines, architectures, and data sets
  • Working with both streaming and batch data processing
  • Handling and analyzing data to identify patterns and trends
  • Implementing ETL processes and tools
  • Collaborating with AI specialists to ensure the smooth flow and availability of data for AI models
  • Implementing mechanisms for data curation, search, and discovery
  • Ensuring data architecture will support the requirements of the business
  • Building infrastructure for optimal extraction, transformation, and loading (ETL) of data from various sources using cloud technologies
  • Keeping up-to-date with the latest data engineering tools, strategies, and best practices