Data Engineer / Data Warehouse Developer
Crypto mining & cloud computing on flared gas
Company Overview
Crusoe is on a mission to align the future of cloud computation and crypto with the future of the climate, in particular by reducing the routine flaring of natural gas. Crusoe provides oil and gas companies with a fast, low-cost, and simple solution to natural gas flaring: harnessing the gas that would otherwise be flared to power computing systems and crypto mining rigs.
B2B
Company Stage: Later Stage VC
Total Funding: $747.5M
Founded: 2018
Headquarters: Denver, Colorado
Growth & Insights
Headcount
6 month growth: ↑ 35%
1 year growth: ↑ 50%
2 year growth: ↑ 245%
Locations
San Francisco, CA, USA
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Agile
BigQuery
Data Analysis
Airflow
Postgres
Looker
Categories
Data & Analytics
Requirements
- Bachelor's degree in Computer Science, Data Engineering, or a related field, or 5-8+ years of relevant work experience
- Several years of relevant experience in data engineering, data warehouse development, or a similar role
- Familiarity with a variety of database technologies and expert-level proficiency in one or more of: Postgres, VictoriaMetrics, InfluxDB, Prometheus, BigQuery
- Proficiency in common data analytics and visualization tools such as Grafana or Looker
- Proficiency in data pipeline orchestration tools such as dbt or Apache Airflow (see the orchestration sketch after this list)
- Strong understanding of ELT/ETL processes and data integration techniques
- Strong understanding of data infrastructure design, including the performance and financial trade-offs of various designs
- Excellent communication skills
- Ability to work independently and lead a small team if necessary
- Embody the Company values
- Salary range is between $155,000 and $210,000. Restricted Stock Units are included in all offers. Salary will be determined by the applicant's education, experience, knowledge, skills, and abilities, as well as internal equity and alignment with market data
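
For context on the kind of orchestration work referenced above, here is a minimal sketch of a daily ELT pipeline in Apache Airflow that extracts raw data and then triggers a dbt run. It is illustrative only: the DAG id, task names, source data, and dbt project path are hypothetical assumptions, not details taken from this posting.

```python
# Hypothetical daily ELT DAG (Airflow 2.x); names and paths are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_raw_readings(**context):
    # Placeholder extract step: pull raw source data into the data lake
    # for the logical date of this run.
    print("extracting raw readings for", context["ds"])


with DAG(
    dag_id="daily_elt_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_raw",
        python_callable=extract_raw_readings,
    )

    # Transform inside the warehouse with dbt (project path is assumed).
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --target prod",
    )

    extract >> transform
```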
Responsibilities
- Database architecture: Evaluate, select, and implement the most suitable database technologies for our data storage, processing, and analytics needs, including but not limited to Postgres, InfluxDB, VictoriaMetrics, Prometheus, and BigQuery
- Data pipeline development: Lead the design and development of efficient and robust data pipelines using tools such as dbt and Apache Airflow. Ensure data flows smoothly from source systems to the data lake and data warehouse
- Data lake and data warehouse: Oversee the development and maintenance of our data lake and data warehouse, ensuring scalability, security, and performance. Implement best practices for data modeling
- Metrics definition: Collaborate with business stakeholders to define and document key business metrics and KPIs that will drive decision-making across the organization. Build and maintain complex dashboards within Grafana (or similar software) to measure and report on performance
- Data quality assurance: Implement data quality checks and reconciliation/validation processes to ensure the accuracy and reliability of data stored in the data lake and data warehouse (a minimal reconciliation sketch follows this list)
- Performance optimization: Continuously monitor and optimize the performance of data pipelines and databases to meet business requirements and maintain high availability
- Team collaboration: Work closely with cross-functional teams, including data analysts, data scientists, and software engineers, to understand their data requirements and ensure data is readily accessible
- Documentation: Maintain comprehensive documentation of data architecture, data pipelines, and best practices to ensure knowledge sharing within the organization
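
As a concrete illustration of the reconciliation work mentioned under data quality assurance, the sketch below compares row counts between a source Postgres table and its warehouse copy. The connection strings, table names, and 1% tolerance are invented for illustration and are not part of the role description.

```python
# Hypothetical source-vs-warehouse row-count reconciliation check.
import psycopg2


def row_count(dsn: str, table: str) -> int:
    """Return the number of rows in `table` for the database at `dsn`."""
    # Table names should come from trusted configuration, not user input.
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            return cur.fetchone()[0]


source = row_count("postgresql://user:pass@source-db/ops", "meter_readings")
target = row_count("postgresql://user:pass@warehouse-db/dw", "stg_meter_readings")

# Flag the load for investigation if the two copies drift by more than 1%.
if abs(source - target) > 0.01 * max(source, 1):
    raise ValueError(f"Row counts diverged: source={source}, target={target}")
```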
Desired Qualifications
- Experience working in an Agile or Scrum environment is a plus