DevOps Engineer
Habu

51-200 employees

Enables integrated data analysis for decentralized sources
Company Overview
Habu is a leader in decentralized data, providing a platform that integrates data from disparate sources so businesses can surface trends and insights that would otherwise stay hidden. Its approach to data analysis improves measurement, supporting data-driven decisions that drive business growth. The company's culture emphasizes collaboration and fosters an environment that encourages technical advancement and industry leadership.
Data & Analytics
B2B

Company Stage: Series B
Total Funding: $42M
Founded: 2018
Headquarters: San Francisco, California

Growth & Insights
Headcount growth: -6% (6 months), 19% (1 year), 65% (2 years)
Locations
Remote
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
Apache Spark
AWS
Development Operations (DevOps)
Docker
Elasticsearch
Google Cloud Platform
Java
Airflow
Microsoft Azure
Postgres
React.js
Redis
Scala
Snowflake
Terraform
Kubernetes
Python
Go
gRPC
Datadog
Quality Assurance (QA)
Categories
DevOps & Infrastructure
Software Engineering
Requirements
  • Bachelor's degree or higher in Computer Science
  • Experience in the early-mid stages of a fast-growing company
  • Automated unit and integration tests
  • Continuous Integration/Continuous Deployment (CI/CD)
  • Container Orchestration (Docker, Kubernetes, Helm)
  • The key pillars of observability: logging, metrics, and tracing
  • Observability tooling such as OpenTelemetry, Prometheus, and Datadog
  • Monitoring and alerting with the Kubernetes Prometheus stack (Prometheus, Grafana, Alertmanager); a minimal instrumentation sketch follows this list
  • At least one programming language (preferably Go, Python, Scala, and/or Java, but any language will do)
  • gRPC services in Go (see the gRPC sketch after this list)
  • React.js, Material UI, Redux, Redux-Saga
  • Spark / Apache Airflow / Scala / Java / Python
  • Postgres / ScyllaDB / Snowflake / Redis / Elasticsearch
  • Google BigQuery
  • Docker / Kubernetes / Terraform / AWS / GCP / Azure
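
To give a flavor of the observability requirements above, here is a minimal sketch of exposing Prometheus metrics from a Go service. The metric name, label, and port are illustrative assumptions, not Habu's actual conventions.

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // requestsTotal counts handled HTTP requests by path; promauto
    // registers it with the default Prometheus registry.
    var requestsTotal = promauto.NewCounterVec(
        prometheus.CounterOpts{
            Name: "app_http_requests_total", // hypothetical metric name
            Help: "Total HTTP requests handled, by path.",
        },
        []string{"path"},
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        requestsTotal.WithLabelValues(r.URL.Path).Inc()
        w.Write([]byte("ok"))
    }

    func main() {
        http.HandleFunc("/", handler)
        // Expose /metrics for the Prometheus scraper; Grafana dashboards
        // and Alertmanager rules build on the series collected here.
        http.Handle("/metrics", promhttp.Handler())
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

A Prometheus scrape config pointing at :8080/metrics, plus Grafana dashboards and Alertmanager rules, would close the monitoring-and-alerting loop the requirements describe.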
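Since the posting also calls out gRPC in Go, here is a minimal sketch of a gRPC server that registers the standard gRPC health-check service, which Kubernetes probes and load balancers can query; the port is an arbitrary assumption.

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/health"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        lis, err := net.Listen("tcp", ":50051") // arbitrary port
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        s := grpc.NewServer()
        // Register the standard gRPC health service so Kubernetes
        // readiness probes and load balancers can query server health.
        healthpb.RegisterHealthServer(s, health.NewServer())
        log.Println("gRPC server listening on :50051")
        if err := s.Serve(lis); err != nil {
            log.Fatalf("serve: %v", err)
        }
    }
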
Responsibilities
  • Provide expertise in the design, implementation, and operation of scalable distributed systems to assist development teams in making the right decisions early
  • Develop new methods and tools to automate and self-heal production environments (a toy self-healing loop is sketched after this list)
  • Apply software development workflows to operational environments
  • Test and tune newly developed systems to prepare them for production deployment and ensure maximum performance at minimum cost
  • Automate packaging, deployment, and configuration of internally developed applications
  • Be the expert in our internal applications, from the high-level architecture down to the code
  • Implement new application features, especially features supporting operational excellence: stability, scalability, redundancy, etc.
  • Build tools that make your colleagues more effective
  • Lead, coach, and grow our GitDevSecOps practice by bridging development, operations, and QA
  • Become and stay an expert in current and emerging technologies and tools
  • Contribute to Open Source solutions and communities we use wherever you can
  • Measure everything, providing critical operational insight into our applications
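
To make the "automate and self-heal" responsibility concrete, here is a deliberately simple watchdog sketch: poll a health endpoint and restart a service when it stops responding. The endpoint, unit name, and systemd-based restart are all hypothetical; real remediation at this level would more likely be handled by Kubernetes liveness probes or an operator.

    package main

    import (
        "log"
        "net/http"
        "os/exec"
        "time"
    )

    // healthURL and unitName are hypothetical placeholders.
    const (
        healthURL = "http://localhost:8080/healthz"
        unitName  = "myapp.service"
    )

    // healthy reports whether the service answers its health check.
    func healthy() bool {
        client := http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get(healthURL)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        // Poll every 30 seconds and restart the unit on failure.
        for range time.Tick(30 * time.Second) {
            if healthy() {
                continue
            }
            log.Printf("%s unhealthy, restarting", unitName)
            if err := exec.Command("systemctl", "restart", unitName).Run(); err != nil {
                log.Printf("restart failed: %v", err)
            }
        }
    }
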