Senior Software Engineer
Infrastructure
Posted on 9/11/2023
INACTIVE
Phaidra

51-200 employees

AI-driven control systems for industrial facilities
Company Overview
Phaidra is a leading AI company that uses deep reinforcement learning to build intelligent control systems which improve stability, energy efficiency, and sustainability in industrial facilities. Its AI-driven systems are self-learning and improve continuously over time, and they have already delivered significant results, such as a 40% energy saving at Google's data centers. With a leadership team drawing on expertise from Google DeepMind, Trane, and Johnson Controls, Phaidra's AI co-pilot reduces the risk of human error and frees operations teams to focus on higher-value activities, making it a promising workplace for those interested in AI and industrial optimization.
AI & Machine Learning
Industrial & Manufacturing
B2B

Company Stage

Private

Total Funding

$30.5M

Founded

2019

Headquarters

Seattle, Washington

Growth & Insights
Headcount

6 month growth

33%

1 year growth

38%

2 year growth

245%
Locations
Remote
Experience Level
Entry
Junior
Mid
Senior
Expert
Desired Skills
AWS
Data Structures & Algorithms
Development Operations (DevOps)
Docker
Google Cloud Platform
Linux/Unix
Microsoft Azure
Operating Systems
Terraform
Kubernetes
Python
Categories
DevOps & Infrastructure
Software Engineering
Requirements
  • Please only apply to one opening. If you are a better fit for another opening, our team will move your application. Candidates who apply to multiple openings will not be considered
  • Bachelor's or Master's degree in Computer Science, or equivalent experience
  • Proven software engineering experience, ideally with Python or Go
  • Experience with Internal Developer Platform products such as Backstage, Port or Upbound
  • Experience working with developers with a focus on infrastructure automation
  • Proven experience automating Cloud on AWS, GCP or Azure
  • Experience developing Kubernetes Operators and general Kubernetes related automation
  • Good understanding of Linux-based Operating Systems, Containerisation and Orchestration technologies like Docker and Kubernetes
  • Good understanding of DevOps and SRE principles
  • Experience with Terraform or other configuration management tools like Jsonnet, Kapitan, Helm or Kustomize
  • Share our company values: curiosity, ownership, transparency & directness, outcome-based performance, and customer empathy
  • Experience with Software Engineering
  • Experience developing Internal Developer Platforms and tooling
  • Expertise with multi-cloud and hybrid-cloud environments
  • Expertise with some parts of our tech stack is a big plus
  • Experience in automating scalable multi-tenant systems architectures with high availability, fault tolerance, performance tuning, monitoring, and statistics/metrics collection
  • You will have been fully integrated in the team and with team members across the company
  • You will get a more in-depth understanding of our system architecture and infrastructure
  • You will have completed your first on-call experience helping monitor and improve our production environments
  • You will have become an expert with our tooling
  • You will have started to contribute to knowledge sharing throughout Phaidra
Responsibilities
  • We use reinforcement learning algorithms to provide this intelligence, converting raw sensor data into high-value actions and decisions
  • We focus on industrial applications, which tend to be well-sensorized with measurable KPIs - perfect for reinforcement learning
  • We enable domain experts (our users) to configure the AI control systems (i.e. agents) without writing code. They define what they want their AI agents to do, and we do it for them
  • You will build an internal developer portal and tooling for abstracting infrastructure with a self-service approach
  • You will work closely with developers to identify infrastructure pain points and build the platform accordingly
  • You will help build and manage infrastructure for:
  • Large-scale data ingestion and processing
  • Distributed model training, evaluation and inference
  • Automating the end-to-end system for continuous improvement and deployment
  • You will work with cloud services like AWS, Azure, GCP
  • You will work with Cloud Native technologies like Kubernetes
  • You will help build CI/CD pipelines and take part in DevOps duties
  • You will write and maintain tooling and documentation for infrastructure, supported applications and processes
  • You will apply SRE principles for observability, automation and change management
  • Build and maintain cross-functional relationships with internal teams to drive initiatives