We’re building AI thought partners to make people smarter and more creative, accelerating the creation and sharing of knowledge in financial services. We’re unabashedly ambitious, and we’re dead set on building the biggest financial AI company in the world. Our team is lean, smart, and driven, and we’re growing fast out of our beautiful office in NYC.
WHY JOIN ROGO?
Exceptional traction: strong product-market fit with the world’s largest investment banks, hedge funds, and private equity firms.
World-class team: we take talent density seriously. We like working with incredibly smart, driven people.
Velocity: we work fast, which means you learn a lot and constantly take on new challenges.
Frontier technology: we’re developing cutting-edge AI systems, pushing the boundaries of published research, redefining what’s possible, and inventing the future.
Cutting-edge product: our platform is state-of-the-art and remarkably powerful. We’re creating tools that make people smarter and reinventing how you discover, create, and share knowledge.
About the role
As a Distributed Systems Engineer at Rogo, you will help build the real-time data pipelines that process millions of unstructured financial documents to feed our financial LLM. It’s cutting-edge data engineering at the AI frontier.
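To give you a flavor of that work, here’s a minimal Python sketch of one concurrency-bounded pipeline step. Everything in it (the document IDs, the parse stub, the concurrency limit) is an illustrative assumption, not our actual pipeline:

```python
import asyncio

# Illustrative sketch only: parse_document is a stand-in for real
# fetch-and-extract work, not actual pipeline code.
async def parse_document(doc_id: str) -> dict:
    await asyncio.sleep(0.1)  # simulate an I/O-bound fetch and parse
    return {"doc_id": doc_id, "text": f"extracted text for {doc_id}"}

async def run_pipeline(doc_ids: list[str], concurrency: int = 8) -> list[dict]:
    # A semaphore bounds in-flight work so a burst of millions of
    # documents doesn't overwhelm upstream sources.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(doc_id: str) -> dict:
        async with sem:
            return await parse_document(doc_id)

    return await asyncio.gather(*(bounded(d) for d in doc_ids))

if __name__ == "__main__":
    records = asyncio.run(run_pipeline([f"filing-{i}" for i in range(20)]))
    print(f"processed {len(records)} documents")
```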
Responsibilities
Architect AI-powered distributed systems that reliably and quickly handle petabyte-scale content and fuel our underlying LLM infrastructure.
Build REST APIs backed by stable, scalable server-side implementations that give web clients the flexibility to meet rapidly evolving product requirements (see the API sketch after this list).
Ship secure, compliant code: apply security best practices when building software that handles sensitive data, and work with the security team on what to build vs. buy.
Write robust code that’s easy to read, maintain, and test.
Raise the bar for code quality, reliability, and product velocity, pushing yourself and your peers to grow technically and interpersonally.
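As a sketch of the REST API work above, here’s a minimal endpoint pair. The framework choice (FastAPI) and the /documents routes are assumptions for illustration, not our actual API; a real implementation would swap the in-memory dict for a durable store such as Postgres:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Document(BaseModel):
    doc_id: str
    title: str

# In-memory stand-in for a stable, scalable server-side store.
DOCS: dict[str, Document] = {}

@app.post("/documents", response_model=Document)
def create_document(doc: Document) -> Document:
    DOCS[doc.doc_id] = doc
    return doc

@app.get("/documents/{doc_id}", response_model=Document)
def get_document(doc_id: str) -> Document:
    if doc_id not in DOCS:
        raise HTTPException(status_code=404, detail="document not found")
    return DOCS[doc_id]
```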
Hard Requirements:
4+ years of industry experience as a data engineer
You love scaling workloads across many machines to handle petabyte-scale tasks
Highly proficient with Python and SQL, with an intuitive understanding of multithreading, multiprocessing, asyncio, and other concurrency primitives
Mastery of Postgres, Snowflake, or Elasticsearch
2+ years of experience with Apache Airflow (see the DAG sketch after this list)
Experience deploying and monitoring mission-critical ETL pipelines over large, heterogeneous data sources
Experience with distributed systems
Experience with AWS or another cloud environment
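For the Airflow requirement above, here’s a hedged sketch of a daily ETL DAG using the TaskFlow API. The DAG name and the extract/load stubs are illustrative assumptions, not one of our production pipelines:

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def document_etl():
    @task
    def extract() -> list[str]:
        # Stand-in for pulling new filings from heterogeneous data sources.
        return ["filing-1", "filing-2"]

    @task
    def load(doc_ids: list[str]) -> None:
        # Stand-in for writing parsed records to Postgres or Snowflake.
        print(f"loaded {len(doc_ids)} documents")

    load(extract())

document_etl()
```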
Bonus Requirements:
Experience with a strongly typed language (e.g., Rust)
Experience at a hypergrowth startup
Financial Services work experience
Experience with stream processing
Knowledge of Datadog and other telemetry tooling
WHO YOU ARE
You thrive in fast-paced environments. You’re high-intensity, you care deeply about what you do, and you’re excited to work at a startup.
You are ambitious. You have fun solving problems that others think are impossible.
You are curious. You find joy in learning about AI, technology, and finance.
You are an owner. You are autonomous, self-directed, and comfortable working with ambiguity.
You are collaborative, organized, and thoughtful.