Full-Time
Posted on 7/19/2025
Open-source programming model for reliable apps
$212k - $286k/yr
Remote in USA + 1 more
More locations: Remote in Canada
Temporal Technologies provides an open-source programming model and cloud platform for building reliable, scalable distributed applications. Its core product is an open-source workflow orchestration system, with Temporal Cloud offering a managed, scalable runtime for running those workflows in production. Developers define workflows and activities, while Temporal handles timing, retries, state persistence, and event-driven execution; this simplifies application code and makes problems easier to observe through a central Workflow ID in the UI. Temporal differentiates itself through its open-source model and a shared, scalable runtime that serves both individual developers and large enterprises, with the goal of reducing code, improving reliability, and speeding feature delivery.
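The division of labor can be sketched in plain Python (this is a toy illustration, not the real Temporal SDK; names like `run_activity` and `order_workflow` are invented): developers write activities as ordinary functions and compose them in a workflow, while the runtime owns plumbing such as retries.

```python
import time

def run_activity(fn, *args, retries=3, backoff=0.01):
    """Run an activity, retrying on failure -- the kind of plumbing
    the runtime owns so application code doesn't have to."""
    for attempt in range(retries):
        try:
            return fn(*args)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)

# --- application code: activities are plain functions ---
calls = {"count": 0}

def charge_card(order_id):
    calls["count"] += 1
    if calls["count"] < 3:          # fail twice to exercise retries
        raise ConnectionError("payment gateway timeout")
    return f"charged:{order_id}"

def send_receipt(order_id):
    return f"receipt:{order_id}"

def order_workflow(order_id):
    """A workflow just sequences activities; retry policy lives in
    the runtime, not in this code."""
    charge = run_activity(charge_card, order_id)
    receipt = run_activity(send_receipt, order_id)
    return charge, receipt

print(order_workflow("A100"))   # retries happen transparently
```

In the real SDK the equivalent declarations are decorated workflow and activity definitions, and the server also persists state between steps; the sketch only shows the retry half of the contract.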
Company Size
201-500
Company Stage
Series D
Total Funding
$754.5M
Headquarters
Bellevue, Washington
Founded
2019
Unlimited Paid Time Off
Health Insurance
Dental Insurance
Vision Insurance
Life Insurance
Disability Insurance
401(k) Retirement Plan
401(k) Company Match
Phone/Internet Stipend
Wellness Program
Professional Development Budget
Conference Attendance Budget
Home Office Stipend
Temporal Raises $300M Series D to Make Agentic AI Real for Companies
A16z leads $300m Series D for US software startup Temporal. Temporal, a US-based software startup, has raised US$300 million in a Series D funding round led by Andreessen Horowitz, valuing the company at US$5 billion. The funding follows a secondary round in October that valued the company at US$2.5 billion. Existing investors, including Sequoia Capital, Lightspeed Venture Partners, and Sapphire Ventures, also participated. Founded in 2019, Temporal develops open-source software and cloud services that ensure reliable execution of code, allowing applications to recover after failures without custom recovery logic. The company's platform is used by AI firms like OpenAI, along with other clients such as Netflix, JPMorgan Chase, and Snap.

Food for thought: implications, context, and why it matters.

Temporal's business model delivers multi-million-dollar savings (in a case study).
* Temporal keeps its core software open source under an MIT license, with some software development kits (SDKs) under Apache 2.0. Revenue comes from its managed cloud service with consumption-based pricing 1, 2.
* Pricing covers "Actions" such as starting a workflow, plus data storage and support plans. The entry-level plan starts at $100 per month 3.
* In one Temporal case study based on a single Temporal Cloud client, a company could save $2.25 million a year after moving to Temporal Cloud. The estimate came from lower infrastructure costs plus less engineering time spent resolving incidents 4.

AI's shift to multi-step agents makes "durable execution" a foundational need.
* Interest in Temporal tracks AI moving past request-response tools toward "agentic" systems that handle complex work over long periods 5.
* Some agents run for hours or days, recover after failures mid-task, and keep state (the information a system needs to remember between steps) across many steps. Traditional backend systems often struggle with those demands 6, 7.
* That shift makes "durable execution" useful as a core infrastructure layer for agentic AI, since it keeps long-running workflows reliable and able to resume after failures. Lead investor Andreessen Horowitz has described Temporal as becoming a foundational execution layer for the AI era and as the difference between a demo and a production system for long-running agents 7.
Temporal, an open-source workflow startup, has raised $300 million in a Series D round led by Andreessen Horowitz, doubling its valuation to $5 billion. Lightspeed and Sapphire joined alongside existing backer Sequoia. The company builds workflow orchestration software with "durable execution" capabilities, allowing workflows to automatically resume after outages rather than fail. This reliability becomes critical as AI systems move from simple chat to complex, multi-step tasks where disruptions can derail entire processes. Temporal follows an open-source business model, offering free core software whilst charging for managed cloud services. Its customers include OpenAI, JPMorgan Chase, Netflix and Snap. The funding will support research and development and go-to-market efforts as demand grows for infrastructure that ensures AI agents can handle long-running tasks reliably in production environments.
Temporal raises $300M at $5B valuation to solve AI's reliability problem at scale. AI agents can write code, book flights, and draft legal memos. But ask them to execute long-running tasks across distributed systems without breaking, and the cracks start to show. That gap is where Temporal sees its opportunity.

The Bellevue, Washington-based AI infrastructure startup has raised $300 million in a Series D round led by Andreessen Horowitz, valuing the company at $5 billion, according to Reuters. The new valuation doubles the $2.5 billion mark Temporal reached in October following a secondary transaction led by GIC, Singapore's sovereign wealth fund. Lightspeed Venture Partners and Sapphire Ventures joined the round, alongside existing backers including Sequoia Capital.

Founded in 2019, Temporal builds open-source software and a managed cloud platform focused on what it calls "durable execution." The idea is simple but technically demanding: applications should resume exactly where they left off after failures, without engineers writing custom recovery logic every time something crashes. That problem is moving from niche to mainstream as AI systems shift from generating text to performing real-world work.

"We've been building Temporal for over a decade now and what we are trying to solve is these core reliability problems for distributed systems," co-founder and CEO Samar Abbas said in an interview, according to Reuters. "When the software moves from generating answers to executing work, the tolerance of failure basically becomes tiny." In other words, a chatbot can retry a failed response. An AI agent handling payments, infrastructure updates, or customer workflows cannot afford silent errors or partial execution. Abbas pushed back on the idea that the company is riding an AI hype cycle.
The funding, he said, was not about "chasing an AI moment," but about building a platform made to address the reliability challenges of complex, long-running processes common to AI agents. Temporal's open-source software is available for free. Revenue comes from Temporal Cloud, a multi-tenant managed service that charges customers based on usage. That model has helped the company win both startups and enterprises. Customers include OpenAI, Snap, Netflix, and JPMorgan Chase.

The bet from Andreessen Horowitz reflects a broader shift in how investors view AI infrastructure. Model performance still matters, but reliability is emerging as a gating factor. "Reliability is not like an optimization, it's actually a gating factor for these systems to work," said Sarah Wang, a partner at Andreessen Horowitz who led the investment. "Temporal is essentially the execution layer for all of that, so we believe this is the perfect gen AI infrastructure bet."

That framing positions Temporal less as a workflow tool and more as plumbing for the agent era. If AI systems are going to run payroll, move money, orchestrate supply chains, or manage healthcare processes, they need guarantees around state, retries, and fault tolerance. That is the layer Temporal is building.

The company employs more than 380 people and plans to use the new capital for research, product development, and expanding sales and marketing. At a $5 billion valuation, investors are betting that reliability will define the next stage of AI adoption. Flashy demos may win headlines. Production-grade execution wins contracts. And as AI shifts from answering questions to executing tasks, the margin for failure keeps shrinking.
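The core idea behind durable execution can be sketched in a few lines of Python (a toy model, not Temporal's implementation; in a real system the journal would live in a database, not a dict): completed steps are journaled, so a re-run after a crash replays recorded results instead of redoing work and resumes at the first unfinished step.

```python
journal = {}    # stands in for durable storage that survives crashes
executed = []   # which steps actually ran (for demonstration)

def durable_step(step_id, fn):
    """Return the journaled result if this step already completed;
    otherwise run it and record the result before moving on."""
    if step_id in journal:
        return journal[step_id]
    result = fn()
    executed.append(step_id)
    journal[step_id] = result
    return result

class Crash(Exception):
    pass

def transfer_workflow(fail_midway=False):
    a = durable_step("debit", lambda: "debited $50")
    if fail_midway:
        raise Crash("process died after debit, before credit")
    b = durable_step("credit", lambda: "credited $50")
    return a, b

# First run crashes partway through ...
try:
    transfer_workflow(fail_midway=True)
except Crash:
    pass

# ... the retry resumes: "debit" is replayed from the journal,
# and only "credit" executes for real.
result = transfer_workflow()
print(result)      # ('debited $50', 'credited $50')
print(executed)    # ['debit', 'credit'] -- each side effect ran once
```

This is why no custom recovery logic is needed: the retry simply re-executes the workflow function, and already-journaled steps become no-ops.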
Rust at the core: accelerating polyglot SDK development by Spencer Judge at QCon SF 2025.

Steef-Jan Wiggers, Cloud Queue Lead Editor | Domain Architect

At QCon SF 2025, Spencer Judge, SDK Team Lead at Temporal, presented a case for using a shared Rust core to build and maintain robust multi-language SDKs efficiently. The talk detailed Temporal's journey toward a highly efficient, unified architecture centered on a shared core written in Rust, which Judge framed as a safer, more portable, and ultimately cheaper alternative to traditional C-based shared logic.

Temporal's strategy addresses the high cost and complexity senior developers face in maintaining consistent business logic across an expanding matrix of SDKs (including Python, Go, TypeScript, Java, and Ruby). The core problem, as Judge outlined, is that developers today expect to be met in their preferred language. Rewriting complex, client-side logic, such as durable execution state machines, for every language is costly, redundant, and error-prone. Rust was chosen over C for this foundational role due to its exceptional safety, speed, portability, and strong C/C++ Foreign Function Interface (FFI).

Temporal's architecture is layered to isolate complexity and maximize efficiency:
* Shared Core (Rust): Contains the complex, centralized, non-redundant business logic.
* Rust Bridge: Thin, language-specific layers that facilitate FFI, handling the communication primitives.
* SDK (Host Language): The minimal outer layer that exposes a high-quality, idiomatic API to the end user.

This approach results in dramatically fewer bugs by reducing code redundancy and enables a small team to scale coverage efficiently. The technical complexity is concentrated in the FFI bridge. Judge detailed the use of specialized crates to simplify binding the Rust core to various language runtimes: PyO3 for Python, Neon for Node.js/TypeScript, and Magnus for Ruby.
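The layering can be illustrated with a toy bridge in Python (function and field names here are invented for illustration; in Temporal's real SDKs the bridge crosses an FFI boundary into the Rust core): only simple buffers cross the boundary, and the outer SDK layer converts them into idiomatic host-language objects.

```python
import json
from dataclasses import dataclass

# --- "shared core" + bridge: in reality this is Rust behind FFI;
# --- only simple generic types (here, bytes) cross the boundary.
def core_bridge(request: bytes) -> bytes:
    req = json.loads(request)
    # centralized business logic lives here, shared by every SDK
    return json.dumps(
        {"workflow_id": req["workflow_id"], "status": "started"}
    ).encode()

# --- host-language SDK: thin, idiomatic wrapper over the bridge ---
@dataclass
class StartResult:
    workflow_id: str
    status: str

def start_workflow(workflow_id: str) -> StartResult:
    """Expose a clean Python API; serialization across the bridge
    is an internal detail the user never sees."""
    raw = core_bridge(json.dumps({"workflow_id": workflow_id}).encode())
    return StartResult(**json.loads(raw))

print(start_workflow("order-42"))
```

Because each SDK is only this thin outer shell, adding a new language means writing a new wrapper and bridge, not reimplementing the state machines in the core.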
Successfully managing the cross-boundary communication requires strict adherence to technical best practices:
* Type Management: While Interface Description Languages (IDLs), like Protobuf, are used for type code generation, a critical principle is to keep the bridge layer slim, relying on simple, generic types (e.g., primitives or buffers) for data transfer.
* Asynchronous Handling: This is particularly challenging, requiring specialized strategies (often involving callbacks or internal queueing) to connect Rust's async model to host-language constructs and safely navigate concurrency challenges such as the Global Interpreter Lock (GIL).
* Memory Management: Judge emphasized a non-negotiable rule: "Allocation and Deallocation must happen in the same environment." This prevents the memory leaks and corruption that commonly plague traditional C-based FFI architectures.

Judge concluded with a look at future improvements and a core philosophical takeaway for senior engineers. Temporal is actively addressing the "Distribution Dilemma": the enormous challenge of packaging and shipping platform-specific native binaries (lib*.so or *.dll files) across diverse ecosystems. The most promising path forward is compiling the Rust core to WebAssembly (Wasm), which would eliminate many cross-platform native extension pain points and greatly improve portability. On the performance front, Judge advised developers to benchmark assumptions, as optimized serialization can sometimes outperform direct object creation. The team is also investigating non-serializing IDLs such as FlatBuffers and Cap'n Proto for further speed gains. Ultimately, Judge stressed that the primary focus must remain on the user.
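The callback-plus-queue strategy for asynchronous handling can be sketched in Python (a simplified model of what such a bridge does, with a worker thread standing in for the Rust core's async runtime; `bridge_poll` is an invented name): the bridge delivers a completion on its own thread through a callback, and `call_soon_threadsafe` hands the result to the host event loop without blocking it.

```python
import asyncio
import threading

def bridge_poll(task_id, callback):
    """Stand-in for a core call that completes on a foreign thread
    and reports its result through a C-style callback."""
    def work():
        callback(f"completed:{task_id}")
    threading.Thread(target=work).start()

async def poll_task(task_id):
    """Adapt the callback to the host language's async model: resolve
    an asyncio future via call_soon_threadsafe, the thread-safe way
    to signal an event loop from a foreign thread."""
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    bridge_poll(
        task_id,
        lambda result: loop.call_soon_threadsafe(fut.set_result, result),
    )
    return await fut

result = asyncio.run(poll_task("wf-7"))
print(result)
```

In a real Python bridge the same handoff also has to respect the GIL: the callback runs outside the interpreter, so the result must be marshaled back onto the event loop exactly as above rather than touching Python state directly.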
Engineers must take the time to ensure the final SDK delivers a "nice, clean, magic experience," rather than a "crappy auto-generated" one, by designing generic extension points early on and enabling the host language to easily inject custom behavior into vital functions such as logging and metrics. He stated that by adopting this Rust-based shared core, software leaders can efficiently expand their language coverage and deliver superior quality to their users, thereby reducing complexity and technical debt across the organization.

Steef-Jan Wiggers is one of InfoQ's senior cloud editors and works as a Domain Architect at VGZ in the Netherlands. His current technical expertise focuses on implementing integration platforms, Azure DevOps, AI, and Azure platform solution architectures. Steef-Jan is a regular speaker at conferences and user groups and writes for InfoQ. Microsoft has recognized him as a Microsoft Azure MVP for the past fifteen years.