Full-Time

Staff Product Manager

Posted on 7/8/2025

Opaque Systems

11-50 employees

Data clean rooms with confidential computing

Compensation Overview

$170k - $210k/yr

Remote in USA

San Francisco, CA, USA (Hybrid)

Category
Product
Requirements
  • 5–8 years of relevant product management experience, with a strong preference for work in enterprise software, data platforms, AI / ML, or data privacy.
  • Proven ability to deliver product execution independently—you move fast, untangle ambiguity, and drive results without needing coaching.
  • Strong technical fluency—comfortable digging into architecture, APIs, and data workflows, even if not writing code.
  • Ability to quickly onboard into new technical contexts, ask smart questions, and identify what’s truly important.
  • Demonstrated experience with user discovery, shipping iteratively, and owning KPIs from definition to tracking.
  • Strong communicator—able to clearly explain tradeoffs, customer impact, and product plans to both internal and external audiences.
  • Prior startup or high-growth company experience serving enterprise customers is strongly preferred. Candidates with only consumer or SMB backgrounds are unlikely to be a fit.
Responsibilities
  • Drive near-term execution of our product roadmap, while informing future strategy through customer and market insights.
  • Work closely with Engineering to define and prioritize features, triage issues, and maintain delivery momentum.
  • Own cross-functional planning with engineering, design, and the broader product team to align on quarterly goals and milestones.
  • Partner with Product Design on discovery, user research, and iterative prototyping to ensure we are building what’s desirable and viable.
  • Communicate roadmap and product decisions to internal stakeholders and customers in both written and live formats.
  • Ruthlessly prioritize across input sources (customer feedback, stakeholder input, technical constraints) and make confident product calls.
  • Champion execution by holding teams accountable and keeping workstreams aligned across product, design, and engineering.

Opaque Systems provides a platform for secure data analytics using Data Clean Rooms powered by Confidential Computing. It enables multiple data teams to share and analyze encrypted data across organizations while preserving each party’s access only to their own data and insights. Data remains encrypted at rest, in transit, and during processing, with analytics and AI computations performed inside confidential environments so no data is exposed during computation. This differentiates Opaque from competitors by offering end-to-end protection, easy migration to Confidential Computing clouds, and built-in support for regulatory compliance. The company's goal is to let organizations securely analyze sensitive information and collaborate on data projects without compromising privacy or security.

Company Size

11-50

Company Stage

Series B

Total Funding

$55.5M

Headquarters

San Francisco, California

Founded

2020

Simplify Jobs

Simplify's Take

What believers are saying

  • Sovereign cloud expansion unlocks regulated enterprises unable to use public cloud AI.
  • 4-5x faster pilot-to-production deployment reduces enterprise AI time-to-value significantly.
  • ServiceNow, Anthropic, Accenture partnerships validate platform for production-grade sensitive data.

What critics are saying

  • FHE and MPC computational overhead degrades performance 100-1000x, limiting real-world adoption.
  • Hyperscalers embedding native confidential computing directly threatens OPAQUE's standalone platform viability.
  • Hardware TEE side-channel vulnerabilities expose customer data despite cryptographic guarantees.

What makes Opaque Systems unique

  • Only platform delivering cryptographic proof across full AI lifecycle: training, inference, agents.
  • Acquired UAE-developed FHE and MPC technologies enabling confidential model training at scale.
  • Hardware-attested runtime governance on AMD SEV and Intel SGX with post-quantum protections.


Benefits

Competitive compensation

Health insurance

Flexible work options

Unlimited PTO

401k

Home office reimbursement

Commute reimbursement

Meals on us

Growth & Insights and Company News

Headcount growth
  • 6 months: -6%
  • 1 year: -4%
  • 2 years: 0%
Startup Scene
May 5th, 2026
OPAQUE acquires tech from Abu Dhabi's Technology Innovation Institute

The deal integrates advanced encryption technologies into OPAQUE's San Francisco-based platform, extending secure AI deployment across the full lifecycle.

OPAQUE, a San Francisco-based company specialising in confidential artificial intelligence systems, has acquired advanced cryptographic AI technologies from the Technology Innovation Institute, the applied research arm of the Advanced Technology Research Council. The acquisition adds capabilities including confidential model training using multi-party computation and fully homomorphic encryption, as well as post-quantum cryptographic protections. These technologies are designed to allow organisations to use sensitive data in AI systems without exposing it, addressing a key challenge in enterprise adoption.

OPAQUE said the integration extends its platform across the full AI lifecycle, covering training, fine-tuning, inference, and agent-based execution, while maintaining strict data confidentiality. The technologies have already been validated in real-world applications and are aimed at sectors such as healthcare, financial services, defence, and software development, where data privacy and regulatory compliance are critical.

The platform is designed to generate verifiable, hardware-backed evidence of data protection and policy enforcement, aligning with international standards including SOC 2, ISO 27001, GDPR Article 32, and the EU AI Act. It also supports sovereign cloud deployments, allowing organisations to maintain control over data residency and jurisdiction. According to the companies, the system is already in use by organisations including ServiceNow, enabling AI deployment without exposing sensitive customer information.
The deal marks the first time cryptographic AI technologies developed in the UAE have been acquired and deployed at global scale by a US-based company, reflecting growing international demand for secure AI infrastructure. The acquisition follows OPAQUE's $24 million Series B funding round, which valued the company at $300 million, and builds on partnerships with companies including Anthropic, Accenture, and Encore Capital Group.

Opaque Systems
Feb 25th, 2026
Unlocking the AI Value of Your Most Sensitive Data

How OPAQUE and AMD make trust verifiable with Confidential AI.

The enterprise AI challenge

As AI adoption accelerates and AI systems become more powerful, enterprises face mounting pressure to harness sensitive, regulated, and proprietary data for innovation and competitive advantage, while navigating strict regulations, preventing breaches, and maintaining trust. Yet most enterprises hit a massive roadblock: the privacy-utility tradeoff. Their most valuable data - customer records, financial information, healthcare data - remains largely off limits to AI workflows due to security and compliance concerns. This key challenge continues to hold enterprises back from operationalizing AI: how to use sensitive data without exposing it.

The privacy-utility tradeoff holding back enterprise AI

You have the proprietary data needed to build world-class AI agents, but moving that data into the cloud or shared environments often means "de-risking" it through anonymization or masking - techniques that frequently degrade data quality and reduce model accuracy. Traditional encryption approaches protect data at rest and in transit, but leave it vulnerable to exposure during AI processing since data must often be decrypted to be used. This security gap has become one of the biggest sources of risk in modern AI systems, especially as organizations operate across cloud environments, partners, and jurisdictions, and it has kept enterprises from realizing AI's full potential. Until now.

The solution: AMD SEV and OPAQUE Confidential AI

OPAQUE and AMD address this challenge head-on in a new joint white paper, From Risk to Resilience: Confidential Computing with AMD and OPAQUE. It explores how confidential computing and Confidential AI close this gap, protecting data in use by running workloads inside hardware-based Trusted Execution Environments (TEEs).
These environments ensure that data and code remain isolated - even from cloud operators, system administrators, or compromised infrastructure. For security, risk, and data leaders, this is the missing third pillar of data protection that makes AI with sensitive data viable at scale - and why it's quickly becoming essential infrastructure for enterprise AI. The collaboration demonstrates how combining AMD's hardware-backed security with OPAQUE's verifiable runtime governance is enabling enterprises to finally unlock their most sensitive data for AI innovation.

The white paper highlights how AMD Secure Encrypted Virtualization (SEV) technology, featured in the AMD EPYC(TM) series of data center CPUs, creates a strong foundation for confidential computing and powers virtual machines (VMs) that protect data while it is being processed. At the silicon layer, AMD SEV delivers hardware-enforced memory encryption, cryptographic attestation of the execution environment, integrity protection against tampering, and more, with broad ecosystem support and no application code changes required. This allows enterprises to run sensitive workloads in the cloud with significantly reduced trust assumptions. But hardware alone isn't enough to operationalize AI securely.

OPAQUE Confidential AI: verifiable governance on hardware-backed trust

OPAQUE builds on the AMD SEV foundation, delivering software-level verifiable runtime governance, remote attestation, policy enforcement, and cryptographic audit logs on top of hardware-backed trust.
With the OPAQUE Confidential AI Platform, enterprises get an end-to-end solution for AI that ensures every AI workflow, agent, and model processes data securely, with cryptographic proof that data and model weights remain private and that policies are enforced before, during, and after runtime:

  • BEFORE: Attest - Remote attestation cryptographically verifies that AI workloads run on genuine AMD SEV confidential VMs with expected code before any sensitive data is processed.
  • DURING: Enforce - Verifiable runtime policy enforcement ensures data remains encrypted throughout its lifecycle, including during AI execution, with cryptographic proof that access controls and data-use policies are enforced at runtime. If any cryptographic measurement fails or a violation is identified, the system automatically blocks the workload.
  • AFTER: Audit - Exportable, tamper-proof audit logs and attestation reports provide cryptographic proof of how data was processed and which policies were enforced, giving auditors and regulators concrete evidence of compliance and data protection.

The result? Enterprises can finally unlock proprietary and regulated data to power more accurate AI agents and workflows, with every computation verifiable, every access governed by policy, and full auditability, transforming Confidential AI into a trust layer for enterprise AI systems.

A real-world case study: securing consumer financial data

A case study in the paper brings this to life, highlighting a deployment with a large U.S. credit management company that previously struggled with manual, insecure processing of sensitive consumer debt files shared across hundreds of debt settlement partners. Their challenge was clear: manual workflows were slow and inefficient, sensitive data was decrypted for analysis, and enforcing data policies across hundreds of partners was nearly impossible.
By switching to OPAQUE on AMD confidential VMs, the organization replaced manual, high-risk workflows with automated, encrypted pipelines - while maintaining full auditability and policy control. They completely transformed how they handle sensitive consumer PII. Remote attestation ensures only trusted AMD SEV confidential VMs can accept data, automated workflows execute only approved queries on encrypted files, and tamper-proof audit reports provide regulators with cryptographic proof of how data was used and which policies were enforced. The result? Dramatically reduced risk, faster operations, and scalability for growing data volumes:

  • Automated, encrypted pipelines replaced manual, high-risk workflows.
  • Workflows scaled to support hundreds of partners without sacrificing security.
  • Cryptographic proof of policy enforcement and data deletion.
  • Tamper-proof audit trails provide regulators with concrete proof of compliance.
  • Enhanced security posture kept PII encrypted end-to-end, even during use.

As AMD Corporate VP Madhusudhan Rangarajan notes: "Confidential AI is about turning sensitive data into advantage. AMD is thrilled to power partners like OPAQUE who help customers do that securely and at scale."

Confidential AI: from compliance checkbox to innovation foundation

As AI systems become more autonomous and more deeply embedded in business operations, trust becomes the limiting factor. Enterprises need to prove not just what their AI does, but how it does it, where it runs, what data it can access, which policies are enforced, and how outcomes are produced. OPAQUE and AMD believe Confidential AI is the foundation that makes this possible, transforming AI security from a compliance checkbox into a driver of innovation. As OPAQUE Co-Founder and CTO Rishabh Poddar explains: "Confidential AI allows enterprises to unlock the full value of their most sensitive data without ever exposing it.
By combining AMD hardware-backed trust with OPAQUE's software-level enforcement, organizations can now run AI workloads securely on encrypted data at cloud scale. Every computation is cryptographically verifiable, every access governed by policy, and every outcome auditable. This turns data privacy from a compliance checkbox into a foundation for innovation."

If you're building AI on sensitive, regulated, or proprietary data, this white paper offers a practical blueprint for moving from risk to resilience. Don't let data privacy be the bottleneck that stalls your AI initiatives. Read it to discover how combining AMD's proven SEV technology with OPAQUE's verifiable Confidential AI Platform unlocks your sensitive data for AI innovation, enforces policy at runtime, and scales securely in the cloud. Ready to unlock your sensitive data for AI innovation? Contact the OPAQUE team to learn how its Confidential AI Platform unblocks AI value by making trust verifiable.
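The attest, enforce, and audit stages described above can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the function names, the policy shape, and the hash-based "measurement" are invented for illustration and are not OPAQUE's or AMD's actual APIs; real SEV attestation uses hardware-signed reports rather than a bare SHA-256 comparison.

```python
import hashlib
import hmac
import json

# Hypothetical expected measurement of the approved workload (in real SEV
# attestation this comes from a hardware-signed attestation report).
TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-workload-v1").hexdigest()
AUDIT_KEY = b"audit-log-signing-key"  # in practice, a protected signing key

def attest(workload_code: bytes) -> bool:
    """BEFORE: check the workload's measurement against the expected value."""
    return hashlib.sha256(workload_code).hexdigest() == TRUSTED_MEASUREMENT

def enforce(policy: dict, request: dict) -> bool:
    """DURING: allow only operations the policy explicitly permits."""
    return request["operation"] in policy["allowed_operations"]

def audit(log: list, event: dict) -> None:
    """AFTER: append a tamper-evident, HMAC-tagged record."""
    record = json.dumps(event, sort_keys=True)
    tag = hmac.new(AUDIT_KEY, record.encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "tag": tag})

def run_workflow(workload_code: bytes, policy: dict, request: dict, log: list) -> str:
    """Run one request through the attest -> enforce -> audit lifecycle."""
    if not attest(workload_code):
        audit(log, {"request": request, "result": "blocked", "reason": "attestation failed"})
        return "blocked"
    if not enforce(policy, request):
        audit(log, {"request": request, "result": "blocked", "reason": "policy violation"})
        return "blocked"
    audit(log, {"request": request, "result": "allowed"})
    return "allowed"

log = []
policy = {"allowed_operations": ["aggregate_query"]}
print(run_workflow(b"approved-workload-v1", policy, {"operation": "aggregate_query"}, log))
print(run_workflow(b"approved-workload-v1", policy, {"operation": "export_raw_rows"}, log))
```

The design point the sketch mirrors is that every outcome, allowed or blocked, lands in the signed audit log, so an auditor can later verify each record's HMAC tag rather than trusting the operator's word.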

Unite.AI
Feb 12th, 2026
OPAQUE Secures $24M in Series B at a $300 Million Valuation to Push Confidential AI Forward

Published February 12, 2026

Enterprise AI adoption continues to accelerate, but trust remains one of its biggest constraints. This week, OPAQUE announced a $24 million Series B funding round, valuing the company at approximately $300 million post-money and bringing total funding to $55.5 million. The round was led by Walden Catalyst, with participation from existing backers including Intel Capital, Race Capital, Storm Ventures, and Thomvest, alongside new strategic investor Advanced Technology Research Council (ATRC). The raise underscores a growing consensus across the enterprise landscape: AI cannot scale on sensitive data without stronger, verifiable guarantees around privacy, governance, and security.

From experimental AI to enterprise mandate

Over the past year, confidential AI has moved from a largely academic concept to a practical requirement for organizations deploying generative models and AI agents in production. As AI systems increasingly touch regulated data, proprietary IP, and mission-critical workflows, traditional approaches to security - focused on data at rest or in transit - have proven insufficient. OPAQUE's work is centered on protecting data and models while they are being used, not just before or after. That distinction matters. Many enterprise AI initiatives stall after early pilots because CISOs, legal teams, and compliance leaders cannot verify what happens to sensitive data during AI execution. The result is hesitation, delays, and in many cases, abandoned deployments. Confidential AI aims to close this gap by offering cryptographic guarantees that data remains private, policies are enforced, and models are not exposed - even during runtime.

Addressing the enterprise "trust gap"

Enterprises today are eager to deploy AI agents on proprietary data to gain productivity advantages and operational insights.
Yet those same data assets are often the most sensitive an organization owns. Without verifiable assurances, AI quickly shifts from opportunity to risk. OPAQUE positions its platform as a trust layer for enterprise AI, designed to provide provable privacy, policy enforcement, and model integrity before, during, and after AI execution. Rather than relying on assumptions or contractual assurances, the platform focuses on evidence - making it possible to demonstrate compliance and governance in real time. This approach reflects a broader shift in enterprise thinking. AI systems are no longer evaluated only on performance or accuracy. Increasingly, organizations are asking whether they can prove how AI behaves, what data it accessed, and whether it followed approved rules.

What the new funding supports

The Series B capital will be used to accelerate development and deployment of OPAQUE's Confidential AI platform, with a focus on helping enterprises move from experimentation to production more quickly and safely. In parallel, the company is expanding into areas such as post-quantum security, confidential AI training, and sovereign cloud environments. These initiatives target organizations operating under strict regulatory, national security, or data residency constraints, where visibility and control over AI workloads are non-negotiable. OPAQUE has also recently launched OPAQUE Studio, a development environment aimed at simplifying how teams build and deploy confidential AI agents. The goal is to make runtime-verifiable privacy and compliance a default part of the AI development lifecycle rather than an afterthought.

Broader implications for enterprise AI

The rise of confidential AI points to a deeper evolution in how organizations will deploy intelligent systems. As AI becomes embedded in decision-making, automation, and customer interactions, governance must shift from policy documents to technical enforcement.
Technologies that can demonstrate, in real time, that data was protected and rules were followed may become foundational to enterprise AI stacks. This is especially true in regulated industries like financial services, healthcare, and insurance, where compliance requirements are tightening rather than loosening. Confidential AI could also enable new forms of collaboration. Organizations may be able to analyze shared or pooled datasets without exposing raw data, unlocking insights that were previously out of reach due to privacy concerns. In this sense, trust-preserving infrastructure may not just reduce risk - it could expand what is possible with AI.

By Antoine, founding partner of Unite.AI.

FinSMEs
Feb 12th, 2026
Opaque Raises $24M in Series B at $300M Valuation

February 12, 2026

Opaque, a San Francisco, CA-based confidential AI company for enterprise AI, raised $24M in Series B funding at a $300M valuation. The round was led by Walden Catalyst, with participation from Intel Capital, Race Capital, Storm Ventures, Thomvest, and new investor and strategic partner Advanced Technology Research Council (ATRC). The raise brought the company's total funding to $55.5M. The company intends to use the funds to expand operations and its development efforts.

Led by CEO Aaron Fulkerson, Opaque is a Confidential AI company which solves challenges blocking AI adoption, such as security concerns about data leaks or compliance violations. The company provides verifiable privacy and governance for AI so organizations can safely run models, agents, and workflows on their most sensitive data. Its platform delivers verifiable runtime governance backed by cryptographic proof that data, models, and agent actions remain private, governed, and compliant with approved policies throughout every AI workflow.

Born from UC Berkeley's RISELab, Opaque is expanding into post-quantum security, confidential AI training, and sovereign cloud environments, enabling enterprises to scale AI across their sensitive workloads. This funding follows the launch of OPAQUE Studio, a development environment that lets enterprises build and deploy Confidential AI agents with runtime-verifiable privacy, policy compliance, and auditability. Customers and partners include ServiceNow, Anthropic, Encore Capital, Accenture, and leaders across high tech, financial services, insurance, and healthcare.

PR Newswire
Feb 12th, 2026
OPAQUE raises $24M Series B at $300M valuation to advance confidential AI for enterprise

OPAQUE, a confidential AI platform company, has raised $24 million in a Series B round led by Walden Catalyst, bringing its post-money valuation to approximately $300 million. Intel Capital, Race Capital, Storm Ventures and Thomvest returned as investors, whilst Advanced Technology Research Council joined as a new backer. The round brings OPAQUE's total funding to $55.5 million. Founded from UC Berkeley's RISELab, OPAQUE provides verifiable privacy and governance for enterprise AI systems, enabling organisations to deploy AI on sensitive data whilst maintaining compliance. The platform uses cryptographic proof to verify that data, models and policies remain protected during runtime. The company recently launched OPAQUE Studio, a development environment for building confidential AI agents. Customers include ServiceNow, Anthropic and Encore Capital across technology, financial services and healthcare sectors.

INACTIVE