Full-Time

Enterprise Account Manager

Public Sector

Posted on 8/15/2025

Proofpoint

5,001-10,000 employees

Cybersecurity subscription services protecting digital channels

No salary listed

Reading, UK

In Person

Category
Sales & Account Management
Required Skills
Sales
Requirements
  • Experienced technology sales professional with a particular focus on value-based SaaS technologies; experience in cybersecurity is an advantage
  • Proven track record of over-achieving targets, net new logo achievements and capacity to leverage channel partnerships
  • Relentless attention to detail and never-give-up attitude with a high level of activity, i.e., customer and partner meetings emphasizing executive value selling (CISO, CIO, CMO, etc.)
  • Ability to establish business relationships at the executive level, and ability to become a trusted client advisor
  • Experience of closing complex opportunities in the range of $100k – $1m
  • Proficient user of formal sales playbook methodologies, e.g., MEDDIC, Challenger, TAS, or Command of the Message
  • Growth mindset, willingness to be coached, and diligence to follow a proven sales process
  • Entrepreneurial self-starter with a consistent focus on account planning, pipeline generation and opportunity progression. You’re strategic in approach, but always act with urgency.
Responsibilities
  • Establish strong business relationships at the executive level within Public Sector accounts above 2,500 users (cross-vertical focus)
  • Focus on both penetrating new accounts and expanding our presence with existing customers by introducing them to our full security, compliance, and information protection platform
  • Articulate and promote the company’s value proposition and services to become a trusted advisor within your customer base
  • Work with internal resources, including aligned Systems Engineers, to prepare account strategies and plans
  • Collaborate with Systems Engineers to organise and deliver compelling and flawless product demonstrations
  • Partner with the channel ecosystem to gain access into new accounts
  • Deliver operational excellence, including forecast accuracy, pipeline generation, and opportunity progression
  • Maintain up-to-date knowledge of Proofpoint’s competitive positioning in the marketplace
Desired Qualifications
  • Preferably, you'll have experience using Salesforce

Proofpoint is a cybersecurity company that protects organizations from advanced threats and compliance risks. It serves enterprises, government agencies, and small to mid-sized businesses with a subscription-based suite of solutions that safeguard email, social media, and other digital communication channels from phishing, malware, and ransomware. The products use machine learning and artificial intelligence to detect and mitigate threats in real time and are designed to be easy to integrate with existing IT systems. Revenue comes from recurring subscription tiers and professional services such as threat assessments and incident response. Compared with competitors, Proofpoint emphasizes real-time threat detection, broad coverage of communication channels, and a focus on ease of integration and user-friendly operation. The company's goal is to help organizations strengthen their security posture and reduce compliance and cyber risk across their digital communications.

Company Size

5,001-10,000

Company Stage

IPO

Headquarters

Sunnyvale, California

Founded

2002

Simplify Jobs

Simplify's Take

What believers are saying

  • Casepoint partnership integrates Proofpoint Archive for seamless eDiscovery, eliminating manual exports in 2026.
  • Acuvity acquisition enables Satori platform, addressing 89% YoY AI attack rise with behavioral guardrails.
  • Gartner 2025 Magic Quadrant names Proofpoint Leader in Digital Communications Governance and Archiving.

What critics are saying

  • Microsoft Defender for Office 365 bundles AI detection into M365, eroding Proofpoint's email market share by November 2026.
  • Zscaler captures SASE segment from Proofpoint's cloud security via superior zero-trust controls by November 2027.
  • FedRAMP High delay to 2027 blocks DoD contracts, costing Proofpoint 25% public sector revenue immediately.

What makes Proofpoint unique

  • Proofpoint pioneered cloud-based security-as-a-service since 2002, protecting email and data with machine learning.
  • Launched Prism Investigator in June 2026, first autonomous AI platform reducing investigations from weeks to minutes.
  • Introduced Agent Integrity Framework at RSAC 2026, securing AI agents with intent-based detection across interactions.


Benefits

Health, dental, & vision

Employer-paid life, disability & employee assistance programs

Unlimited PTO

401K match

Remote work option

Growth & Insights and Company News

Headcount

6 month growth

0%

1 year growth

0%

2 year growth

0%
IT Europa Media & Intelligence
May 6th, 2026
Proofpoint's Paris opening better supports customers and partners

Mexico Business News
Apr 9th, 2026
Cyber risk becomes systemic across ecosystems.

By Diego Valverde | Journalist & Industry Analyst - Thu, 04/09/2026 - 15:53

In Mexico, supply chain attacks and the alleged breach of the Ministry of the Navy underscore systemic exposure, while globally, nation-state actors exploit industrial systems like those from Rockwell Automation. The implication for leadership is that cybersecurity is turning into an enterprise-wide risk function, where human behavior, third-party dependencies, and AI-driven attack vectors redefine the role of the CISO. Ready? This is your Week in Cybersecurity!

Supply chain cyberattacks in Mexico reached a critical threshold as 43% of organizations reported incidents in the last 12 months, reveals Kaspersky. These figures place the country above the global average and highlight a significant rise in threats targeting trust-based corporate relationships.

A cybersecurity journalist reports that the Ministry of the Navy (SEMAR) experienced a data exfiltration from its Safe Smart Port (PIS) platform, affecting 640,000 port operators. A threat actor identified as "marssepe" from the group Sociedad Privada 157 leaked 39.7GB of sensitive information on a public forum.

Indra Group inaugurated new corporate offices in Parque Toreo in the State of Mexico as part of its strategy to strengthen its presence in the country and expand its role in digital transformation projects across Latin America. The new facilities complement the company's existing operations in Mexico City, Queretaro, Merida, and Monterrey, aligning with Indra Group's plans to achieve double-digit sales growth and create new jobs in Mexico over the next three years.

Iranian-affiliated actors are exploiting internet-facing Rockwell Automation programmable logic controllers, report the US Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the National Security Agency (NSA). These malicious activities target project files and human-machine interface displays, causing operational disruptions and financial losses across multiple US critical infrastructure sectors.

MBN Experts

As 93% of Mexican organizations race to deploy AI agents by 2027, Proofpoint is redefining security for the "agentic workspace." Following the acquisition of Acuvity, the firm is moving beyond traditional email protection to secure the interactions between humans and AI agents, where "prompt engineering" has become the new social engineering. With Mexican CISOs ranking human vulnerability as the top threat globally, Proofpoint's new Satori platform introduces behavioral guardrails and AI-driven automation to ensure that rapid innovation doesn't come at the cost of catastrophic data loss. Read the full interview with Luis Isselin, Country Manager, Proofpoint, on MBN!

With AI-driven attacks in Mexico skyrocketing 89% year-over-year, the window between vulnerability and exploit has shrunk from weeks to minutes. Borealix is countering this "new baseline" by moving beyond basic compliance to offer behavior-based detection and "secure by design" application development. By integrating auditor-developers and automated guardrails, the firm helps Mexican SMEs and regulated fintechs bridge the gap between rapid digital innovation and the escalating risk of autonomous, AI-led breaches. Read the full interview with Juan Carlos Calderón, CEO, Borealix, on MBN!

Epium Limited
Mar 30th, 2026
Proofpoint expands security for the agentic workspace.

Proofpoint introduced new email and data security capabilities designed for workplaces where humans and Artificial Intelligence agents interact across communication and data environments. The updates combine email protection models, add data access governance for human and non-human identities, and extend data security posture management into on-premises systems.

Proofpoint unveiled new capabilities across its collaboration security and data security portfolios aimed at securing the agentic workspace, where people and Artificial Intelligence agents operate across email, cloud, and data environments. The company said enterprise risk is changing as organisations deploy assistants and autonomous agents that draft communications, access sensitive data, and take action at machine speed. In this environment, static access controls and identity checks alone are no longer enough, increasing the need for behavioural insight across communication and data activity.

A central part of the update is a unified email security architecture that combines Secure Email Gateway and API-based protection. Proofpoint said the integrated model links perimeter protection for north-south traffic with defence for east-west internal email activity, allowing threat intelligence and behavioural signals to flow across pre-delivery and post-delivery controls. The company said this gives customers a single workbench to manage inbound, outbound, and internal email protection, while improving visibility into compromised accounts, automated agents, internal-to-internal compromise, and direct send vulnerabilities. Proofpoint said the approach reduces console switching, simplifies policy management, and eases investigation and response workflows. The company said the platform delivers 99.999% detection efficacy.

Proofpoint also introduced Artificial Intelligence data access governance capabilities that provide visibility into access to sensitive data across SaaS, cloud, and on-prem environments. The scope includes human users, service accounts, and Artificial Intelligence agents. Security teams can identify stale entitlements, orphaned accounts, and over-permissioned access, while automated remediation workflows are designed to reduce exposure without manual, ticket-driven processes. By correlating identity activity, data sensitivity, access patterns, data loss prevention signals, and other risk indicators within the Data Security Graph, the platform is intended to support continuous risk reduction based on behavioural context and inferred intent.

The company is also extending its Artificial Intelligence-native data security posture management capabilities to on-premises environments, adding intelligent data discovery and classification across hybrid and cloud systems as well as legacy infrastructure. Proofpoint said this broader coverage gives organisations more consistent visibility into sensitive data regardless of where it resides, helps prioritise risk more accurately, and reduces exposure caused by fragmented tooling across cloud and on-prem systems. These capabilities are expected to become available in Q2 2026, with timing subject to standard product rollout considerations and regional availability.

GetAIGovernance
Mar 29th, 2026
AI agents carry the same insider risk profile as human employees. Your governance program was not built for that.

At RSAC Conference 2026 - the annual gathering where the security industry's most consequential product and strategy conversations happen - Sumit Dhawan, CEO of Proofpoint, made a statement that cuts directly across the AI governance category. He said AI agents behave like humans and carry the same risk profile. They operate non-deterministically. They can be manipulated through prompt engineering. They require what he called "a purpose-built integrity framework" - an AI behavior safeguard layer - that must be coded into the technology itself rather than applied as a policy or a governance document afterward. This is not a vendor press release or a marketing claim. It is an observation delivered to a security practitioner audience on the industry's most scrutinized stage.

Traditional insider risk programs were built around one core detection mechanism: behavioral deviation. When a human employee's behavior diverges from their established pattern, the system escalates - access to unusual systems, data exfiltration outside normal hours, communication with unknown external parties. The program works because human behavior is mostly predictable, deviations are detectable, and the human is accountable to a code of conduct. Dhawan's point is that AI agents satisfy none of those preconditions. They have no code of conduct. Their behavior is non-deterministic by design. They can be manipulated into taking unintended actions through inputs that look legitimate. They operate at machine speed across multiple connected systems simultaneously. The insider risk model was built for human actors with predictable behavioral patterns. AI agents are internal actors that can cause the same category of damage but through a fundamentally different mechanism - one the model was never designed to detect.

The security industry is adapting its frameworks to cover AI agents because the threat model requires it. The governance industry has not yet made the equivalent adaptation. Most AI governance programs were built around the assumption that the systems being governed produce outputs that humans then interpret and act on. AI agents take action autonomously, which means a governance framework built around human interpretation of outputs does not reach the layer where agent behavior actually occurs. The governance question centers on what the agent did, why it did it, what systems it accessed, and what happened as a result. That is a behavioral governance problem.

What makes AI agent risk structurally different

The non-determinism problem is central. Traditional security controls were designed for Boolean, pattern-based logic: an action either matches a known pattern or it does not. AI agents do not operate this way. Their outputs are probabilistic. The same input can produce different outputs at different times depending on context, model state, and the chain of tools and systems the agent is interacting with. This means behavioral baseline approaches - the foundation of insider risk detection - are significantly harder to establish and significantly easier for an adversary to operate below. An agent that has been manipulated through prompt engineering may produce outputs that look normal in isolation while the cumulative pattern of its actions represents a significant deviation that only becomes visible after the damage has occurred.

The accountability gap is equally important. When a human insider causes harm, there is a clear accountability chain: the person made a decision, there is a record of their access, and there is a supervisor and a reporting structure. When an AI agent causes harm, the accountability chain is much less clear. Who authorized the agent to access those systems? What credential was it operating under? Who was the named human supervisor responsible for reviewing its behavior? What was the approval scope for the actions it took? In most current enterprise deployments, the answers to those questions either do not exist or require significant forensic reconstruction after the fact. Dhawan's point is that this gap must be closed at the technology layer - coded into the system as an integrity framework - rather than addressed through policy documents that do not connect to what the agent actually does.

What a purpose-built integrity framework actually means

Dhawan's specific language is important. He said AI agents require "a technology layer which is an AI behavior safeguard layer." That is a governance architecture description. What he is describing is a layer that sits between an AI agent and the systems it can access, observes the agent's behavior continuously, applies defined integrity constraints, and generates an audit trail from what the agent actually did rather than from what was approved before it was deployed. This is identical in function to what continuous production monitoring delivers in the AI governance context: a system that observes behavior as it happens rather than reviewing documentation after the fact. In the agentic AI context the stakes are higher because agents act autonomously and at speed, which means the gap between what was approved and what actually happened can grow very large in a very short window.

The CISO bifurcation Dhawan named is also analytically useful. He said CISOs are splitting into two camps on AI safeguard implementation: proactive and wait-and-see. The proactive CISOs are building the behavioral governance layer now because they understand that the agent deployment surface is expanding faster than any reactive governance program can track. The wait-and-see CISOs are treating AI agent governance the same way they treated early cloud security - as something that can be addressed after the deployment has already scaled. The history of cloud security suggests that position creates a significant remediation problem when the regulatory or incident pressure arrives.

What enterprise teams should be doing right now

Before deploying any AI agent into a production environment, three things need to exist:
  • An authorization register - a written document specifying exactly what actions the agent is permitted to take, under what credentials, and who the named human supervisor is.
  • A behavioral baseline - an established record of what the agent's normal output and action patterns look like, so deviations can be detected rather than guessed at.
  • An audit trail mechanism - a technical system that records what the agent actually did, which systems it accessed, and what data it touched, automatically and continuously rather than reconstructed from logs after an incident.

If none of those three things exist before an agent is deployed, the governance program has a gap regardless of how complete the pre-deployment approval process was. The moment this gap becomes a problem is when an agent takes an unauthorized action and no one can reconstruct the exact sequence that led to it.

Our take

Dhawan's framing at RSAC is significant not because Proofpoint is building a governance platform - they are building a security product - but because the Proofpoint CEO is describing a governance requirement in a security context at the industry's most visible annual event. When security leaders at that level start defining AI agent behavioral integrity as a governance problem that requires a dedicated technology layer, it means the security market is arriving at the same conclusion that the governance market has been slow to reach. The two conversations - AI governance and AI security - are converging on the same operational problem: enterprises need a layer that observes what AI agents actually do in production and enforces behavioral constraints as the agent runs, not after it has already acted. This aligns directly with the NIST AI RMF GOVERN function, which explicitly requires accountability across system components, including agentic behaviors, throughout the full system lifecycle.

What remains unresolved is that the insider risk model for AI agents does not yet have the equivalent of 20 years of enterprise insider risk program development behind it. The behavioral baseline problem for non-deterministic systems is genuinely hard. The credential and identity framework for AI agents is still being built across the identity governance market. If your organization is deploying AI agents without an authorization register, a behavioral baseline, and a continuous audit trail mechanism, the GAIG marketplace is where to evaluate the platforms building that layer. Enterprise teams can compare solutions in the AI Security and AI Monitoring categories that are specifically designed for production agent behavior rather than pre-deployment documentation.
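The three pre-deployment artifacts the article lists - an authorization register, a behavioral baseline, and an audit trail - can be sketched as plain data structures plus one gating function. This is an illustrative sketch only: every class, field, and identifier below (AuthorizationRegister, gate_action, "billing-agent-01", and so on) is a hypothetical assumption for the example, not any vendor's actual product or API.

```python
# Illustrative sketch of the three pre-deployment artifacts described in the
# article: an authorization register, a behavioral baseline, and an audit
# trail. All names and structures here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuthorizationRegister:
    """What the agent may do, under which credential, and who reviews it."""
    agent_id: str
    permitted_actions: frozenset
    credential: str
    human_supervisor: str


@dataclass(frozen=True)
class BehavioralBaseline:
    """Actions observed during supervised operation; anything else deviates."""
    normal_actions: frozenset

    def is_deviation(self, action: str) -> bool:
        return action not in self.normal_actions


@dataclass
class AuditTrail:
    """Continuous record of what the agent actually did."""
    entries: list = field(default_factory=list)

    def record(self, agent_id: str, action: str, system: str,
               authorized: bool, deviation: bool) -> None:
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,
            "system": system,
            "authorized": authorized,
            "deviation": deviation,
        })


def gate_action(register: AuthorizationRegister, baseline: BehavioralBaseline,
                trail: AuditTrail, action: str, system: str) -> bool:
    """Check an action against the register and baseline; log it either way."""
    allowed = action in register.permitted_actions
    trail.record(register.agent_id, action, system,
                 allowed, baseline.is_deviation(action))
    return allowed


register = AuthorizationRegister(
    agent_id="billing-agent-01",
    permitted_actions=frozenset({"read_invoice", "summarise_invoice"}),
    credential="svc-billing-readonly",
    human_supervisor="j.doe@example.com",
)
baseline = BehavioralBaseline(normal_actions=frozenset({"read_invoice"}))
trail = AuditTrail()

assert gate_action(register, baseline, trail, "read_invoice", "erp") is True
assert gate_action(register, baseline, trail, "delete_invoice", "erp") is False
assert len(trail.entries) == 2  # both attempts recorded, authorized or not
```

The key property the sketch demonstrates is that the audit trail records every attempt, authorized or not, so an unauthorized action can be reconstructed afterward - the gap the article says most current deployments leave open.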

Neon River
Mar 27th, 2026
Technology intelligence: March 2026.

Executive-level intelligence on the trends, people, and deals shaping the technology sector in March 2026.

News and exec appointments

  • Google completed its $32bn all-cash acquisition of Israeli cloud security company Wiz, the largest deal in Google's history.
  • AI security software company Darktrace has appointed Ed Jennings as its new CEO; he joins from Quickbase, where he served as CEO, and is the former COO of Mimecast.
  • OpenAI has acquired Promptfoo, a startup focused on securing LLMs and AI agents, with the technology set to be integrated into OpenAI Frontier.
  • Meta acquired Moltbook, the AI-agent social network, with founders Matt Schlicht and Ben Parr joining Meta Superintelligence Labs. Deal terms were not disclosed.
  • Matt Brittin, the former EMEA President for Business and Operations for Google, has been appointed Director-General of the BBC.
  • Apple acquired MotionVFX, a developer of plug-ins and templates for Final Cut Pro. Financial terms were not disclosed.
  • Lovable said it crossed $400m ARR in February after adding $100m in revenue in a single month, with just 146 employees.
  • Cursor, an AI-native code editor, reportedly surpassed $2bn in annualized revenue, with enterprise customers now accounting for about 60% of revenue.
  • Atlassian cut 10% of its workforce, around 1,600 roles, as it redirects more investment toward AI and enterprise sales.
  • New Relic, the observability software company, appointed Michael Frendo as CTO, with the former Proofpoint engineering executive tasked with helping drive the company's AI-led observability strategy.
  • SAP appointed Thomas Saueressig as Chief Customer Officer, expanding his remit to lead the new Customer Value Group across customer success, services, and delivery.
  • Wise appointed former Intercontinental Exchange CFO Scott Hill to its board as an independent non-executive director.
  • Trustpilot appointed Marcus Roy, currently CFO of The Economist Group, as its new CFO, succeeding Hanno Damm later this year.
  • The Trade Desk, a programmatic advertising technology company, appointed Reddit CFO Drew Vollero to its board of directors.
  • FactSet, a financial data and analytics company, appointed Kate Stepp as Chief AI Officer and former Citi and JPMorgan executive Bob Stolte as Chief Technology Officer, as it steps up its enterprise AI push.
  • Contentsquare, a digital experience analytics company, added three senior leaders: Costa Harbilas as President, Go-to-Market; Patrice Attia as Chief Revenue Officer; and Rachel Obstler as Chief Product Officer. Harbilas joins from Intapp, where he was CRO, while Attia and Obstler were promoted internally from SVP EMEA/APJ and SVP Product, respectively.
  • Spendesk appointed Alan Wright as Chief Technical Officer, as the company said it had reached profitability. Wright previously served as VP of Engineering at Signal AI.
  • Bluesky CEO Jay Graber stepped down and moved into a Chief Innovation Officer role, with Toni Schneider named interim CEO. Bluesky is a decentralised social network built on the open-source AT Protocol.

Fundraising

  • Nscale, the British AI infrastructure company, hit a $14.6bn valuation after a $2bn Series C.
  • Harvey, a legal AI company, confirmed a $200m raise at an $11bn valuation, with GIC and Sequoia co-leading the round.
  • Quince, a direct-to-consumer retail brand, raised a $500m Series E at a $10.1bn valuation, led by Iconiq.
  • Replit, an AI software development platform, raised a $400m Series D at a $9bn valuation, just six months after reaching $3bn.
  • Legora, a Stockholm-based legal AI company, raised a $550m Series D at a $5.55bn valuation as the AI legal tech boom continues.
  • French health insurance startup Alan reached a €5bn valuation.
  • AMI Labs, co-founded by Yann LeCun, raised $1.03bn at a $3.5bn pre-money valuation to build "world models."
  • Cloaked, a privacy and identity protection company, secured $375m in Series B and growth financing as it expands from consumer privacy tools into enterprise.
  • Armadin, Kevin Mandia's new AI-native cybersecurity startup, raised $189.9m in combined seed and Series A funding.
  • Eridu, an AI networking infrastructure startup, emerged from stealth with a $200m Series A.
  • Israeli AI agent startup Wonderful raised a $150m Series B at a $2bn valuation.
  • Granola, a London-based AI meeting notes platform, raised $125m at a $1.5bn valuation as it expands from meeting notes into broader enterprise AI workflows.
  • Rox AI, a sales automation startup, reportedly hit a $1.2bn valuation in a new funding round led by General Catalyst.
  • Mirage, the company behind the AI video editor Captions, raised $75m in growth financing and is positioning itself more clearly as an AI lab.

Hiring trends

Whilst embracing AI is now an obvious strategic priority, the harder challenge is cultural. For some employees, AI is a force multiplier. For others, it threatens to commoditise their craft. In time, the best technology companies may operate with smaller engineering teams, heavily augmented by AI agents. But today's reality is more nuanced. Most AI coding tools remain immature, and teams are still learning how to use them effectively. Faster prototyping is often offset by slower debugging - particularly when dealing with AI-generated code.

No Latency

No Latency provides long-form analysis of the systems and strategies underpinning the technology ecosystem.

About Neon River

Neon River is a boutique executive search firm that helps technology and digital companies hire exceptional leadership talent. We work across VC-backed scaleups, PE-owned businesses, and global tech companies, bringing deep sector expertise, high-touch service, and a track record of delivering outstanding candidates quickly and effectively.

INACTIVE