Full-Time
Posted on 2/10/2025
AI-powered data leak prevention for SaaS
No salary listed
San Francisco, CA, USA
In Person
Nightfall.ai provides data leak prevention for SaaS and cloud environments using AI-powered API integrations that developers embed into their applications to scan for sensitive data. The system classifies and protects data with configurable detection rules, covering platforms like Slack, Google Drive, GitHub, Confluence, Jira, Salesforce, Asana, and Zendesk. It distinguishes itself by being developer-facing and embeddable within existing workflows rather than offering a standalone security product. The goal is to reduce the risk of data breaches and help organizations achieve data protection and compliance in cloud and SaaS environments.
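The "configurable detection rules" mentioned above can be pictured with a minimal sketch. This is not Nightfall's actual API or detection logic (a real engine uses ML models, not just regexes); the rule names and patterns below are hypothetical stand-ins for the kind of rules such a scanner applies to text pulled from a SaaS platform:

```python
import re

# Hypothetical detection rules: rule name -> compiled pattern.
DETECTION_RULES = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text: str, rules=DETECTION_RULES) -> list[dict]:
    """Return one finding per rule match, with its character span."""
    findings = []
    for name, pattern in rules.items():
        for m in pattern.finditer(text):
            findings.append({"rule": name, "match": m.group(), "span": m.span()})
    return findings

findings = scan("Contact ops@example.com, key AKIA1234567890ABCDEF")
```

Because the rule set is just data, a team can tighten or relax detection per platform (say, stricter rules for GitHub than for Zendesk) without changing the scanning code.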
Company Size
51-200
Company Stage
Series B
Total Funding
$60.3M
Headquarters
San Francisco, California
Founded
2018
Hybrid Work Options
Nightfall has launched an AI Browser Security solution to prevent real-time data exfiltration through AI tools and modern web workflows that traditional data loss prevention systems cannot monitor. The platform addresses security gaps as employees increasingly use ChatGPT, Claude, and other AI tools by pasting code, uploading spreadsheets and sharing screenshots. The solution operates directly within browsers including Chrome, Edge and AI-native browsers, intercepting file uploads, clipboard actions and screenshot sharing before transmission. It combines browser-native interception with endpoint monitoring and SaaS API enforcement, using machine learning and computer vision to detect sensitive information including credentials, source code and proprietary data. The San Francisco-based company is backed by Bain Capital Ventures, Venrock and WestBridge Capital.
Nightfall expands data protection with AI Browser Security for browsers, endpoints and SaaS. Cloud data protection startup Nightfall today announced the launch of AI Browser Security, a solution designed to stop real-time data theft through artificial intelligence tools, AI-powered browsers and modern web workflows that legacy data loss prevention (DLP) solutions cannot see or control.

The launch addresses a problem that has grown as employees increasingly rely on ChatGPT, Claude, Gemini, Copilot and emerging AI-native browsers to analyze documents, debug code and summarize business data. Sensitive information is routinely exposed through browser-based uploads, clipboard pastes, screenshots and autonomous agent interactions. Nightfall argues that traditional DLP tools, which were built for email attachments, USB drives and static pattern matching, lack visibility inside browsers and encrypted sessions. As a result, they leave organizations blind to their fastest-growing data loss vector.

The new AI Browser Security solution closes the gap with an AI-native security architecture. It operates directly in the browser and at the endpoint and software-as-a-service layers where exfiltration, or theft, occurs, delivering real-time prevention before sensitive data ever leaves the organization. According to Nightfall, data exfiltration today can include proprietary source code pasted directly into AI chat interfaces, financial and customer data dragged into AI tools over encrypted HTTPS, screenshots and images that bypass file-based controls entirely, and data lineage lost as content moves between SaaS apps, endpoints and browsers. Because traditional DLP relies on regular-expression rules, network inspection and after-the-fact alerts, Nightfall says, these workflows often go undetected until sensitive data has already left the organization.
"Employees aren't bypassing security out of malice; they're pasting code, uploading spreadsheets and sharing screenshots to get work done," said co-founder and Chief Executive Rohan Sathe. "Legacy data loss prevention was never designed to see or understand those actions. Nightfall's AI-native browser security gives teams visibility and control at the exact moment data is shared." Nightfall's AI Browser Security solution delivers coverage across every major data exfiltration path, starting with browser-native interception. The solution works directly inside modern browsers and AI browsers to provide real-time visibility into file uploads, clipboard paste actions, form submissions and screenshot-based sharing to any website or AI application. The content is analyzed and blocked before transmission, without proxies, SSL inspection or workflow disruption. For endpoint protection, the solution offers protection beyond the browser by monitoring cloud sync tools, desktop AI applications, Git and command-line operations, USB transfers, printing and clipboard activity across applications. For SaaS, the solution works with leading cloud platforms to deliver continuous scanning of data at rest and in motion, with full visibility into where sensitive data originates, how it's transformed and where it's headed. Core to the offering is Nightfall's AI-native detection engine, which applies machine learning and large language models to identify sensitive data with high precision and without manual tuning. The engine can detect credentials, personally identifiable information, payment card data and protected health information while also understanding business context. That way it can classify content such as source code, customer lists, financial projections, board materials and proprietary intellectual property. 
The detection engine includes computer vision and optical character recognition to identify sensitive information embedded in screenshots, scanned documents and images before they are shared. It also offers unified data lineage capabilities to trace content from its source to its attempted destination, giving security teams forensic-grade visibility and enabling real-time enforcement across browsers, endpoints and SaaS applications. Nightfall applies a single, unified policy framework across all layers of enforcement, allowing security teams to define rules without managing disconnected tools or inconsistent controls. The company adds that this unified approach lets organizations safely enable AI adoption while maintaining the visibility, governance and control required in regulated and high-risk environments.
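The unified policy framework can be pictured as one rule set evaluated against events regardless of which layer produced them. A minimal sketch, assuming hypothetical channel names, content labels and actions (this is not Nightfall's schema):

```python
from dataclasses import dataclass, field

CHANNELS = {"browser", "endpoint", "saas"}

@dataclass
class Event:
    channel: str                 # "browser", "endpoint", or "saas"
    content_types: set = field(default_factory=set)  # detection-engine labels
    destination: str = ""        # e.g. "chat.openai.com", "usb", "slack"

# One policy shared by every channel, instead of per-tool rule sets.
POLICY = [
    {"if_types": {"source_code", "credentials"}, "action": "block"},
    {"if_types": {"pii"}, "action": "redact"},
]

def enforce(event: Event) -> str:
    """Apply the first matching rule; default to allow."""
    assert event.channel in CHANNELS
    for rule in POLICY:
        if rule["if_types"] & event.content_types:
            return rule["action"]
    return "allow"
```

The point of the design is that a paste into an AI chat, a USB copy, and a Slack upload all flow through the same `enforce` decision, so rules cannot drift between tools.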
Just as cloud platforms quickly scaled to provide enterprise computing infrastructure, Menlo Ventures sees the modern AI stack following the same growth trajectory and value-creation potential as the public cloud. The venture capital firm says today's foundational AI models closely resemble the first days of public cloud services, and that getting the intersection of AI and security right is critical to enabling the evolving market to reach its potential. Menlo Ventures' latest blog post, "Part 1: Security for AI: The New Wave of Startups Racing to Secure the AI Stack," explains how the firm sees AI and security combining to help drive new market growth. "One analogy I've been drawing is that these foundation models are very much like the public clouds that we're all familiar with now, like AWS and Azure. But 12 to 15 years ago, when that infrastructure-as-a-service layer was just getting started, what you saw was massive value creation that spawned after that new foundation was created," Rama Sekhar, Menlo Ventures' new partner focusing on cybersecurity, AI and cloud infrastructure investments, told VentureBeat.
ChatGPT is the new DNA of shadow IT, exposing organizations to new risks no one anticipated. IT and cybersecurity leaders need to find a way to capitalize on its speed without sacrificing security. OpenAI reports that enterprise adoption is surging, with over 80% of Fortune 500 companies' employees and departments having accounts. Enterprise workers are gaining a 40% performance boost thanks to ChatGPT, according to a recent Harvard University study. A second study from MIT found that ChatGPT reduced skill inequalities and accelerated document creation times while enabling enterprise workers to use their time more efficiently. ChatGPT is helping enterprise workers get more done in less time, yet workers are reluctant to share what they're using the tool for: seventy percent haven't told their bosses about it.

Reducing the risk of intellectual property loss without sacrificing speed

ChatGPT's greatest risk is employees accidentally sharing intellectual property (IP), confidential pricing, cost, financial analysis and HR data with large language models (LLMs) accessible by anyone.
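One common mitigation pattern consistent with the risk described above is to redact detected sensitive spans before a prompt ever reaches an LLM. A minimal sketch, with illustrative patterns only (a production system would use ML detectors rather than two regexes):

```python
import re

# Illustrative patterns only; not a complete PII model.
SENSITIVE = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected sensitive span with a typed placeholder."""
    for label, pattern in SENSITIVE.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = redact("Summarize: employee 123-45-6789, contact hr@corp.com")
# safe == "Summarize: employee [SSN], contact [EMAIL]"
```

Typed placeholders preserve enough structure for the model to produce a useful answer while the actual identifiers never leave the organization.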