Industries
Cybersecurity
AI & Machine Learning
Company Size
201-500
Company Stage
Late Stage VC
Total Funding
$184.1M
Headquarters
Bristol, United Kingdom
Founded
2017
Immersive Labs offers a cybersecurity training platform that focuses on human-centric learning for large organizations. The platform uses Artificial Intelligence (AI) to create hands-on labs that simulate real-world scenarios, helping teams improve their response to cyber threats. It also provides performance metrics to measure progress and supports recruitment and retention of cybersecurity talent. The goal is to enhance organizational resilience against cyber incidents and streamline the management of security vulnerabilities.
Total Funding
$184.1M
Above industry average; raised over 5 rounds
Health Insurance
Dental Insurance
Disability Insurance
Mental Health Support
401(k) Company Match
401(k) Retirement Plan
Unlimited Paid Time Off
Flexible Work Hours
Remote Work Options
Paid Vacation
Enhanced Parental Leave
Professional Development Budget
Recognition & Rewards
SiliconAngle reports that Immersive Labs has introduced the AI Scenario Generator, an advanced tool designed to help organizations create customized cybersecurity threat simulations.
Immersive Labs, the leader in people-centric cyber resilience, today announced the appointment of Oliver Newbury to its Board of Directors.
Immersive Labs, the global leader in people-centric cyber resilience, today announced the launch of the Immersive Labs community, called the "Human Connection."
Immersive Labs launches online community to give its customers the Human edge against cyber threats.
BRISTOL, England & BOSTON--(BUSINESS WIRE)--Immersive Labs, the global leader in people-centric cyber resilience, today published its “Dark Side of GenAI” report about a Generative Artificial Intelligence (GenAI)-related security risk known as a prompt injection attack, in which individuals input specific instructions to trick chatbots into revealing sensitive information, potentially exposing organizations to data leaks.

Based on analysis of Immersive Labs’ prompt injection challenge*, GenAI bots are especially susceptible to manipulation by people of all skill levels, not just cyber experts. Among the most alarming findings was the discovery that 88% of prompt injection challenge participants successfully tricked the GenAI bot into giving away sensitive information in at least one level of an increasingly difficult challenge. Nearly a fifth of participants (17%) successfully tricked the bot across all levels, underscoring the risk to organizations using GenAI bots.

The report asserts that public- and private-sector cooperation and corporate policies are required to mitigate security risks posed by the extensive adoption of GenAI bots. Leaders need to be aware of prompt injection risks and take decisive action, including establishing comprehensive policies for GenAI use within their organizations.

“Based on our analysis of the ways people manipulate GenAI, and the relatively low barrier to entry to exploitation, we believe it’s imperative that organizations implement security controls within Large Language Models and take a ‘defense in depth’ approach to GenAI,” said Kev Breen, Senior Director of Threat Intelligence at Immersive Labs and a co-author of the report. “This includes implementing security measures, such as data loss prevention checks, strict input validation, and context-aware filtering to prevent and recognize attempts to manipulate GenAI output.”

Key Findings from Immersive Labs’ “Dark Side of GenAI” Study

GenAI is no match for human ingenuity (yet): Users successfully leverage creative techniques to deceive GenAI bots, such as tricking them into embedding secrets in poems or stories or altering their initial instructions, to gain unauthorized access to sensitive information.

You don’t need to be an expert to exploit GenAI: The report’s findings show that even non-cybersecurity professionals and those unfamiliar with prompt injection attacks can leverage their creativity to trick bots, indicating that the barrier to exploiting GenAI in the wild via prompt injection may be lower than one would hope.
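The mitigations Breen mentions (input validation, context-aware filtering, and data loss prevention checks) can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration and is not Immersive Labs' implementation: the pattern list and function names are invented for this example, and naive keyword filters like this are easily bypassed on their own, which is exactly why the report argues for defense in depth.

```python
import re

# Hypothetical deny-list of common prompt-injection phrasings.
# A pattern list alone is trivially evaded; real deployments layer
# model-side guardrails, output scanning, and DLP on top of it.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"reveal .*(password|secret|system prompt)",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Input-validation step: flag inputs matching known injection phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def redact_secrets(response: str, secrets: list[str]) -> str:
    """DLP-style output check: redact known sensitive strings before
    the bot's reply reaches the user."""
    for secret in secrets:
        response = response.replace(secret, "[REDACTED]")
    return response
```

In a layered setup, `looks_like_injection` would gate the prompt before it reaches the model, and `redact_secrets` would scan the model's output as a last line of defense.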