Winter 2025

Software Engineering Residency 2025

Multiple Teams

Posted on 9/4/2025

Persona

Persona

501-1,000 employees

Identity verification platform with SDKs

Compensation Overview

$60.10 - $64.90/hr

San Francisco, CA, USA

Hybrid

Relocation support for those based outside the San Francisco Bay Area.

Category

Software Engineering
Requirements
  • Innovative mindset - You go beyond implementing what is tasked and make product suggestions and features that help push our product forward. You are naturally curious and embrace change.
  • Independent thinking - You can turn around features within days because you learn from your mistakes quickly, and know how to unblock yourself if you get stuck. You give yourself agency to take on new problems and drive them to impactful solutions.
  • People first - You care not just about the code, but the real-world impact your work can have on people. You are excited about building products that real people use, every day.
  • Continuous learner - You have a hunger for knowledge. You’re driven to continuously improve your skills, whether mastering new programming languages or diving deep into emerging technologies.
  • Collaborative spirit - Collaboration isn’t just a buzzword for you; it’s your secret to success. You leverage the diverse perspectives of your teammates and sometimes go out of your way to learn other functions to get the job done.
Responsibilities
  • Work on projects that span technologies, systems, and processes where you will design, test, and ship great code every day.
  • Communicate with our customers and help us build Persona!
  • Collaborate with our engineering, design, and product team on important features.


Company Size

501-1,000

Company Stage

Series D

Total Funding

$417.5M

Headquarters

San Francisco, California

Founded

2018


Simplify's Take

What believers are saying

  • Chainlink partnership powers cross-chain credentials for $10-16T tokenized assets market by 2030.
  • Gartner ranks Persona Leader and #2 in Workforce use case, driving enterprise adoption amid 50x deepfake surge.
  • GENIUS Act clarity boosts crypto compliance demand, with stablecoins hitting 25% global volume in two years.

What critics are saying

  • Discord terminated Persona over FinCEN-linked codebase exposing surveillance ties, sparking client exodus.
  • Twitch mandates Persona despite controversies, triggering Twitch affiliate backlash and payment delays.
  • Nametag's patented deepfake engine erodes Persona's workforce IDV share within 12-24 months.

What makes Persona unique

  • Persona's Connect shareable KYC enables identity reuse across crypto platforms without duplicating processes.
  • Persona's KYA infrastructure verifies AI agents, developers, and owners for agentic commerce compliance.
  • Persona's Candidate Verification integrates with Greenhouse, Ashby, and Workday to combat 25% fake profiles by 2028.


Benefits

Health Insurance

Dental Insurance

Vision Insurance

401(k) Company Match

Unlimited Paid Time Off

Family Planning Benefits

Professional Development Budget

Wellness Program

Growth & Insights and Company News

Headcount

6 month growth

1%

1 year growth

3%

2 year growth

-1%

Biometric Update
Mar 13th, 2026
Biometric IDV helps secure enterprise identity workflow

Nametag patents and Persona software protect against deepfake fraud from employee onboarding to system access. Mar 13, 2026, 11:02 am EDT, by Masha Borak.

Enterprises are facing rising threats from deepfakes and synthetic identities, from attackers posing as employees to fake job seekers. Companies including Nametag and Persona are developing new technologies and products to secure the enterprise identity workflow.

Two Nametag patents protect against enterprise deepfake workforce fraud. Nametag has been issued two U.S. patents related to human identity verification: the first relates to a technology that helps associate a verified person with the correct account within enterprise identity systems, while the second allows previously verified individuals to confirm their identity again using only a selfie. Both patents, listed under 12,561,418 and 12,562,904, have a role in defending against deepfakes and synthetic identities. The technologies are used in Nametag's Deepfake Defense identity assurance engine, which relies on biometrics, cryptography, AI analysis, and more to detect and block sophisticated attacks. The tool is designed to help organizations confirm the right person is accessing their systems.

Enterprises have previously focused on verifying credentials and devices. Attackers, however, are increasingly managing to convince organizations that they are legitimate employees. "Generative AI is making impersonation dramatically easier," says Aaron Painter, CEO of Nametag. "Organizations can no longer assume that someone who appears to be an employee actually is one." Nametag says its technology also helps confirm a person's identity outside of normal authentication processes, such as when resetting multi-factor authentication (MFA). The company's Selfie Chaining feature can help individuals verify their identity again in just seconds.
The U.S.-based company has already integrated its Deepfake Defense tool and ID verification into other products, including those from identity and access management (IAM) provider Beyond Identity. Another big win for the firm has been a deal with India's Aadhaar, one of the world's most widely adopted digital identity systems. Nametag announced last year that it was bringing its eID verification system and deepfake detection tools to Aadhaar through a licensed third-party provider authorized under Indian regulations.

Persona's biometric candidate verification tool streamlines employee onboarding. Enterprises are not just threatened by attackers posing as employees; fake candidates and AI-generated resumes have been flooding recruitment teams, inspiring identity platform Persona to launch a new tool designed for future employee onboarding. The company's new candidate verification product plugs directly into existing applicant tracking systems, including Greenhouse, Ashby, and Workday, and prompts candidates to confirm their identity via government ID and selfie at three points in the hiring process: initial screening, before key interviews, and at the offer stage. Unlike one-off verification tools, Persona's system carries a candidate's confirmed identity forward into onboarding, IT provisioning, and ongoing authentication, bridging a gap between HR and IT that the company says most competitors leave open. For candidates, the check takes seconds on mobile. Suspicious sessions get flagged for human review without slowing down legitimate applicants. The tool is part of Persona's broader workforce identity suite, available across more than 200 countries. Persona has been working with gig work platforms such as TaskRabbit, GetYourGuide, Fiverr, and Lyft. Last year, Persona announced a partnership with background screening platform Yardstik to create an integrated product that can help organizations verify workers.
The tool brings together identity verification with screening products such as criminal and MVR (driving record) checks, drug screening, and more.

PR Newswire
Mar 11th, 2026
Persona launches candidate verification as Gartner warns 25% of job profiles will be fake by 2028

Persona has launched Candidate Verification, a solution that verifies job applicants' identities during the hiring process. The platform integrates with Ashby, Greenhouse and Workday, enabling recruiters to confirm candidates are real before granting access. The tool addresses rising hiring fraud driven by increased application volumes and AI-enabled deepfakes. Gartner estimates one in four candidate profiles will be fake by 2028, whilst cybersecurity firms report hundreds of Fortune 500 companies have unknowingly hired North Korean operatives since 2020. The solution verifies government-issued IDs across 200-plus countries using live selfies and real-time liveness checks, enhanced with device, behavioural and network signals. Verification flows adapt automatically based on session risk and geography. Candidate Verification extends Persona's existing workforce identity platform, which integrates with Okta, Cisco Duo and major HRIS systems.
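The summary above says verification flows adapt automatically based on session risk and geography, with suspicious sessions escalated for human review. A minimal sketch of what risk-tiered routing like that can look like in practice; all signal names, weights, and thresholds below are invented for illustration and are not Persona's actual API:

```python
# Hypothetical sketch of risk-adaptive verification routing.
# Signals, weights, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Session:
    device_trust: float   # 0.0 (unknown device) .. 1.0 (known good)
    network_risk: float   # 0.0 (clean) .. 1.0 (proxy/VPN/datacenter)
    geo_mismatch: bool    # IP geolocation disagrees with claimed location

def risk_score(s: Session) -> float:
    """Combine device, network, and geography signals into one 0..1 score."""
    score = (1.0 - s.device_trust) * 0.4 + s.network_risk * 0.4
    if s.geo_mismatch:
        score += 0.2
    return min(score, 1.0)

def verification_steps(s: Session) -> list[str]:
    """Escalate checks as session risk rises; flag the riskiest for review."""
    steps = ["government_id", "live_selfie"]  # baseline for every candidate
    r = risk_score(s)
    if r >= 0.5:
        steps.append("liveness_check")        # extra liveness on risky sessions
    if r >= 0.8:
        steps.append("manual_review")         # suspicious: human review
    return steps
```

Under this sketch, a clean session on a trusted device gets only the baseline ID-plus-selfie check, while a proxied session with a geography mismatch picks up a liveness check and a manual-review flag, matching the claim that suspicious sessions are reviewed without slowing down legitimate applicants.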

TheGamer
Feb 24th, 2026
Twitch Has Partnered With The Same Age Verification Company Linked To Mass Surveillance That Discord Cut Ties With

Published Feb 24, 2026, 12:15 PM EST, by Jack Coleman. Updated Feb 24, 2026, 20:08 EST.

Update: The original article was updated to reflect comments from Discord regarding its relationship with Persona. The original copy and headline suggested Discord parted ways with Persona because of controversy surrounding the latter company, which isn't the case. According to Discord, the relationship between the two was always intended to be temporary.

The introduction of the Online Safety Act in the United Kingdom has forced companies to self-moderate their online platforms to ensure underage users can't access content not intended for their consumption. This legislation has created an immediate need for age verification tools and, predictably, companies are outsourcing this responsibility to third-party software. Discord had originally partnered with Persona, a software company that provided exactly the service required. However, controversy surrounding Persona's alleged links to United States government mass surveillance soon emerged. According to a spokesperson from Discord, the relationship between Discord and Persona was always intended to be a temporary one, and the two companies did not part ways because of any allegations levelled against the latter, as many believed. Discord's chief executive officer, Stanislav Vishnevskiy, recently released a blog post outlining the company's plans regarding age verification.
Discord intends to be fully transparent about the third parties it intends to utilise for age verification, and plans to offer multiple ways for users to verify their age.

Now, through the Bluesky user Tawny Code Cat (nice spot, GamesRadar+), it's been revealed that Twitch has partnered with the very same company for its age verification procedures. In fact, Twitch appears to be withholding payments from affiliates until they submit their user data to Persona. The screenshots provided by Tawny Code Cat show that Twitch requires a screenshot of a "government-issued identification card" and "a selfie taken using a webcam or phone camera." Beginning the process moves the user off of Twitch and onto Persona's own platform, which means the streaming platform has very little, if any, control over how the data is used after the user is age-verified. When the user asked if Twitch offered any alternative age verification measures, they were told that there weren't any available, and the use of Persona was mandatory.

The alleged connection between Persona and government mass surveillance was revealed in a report by a group of researchers who discovered the connection buried in the software's source code. "The same company that takes your passport photo when you sign up for ChatGPT also operates a government platform that files suspicious activity reports (SAR) with FinCEN [Financial Crimes Enforcement Network] and tags them with intelligence program codenames, same code base. Confirmed by matching git commit hashes across deployment," reads the report. "Is there a direct pipeline between OpenAI's [which uses Persona] millions of monthly screenings and the government's [suspicious activity report] filing system?
The code doesn't prove it, but the code does prove that Persona operates both systems, that both run the same software, and that both are live right now," the report continues.

Persona's chief operating officer, Christie Kim, responded to these allegations in an email to customers, writing, "Over the past week, multiple social media posts and online articles have circulated repeating misleading claims about Persona, insinuating conspiracies around our work with Discord and our investors." "Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward," the email continued. "However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security." Twitch has yet to comment on the platform's use of Persona.

NogenTech
Jan 21st, 2026
From "Open Chat" to "Age-Aware AI"

The age prediction rollout grew out of a series of high-pressure regulatory demands and tragic cases involving minor safety. While ChatGPT was originally a blank slate for all users, OpenAI noticed that teens were frequently engaging in high-risk conversations that the system wasn't originally designed to manage. "Young people deserve technology that both expands opportunity and protects their well-being," OpenAI stated in their official Teen Safety Blueprint. The goal is to move away from a "one-size-fits-all" AI and toward a context-aware assistant that understands who is on the other side of the screen.

Built on patterns, not papers. One of the most striking details about this rollout is how the AI determines age. Instead of asking for ID upfront, the model scans "behavioral markers." If a user is active primarily during after-school hours, uses specific slang, or asks about homework-related topics, the system's "confidence score" for that user being a minor increases. Observers called it a bold experiment in algorithmic profiling, where your "vibe" determines your digital rights. If the system is unsure, it is programmed to "play it safe" and default the user to the restricted under-18 experience.

Direct verification via Persona. For adults who feel the AI has misjudged them, the process is not automatic. OpenAI has integrated with Persona, a third-party identity service. To "verify out" of the teen mode, users must provide a live selfie or a government-issued ID. While OpenAI claims Persona deletes this data within 7 days, privacy advocates remain skeptical. They argue that users are being forced into a "privacy-for-freedom" trade: either let the AI guess your age by monitoring your chats, or give your face scan to a third-party database.

Who is affected and what's next. Currently, the feature is active for all ChatGPT consumer plans globally.
Users in the European Union will see the update in the coming weeks as OpenAI navigates the region's stricter GDPR and AI Act requirements. OpenAI admits the system is "not perfect" and will get it wrong sometimes. However, the company has drawn a clear line. Anonymous, age blind AI access is no longer acceptable at scale. This move establishes a new baseline for consumer AI platforms and signals where the industry is heading, whether users approve or not.
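The mechanism the article describes (behavioral markers raising a "likely minor" confidence score, uncertainty defaulting to the restricted experience, and an ID or selfie check to "verify out") can be sketched as follows. Every marker name, weight, and threshold here is invented for illustration; OpenAI has not published its model.

```python
# Illustrative sketch of confidence-score age gating as the article describes
# it. All markers, weights, and thresholds are hypothetical.

# Hypothetical behavioral markers and the weight each adds to the score.
MINOR_MARKERS = {
    "after_school_activity": 0.3,  # active mainly during after-school hours
    "teen_slang": 0.2,             # uses specific slang
    "homework_topics": 0.3,        # asks about homework-related topics
}

def minor_confidence(markers: set[str]) -> float:
    """Sum the weights of the observed markers, capped at 1.0."""
    return min(sum(MINOR_MARKERS.get(m, 0.0) for m in markers), 1.0)

def experience_mode(markers: set[str], threshold: float = 0.3) -> str:
    """'Play it safe': any non-trivial minor signal defaults to restricted."""
    return "restricted" if minor_confidence(markers) >= threshold else "standard"

def verify_out(mode: str, id_verified: bool) -> str:
    """Misclassified adults lift restrictions via an ID/selfie check."""
    return "standard" if id_verified else mode
```

Note the deliberately low threshold: the system errs toward the restricted mode exactly because, per the article, uncertain cases are meant to land there until the user verifies out through Persona.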

Automios
Jan 21st, 2026
OpenAI Rolls Out Age Prediction Technology to Make ChatGPT Safer for Teens

OpenAI is taking a major step toward protecting younger users with a new age prediction system now being deployed across ChatGPT. The feature uses artificial intelligence to estimate whether an account belongs to someone under 18, automatically triggering stricter content filters and setting the stage for a future "adult mode" designed for verified grown-ups.

How the age prediction system works. The technology doesn't ask users to input their birthdate. Instead, it analyzes behavioral patterns and account characteristics to make an educated guess about age. OpenAI's model examines factors like how long an account has been active, when users typically log in, and their interaction patterns over time. If the system determines there's a strong likelihood the user is a teen, it quietly activates enhanced safety measures in the background. These protections are designed to shield minors from content that could be harmful or inappropriate: think graphic violence, self-harm discussions, dangerous internet challenges, or other sensitive material that shouldn't land in front of young eyes. The approach aligns with OpenAI's Teen Safety Blueprint and its published principles for how AI should interact with users under 18.

What happens if you're flagged incorrectly. Of course, no prediction system is perfect. Adults who get mistakenly classified as teens can quickly fix the issue through an age verification process. OpenAI has partnered with Persona, a trusted identity verification service, to handle these cases. Users simply submit a selfie to confirm they're over 18, and once verified, all content restrictions are lifted. It's a straightforward solution that balances safety with user autonomy.

The bigger picture: adult mode coming soon. This age prediction rollout isn't just about protecting kids; it's also laying the groundwork for something new.
OpenAI plans to introduce an adult content mode in early 2026, allowing verified adults to access more mature, nuanced responses that might not be suitable for younger audiences. Fidji Simo, CEO of OpenAI's applications division, has highlighted this feature as part of the company's vision to create more personalized AI experiences. With over 800 million people using ChatGPT every week, these changes could reshape how AI platforms approach content moderation and user safety on a global scale. OpenAI is betting that smarter age detection, combined with optional verification, can create a safer environment for teens while giving adults the freedom to use AI on their own terms.

INACTIVE