Full-Time
Posted on 9/2/2025
Develops real-time ML-based BCI software
No salary listed
Los Angeles, CA, USA
In Person
AE Studio develops custom software for neurotechnology, focusing on Brain-Computer Interface (BCI) systems. It collaborates with clinical researchers and hardware makers worldwide to create software that uses machine learning to interpret brain activity in real time, enabling direct communication between the brain and external devices. Its products are bespoke software solutions, covering ML models, real-time processing, and system integration, that help clients advance research and bring neurotechnology products to market faster. Unlike some competitors, AE Studio emphasizes user autonomy and responsible technology use, prioritizing human agency and reducing the risk of manipulation. It also supports public-good efforts through grants and open-source resources. The company’s goal is to make neurotechnology tools usable and accessible, speeding scientific progress while ensuring technology serves and empowers people rather than constraining them.
Company Size
51-200
Company Stage
N/A
Total Funding
N/A
Headquarters
Los Angeles, California
Founded
2016
Introducing the AE Alignment Podcast (ep. 1: Endogenous Steering Resistance with Alex McKenzie). - March 27, 2026

Why it matters. ESR reveals internal consistency mechanisms that could harden models against adversarial manipulation while potentially obstructing safety tools that rely on activation steering, making it a pivotal focus for AI alignment efforts.

Key takeaways.
* ESR observed in Llama-3.3-70B self-corrects off-topic steering
* 26 SAE latents linked causally to ESR behavior
* Zero-ablating latents cuts multi-attempt rate by 25%
* Meta-prompting can quadruple ESR self-correction frequency
* ESR may hinder activation-steering safety interventions

Summary. AE Studio has launched the AE Alignment Podcast, debuting with an interview featuring Alex McKenzie on Endogenous Steering Resistance (ESR). ESR describes a surprising behavior in large language models such as Llama-3.3-70B: they interrupt off-topic steering and self-correct mid-generation. The accompanying paper identifies 26 sparse autoencoder latents that drive this effect and shows that zero-ablating them cuts the multi-attempt rate by 25%. Researchers also demonstrate that meta-prompting can boost ESR's self-correction rate fourfold, highlighting both safety opportunities and challenges.

Pulse analysis. The discovery of Endogenous Steering Resistance adds a nuanced layer to the AI safety discourse, emphasizing that large language models are not merely passive executors of external prompts. Instead, they appear to host internal monitoring circuits that can detect and counteract artificial perturbations. This behavior aligns with broader research on model interpretability, where sparse autoencoders expose latent structures governing specific functions. By pinpointing 26 SAE latents responsible for ESR, the study provides a concrete target for future alignment work, offering a rare causal link between model internals and observable safety-relevant outcomes.

From a technical standpoint, the ability to modulate ESR through meta-prompting and fine-tuning suggests that these self-correction pathways are plastic rather than fixed. Zero-ablation experiments, which reduce the multi-attempt generation rate by a quarter, demonstrate that intervening on a small set of latents can materially alter model behavior. This mirrors biological attention-control systems, hinting at convergent solutions across natural and artificial intelligence. For practitioners, the findings raise practical questions about the reliability of activation-steering techniques used in representation engineering, reinforcement learning from human feedback, and other alignment interventions.

Strategically, ESR's dual nature - potentially shielding models from adversarial steering while complicating safety tooling - poses a dilemma for AI developers and policymakers. As alignment teams integrate these insights, they must balance leveraging ESR for robustness against ensuring that safety mechanisms remain effective. The AE Alignment Podcast serves as a conduit for disseminating such cutting-edge research, fostering community dialogue, and accelerating the translation of academic findings into industry practice. Continued funding from entities like the AI Alignment Foundation and the UK AI Security Institute underscores the growing institutional commitment to resolving these alignment challenges.
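To make the intervention concrete, here is a minimal, hypothetical sketch of the kind of activation steering ESR is described as resisting: an SAE decoder direction is added to one layer's residual-stream output during generation via a PyTorch forward hook. The `sae` object, the `model.model.layers[40]` path, the latent choice, and the steering scale are illustrative assumptions, not details from the paper or AE Studio's code.

```python
# Minimal sketch of activation steering with an SAE decoder direction,
# the kind of off-topic perturbation under which ESR is reported.
# Layer index, latent index, scale, and the `sae` object are assumptions.
import torch

def make_steering_hook(direction: torch.Tensor, scale: float):
    """Return a forward hook that adds `scale * direction` to the
    residual-stream output of the hooked transformer layer."""
    direction = direction / direction.norm()  # use a unit-norm steering vector
    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        steered = resid + scale * direction.to(resid.dtype).to(resid.device)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Hypothetical usage: steer with the decoder direction of one SAE latent.
# direction = sae.W_dec[OFF_TOPIC_LATENT]           # shape (d_model,)
# handle = model.model.layers[40].register_forward_hook(
#     make_steering_hook(direction, scale=8.0))
# output = model.generate(**inputs, max_new_tokens=200)
# handle.remove()                                    # detach the hook afterwards
```

ESR, as described above, is the model sometimes recovering on-task output even while a hook like this remains attached.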
Trent Hodgeson

We're launching the AE Alignment Podcast, a new series from AE Studio's alignment research team in which we talk with researchers about their work on AI safety and alignment. In its first episode, host James Bowler sits down with Alex McKenzie to discuss Endogenous Steering Resistance (ESR), a phenomenon where large language models spontaneously resist activation steering during inference, sometimes recovering mid-generation to produce improved responses even while steering remains active.

What is ESR? When you artificially perturb a language model's internal activations using sparse autoencoder (SAE) latents to push it off-topic, you'd expect the model to just go along with it. Smaller models do, but Llama-3.3-70B does something unexpected: it sometimes catches itself mid-generation, says something like "Wait, that's not right," and course-corrects back to the original task. The paper identifies 26 SAE latents that activate differentially during off-topic content and are causally linked to this self-correction behavior. Zero-ablating these latents reduces the multi-attempt rate by 25%, providing causal evidence for dedicated internal consistency-checking circuits.

Key findings include:
* ESR can be deliberately enhanced through meta-prompting (a 4x increase in self-correction rate) and fine-tuning
* ESR has dual implications for safety: it could protect against adversarial manipulation, but it might also interfere with beneficial safety interventions that rely on activation steering
* The phenomenon parallels endogenous attention control in biological systems, connecting to work on attention schema theory

Why this matters. This work raises important open questions for the alignment community. If models develop internal mechanisms to resist externally imposed changes to their activations, that is both potentially good news (robustness against adversarial attacks) and potentially bad news (resistance to safety interventions like representation engineering). Understanding and controlling these mechanisms seems important for developing transparent and controllable AI systems.

The research was funded by the AI Alignment Foundation (formerly the Flourishing Future Foundation), and the continuation of this work is now supported by a grant from the UK AI Security Institute through the Alignment Project.

We plan to release episodes regularly, featuring conversations with alignment researchers about their work. If you have feedback on the episode or suggestions for future topics, we'd love to hear from you. We're also hiring alignment data scientists and alignment technical PMs who want to work on alignment full-time. Want to join the conversation?
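For readers who want to see the shape of the zero-ablation experiment described above, below is a minimal, hypothetical sketch: activations at one layer are encoded by the SAE, the ESR-linked latents are zeroed, and the reconstruction is written back. The latent indices, the layer, and the `sae` interface (encode/decode) are placeholders, not the 26 published latents or the paper's released code.

```python
# Minimal sketch of zero-ablating a set of SAE latents during a forward pass.
# Assumes a PyTorch transformer and a trained sparse autoencoder exposing
# encode()/decode(); the latent indices and layer below are stand-ins for
# the 26 ESR-linked latents identified in the paper.
import torch

ESR_LATENT_IDS = [12, 407, 981]  # hypothetical indices, not the published set

def make_zero_ablation_hook(sae, latent_ids):
    """Return a forward hook that removes the given SAE latents from the
    hooked layer's residual-stream activations."""
    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        latents = sae.encode(resid)        # (batch, seq, n_latents)
        latents[..., latent_ids] = 0.0     # zero-ablate the target latents
        ablated = sae.decode(latents)      # map back to the residual stream
        if isinstance(output, tuple):
            return (ablated,) + output[1:]
        return ablated
    return hook

# Hypothetical usage: generate with and without the hook attached and
# compare multi-attempt rates, the before/after measurement the paper reports.
# handle = model.model.layers[40].register_forward_hook(
#     make_zero_ablation_hook(sae, ESR_LATENT_IDS))
# output = model.generate(**inputs, max_new_tokens=200)
# handle.remove()
```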
The partnership comes on the heels of Nylas being named to the Deloitte Technology Fast 500™ for the second consecutive year and being recognized as a Gartner® Cool Vendor in Composable Customer Engagement Platforms, while AE Studio was recently named a Clutch Top Artificial Intelligence Company for the year.
But now AE Studio has introduced three new problems.
AE Studio recently announced a collaboration with Blackrock Neurotech, which aims to release the first commercial BCI platform next year.
AE Studio has recently announced its partnership with Edge of NFT to launch ‘Edge of AE’ Studio at NFT LA.