Full-Time

Applied AI Data Scientist

AE Studio

AE Studio

51-200 employees

Develops real-time ML-based BCI software

No salary listed

Los Angeles, CA, USA

Hybrid

Category
Data & Analytics
Requirements
  • Fluency in Python.
  • Experience with the LLM lifecycle: prompt design/engineering, prompting techniques (RAG, few-shot, CoT, etc.), vector databases, multimodality, fine-tuning, and evaluation.
  • Proven data science experience: You’ve been a key contributor to impactful projects and know how to deliver results with real-life outcomes.
  • Statistical & causal ML fundamentals: expertise in experimental design, uncertainty quantification, and rigorous model evaluation across tabular, time-series, and foundation-model fine-tuning tasks.
  • Deep learning experience: Expertise in building/training NLP or computer vision models with PyTorch, TensorFlow, or JAX.
  • Agile & AI-powered development: You run lean Kanban/Scrum cycles and leverage AI-powered tools like Cursor to prototype quickly and ship high-impact solutions.
  • Growth mindset: You embrace challenges, value progress over perfection, and constantly seek to improve.
  • Self-management: You can work independently, take ownership of projects, and deliver without constant oversight.
  • Product and UX understanding: You care about delivering a seamless, user-friendly experience.
  • Effective communication in English: Clear, concise, and maybe even witty communication is essential.
Responsibilities
  • You’ll work on a mix of client projects and internal research initiatives.
  • You’ll solve real-world problems, help build meaningful products, and contribute to groundbreaking AI alignment research.
  • Your work will involve building data-driven solutions, leveraging machine learning models, and using your creativity to deliver results that matter.
  • In addition to client projects, you’ll have the opportunity to propose and pursue high-impact research projects, especially those aligned with our mission of increasing agency.
  • Promising projects could even become prioritized skunkworks initiatives.
Desired Qualifications
  • Self-managed projects: Ideally something that you helped develop from zero and shipped to real users.
  • Startup experience: You thrive in dynamic environments and enjoy taking on new challenges.
  • Client relationship management: Experience managing clients and delivering excellent results.
  • Passion for AI alignment: You care deeply about humanity’s future and want to help navigate the challenges of advancing AI.

AE Studio develops custom software for neurotechnology, focusing on Brain-Computer Interface (BCI) systems. Its work involves collaborating with clinical researchers and hardware makers worldwide to create software that interprets brain activity in real time with machine learning, enabling direct communication between the brain and external devices. The products are built as bespoke software solutions—covering ML models, real-time processing, and system integration—to help clients advance research and bring neurotechnology products to market faster. Unlike some competitors, AE Studio emphasizes user autonomy and responsible tech use, prioritizing human agency and reducing the risk of manipulation. It also supports public-good efforts through grants and open-source resources. The company's goal is to make neurotechnology tools usable and accessible, speeding scientific progress while ensuring technology serves and empowers people rather than constrains them.

Company Size

51-200

Company Stage

N/A

Total Funding

N/A

Headquarters

Los Angeles, California

Founded

2016

Simplify Jobs

Simplify's Take

What believers are saying

  • Blackrock Neurotech partnership targets commercial BCI platform launch in 2025.
  • ESR research on LLM self-correction positions AE as AI alignment thought leader.
  • Enterprise clients like Nylas and Scotch & Soda expand recurring revenue streams.

What critics are saying

  • Blackrock Neurotech launches independently, capturing full BCI market revenue.
  • NFT market collapse erodes Web3 credibility and diverts resources from core BCI.
  • Specialized BCI firms like Neuralink capture clinical researcher clients amid funding cuts.

What makes AE Studio unique

  • Bootstrapped agency reroutes consulting profits into BCI and alignment research.
  • Team of 150 includes ML PhDs, founders, and product builders with decade-long AI expertise.
  • Donates 5% revenue to effective charities while maintaining profitability and growth.


Benefits

Company Equity

Company News

PulseBot
Mar 27th, 2026
Introducing the AE Alignment Podcast (ep. 1: Endogenous Steering Resistance with Alex McKenzie).

Why it matters: ESR reveals internal consistency mechanisms that could harden models against adversarial manipulation while potentially obstructing safety tools that rely on activation steering, making it a pivotal focus for AI alignment efforts.

Key takeaways:
  • ESR observed in Llama-3.3-70B self-corrects off-topic steering
  • 26 SAE latents linked causally to ESR behavior
  • Zero-ablating latents cuts multi-attempt rate by 25%
  • Meta-prompting can quadruple ESR self-correction frequency
  • ESR may hinder activation-steering safety interventions

Summary: AE Studio has launched the AE Alignment Podcast, debuting with an interview featuring Alex McKenzie on Endogenous Steering Resistance (ESR). ESR describes a surprising behavior in large language models, such as Llama-3.3-70B, where they interrupt off-topic steering and self-correct mid-generation. The accompanying paper identifies 26 sparse autoencoder (SAE) latents that drive this effect and shows that zero-ablating them cuts the multi-attempt rate by 25%. Researchers also demonstrate that meta-prompting can boost ESR's self-correction rate fourfold, highlighting both safety opportunities and challenges.

Pulse analysis: The discovery of Endogenous Steering Resistance adds a nuanced layer to the AI safety discourse, emphasizing that large language models are not merely passive executors of external prompts. Instead, they appear to host internal monitoring circuits that can detect and counteract artificial perturbations. This behavior aligns with broader research on model interpretability, where sparse autoencoders expose latent structures governing specific functions. By pinpointing the 26 SAE latents responsible for ESR, the study provides a concrete target for future alignment work, offering a rare causal link between model internals and observable safety-relevant outcomes.

From a technical standpoint, the ability to modulate ESR through meta-prompting and fine-tuning suggests that these self-correction pathways are plastic rather than fixed. Zero-ablation experiments, which reduce the multi-attempt generation rate by a quarter, demonstrate that intervening on a small set of latents can materially alter model behavior. This mirrors biological attention-control systems, hinting at convergent solutions across natural and artificial intelligence. For practitioners, the findings raise practical questions about the reliability of activation-steering techniques used in representation engineering, reinforcement learning from human feedback, and other alignment interventions. ESR's dual nature, potentially shielding models from adversarial steering while complicating safety tooling, poses a strategic dilemma for AI developers and policymakers. As alignment teams integrate these insights, they must balance leveraging ESR for robustness against ensuring that safety mechanisms remain effective. The AE Alignment Podcast serves as a conduit for disseminating such cutting-edge research, fostering community dialogue, and accelerating the translation of academic findings into industry practice. Continued funding from entities like the AI Alignment Foundation and the UK AI Security Institute underscores the growing institutional commitment to resolving these alignment challenges.

From the announcement (by Trent Hodgeson): We are launching the AE Alignment Podcast, a new series from AE Studio's alignment research team where we talk with researchers about their work on AI safety and alignment. In its first episode, host James Bowler sits down with Alex McKenzie to discuss Endogenous Steering Resistance (ESR), a phenomenon where large language models spontaneously resist activation steering during inference, sometimes recovering mid-generation to produce improved responses even while steering remains active.

What is ESR? When you artificially perturb a language model's internal activations using sparse autoencoder (SAE) latents to push it off-topic, you'd expect the model to just go along with it. Smaller models do, but Llama-3.3-70B does something unexpected: it sometimes catches itself mid-generation, says something like "Wait, that's not right," and course-corrects back to the original task. The paper identifies 26 SAE latents that activate differentially during off-topic content and are causally linked to this self-correction behavior. Zero-ablating these latents reduces the multi-attempt rate by 25%, providing causal evidence for dedicated internal consistency-checking circuits.

Key findings include:
  • ESR can be deliberately enhanced through meta-prompting (a 4x increase in self-correction rate) and fine-tuning
  • ESR has dual implications for safety: it could protect against adversarial manipulation, but it might also interfere with beneficial safety interventions that rely on activation steering
  • The phenomenon parallels endogenous attention control in biological systems, connecting to work on attention schema theory

Why this matters: This work raises important open questions for the alignment community. If models develop internal mechanisms to resist externally imposed changes to their activations, that is both potentially good news (robustness against adversarial attacks) and potentially bad news (resistance to safety interventions like representation engineering). Understanding and controlling these mechanisms is important for developing transparent and controllable AI systems. The research was funded by the AI Alignment Foundation (formerly the Flourishing Future Foundation), and the continuation of this work is now supported by a grant from the UK AI Security Institute through the Alignment Project.

We plan to release episodes regularly featuring conversations with alignment researchers about their work. If you have feedback on the episode or suggestions for future topics, we'd love to hear from you. We are also hiring alignment data scientists and alignment technical PMs who want to work on alignment full-time.
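The zero-ablation intervention described above can be sketched in a few lines: encode an activation into SAE latents, zero the flagged coordinates, and fold the change back into the activation. This is a minimal illustrative sketch with random toy weights, not the paper's actual code; the dimensions, weights, and latent indices below are all made-up placeholders (a real SAE for Llama-3.3-70B would be vastly larger, and the 26 ESR latents live in that model's own SAE basis).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-autoencoder weights (hypothetical sizes: 16-dim activations,
# 64 SAE latents).
D_MODEL, N_LATENTS = 16, 64
W_enc = rng.normal(size=(D_MODEL, N_LATENTS))
W_dec = rng.normal(size=(N_LATENTS, D_MODEL))

# Indices of latents to ablate (illustrative stand-ins for the paper's
# 26 self-correction latents).
ESR_LATENTS = [3, 17, 42]

def zero_ablate(activation: np.ndarray, latent_idx) -> np.ndarray:
    """Encode an activation into SAE latents, zero the chosen latents,
    and apply only the resulting *difference* back to the activation,
    so SAE reconstruction error does not perturb anything else."""
    latents = np.maximum(activation @ W_enc, 0.0)  # ReLU SAE encoder
    ablated = latents.copy()
    ablated[latent_idx] = 0.0
    return activation + (ablated - latents) @ W_dec

act = rng.normal(size=D_MODEL)
patched = zero_ablate(act, ESR_LATENTS)
print(patched.shape)  # same shape as the input activation
```

In a real experiment this function would run inside a forward hook at the layer the SAE was trained on, patching the residual stream at every generation step while measuring how often the model re-attempts its answer.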

AiThority
Jan 9th, 2024
Nylas and AE Studio Speed Up Development With Large Language Models

The partnership comes on the heels of Nylas being named to the Deloitte Technology Fast 500™ for the second consecutive year and being recognized as a Gartner® Cool Vendor in Composable Customer Engagement Platforms, while AE Studio was recently named a Clutch Top Artificial Intelligence Company for the year.

AE Studio
Aug 18th, 2023
We Donate 5% Of Our Profits - Could We Do More If We Didn't?

But now AE Studio has introduced three new problems.

Physics World
Sep 10th, 2022
AE Studio partners with Blackrock Neurotech

AE Studio recently announced a collaboration with Blackrock Neurotech, which aims to release the first commercial BCI platform next year.

NFTevening
Mar 27th, 2022
AE Studio partnered with Edge of NFT on Feb 27th '22.

AE Studio has recently announced its partnership with Edge of NFT to launch ‘Edge of AE’ Studio at NFT LA.