Full-Time

Account Executive

Enterprise

Updated on 5/12/2026

Fal

51-200 employees

NLP-based sentiment & anomaly detection

Compensation Overview

$300k - $360k/yr

+ Equity

San Francisco, CA, USA

In Person

Relocation assistance to San Francisco available; visa sponsorship offered.

Category
Sales & Account Management
Requirements
  • Five or more years of Business-to-Business sales experience in artificial intelligence, Software-as-a-Service, or technology startups, with a strong track record of exceeding quotas
  • Proficiency in engaging and selling to Chief Executive Officers or other high-level decision-makers within complex organizational structures
  • Exceptional negotiation skills, including the ability to navigate multi-stakeholder deals with technical, legal, and financial components
  • Outstanding communication and presentation abilities, capable of addressing both technical and non-technical audiences effectively
  • Growth mindset with a passion for generative AI and a drive to pioneer cutting-edge solutions in a fast-paced environment

Fal.ai helps businesses improve data analytics using NLP and ML, focusing on sentiment analysis and anomaly detection within dbt data models. It analyzes text from customer reviews, support tickets, and surveys to label sentiment as positive, negative, or neutral, and flags unusual patterns in data transformations. The platform integrates with existing data infrastructure using dbt models and is offered via tiered subscriptions that include basic sentiment analysis, advanced anomaly detection, and premium support. Its goal is to help data-driven organizations make informed decisions, improve customer satisfaction, and get continuous analytics updates.
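The positive/negative/neutral labeling described above can be illustrated with a toy sketch. This is not Fal.ai's actual model — just a minimal rule-based stand-in showing the input/output shape of sentiment labeling; the keyword lists are invented for illustration.

```python
# Toy sentiment labeler illustrating the positive/negative/neutral labels the
# platform assigns. A real system would use an NLP model, not keyword lists.
POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "bad", "crash", "refund"}

def label_sentiment(text: str) -> str:
    """Label a review, ticket, or survey response by keyword overlap."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(label_sentiment("love the new dashboard, support was helpful"))  # positive
print(label_sentiment("app is slow and keeps breaking"))               # negative
```

In the product described above, labels like these would be materialized as columns alongside the source text inside dbt models, so downstream transformations can aggregate sentiment per customer or per period.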

Company Size

51-200

Company Stage

Series D

Total Funding

$593.9M

Headquarters

Seattle, Washington

Founded

2021

Simplify Jobs

Simplify's Take

What believers are saying

  • Fal scaled from $1M to $40M ARR in one year, serving 1M+ developers.
  • Fal raised $140M Series D from Sequoia, Kleiner Perkins, and NVIDIA in 2026.
  • Fal handles 100M+ daily requests with 99.99% uptime for Adobe and Shopify.

What critics are saying

  • OpenAI exits GPT-Image-2 waitlist by November 2026, eliminating fal's access moat.
  • ByteDance launches Seedance 2.0 direct API in 12 months, capturing fal's margins.
  • Canva and Adobe build in-house inference by Q4 2026, slashing fal's $50M ARR.

What makes Fal unique

  • Fal delivers fastest inference for 1000+ generative media models via serverless GPUs.
  • Fal launches Day 0 APIs for frontier models like GPT-Image-2 and HappyHorse-1.0.
  • Fal's single API optimizes video, audio, image, and 3D models with 10x lower latency.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

Company Equity

Relocation Assistance

Growth & Insights and Company News

Headcount

6 month growth

8%

1 year growth

1%

2 year growth

10%
The Associated Press
Apr 11th, 2026
ByteDance's Seedance 2.0 AI video generation API launches on fal platform

Seedance 2.0, ByteDance's latest AI video generation model, is now live on fal via API and playground. The multimodal audio-video architecture supports text, image, audio and video inputs, focusing on cinematic quality and motion realism. The model handles complex camera movements including dolly zooms, rack focuses and tracking shots, whilst generating synchronised audio natively. fal offers six API endpoints covering text-to-video, image-to-video and reference-to-video workflows, each available in standard and fast versions. fal is a generative media platform serving over 2 million developers and enterprises including Adobe, Shopify and Canva. ByteDance selected fal as its enterprise partner for the Seedance 2.0 rollout, providing official access with over 99.99% uptime. The platform offers per-second pricing and free credits for developers.
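For orientation, here is a minimal request sketch in the style of fal's Python client. The endpoint id and argument names are illustrative assumptions, not taken from fal's documentation; the sketch only assembles and inspects the payload, with the actual network call left as a comment.

```python
# Hypothetical payload for one of the six Seedance 2.0 endpoints on fal
# (text-to-video, in its standard and fast variants). The endpoint id format
# is an assumption for illustration.

def build_seedance_request(prompt: str, fast: bool = False) -> dict:
    """Assemble an endpoint id and arguments for a text-to-video request."""
    variant = "fast" if fast else "standard"
    endpoint = f"fal-ai/bytedance/seedance-2.0/text-to-video/{variant}"
    return {"endpoint": endpoint, "arguments": {"prompt": prompt}}

req = build_seedance_request("a dolly zoom through a rain-soaked alley", fast=True)
print(req["endpoint"])  # fal-ai/bytedance/seedance-2.0/text-to-video/fast

# With the real client installed (pip install fal-client), the request would be
# submitted roughly like:
#   import fal_client
#   result = fal_client.subscribe(req["endpoint"], arguments=req["arguments"])
```

The image-to-video and reference-to-video workflows mentioned in the article would presumably take additional arguments (e.g. an input image URL) on their own endpoints.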

fal.ai
Mar 24th, 2026
Inworld TTS-1.5 Max Now Available on fal

Fal.ai Inc. is excited to add Inworld TTS-1.5 Max to fal, expanding its set of cutting-edge real-time voice models on the platform. The model focuses on low-latency speech generation, improved expressiveness, and multilingual support for production use cases. As voice becomes a core interface across applications, from assistants to media experiences, developers need models that balance latency, quality, and cost. TTS-1.5 Max is designed to operate within these constraints while supporting real-time interactions.

What is Inworld TTS-1.5 Max?

Inworld TTS-1.5 Max is a text-to-speech model built for expressive, low-latency voice synthesis. It is part of the TTS-1.5 family, which includes both Max (higher quality) and Mini (lower latency) variants. The Max model is positioned as the default option for most applications, prioritizing voice quality and expressive range while maintaining near-real-time responsiveness.

Key characteristics

  • Real-time latency: TTS-1.5 Max achieves time-to-first-audio under ~250 ms (P90), enabling conversational and interactive use cases where response time impacts user experience.
  • Improved expressiveness and accuracy: compared to earlier versions, the model introduces a higher expressive range and lower word error rates, reducing artifacts such as mispronunciations, cutoffs, and unnatural pacing.
  • Multilingual support: the model supports 15 languages, including expanded coverage for global applications and use cases like localization and translation.
  • Cost profile: pricing is structured at approximately $0.01 per minute ($10 per million characters), positioning it as a lower-cost option relative to many comparable real-time TTS systems.

Try it on fal

You can start using Inworld TTS-1.5 Max on fal to generate expressive speech, test latency-performance tradeoffs, and integrate voice into your applications. Stay tuned to fal's X, blog, or Reddit for the latest updates on generative media and new model releases!
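As a quick sanity check on the quoted rates, a small cost sketch. The ~1,000 characters-per-minute speaking rate used here to reconcile the per-character and per-minute prices is an assumption, not a figure from the announcement.

```python
# Estimate Inworld TTS-1.5 Max cost from the quoted $10 per million characters.
PRICE_PER_MILLION_CHARS = 10.0  # USD, as quoted in the announcement

def tts_cost_usd(num_chars: int) -> float:
    """Cost in USD for synthesizing `num_chars` characters."""
    return num_chars / 1_000_000 * PRICE_PER_MILLION_CHARS

# Assumed speaking rate (~1,000 chars/min); at this rate the per-character
# price works out to the quoted ~$0.01 per minute of generated audio.
ASSUMED_CHARS_PER_MINUTE = 1_000

def cost_per_minute_usd(chars_per_minute: int = ASSUMED_CHARS_PER_MINUTE) -> float:
    return tts_cost_usd(chars_per_minute)

print(tts_cost_usd(1_000_000))  # 10.0
print(cost_per_minute_usd())    # ~0.01
```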

The Information
Mar 19th, 2026
Video Hosting Startup Fal in Funding Talks at $8 Billion Valuation

Fal, a fast-growing cloud service for accessing and storing AI models that generate images, video, and audio, is in talks to raise $300 million to $350 million, according to a person with direct knowledge of the fundraise. The deal, which would nearly double the company’s paper valuation to ...

The Business Standard
Mar 9th, 2026
Editorialge Media launches ImagineLab.art: A unified AI studio set.

Editorialge Media LLC has launched ImagineLab.art, a browser-based AI workspace that brings image, video, infographic, voice generation, and prompt assistance into a single platform, the company said. Creators and agencies often face the same frustrating headache: paying for five different subscriptions, juggling half a dozen browser tabs, and still struggling to get a project done. The US-UK-based company built ImagineLab.art to fix exactly that: one platform, one simple bill, and direct access to leading AI models.

Launched on 7 March 2026 from Sheridan, Wyoming, USA, the platform is designed to reduce the need for multiple software subscriptions and separate tools by allowing users to create, edit, and export multimedia content from one interface. According to the company, ImagineLab.art integrates premium AI models through Google Vertex AI and Fal.ai. The platform includes Google VEO 3.1 for video generation, Imagen 4.0 Ultra for image creation, Nano Banana Pro for infographic design, and a Gemini-powered voice engine for audio and voiceovers.

Editorialge Media said the platform is built for a wide range of users, including independent creators, agencies, business professionals, students, and educators, and that it allows users to move from image generation to animation and voice integration within a single workflow. The project is led by founder and CEO Sukanta Kundu Parthib, whom the company describes as the first Bangladeshi to build a unified multimedia AI creative platform of this kind, with the stated aim of bridging raw human creativity and enterprise-scale AI compute. Unlike platforms that merely aggregate different AI services, ImagineLab.art is engineered for a seamless workflow: a creative team can generate a photorealistic product image, animate it into a cinematic 4K video, add a high-fidelity voiceover, and export the final marketing asset, all within a single session.

The company also introduced a unified billing system through what it calls the Editorialge Token (EDT), a credit-based model intended to simplify usage costs across tools. Payment options currently include bKash in Bangladesh and PayPal and Stripe for international users, with UPI support planned for India, the company said. "With ImagineLab.art, we want to empower creators around the globe. We designed this platform to be user-friendly and efficient," Parthib said.

The company said the platform is supported by a team based across the United States and Southeast Asia: Tapos Kumar is serving as acting CTO, Aushnik Das as deputy CEO and COO, Sayedul Haq Mihir as chief technical adviser, and Gausul Hira as chief coordinator and QA lead. Through the BiTS (Binary Tech Station) strategic partnership, tech lead S. Saha built the unified EDT token economy, while deputy tech lead Md. F. Al Mamun integrated local payment gateways like bKash and UPI for global accessibility. Users can register for a free account and receive welcome tokens by verifying their identity through Meta (Facebook and WhatsApp), giving them a chance to test the platform before buying credits.

fal.ai
Jan 29th, 2026
Grok Imagine is Now Available on fal

Fal.ai Inc. is excited to introduce Grok Imagine, a new multimodal release that brings five new model endpoints to a single creative stack, covering both generation and editing across image and video workflows. With these additions, teams can move faster from idea to polished output, whether they're generating assets from scratch or transforming existing media with precise, instruction-based edits. At the core of this release is a full generation stack that supports text-to-image, image editing, and a full range of video generation and editing workflows. Grok Imagine also adds native audio-video generation, making it possible to create richer, fully synchronized clips without relying on separate tools or post-production stitching. Built for speed and quality, Grok's video models support 480p and 720p generation. This marks xAI's biggest launch of generative models, and this post goes through all of the launched models in detail, analyzing their core strengths and the use cases they unlock.

Key model strengths

Cinematic aesthetic. Grok Imagine's cinematic outputs stand out because the acting reads as believable, the lighting stays physically consistent, and the focus behaves naturally. Characters move with coherent body language and timing, scenes maintain stable exposure and sensible light direction, and the camera's depth of field pulls attention the way you'd expect from a real lens. What's especially useful is that this "cinematic look" holds across both realistic renders and stylized generations: the model keeps the same discipline around exposure, depth of field, and composition even as the art direction changes.

Native audio generation. Grok Imagine can generate video with native audio, so the final output includes sound that is perfectly synchronized with the video, which is useful for building clips that don't need post-processing. Native audio supports dialogue between multiple characters, with distinct turns and pacing that match the scene. Key capabilities include:

  • Natural back-and-forth: clear conversational timing (interruptions, pauses, reactions)
  • Character separation: different voices and tones per speaker, suitable for scenes with two or more characters
  • Scene-aware delivery: dialogue is expressive, and tonality aligns well with the moment

Style adaptation. Grok Imagine's style adaptation is production-ready, especially for anime workflows. In the text-to-image example, Grok Imagine shows strong prompt adherence: the style stays uniform across the entire frame, fine design elements remain coherent, and the final image lands with a clean, high-end aesthetic. In the video example, the anime output holds up in motion, with realistic mouth movement and tight synchronization alongside consistently beautiful visuals.

Advanced world & physics understanding. Grok Imagine shows strong world and physics understanding, producing scenes that feel coherent rather than "animated on top" of reality. It handles motion, timing, and material behavior reliably. In the ball-drop example, the VFX are tightly synchronized with the impacts: each bounce lands with the right cadence, and the effect triggers exactly when the ball contacts the surface. The audio also matches the materials convincingly: a heavier, sharper metallic ring for the metal ball and a denser, clacking marble sound that sells weight and texture. A subtle but telling detail is the ball's reflections: the cameraman's reflection appears on the ball and grows larger as it rolls closer, even though that wasn't specified in the prompt. That kind of "unasked-for" physical correctness is a strong signal that the model is tracking the scene's geometry, viewpoint, and reflective surfaces, not just generating motion frame by frame.

Use case spotlight: video game animation & ads

Grok Imagine is a strong fit for video game content generation, especially when the goal is to produce clips that look and feel like real gameplay. The video examples show a compilation of 20+ distinct game-style clips spanning different characters, camera angles, and environments. What stands out is the consistency of game-specific structure alongside smooth motion: across very different scenes, the animation remains stable and fluid, and common UI elements like the minimap and other HUD components appear in the correct positions and feel naturally integrated. This kind of spatial and layout consistency matters for game content because it preserves the "game look" even as environments and characters change, making Grok Imagine's image and video models a strong fit for video game ad creatives and video ads.

Getting started with Grok Imagine

The easiest way to explore Grok Imagine's capabilities is through fal's Playground, where you can experiment with prompts and see immediate results. A detailed guide on how to integrate Grok Imagine into your platform is available in fal's API documentation. Stay tuned to fal's Reddit, blog, X, or Discord for the latest updates on generative media and new model releases!