Full-Time
Posted on 3/17/2026
Multimodal AI tools for text-to-image
$160k - $250k/yr
San Francisco, CA, USA + 1 more
More locations: New York, NY, USA
Remote
Runway Research provides multimodal AI tools via runwayml.com to help creatives transform text into images, edit and generate visuals from existing media, and produce new video content from prompts. Users access a web platform with ready-made models or the option to train custom models to create image-to-image, text-to-video, and frame-interpolated outputs. It differentiates itself by offering an integrated suite tailored for professionals (filmmakers, brands, enterprises) that supports an end-to-end workflow from concept to final visuals. Its goal is to help customers tell stories faster and more affordably by expanding what is possible with synthetic media, typically through a subscription or usage-based pricing model.
Company Size
201-500
Company Stage
Series E
Total Funding
$861.5M
Headquarters
New York City, New York
Founded
2018
Seedance 2.0 is now on Runway as the viral AI model continues its takeover

Runway is the latest AI video platform to announce that the viral AI video model Seedance 2.0 is now available on its platform. Is the takeover here, or is this just more smoke and mirrors?

Runway Seedance 2.0. Credit: Runway

Apr 07, 2026

Even for those who follow the AI video space, it can be hard to tell exactly which models are live and available at any given time. With the Wild West of AI still very much in place, there seem to be endless combinations of models, platforms, and options to explore. You can call it "AI filmmaking" or "AI slop" or whatever you want, but AI is still here, and the model making the most headlines these days is still ByteDance's Seedance 2.0.

After a brief pause, likely tied to the myriad lawsuits and legal threats levied against its Chinese parent company, Seedance 2.0 appears to be live once again and is being added to more AI platforms each day. The latest is Runway, which has announced that Seedance 2.0 is now live, though only on plans and accounts outside the US for now.

Seedance 2.0 on Runway.

One of the earliest players in the AI video space, Runway is a US-based generative AI company with its own powerful model (Runway Gen 4.5); it also lets AI users bring their entire workflows together inside its app by offering access to the world's best image, video, audio, editing, and language models. Alongside the war to develop the most sophisticated generative AI models, AI companies appear to be battling just as hard to build the platform that AI creators will call home: Runway's offering competes with the likes of Adobe Firefly, Higgsfield, and others to provide access to these different models. The big news here is that Runway has announced that Seedance 2.0 is now on its platform.
In a post on Runway's official social channels, the company shares that users can now "use text, image, video, or audio as inputs to generate stunning multi-shot video sequences with full sound design and dialogue."

Price, availability, and the ever-present ethical debates.

At the end of Runway's announcement, however, the company shared that Seedance 2.0 will be available only on Unlimited plans and Enterprise accounts outside the US. That appears to be consistent with how Seedance 2.0 is being rolled out and made available on other platforms as well.

As the top performer and viral hit, Seedance 2.0 is the current lightning rod for the practical and ethical debates surrounding AI video generation. Since its first generations went viral online, it has appeared that the model was trained on copyrighted materials, and it is likely to face lengthy legal battles in the future. With the AI industry moving at such breakneck speed, however, it also appears clear that the company simply might not care, and is looking to grab as much as it can now before being surpassed by the next model.

It's hard to say what will happen next, but for anyone interested in, or simply terrified of, Seedance 2.0: it is now on another top AI platform, though only on the highest plans and accounts, and not available to those in the US... for now.
Runway launches $10 million venture fund for AI startups

March 31, 2026 at 3:53 PM - by MLQ Agent

Key points:
* Runway announced a $10 million venture fund targeting pre-seed and seed-stage startups in AI, media, and world simulation, with checks up to $500,000.
* The company introduced a Builders program providing eligible startups with 500,000 free API credits and access to its Characters real-time video agent technology.
* Runway has raised nearly $860 million in total funding and holds a $5.3 billion valuation following a $315 million Series E round.
* The fund was seeded by existing investors and partners, building on Runway's prior quiet investments in early-stage founders.

Runway, a New York-based AI video generation company valued at $5.3 billion, has launched a $10 million venture fund to invest in early-stage startups across AI, media, and world simulation. Alongside the fund, Runway rolled out its Builders program, offering free API credits and access to advanced video technology to foster an ecosystem around its platform.

Fund details and investment focus.

The $10 million fund, seeded by Runway's existing investors and partners such as Nvidia and Qatar Investment Authority, will write checks of up to $500,000 for pre-seed and seed-stage companies. Runway's investment thesis targets three areas: technical teams advancing AI architectures, builders creating applications on foundation models, and companies innovating in media creation, storytelling, and distribution. Runway has quietly supported early-stage founders for the past 18 months, and the fund marks a formal expansion of that backing.

Builders program launch.

The Builders program targets seed- to Series C-stage startups, providing 500,000 API credits and access to Characters, Runway's real-time video agent API powered by general world models.
Characters enables interactions with generative AI agents featuring customizable faces and voices, from cartoonish to photorealistic styles. The initiative aims to encourage startups to develop applications using Runway's video intelligence platform, which serves millions of creators and Fortune 500 enterprise clients.

Runway's financial background.

Runway has raised close to $860 million since 2018, including a $315 million Series E round in February 2026 led by General Atlantic at a $5.3 billion post-money valuation, up from $3.3 billion in its prior round. Backers include Nvidia, Adobe Ventures, AMD Ventures, and Fidelity. The company develops video generation models such as Gen-4, known for consistent characters and backgrounds, positioning it as a leader in AI tools for film, advertising, and marketing.

Ecosystem building strategy.

Runway's move to launch its own venture fund reflects a strategic pivot by established AI startups toward ecosystem building, using capital to influence the direction of complementary technologies. With a $5.3 billion valuation and $860 million raised, Runway leverages its position to back innovations in world simulation and media, potentially securing first-mover advantages in emerging applications. This approach mirrors a broader trend of AI leaders investing outward to expand networks and identify talent, as seen in Runway's prior quiet backing of founders. The fund's focus on pre-seed and seed stages with modest $500,000 checks allows targeted bets on high-risk, high-reward ideas without diluting Runway's core operations.

The Builders program complements the fund by lowering barriers to integration with Runway's tools, creating a flywheel effect in which funded startups become reliant users. This dual strategy - financial support paired with technical resources - could accelerate adoption of Runway's Characters API and world models, strengthening its platform moat amid competition in generative video.
By prioritizing technical frontiers and application layers, Runway positions itself not just as a tool provider but as a curator of the next wave of AI-media convergence.

Portfolio expansion timeline.

Runway's fund and Builders program signal intensified competition among AI incumbents to cultivate developer ecosystems, with early applicants likely to emerge from its existing user base of millions of creators. Success will hinge on the performance of initial investments, particularly in world simulation startups that could yield breakthroughs aligned with Runway's research scaling plans. As more AI firms follow suit, differentiation may come from Runway's real-time video agents, potentially driving enterprise contracts in advertising and film.

Looking ahead, the program's 500,000 API credits could spur rapid prototyping of novel applications, providing Runway with data to refine its models. With recent Series E funding earmarked for compute infrastructure and world model pre-training, expect announcements of portfolio companies and expanded program tiers within the next year. This positions Runway to capture value across the AI stack, from foundational research to end-user products, amid a maturing market for generative media tools.

Written with AI assistance, verified and edited by the MLQ team. Questions? Contact MLQ.ai.
Runway has launched a $10 million venture fund to invest in early-stage AI, media and world simulation startups, alongside a Builders programme offering free API credits to seed through Series C companies. The AI video generation startup, valued at $5.3 billion, aims to build an ecosystem around what it calls "video intelligence". The fund will write cheques up to $500,000 for pre-seed and seed-stage companies across three areas: AI architecture development, foundation model applications and new media creation. Runway has already backed startups including LanceDB and Tamarind Bio. The Builders programme provides 500,000 API credits and access to Characters, Runway's real-time video agent API. The founding cohort includes six startups building applications ranging from AI customer support to synthetic media tools, with Runway particularly interested in telemedicine and education use cases.
Runway chooses Modal to power real-time inference for Runway Characters

Today, Modal Labs is announcing a partnership with Runway to power real-time inference for Runway Characters. Runway Characters is a real-time video agent API that lets developers, startups, enterprises, and consumers build fully custom conversational characters. These video agents can have any appearance and any visual style, with full control over voice, personality, knowledge, and actions. Built on Runway's general world model, GWM-1, Characters generates expressive digital personas from a single image, with zero fine-tuning required.

Thousands of organizations are already using Characters, including Fortune 10 technology companies, major Hollywood studios, global advertising agencies, and gaming companies, with use cases ranging from customer support and internal training to experiential advertising and immersive game worlds. Characters represents a first step toward a future of online interaction built around real-time video rather than text.

This kind of continuous, expressive, low-latency video generation, sustained across extended conversations and experiences, requires infrastructure purpose-built for real-time interaction. Modal's serverless compute platform is designed for exactly this type of workload: GPU-intensive, latency-critical, and highly variable in demand. The iteration speed Modal afforded let Runway's team move from proof of concept to production in under 30 days.

"Real-time video inference is a fundamentally different engineering challenge than batch generation, especially given our customers are running these experiences globally," said Kamil Sindi, CTO of Runway. "Runway Characters requires sustained low latency across the full duration of a conversation - expressions, lip-sync, gestures - without degradation. Modal's infrastructure gave us the performance and reliability we need to ship this in every global region, at production scale."
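To make the "custom conversational character" idea concrete, here is a minimal sketch of how a client might assemble a create-character request from a single reference image. All field and endpoint names here are hypothetical illustrations, not Runway's documented schema; see dev.runwayml.com for the real API.

```python
import json

# Hypothetical base URL, for illustration only; the real API lives at dev.runwayml.com.
API_BASE = "https://api.example.com/v1/characters"

def build_character_request(image_url: str, voice: str, personality: str,
                            style: str = "photorealistic") -> dict:
    """Assemble a create-character payload: one reference image, no fine-tuning.

    Field names are illustrative placeholders, not Runway's documented schema.
    """
    return {
        "reference_image": image_url,   # single image; the article notes zero fine-tuning
        "voice": voice,                 # full control over voice...
        "personality": personality,     # ...personality, knowledge, and actions
        "style": style,                 # e.g. "cartoonish" or "photorealistic"
        "realtime": True,               # request a low-latency streaming session
    }

payload = build_character_request(
    image_url="https://example.com/avatar.png",
    voice="warm-neutral",
    personality="Helpful customer-support agent for an airline.",
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the shape of the workflow the article describes: one image in, a fully configured real-time agent out, with appearance, voice, and behavior as request parameters rather than training artifacts.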
Achieving the latency required for real-time interaction means distributing inference across multiple GPUs with high-bandwidth communication between nodes. By adding a single line of code on Modal, Runway can turn its containers into multi-node GPU clusters with RDMA networking, available instantly across every region. Modal deploys these workloads across geographies as a single unified pool, routing them close to users and scaling on demand, so Runway can serve users anywhere without pre-provisioning or managing regional infrastructure directly.

"Runway is pushing the frontier for what's possible with world models, which requires running complex models at large scale with very low latency. This is something Modal does extremely well," said Erik Bernhardsson, CEO of Modal. "We're proud to be the infrastructure powering Characters."

Runway Characters is available today to all developers and businesses at dev.runwayml.com, and to consumers at runwayml.com. Enterprise teams can reach out to learn more about deploying custom avatar experiences at scale.
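The "single unified pool" routing behavior described above can be sketched as a toy model: given measured user-to-region latencies and current GPU capacity, pick the lowest-latency region that can still take the job. This is an illustration of the behavior, not Modal's implementation; the region names and latency numbers are invented.

```python
# Toy model of region-aware routing over a unified worker pool.
# An illustration of the behavior described in the article, not Modal's code.
REGION_LATENCY_MS = {  # hypothetical measured user -> region round-trip latencies
    "us-east": 18,
    "eu-west": 92,
    "ap-southeast": 160,
}

def route(user_latencies: dict, capacity: dict) -> str:
    """Pick the lowest-latency region that still has free GPU capacity."""
    candidates = [(lat, region)
                  for region, lat in user_latencies.items()
                  if capacity.get(region, 0) > 0]
    if not candidates:
        raise RuntimeError("no capacity in any region; scale up the pool")
    return min(candidates)[1]

# us-east is full, so the request falls through to the next-closest region.
print(route(REGION_LATENCY_MS, {"us-east": 0, "eu-west": 4, "ap-southeast": 8}))
# -> eu-west
```

The appeal of the managed version is that this routing, plus the scaling that keeps `capacity` nonzero, is handled by the platform rather than by per-region infrastructure the application team must pre-provision.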
Creative trends: 5 signals shifting AI commercial production this week

For teams shipping AI video commercials now, these were the highest-signal moves in the week ending April 3, 2026 across AI ad creation, AI agents for marketing, generative video production, and AI filmmaking workflows.

Watch first: OpenAI's ads + commerce context before the trend breakdown. Before the trend sections, open these primary-source posts so your team can evaluate the ad and commerce shifts in context.

1) AI ad creation moved closer to transaction intent, not just awareness

What changed this week: OpenAI's March 24, 2026 product discovery launch made shopping responses more visual and comparison-led, positioning conversational interfaces as a performance layer, not just top-funnel discovery.

Why it matters commercially: AI advertising agency teams can now brief creative for decision-stage moments inside chat, where users are actively comparing options. That changes copy structure, proof requirements, and CTA timing for AI video commercials and companion assets.

Apply now: For your next AI commercial production sprint, generate one "comparison-first" variant per concept: fewer slogans, clearer product deltas, and a tight value ladder designed for in-conversation decisioning.

2) Sponsored units inside chat are becoming a real media format

What changed this week: OpenAI's ad test framework (announced February 9, 2026, with staged expansion in subsequent weeks) is now concrete enough for brand planning: sponsored units are clearly labeled and intentionally separated from core answers.

Why it matters commercially: This opens a new operating lane for AI commercial production teams: response-adjacent placements that need utility-first creative, not traditional interruption logic.

Apply now: Build one chat-native ad spec in every campaign pack: short SKU-rich copy, one direct utility promise, and one trust cue in the first sentence.
3) Privacy controls are now part of creative performance strategy

What changed this week: As ad testing expands, control surfaces for ad history and personalization are now visible product behavior, not buried policy text.

Why it matters commercially: AI agents for marketing and media teams need trust-forward creative systems. If people can inspect and adjust ad settings in one tap, opaque targeting language will underperform against both compliance and conversion goals.

Apply now: Add a transparency line to your creative checklist: what data is being used, what is not, and what the user can control immediately.

4) Visual merchandising aesthetics are being encoded as reusable prompts

What changed this week: OpenAI's product-discovery workflows show prompt-driven taste segmentation (muted vs. bolder looks) as a first-class interaction pattern.

Why it matters commercially: For AI filmmaking and AI video commercials, this is a practical signal: creative direction is increasingly expressed as controllable style parameters, not only static brand guidelines.

Apply now: Convert your style board into prompt language with three explicit lanes (safe, stretch, experimental), then generate cut variants for each lane before final edit lock.

5) Generative video production is becoming API-native infrastructure for brands

What changed this week: Runway launched Runway Builders and Runway Fund on March 31, 2026, while publishing a Sora deprecation notice for April 3, 2026 on its platform.

Why it matters commercially: This is an infrastructure signal for AI commercial production: teams are moving from one-off tools toward programmable model layers, while model availability can shift quickly. Reliability planning and fallback model strategy are now creative operations work.

Apply now: In every generative video production brief, include a model contingency plan (primary + backup), continuity QA criteria, and a weekly model-availability check owned by production.
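The contingency-plan advice in point 5 amounts to an ordered fallback list checked against current availability. A minimal sketch, with model names and the availability check as illustrative placeholders:

```python
# Sketch of a model contingency plan for a generative video brief:
# try the primary model, then fall back in order if it is unavailable.
# Model identifiers below are illustrative, not platform-verified strings.
MODEL_PLAN = ["seedance-2.0", "runway-gen-4.5", "in-house-backup"]

def pick_model(available: set, plan: list = MODEL_PLAN) -> str:
    """Return the first model in the plan that the platform currently serves."""
    for model in plan:
        if model in available:
            return model
    raise RuntimeError("no model in the contingency plan is available")

# Result of this week's availability check (hypothetical): primary is down.
available_today = {"runway-gen-4.5", "in-house-backup"}
print(pick_model(available_today))
# -> runway-gen-4.5
```

In practice `available_today` would come from the weekly model-availability check the brief assigns to production, and a model switch would also trigger the continuity QA criteria, since two models rarely render the same scene identically.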
Need this translated into an AI advertising agency operating system? Vertical Haus designs and runs AI commercial production workflows across concepting, AI filmmaking, channel adaptation, and AI agents for marketing execution.