Full-Time
Autonomous AI agents automate software lifecycle
No salary listed
Locations: London, UK | San Francisco, CA, USA | New York, NY, USA
In Person
SF office requires five days on-site; no remote option indicated.
Factory.ai offers an agent-native development platform that uses autonomous AI assistants called Droids to automate tasks across the software development lifecycle, such as refactoring, bug fixes, incident response, and migrations. The Droids plug into developers’ existing tools—IDE, command line, and CI/CD pipelines—and can work with models from OpenAI, Anthropic, and Google, without requiring changes to tools or models. It targets enterprise engineering teams with token-based billing and features like SSO, audit logs, and on‑premise options, and it has customers including Nvidia, Adobe, MongoDB, Bayer, and Zapier. The goal is to raise engineering velocity by letting engineers focus on high-level design while Droids handle repetitive work and automate routine development tasks.
Company Size
501-1,000
Company Stage
Series C
Total Funding
$220M
Headquarters
San Francisco, California
Founded
2023
Hybrid Work Options
Flexible Work Hours
Remote Work Options
Paid Vacation
Wellness Program
Mental Health Support
AI news week: The Great Compression. Kimi K2.6 ships open-weight at frontier quality, GPT-5.5 resets the foundation at 2x the price, DeepSeek V4 lands from China, and the UN finally opens a global AI governance dialogue. Apr 24, 2026

Something fundamental shifted in AI this week. Not in a slow, incremental way - but all at once, across three separate announcements, from three separate teams, on three separate days. The gap between closed frontier models and open-weight alternatives collapsed. The economics of building with AI were renegotiated in real time. And the community watching closest - builders, founders, researchers, and practitioners - felt it immediately. This was the week of the Great Compression. Here is what happened, why it matters, and what comes next.

Kimi K2.6: open weights finally reach the frontier.

On Monday, Moonshot AI dropped Kimi K2.6, a 1-trillion-parameter mixture-of-experts model that ships fully open-weight. That alone would be notable. What made it extraordinary was the performance: K2.6 holds its ground against Claude Opus 4.6 on the benchmarks that matter most for agentic work - HLE with Tools (54.0), SWE-Bench Pro (58.6), Terminal-Bench 2.0 (66.7), and DeepSearchQA (92.5). It supports swarms of up to 300 sub-agents across 4,000 coordinated steps and can sustain autonomous coding runs of 12 or more hours. The context window is 256K tokens. The price: roughly $0.95 per million input tokens and $4.00 per million output tokens - approximately 10% of what Claude Opus 4.6 costs. Factory AI and OpenCode integrated K2.6 within hours of release.

For the AIBUBEN community and for anyone building production agents, this changes the calculus. Open weights at frontier quality means you can self-host, fine-tune, and own your stack end to end - without paying closed-model pricing. The question is no longer which frontier model to use. It is whether you still need a closed one at all.

GPT-5.5: OpenAI resets the foundation.
On Thursday, OpenAI released GPT-5.5, and the headline is not incremental improvement - it is a complete architectural reset. GPT-5.5 is the first fully retrained base model since GPT-4.5. The architecture, pretraining corpus, and agent-oriented objectives have all been rebuilt from scratch. The practical result: GPT-5.5 is significantly stronger at analyzing data, writing and debugging code, operating software, researching online, and producing long-form documents. It ships with a 1 million-token context window and is now available to ChatGPT Plus, Pro, Business, and Enterprise users, as well as through the API.

The pricing tells its own story. GPT-5.4 was $2.50 input / $15 output per million tokens. GPT-5.5 is $5 / $30 - a 2x increase, the largest single-release price jump OpenAI has made in the GPT-5.x series. OpenAI is betting that the capability improvement justifies the cost. Given that K2.6 now exists at a tenth of the price, that is an interesting bet to make.

OpenAI also held a livestream on Monday teasing expanded Codex capabilities and enterprise deployment rails - a counterpunch to the K2.6 release that arrived the same morning. The pace of response in this market is now measured in hours.

DeepSeek V4: China's flagship returns.

On Friday, DeepSeek dropped preview versions of V4, its long-awaited flagship successor. The release comes almost exactly one year after DeepSeek's original emergence upended Silicon Valley's assumptions about what Chinese AI labs could do. V4 ships in two variants - Flash and Pro - and introduces a Hybrid Attention Architecture that improves long-context memory across extended conversations. DeepSeek claims V4 Pro Max delivers superior performance on standard reasoning benchmarks relative to GPT-5.2 and Gemini 3.0-Pro, falling only marginally short of GPT-5.4 and Gemini 3.1-Pro. The context window is 1 million tokens. Pricing is aggressive: $0.14 / $0.28 per million tokens for Flash, and $1.74 / $3.48 for Pro.
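The per-million-token prices quoted above make the cost gap concrete. As a quick illustrative sketch (the model names and prices below are taken only from this article's figures, not from any official price sheet), the cost of a single agentic request can be compared directly:

```python
# Per-million-token (input, output) prices in USD, as quoted in this article.
PRICES = {
    "Kimi K2.6": (0.95, 4.00),
    "GPT-5.4": (2.50, 15.00),
    "GPT-5.5": (5.00, 30.00),
    "DeepSeek V4 Flash": (0.14, 0.28),
    "DeepSeek V4 Pro": (1.74, 3.48),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request under the quoted pricing."""
    inp_price, out_price = PRICES[model]
    return (input_tokens * inp_price + output_tokens * out_price) / 1_000_000

# Example: a hypothetical agentic coding step with 50K tokens in, 5K tokens out.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 5_000):.4f}")
```

At these figures, the same request costs $0.40 on GPT-5.5 versus about $0.07 on K2.6 - roughly a sixth of the price, before any self-hosting savings the open weights make possible.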
DeepSeek V4 arriving the same week as GPT-5.5 and Kimi K2.6 is not a coincidence - it is the competitive flywheel of modern AI, spinning faster than ever. And with Huawei's Ascend AI cluster confirmed as the hardware backbone for V4, this is also a story about China's growing infrastructure independence from Nvidia.

The policy moment: UN opens global AI governance dialogue.

While the model race dominated headlines, a quieter but equally significant event took place in Türkiye this week. The United Nations held its first Global Dialogue on AI Governance, co-hosted alongside the AI for Good Innovation Factory. The significance here is timing. For years, the pace of AI capability development has vastly outrun the pace of policy response. This week's UN dialogue - the first of its kind at a global, multilateral level - signals that the gap may finally be closing. Discussions focused on safety frameworks, equitable access, and the governance of agentic systems.

For the AIBUBEN community building in Armenia and across the region, this is worth watching closely. International AI governance frameworks will shape the regulatory environment for every builder, every startup, and every organization working with these tools. Getting a seat at the table - even indirectly, through community engagement and advocacy - matters now.

MIT maps the terrain: 10 things that matter in AI right now.

MIT Technology Review unveiled its first-ever "10 Things That Matter in AI Right Now" list at the EmTech AI conference this week. The selections offer a useful map of where the field's most serious observers see the action: AI companions and social agents, mechanistic interpretability, hyperscale data centers, and agentic coding all made the list. What is striking is not any single item but the overall shape of the list.
The emphasis on interpretability and companions alongside infrastructure signals a field that is simultaneously scaling its capabilities and beginning to grapple seriously with what it means to deploy AI in human lives. These are not separate conversations anymore - they are the same one.

What to watch next week.

DeepSeek's V4 Flash and Pro are currently in preview; a full public release and API rollout are expected imminently. OpenAI's Codex enterprise expansion, teased in Monday's livestream, should bring more detail on how the company plans to capture the agentic coding market in the wake of K2.6's pricing challenge. And the UN AI Governance Dialogue's working-group outcomes, expected to be published in the coming days, will be worth reading closely for any builder thinking about the medium-term regulatory environment. Expect at least one more major model announcement before the week is out - the current release cadence leaves little room for pauses.

Closing.

The AIBUBEN community exists precisely for weeks like this one. Weeks where the landscape shifts fast enough that staying informed is itself a competitive advantage - where knowing the difference between a preview and a full release, between benchmark marketing and real-world capability, between a governance dialogue and binding regulation, is what separates builders who move confidently from those who freeze. AIBUBEN covers this because it matters, and because you deserve analysis, not just headlines. Keep building, keep questioning, and AIBUBEN will be back Monday. See you next week.
Last week in AI: enterprise focus drives $50B+ valuations.

The AI landscape this week showcased a decisive shift toward enterprise applications, with massive funding rounds, strategic pivots, and infrastructure challenges shaping the industry's trajectory.

AI chip startup Cerebras filed for IPO targeting a $35B valuation, becoming the first major AI hardware company to go public in the generative era. The company's strong AWS partnership and reported $10B+ OpenAI deal signal massive enterprise demand for specialized AI infrastructure, potentially opening floodgates for AI hardware investments.

AI coding platform Cursor is in talks for a massive funding round that would value it at $50B, with a16z and Thrive expected to lead. The unprecedented valuation reflects exploding enterprise demand for AI development tools and positions Cursor as a formidable competitor to GitHub Copilot in the rapidly expanding AI coding market.

Anthropic CEO Dario Amodei met with the White House Chief of Staff after unveiling Claude Mythos, a restricted cybersecurity AI model capable of discovering zero-day vulnerabilities. The model's dual-use capabilities are attracting government interest while raising concerns about AI in warfare, as relations with the Trump administration appear to be thawing despite Pentagon tensions.

OpenAI enhanced its Codex development platform with desktop app control, in-app browsing, and enterprise-grade sandbox execution. The updates directly compete with Anthropic's Claude Code and signal OpenAI's aggressive pivot toward enterprise development tools, with stronger governance controls targeting business buyers' security concerns.

Tesla launched autonomous robotaxi operations in Dallas and Houston, expanding beyond Austin to serve three Texas cities. This represents the largest commercial rollout of fully driverless vehicles in the US, signaling accelerating adoption of autonomous AI systems in transportation and validating the commercial viability of self-driving technology.

Enterprise AI coding startup Factory secured $150M led by Khosla Ventures, reaching a $1.5B valuation after just three years. The funding highlights the massive enterprise opportunity for AI development tools, positioning Factory to compete with established players like GitHub Copilot and emerging challengers like Cursor.

OpenAI introduced GPT-Rosalind, a specialized frontier model for drug discovery, genomics, and protein analysis. The model targets the $2T+ life sciences market and represents OpenAI's strategy to build vertical-specific AI systems for high-value enterprise applications, as reported by Axios, marking a shift beyond general-purpose chat applications.

DRAM shortages affecting AI chip production may persist until 2030, with suppliers meeting only 60% of demand by 2027. The crisis is already forcing GPU production cuts and higher prices, potentially constraining the AI boom as memory becomes a critical bottleneck, with industry analysts warning of prolonged supply constraints.

OpenAI discontinued its Sora video generation tool and lost key executives including Sora team leader Bill Peebles and product chief Kevin Weil. The moves reflect OpenAI's strategic shift away from consumer "side quests" toward enterprise-focused coding and business applications, with leadership departures signaling a major strategic pivot underway.

The enterprise-first momentum is unmistakable, but supply chain constraints could test the industry's ambitious expansion plans. Leaders should prepare for both unprecedented opportunities and potential infrastructure bottlenecks ahead.
Wipro partners with Factory to advance agent-native software development for enterprises.

Wipro has announced a strategic partnership with Factory aimed at supporting enterprise adoption of agent-native software development. Separately, Wipro Ventures confirmed its participation in Factory's recent funding round. The collaboration is focused on enabling engineering teams to operationalise AI-driven development workflows by delegating portions of the software lifecycle to autonomous agents. Factory's platform allows organisations to assign tasks such as feature development, code refactoring, migrations and testing to AI agents - referred to as "Droids" - while maintaining engineering standards and architectural consistency.

Under the agreement, Wipro will integrate Factory's capabilities into its WEGA agent-native delivery platform, extending the company's broader AI portfolio. Factory's tools are expected to be deployed across a large base of engineers, with the objective of accelerating production-ready code creation and shortening development cycles. Wipro also plans to offer Factory-enabled solutions to clients across sectors including banking and financial services, healthcare, manufacturing, retail and technology.

Sandhya Arun, Chief Technology Officer at Wipro, said the partnership reflects a broader transition among enterprises from AI experimentation to production-scale implementation within engineering functions. Ali Wasti, Managing Partner at Wipro Ventures, noted that organisations are under increasing pressure to accelerate innovation while maintaining security and code quality, adding that the investment aligns with the firm's focus on enterprise AI platforms. Matan Grinberg, co-founder and CEO of Factory, said the collaboration combines Factory's agent-native development platform with Wipro's enterprise relationships and engineering capabilities to support improvements in software delivery performance.
Wipro has partnered with Factory, an agent-native software development platform, to accelerate AI-driven software development for global enterprises. Wipro Ventures also participated in Factory's recent funding round. Factory's platform enables engineering teams to delegate software development tasks to AI agents called Droids, which handle feature development, refactoring, migrations and testing whilst maintaining engineering standards. Wipro will integrate Factory's capabilities into its WEGA platform and deploy it across tens of thousands of engineers. The partnership will offer Factory-enabled solutions to clients across banking, healthcare, manufacturing, retail and technology sectors. Factory, founded in 2023 and based in San Francisco, is backed by investors including Sequoia Capital, NEA, NVIDIA and J.P. Morgan. Wipro Ventures manages over $500 million in assets focused on enterprise software startups.