Full-Time

Senior Social Media Manager

LTX Model

Posted on 11/10/2025

Lightricks

501-1,000 employees

Mobile photo-editing tools for creators

Compensation Overview

$100k - $135k/yr

+ Stock options + 401(k) match

New York, NY, USA

Hybrid

Hybrid role; on-site presence in New York City.

Category
Social Media
Requirements
  • 3+ years managing social media or community channels for a tech product, creative platform, or AI-first company.
  • Experience operating social tools (e.g., scheduling platforms, analytics dashboards, creative software like Canva, Figma, or Descript).
  • Sharp writer and content creator with a track record of building followings or shaping narratives in public.
  • Strong understanding of generative AI, open-source ecosystems, or creative tech - especially within X/Twitter, Reddit, and Discord culture.
  • Confident working across formats: motion clips, memes, live-tweeting, developer content, community spotlights, platform-native video.
  • Comfortable working cross-functionally across time zones - especially with Product Marketing, Developer Advocacy, R&D, and Creative.
  • Bonus: you’ve helped grow a brand from niche to known - and you’ve got the screenshots to prove it.
Responsibilities
  • Define and execute the social strategy for LTX-2 - including platform mix, tone of voice, content types, and publishing cadence.
  • Develop a distinctive social voice for LTX-2 that balances technical credibility with creative accessibility - and stands out in a crowded GenAI landscape.
  • Operate day-to-day social accounts (X/Twitter, LinkedIn, TikTok, Instagram, Reddit, YouTube Shorts), including scheduling, moderation, posting, and escalation.
  • Collaborate with the Creative AI Marketing team to shape and ship original social content. Own the briefs, creative direction, and narrative.
  • Partner with PMM, Developer Advocate, and R&D teams to translate technical milestones into public-facing stories and assets.
  • Showcase what the community is building with LTX-2 - amplifying UGC, hackathon winners, open-source demos, and research experiments.
  • Identify and build relationships with key voices in GenAI: researchers, toolmakers, AI artists, and ecosystem partners.
  • Work closely with the Developer Advocate to stay ahead of shifts in sentiment, technical feedback, and community requests.
  • Track and report on platform-level KPIs: engagement, reach, growth, sentiment, conversion, and community health.
  • Test and optimize across formats - balancing short-form punch, thought leadership, motion-driven engagement, and educational series.
  • Inform the broader marketing and product strategy with insights from audience behavior, cultural trends, and competitor moves.
Desired Qualifications
  • Familiarity with AI model launches, GenAI discourse, and diffusion model frameworks
  • You’ve contributed to a social-led campaign that shaped product perception or drove measurable adoption.
  • Experience hosting or promoting livestreams, AMAs, hackathons, or major launches via social channels.
  • Knowledge of analytics tools and comfort setting OKRs for brand and community growth.

Lightricks creates mobile creator tools for photo and video editing, starting with Facetune and expanding into a suite of apps that help users produce professional-grade content on smartphones. Its products work by applying computer graphics and AI-powered editing features—such as retouching, filters, and other enhancements—through intuitive, touch-based interfaces so users can edit images and videos quickly and visually. The company differentiates itself by combining deep technical know-how in graphics and AI with user-friendly design, offering a broad range of tools in one ecosystem to empower creators, rather than just offering a single app. Its goal is to enable people around the world to turn their ideas into polished digital content using accessible, powerful mobile editing tools.

Company Size

501-1,000

Company Stage

Series D

Total Funding

$335M

Headquarters

Jerusalem, Israel

Founded

2013

Simplify Jobs

Simplify's Take

What believers are saying

  • $375M funding fuels multimodal AI models amid talent retention in Israel.
  • LTX-2 outperforms Sora 2 and Veo 3.1 in speed on consumer GPUs at 4 cents per second.
  • CES 2026 Nvidia partnership drives RTX ecosystem adoption for studio workflows.

What critics are saying

  • OpenAI's Sora 2 erodes 6.6M users via viral app and physics features in 6-12 months.
  • Google's Veo 3.1 captures enterprise contracts with longer clips in 12-18 months.
  • ByteDance's Seedance floods market via TikTok, undercutting API pricing in 3-6 months.

What makes Lightricks unique

  • Lightricks founded in 2013 by Hebrew University PhDs from Unit 8200 builds AI video on 13-year mobile editing expertise.
  • LTX-2 open-source model runs 4K video locally on Nvidia RTX GPUs, eliminating cloud data privacy risks.
  • Proprietary LTXV-13B generates videos 30X faster than rivals using licensed Getty and Shutterstock data.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

Life Insurance

401(k) Company Match

Stock Options

Unlimited Paid Time Off

Paid Vacation

Hybrid Work Options

Professional Development Budget

Wellness Program

Growth & Insights and Company News

Headcount

6 month growth

0%

1 year growth

2%

2 year growth

2%
Ajla Karajko
Feb 13th, 2026
The race to make space babies has begun

The race to make space babies has begun. Startups and researchers are now competing to see if humans can safely conceive, carry pregnancies, and raise children off Earth - a key requirement for permanent bases on the Moon and Mars. Biotech startup SpaceBorn United is developing a mini-IVF lab for embryos in orbit, with the first non-human prototype already launched aboard a SpaceX rocket. Early experiments with mouse embryos in space show that development is possible, but with higher risks of failure and potential DNA damage. Ethicists warn that commercial space stations could become a "Wild West" for high-risk reproductive experiments. While the risks are enormous, plans from SpaceX, Blue Origin, and national space agencies for lunar and Martian settlement mean the concept of space babies is slowly taking shape.

In Brief: Tech World Highlights
  • Microsoft renamed its Office 365 productivity suite to the Microsoft 365 Copilot app, using the same branding as its AI assistant.
  • Nvidia showcased the Rubin platform at CES 2026, combining six new chips into a single AI supercomputer, offering five times more training power than the Blackwell line.
  • Liquid AI released LFM 2.5, a new family of SOTA open-weight AI models for devices, covering text, image, and audio, outperforming similarly sized competitors on benchmarks.
  • Lightricks unveiled the open-source LTX-2, an AI video system capable of generating native 4K content with synchronized audio and detailed camera/motion control.
  • AMD CEO Lisa Su stated at CES 2026 that global AI users will exceed 5 billion in the next five years, and computing power will need to increase 100-fold to meet demand.

AI Trending Tools:
  • Copilot Checkout - Enables completing purchases directly within Microsoft Copilot.
  • Unwrap Customer Intelligence - Gain AI insights from unstructured customer feedback to guide product development.
  • Claude Cowork - Brings Claude's agent capabilities to everyday tasks.

Ajla Karajko
Feb 12th, 2026
AI safety report finds risks are no longer theoretical

AI safety report finds risks are no longer theoretical. More than 100 AI experts have published the second International AI Safety Report, with Yoshua Bengio as the lead author, warning that threats such as deepfake scams and biological weapons are no longer hypothetical but are appearing in the real world. The authors highlight growing evidence of AI being used for cyberattacks, manipulation, criminal activities, and deepfake fraud. They also warn about the rising use of AI assistants, citing studies that link their use to increased loneliness and decreased social interaction.

The report emphasizes that AI systems sometimes behave differently in safety tests than in the real world, which can lead to loss of control and make oversight more difficult. While the findings are supported by more than 30 countries, the US, despite past involvement, chose not to contribute to this year's report. What is particularly concerning is how much the risks have shifted from theoretical to real-world in just 12 months, while the US withdrawal from this process remains a key fact to monitor.

Technology Form
Jan 8th, 2026
Nvidia Just Made AI Video Run on Your Laptop. Studios Will Care

Nvidia just made AI video run on your laptop. Studios will care. Most AI video tools need cloud servers to work. Your laptop simply lacks the horsepower to generate clips without melting down. That just changed. Lightricks unveiled a new AI video model at CES 2026 that runs entirely on Nvidia-powered devices. No cloud required. Plus, it's open-weight, meaning developers can peek under the hood and modify the model for their needs. For creators worried about data privacy and studios protecting intellectual property, this matters more than better prompts or longer clips.

Why on-device video generation is rare

Generating AI video eats computational power like nothing else. A single 5-second clip demands more processing than thousands of image generations. So most video models offload the work to massive data centers. Google's Veo 3 and OpenAI's Sora run on server farms packed with specialized chips. Your prompt gets sent to the cloud, processed on their hardware, then sent back to you.

This works fine for casual users. But it creates problems for professionals. Every prompt you send shares data with the company running the model. That data might train future versions of their AI. For entertainment studios or corporate creators, that's a dealbreaker. Besides, cloud processing adds latency. The typical AI video prompt takes 1-2 minutes to generate. Half that time is just network overhead - uploading your request, downloading the result, waiting in the queue.

LTX-2 changes the math

Lightricks built its second-generation model, LTX-2, specifically to run on Nvidia RTX chips. Those are the graphics cards already powering gaming PCs and professional workstations. The specs look competitive with cloud-based rivals. The model generates clips up to 20 seconds long at 50 frames per second. That's on the longer end of current AI video capabilities. It also outputs in 4K resolution with native audio built in.

More importantly, everything happens locally. Your prompts never leave your machine. The model processes entirely on your GPU. Results appear faster because there's no network bottleneck. Moreover, the model is open-weight and available now on HuggingFace and ComfyUI. Developers can download it, inspect the architecture, and fine-tune it for specific use cases. That's unusual for video models, which typically stay locked behind proprietary APIs.

What open-weight actually means

"Open-weight" sits between fully closed and truly open-source AI models. It's not as transparent as open-source, which requires disclosing training data, code, and everything else. But it reveals far more than closed models. Think of AI model weights like ingredients in a recipe. A closed model is like a restaurant that won't even tell you what's in the dish. An open-weight model lists all the ingredients but not the exact measurements. A truly open-source model gives you the complete recipe with instructions.

So developers can see how LTX-2 was constructed. They can understand which techniques it uses for motion consistency, temporal coherence, and detail preservation. Then they can modify those components for their specific needs. In fact, studios could fine-tune the model on their own footage styles without sharing that proprietary data with Lightricks. The training happens entirely in-house using the open weights as a foundation.

Why studios will pay attention

Entertainment studios have been cautious about generative AI. Many see potential for concept art, storyboarding, and pre-visualization. But they're terrified of IP leakage. Cloud-based video models create legal headaches. When you send a prompt, you're uploading data to someone else's servers. The model might learn from your prompts. Worse, other users might accidentally generate content similar to your unreleased projects. On-device processing eliminates that risk. Your data never leaves your network. The model can't leak what it never sees. For studios developing billion-dollar franchises, that security matters more than any feature improvement.

Plus, on-device models scale differently than cloud services. Cloud pricing grows with usage - more clips mean higher bills. Local processing has upfront hardware costs but minimal variable expenses. Generate 10 clips or 10,000, the cost stays flat. That pricing structure favors high-volume professional use over casual experimentation. Which explains why Lightricks positioned this model for "professional creators and big studios" rather than hobbyists.

The Nvidia advantage

This model only works because of Nvidia's RTX architecture - specifically, the tensor cores designed for AI workloads. Standard graphics cards can technically run AI models. But they're painfully slow without specialized AI acceleration hardware. Nvidia's RTX chips include dedicated tensor cores that handle the matrix math required for AI at dramatically higher speeds. So Lightricks optimized the model to leverage those tensor cores efficiently. The result runs fast enough for practical use - not just technically possible but actually usable.

However, you'll still need high-end hardware. Lower-end RTX cards might struggle with 4K output or longer clips. The model scales with available GPU memory and compute power. Nvidia showcased this at CES alongside other AI announcements. They're clearly positioning RTX as the platform for local AI workloads - not just for gaming but for professional creative applications.

What's missing from the announcement

Lightricks didn't share concrete performance numbers. How fast does this actually generate video compared to cloud alternatives? What's the quality-versus-speed tradeoff? They also didn't specify minimum hardware requirements. Which RTX cards work? Do you need top-tier 4090s or will mid-range 4070s suffice? And there's no pricing information yet. Is this a one-time purchase? Subscription? Free for non-commercial use? The business model matters almost as much as the technical capabilities. Still, the core promise is clear: high-quality AI video generation without cloud dependencies. That's been the industry unicorn since video models launched.

Where this goes next

On-device AI video is early days. LTX-2 is a proof of concept more than a finished product. But it proves the concept works. Expect competitors to follow. Adobe, Runway, and others have strong incentives to offer local processing options. Studios will demand it. Regulatory pressure around data privacy will accelerate adoption. However, cloud models won't disappear. They'll stay relevant for users without high-end hardware or for use cases that don't require data privacy. The industry will split into cloud-based consumer tools and on-device professional options.

For creators, this means more control and better security. But also higher upfront costs and new technical requirements. You'll need to actually understand your hardware rather than just paying for cloud credits. That tradeoff will appeal to serious professionals. Hobbyists will probably stick with cloud services. Which is exactly what Lightricks intended.
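The flat-cost claim above is simple breakeven arithmetic: an upfront GPU purchase beats per-clip cloud pricing once you generate enough clips. A minimal sketch - every dollar figure here is hypothetical, chosen only to illustrate the tradeoff the article describes, not a quote from any vendor:

```python
import math

# Breakeven between cloud (pay-per-clip) and local (upfront GPU) generation.
# All prices are hypothetical illustrations, not vendor quotes.

def breakeven_clips(gpu_cost: float, cloud_price_per_clip: float) -> int:
    """Number of clips at which an upfront GPU purchase beats paying per clip."""
    return math.ceil(gpu_cost / cloud_price_per_clip)

# Hypothetical figures: a $1,600 RTX card vs. $0.50 per cloud-generated clip.
print(breakeven_clips(1600.0, 0.50))  # → 3200
```

Past the breakeven point, each additional local clip is effectively free, which is why the economics favor high-volume studio use.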

Techtime News
Jan 7th, 2026
Lightricks Goes Open Source with LTX-2, Taking on Big Tech in AI Video

Lightricks goes open source with LTX-2, taking on big tech in AI video. Unlike closed models such as Sora and Veo, Lightricks is releasing not only the model itself but also its weights and training code.

[Photo: Lightricks CEO and co-founder Dr. Zeev Farbman. Credit: Riki Rahman.]

Lightricks announced at CES the full open-source release of its generative video-and-audio model, LTX-2, including model weights and training code. The move is unusual in a market where advanced video models are largely controlled by closed cloud platforms. Announced in partnership with NVIDIA, the launch positions Lightricks as an open alternative to approaches led by companies such as OpenAI and Google, and signals a potential shift in how generative video technology is deployed and adopted.

LTX-2 can generate synchronized video and audio at up to 4K resolution, with clip lengths of up to 20 seconds and high frame rates. The model is optimized to run locally on RTX-powered workstations as well as on enterprise DGX systems, and is positioned as production-ready rather than a research demo. Unlike closed platforms such as Sora or Veo, Lightricks allows developers and organizations not only to use the model, but also to retrain, customize, and integrate it directly into products and internal workflows.

While open video models already exist, most suffer from significant limitations, including lack of audio, lower visual quality, or poor suitability for commercial use. LTX-2 is the first to combine full open-source availability with capabilities designed for real-world production, positioning it as a bridge between open research and the operational needs of the media and creative industries.

Lightricks is an Israeli company best known for its popular creative and editing apps, including photo and video tools used by millions of users worldwide. In recent years, the company has been expanding beyond consumer applications into the development of AI models and creative infrastructure aimed at professional creators and enterprise customers.

Behind the decision to open-source the model lies a clear business strategy. Lightricks is giving up exclusive control over the core technology in order to establish it as a standard platform others can build on. Rather than monetizing usage of the model itself, the company is positioning LTX-2 as the foundation for commercial tools, platforms, and paid services developed on top of it. The approach mirrors familiar open-source business models in which economic value is created around the code rather than within it.

NVIDIA is not involved in developing the model itself, but plays a central role in positioning LTX-2 as a natural workload for RTX hardware and DGX systems. The partnership reflects a broader vision in which advanced generative video can and should run outside the cloud, on local workstations and within enterprise environments.

The release of LTX-2 reflects a broader shift in the generative video market, from closed models optimized for demonstrations and limited cloud-based access, toward open infrastructure designed for deep adoption and large-scale product development. Rather than focusing on producing the most eye-catching demo, Lightricks is aiming to provide the foundation on which the next generation of video creation tools will be built.

CRYPTOMERIA LABS PTE. LTD.
Oct 23rd, 2025
Lightricks Competes With OpenAI, Google, And ByteDance In AI Video Market

Lightricks competes with OpenAI, Google, and ByteDance in AI video market. Lightricks has launched its new LTX-2 AI video model, claiming superior speed, 4K capability, cost efficiency, and licensed content use, entering a competitive market alongside Google's Veo 3.1, OpenAI's Sora 2, and ByteDance's Seedance 1.0.

AI company Lightricks introduced its new video generation model, LTX-2, claiming it surpasses competitors in speed and efficiency. The fully open-source model can reportedly produce a six-second Full HD clip in as little as five seconds of compute time and is the first 4K-capable model able to generate video faster than playback. The launch comes shortly after the major releases of Google's Veo 3.1, OpenAI's Sora 2, and ByteDance's Seedance 1.0, all of which have drawn attention for their capabilities.

Lightricks highlights LTX-2's advantages in speed, video quality, and simultaneous generation of background sounds, music, and dialogue. The model can produce 4K videos at 48 frames per second, though at slightly longer processing times, and the company emphasizes cost efficiency and open-source accessibility, allowing users to fine-tune the model to their specific needs. LTX-2 is available via the Lightricks API and its professional filmmaking platform, LTX Studio, with the open-source release, including training data and weights, expected on GitHub next month. The API offers competitive pricing, starting at four cents per second for Full HD clips and 12 cents per second for 4K 48 fps videos with synchronized audio, targeting marketers and professionals who require both rapid iteration and high-quality output. Unlike some competitors that require high-performance GPUs, LTX-2 can operate on a single consumer-grade GPU while maintaining visual quality, making it accessible for creators using standard laptops. Lightricks plans to enhance the platform further with features such as pose and depth controls, video input support, and alternative rendering options in the near future.

Lightricks launches LTX-2 amid intensifying competition in AI video generation

Lightricks' release of LTX-2 comes at a competitive moment, though it remains uncertain whether it will maintain the company's position as a preferred choice for AI developers, creative teams, marketers, and other professionals. Last week, Google launched Veo 3.1 within its Gemini application for paying users, as well as through its Vertex AI platform and Flow, its AI filmmaking tool, which offers functionality comparable to Lightricks' LTX Studio. Veo 3.1 allows users to upload separate image or video assets and merge them into a single video, add or remove objects, and extend clips up to one minute, matching the maximum output of Lightricks' earlier LTXV-13B model.

Assessing aesthetic quality across LTX-2, Veo 3.1, and OpenAI's Sora 2 is subjective, as all three models appear closely matched. OpenAI has introduced a unique social media companion app for sharing, remixing, and discovering AI-generated videos, similar in concept to platforms like Instagram. Sora 2 includes a feature called Cameo, which allows users to upload a face and generate videos featuring it, and like LTX-2 and Veo 3.1, it produces synchronized audio for its videos. OpenAI also highlights its model's advanced physics engine as a differentiator. However, Sora 2 currently has limited accessibility: it is only available on iPhone with an invite code, while Android users can access it via the web but also require an invite.

Ethical and copyright concerns rise as AI video models expand, with Lightricks' LTX-2 leveraging licensed content

For creators, the expanding range of high-end AI video models offers new opportunities, but it also comes amid growing debate over the ethical and legal implications of AI-generated content. OpenAI's Sora 2, despite receiving positive feedback from users, has faced criticism for producing videos that appear to incorporate copyrighted material, with the company allowing such use by default unless rights holders formally opt out. This opt-out process requires studios and other intellectual property owners to request that their content not be included, and companies such as Walt Disney have already exercised this option, preventing Sora 2 from generating images of characters like Mickey Mouse. Google's Veo 3.1 has generated less controversy but is not entirely exempt from scrutiny, while ByteDance's Seedance appears to freely use recognizable characters and public figures, creating videos featuring Spider-Man, Batman, and Superman. Lightricks' LTX-2 may avoid some of these concerns, as the company has emphasized its use of licensed, high-quality content from partners such as Getty Images and Shutterstock to train its models.
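The per-second API prices quoted above make clip costs easy to work out. A quick sketch using the article's two figures (four cents per second for Full HD, 12 cents per second for 4K 48 fps); the tier names and function are illustrative, not Lightricks' actual API:

```python
# Clip cost from the per-second API prices quoted in the article.
# Tier labels are illustrative; only the two rates come from the article.
PRICE_PER_SECOND = {
    "full_hd": 0.04,   # $0.04/s for Full HD
    "4k_48fps": 0.12,  # $0.12/s for 4K at 48 fps with synchronized audio
}

def clip_cost(seconds: float, tier: str) -> float:
    """Dollar cost of a clip of the given length at the given pricing tier."""
    return round(seconds * PRICE_PER_SECOND[tier], 2)

print(clip_cost(6, "full_hd"))    # six-second Full HD clip → 0.24
print(clip_cost(20, "4k_48fps"))  # twenty-second 4K clip → 2.4
```

At these rates, even a maximum-length 20-second 4K clip stays under a few dollars, which is the cost profile that makes rapid iteration practical for marketers.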

INACTIVE