Full-Time

AI Native Account Executive

Together AI

51-200 employees

Decentralized cloud services for AI development

Enterprise Software
AI & Machine Learning

Compensation Overview

$180k - $250k Annually

+ Equity + Benefits

Mid, Senior

San Francisco, CA, USA

This is a hybrid role based in the Bay Area.

Category
Inside Sales
Strategic Account Management
Sales & Account Management
Required Skills
Sales

Requirements
  • 3-5 years of experience in sales, with a track record of exceeding targets
  • Technical aptitude, a passion for technology & a desire to work with highly technical teams and products
  • An excellent communicator with both clients and internal teams
  • Adaptability, coachability, high drive & a sense of urgency - enjoys working in a fast-paced environment and wearing multiple hats
  • Enjoys experimenting with the sales pitch/process to achieve company goals
  • Experience and success with pipeline generation
Responsibilities
  • Generate pipeline & win new business in the startup ecosystem.
  • Design & execute creative, strategic & customer-centric sales strategies to meet & exceed revenue quotas
  • Find creative ways to integrate into the startup ecosystem & become a trusted partner of founders & their teams
  • Collaborate on product roadmaps & features by bringing the voice of the customer into Together
  • Work closely with the SDR team to help refine the outbound approach & inform product-market fit, messaging & value prop for Together products.
Desired Qualifications
  • A passion for & experience with AI systems and/or infrastructure / API products highly preferred

Together AI focuses on advancing artificial intelligence through open-source contributions. The company offers decentralized cloud services that let developers and researchers train, fine-tune, and deploy generative AI models. Its platform serves clients ranging from small startups to large enterprises and academic institutions, providing cloud-based solutions that simplify the development and deployment of AI models. Unlike many competitors, Together AI emphasizes open and transparent AI systems, an approach intended to foster innovation and produce beneficial outcomes for society. The company's goal is to give users the tools they need to advance AI technology while maintaining a commitment to openness.

Company Size

51-200

Company Stage

Series A

Total Funding

$222.3M

Headquarters

Menlo Park, California

Founded

2022

Simplify's Take

What believers are saying

  • Together AI leverages Meta's Llama 3.2 Vision, expanding multimodal AI capabilities.
  • FlashAttention-3 optimizes Nvidia GPUs, reducing costs for Together AI's cloud services.
  • Falling AI model costs, exemplified by DeepSeek R1, let Together AI offer cost-effective solutions.

What critics are saying

  • DeepSeek R1's low-cost model could undercut Together AI's pricing strategy.
  • Integration challenges from acquiring CodeSandbox may disrupt service continuity.
  • Meta's Llama 3.2 Vision's free access might reduce demand for Together AI's paid services.

What makes Together AI unique

  • Together AI focuses on open-source contributions, enhancing transparency and innovation.
  • The company offers decentralized cloud services for AI model training and deployment.
  • Together AI's acquisition of CodeSandbox adds a code interpreter to its platform.

Benefits

Health Insurance

Company Equity

Growth & Insights and Company News

Headcount

6 month growth

-4%

1 year growth

0%

2 year growth

0%
The Bridge
Jan 30th, 2025
Why DeepSeek-R1 Is Good News for Enterprises: Making AI Apps Cheaper, Easier to Build, and More Innovative

Image credit: VentureBeat with Ideogram. The release of the DeepSeek-R1 reasoning model sent shockwaves through the tech industry, most visibly in the sudden sell-off of major AI stocks. With DeepSeek reportedly able to develop an o1 competitor at far lower cost, the advantage of well-funded AI labs such as OpenAI and Anthropic no longer looks so solid. While some AI labs are now in crisis mode, as far as the enterprise sector is concerned, this is mostly good news.

VentureBeat
Jan 27th, 2025
DeepSeek-R1 Is a Boon for Enterprises — Making AI Apps Cheaper, Easier to Build, and More Innovative

The release of the DeepSeek R1 reasoning model has caused shockwaves across the tech industry, with the most obvious sign being the sudden sell-off of major AI stocks. The advantage of well-funded AI labs such as OpenAI and Anthropic no longer seems very solid, as DeepSeek has reportedly been able to develop their o1 competitor at a fraction of the cost. While some AI labs are currently in crisis mode, as far as the enterprise sector is concerned, it's mostly good news.

Cheaper applications, more applications. As we had said here before, one of the trends worth watching in 2025 is the continued drop in the cost of using AI models. Enterprises should experiment and build prototypes with the latest AI models regardless of the price, knowing that the continued price reduction will enable them to eventually deploy their applications at scale. That trendline just saw a huge step change: OpenAI o1 costs $60 per million output tokens versus $2.19 per million for DeepSeek R1.
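
To put the quoted prices in concrete terms, here is a minimal sketch of the cost arithmetic. The per-million-token prices come from the article above; the workload size is a hypothetical example.

```python
# Compare output-token costs at the prices quoted in the article.
O1_PRICE = 60.00   # USD per 1M output tokens (OpenAI o1, as quoted)
R1_PRICE = 2.19    # USD per 1M output tokens (DeepSeek R1, as quoted)

def output_cost(tokens: int, price_per_million: float) -> float:
    """Cost in USD for a given number of output tokens."""
    return tokens / 1_000_000 * price_per_million

tokens = 50_000_000  # hypothetical workload: 50M output tokens per month
print(f"o1: ${output_cost(tokens, O1_PRICE):,.2f}")  # o1: $3,000.00
print(f"R1: ${output_cost(tokens, R1_PRICE):,.2f}")  # R1: $109.50
print(f"R1 is {O1_PRICE / R1_PRICE:.1f}x cheaper per output token")  # 27.4x
```

At these prices, the same 50M-token monthly workload drops from $3,000 to about $110, which is the step change the article describes.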

SiliconANGLE
Dec 13th, 2024
Together AI acquires CodeSandbox, adds code interpreter to its AI development platform

YourStory
Nov 7th, 2024
Thesys secures $4M funding led by Together Fund

AI startup Thesys bags $4 million funding in a round led by Together Fund.

Decrypt
Oct 7th, 2024
Meet Flux 1.1 Pro: The Best AI Image Generator You Can't Run

Black Forest Labs, the studio behind the Flux family of AI image generators, announced last week the release of Flux 1.1 [Pro]. This comes just two months after the release of its original family of models, including Flux 1 Pro (a closed-source model with industry-leading capabilities), Flux 1 Dev (a noncommercial, open-source model) and Flux Schnell (a fully open-source model). The Flux models marked a major leap in generative AI technology with their text generation capabilities, prompt adherence and overall image quality. Even the smaller models, Flux Dev and Flux Schnell, generated results on par with generations from MidJourney, and far better than the outputs provided by SD3, Stability's much-anticipated evolution over SDXL, which turned out to be somewhat underwhelming. The new model has already made a mark, securing the top Elo score in the Artificial Analysis image arena, a leading benchmarking platform for AI models. It has outperformed every other text-to-image model on the market while being almost as fast as its smallest model. The article's graph plots Elo score (image quality) on the Y axis against generation speed on the X axis; MidJourney enthusiasts may notice that their model is not represented, as it is so slow it is literally off the chart.

VentureBeat
Sep 26th, 2024
Here's How to Try Meta's New Llama 3.2 With Vision for Free

Together AI has made a splash in the AI world by offering developers free access to Meta's powerful new Llama 3.2 Vision model via Hugging Face. The model, known as Llama-3.2-11B-Vision-Instruct, allows users to upload images and interact with AI that can analyze and describe visual content.

"Try Llama 3.2 11B Vision for free in this @huggingface space! This model is free in the Together API for the next 3 months. https://t.co/2oYwJK15KW pic.twitter.com/JEh3LTr0M2" — Together AI (@togethercompute), September 26, 2024

For developers, this is a chance to experiment with cutting-edge multimodal AI without incurring the significant costs usually associated with models of this scale. All you need is an API key from Together AI, and you can get started today. This launch underscores Meta's ambitious vision for the future of artificial intelligence, which increasingly relies on models that can process both text and images, a capability known as multimodal AI. With Llama 3.2, Meta is expanding the boundaries of what AI can do, while Together AI is playing a crucial role by making these advanced capabilities accessible to a broader developer community through a free, easy-to-use demo. (Image caption: Together AI's interface for accessing Meta's Llama 3.2 Vision model, showcasing the simplicity of using advanced AI technology with just an API key and adjustable parameters. Credit: Hugging Face)

Unleashing Vision: Meta's Llama 3.2 breaks new ground in AI accessibility. Meta's Llama models have been at the forefront of open-source AI development since the first version was unveiled in early 2023, challenging proprietary leaders like OpenAI's GPT models. Llama 3.2, launched at Meta's Connect 2024 event this week, takes this even further by integrating vision capabilities, allowing the model to process and understand images in addition to text. This opens the door to a broader range of applications, from sophisticated image-based search engines to AI-powered UI design assistants. The launch of the free Llama 3.2 Vision demo on Hugging Face makes these advanced capabilities more accessible than ever. Developers, researchers, and startups can now test the model's multimodal capabilities by simply uploading an image and interacting with the AI in real time. The demo is powered by Together AI's API infrastructure, which has been optimized for speed and cost-efficiency.

From code to reality: a step-by-step guide to harnessing Llama 3.2. Trying the model is as simple as obtaining a free API key from Together AI. Developers can sign up for an account on Together AI's platform, which includes $5 in free credits to get started. Once the key is set up, users can input it into the Hugging Face interface and begin uploading images to chat with the model. The setup process takes mere minutes, and the demo provides an immediate look at how far AI has come in generating human-like responses to visual inputs. For example, users can upload a screenshot of a website or a photo of a product, and the model will generate detailed descriptions or answer questions about the image's content. For enterprises, this opens the door to faster prototyping and development of multimodal applications (see the sketch after this article). Retailers could use Llama 3.2 to power visual search features, while media companies might leverage the model to automate image captioning for articles and archives.

The bigger picture: Meta's vision for edge AI. Llama 3.2 is part of Meta's broader push into edge AI, where smaller, more efficient models can run on mobile and edge devices without relying on cloud infrastructure. While the 11B Vision model is now available for free testing, Meta has also introduced lightweight versions with as few as 1 billion parameters, designed specifically for on-device use. These models, which can run on mobile processors from Qualcomm and MediaTek, promise to bring AI-powered capabilities to a much wider range of devices. In an era where data privacy is paramount, edge AI has the potential to offer more secure solutions by processing data locally on devices rather than in the cloud. This can be crucial for industries like healthcare and finance, where sensitive data must remain protected.
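
The article describes the API flow only in prose. As a rough illustration, here is a minimal sketch of a direct chat-completions call against Together AI's OpenAI-compatible API. The endpoint URL, model identifier, payload shape, and image URL are assumptions for illustration, not details taken from the article; only the API-key requirement and the Llama-3.2-11B-Vision-Instruct model name appear in the text above, so check the current Together AI docs before relying on any of this.

```python
import os
import requests

# Assumed OpenAI-compatible endpoint and model ID; verify against current docs.
API_URL = "https://api.together.xyz/v1/chat/completions"
MODEL = "meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo"

payload = {
    "model": MODEL,
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is in this image."},
            # Hypothetical image URL; the Hugging Face demo accepts uploads instead.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/product.jpg"}},
        ],
    }],
    "max_tokens": 300,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```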

Funding Blogger
Sep 24th, 2024
jhana.ai Bags Funds To Build AI Research Tool For Lawyers

Artificial intelligence (AI)-powered legal tech startup jhana.ai has raised $1.6 Mn in its ongoing maiden funding round, led by Freshworks cofounder Girish Mathrubootham's venture capital firm Together Fund.

PYMNTS
Aug 14th, 2024
Global Ai Regulation Efforts Heat Up

Lawmakers on both sides of the Atlantic are racing to establish artificial intelligence (AI) regulations, with California poised to vote on strict AI oversight as the U.S. Congress considers a "regulatory sandbox" for financial services. Meanwhile, the European Union's AI Act is set to transform the healthcare technology landscape, highlighting the complex balance between fostering innovation and ensuring public safety when it comes to using AI.

VentureBeat
Jul 15th, 2024
FlashAttention-3 Unleashes the Power of H100 GPUs for LLMs

Attention is a core component of the transformer architecture used in large language models (LLMs). But as LLMs grow larger and handle longer input sequences, the computational cost of attention becomes a bottleneck. To address this challenge, researchers from Colfax Research, Meta, Nvidia, Georgia Tech, Princeton University, and Together AI have introduced FlashAttention-3, a new technique that significantly speeds up attention computation on Nvidia Hopper GPUs (H100 and H800). FlashAttention-3 builds upon previous work on FlashAttention and FlashAttention-2 and further optimizes the use of resources on Nvidia Hopper GPUs to maximize performance and efficiency for LLM training and inference.

The challenge of attention computation in LLMs. One of the key innovations of transformers is the attention mechanism, which enables the model to compute the relationship between different tokens in an input sequence. While the attention mechanism is very effective, it is also computationally expensive: the cost of attention computation grows quadratically with the length of the input sequence. As LLMs are scaled to handle longer and longer input sequences, the attention mechanism becomes a major bottleneck. Furthermore, modern hardware accelerators such as GPUs are optimized for matrix multiplication (matmul) operations, which are the building blocks of deep learning models.
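
To see why attention becomes the bottleneck the article describes, here is a minimal sketch of standard scaled dot-product attention in NumPy. It is not FlashAttention-3 itself; it is the naive baseline whose n-by-n score matrix FlashAttention-style kernels avoid materializing. Shapes and sizes are illustrative.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard scaled dot-product attention over (n, d) arrays.

    The scores matrix is (n, n), so time and memory grow quadratically
    with sequence length n -- the cost FlashAttention-style kernels
    avoid by never materializing the full matrix.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n, n): O(n^2) memory
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n, d)

# Doubling the sequence length quadruples the fp32 score matrix:
for n in (1_024, 2_048, 4_096):
    print(f"{n} tokens -> {n * n * 4 / 2**20:.0f} MiB of scores")
# 1024 tokens -> 4 MiB; 2048 -> 16 MiB; 4096 -> 64 MiB
```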

eFinancialCareers
Mar 21st, 2024
Top Coinbase guy leaves for newly minted Gen AI unicorn backed by NVIDIA

Together.ai has hired Liu as its head of sales operations.

Together AI
Mar 14th, 2024
Announcing $106M round led by Salesforce Ventures

I am excited to share that we've raised $106M in a new round of financing led by Salesforce Ventures, with participation from Coatue and existing investors Lux Capital, Kleiner Perkins, Emergence Capital, Prosperity7 Ventures, NEA, Greycroft, Definition Capital, Long Journey Ventures, Factory, Scott Banister, and SV Angel. We are also thrilled to have participation from industry luminaries including Clem Delangue, CEO of HuggingFace, and Soumith Chintala, the creator of PyTorch.