Full-Time

Open-Source Machine Learning Engineer

International

Hugging Face

501-1,000 employees

Open-source ML platform for sharing models

No salary listed

Remote in USA

More locations: New York, NY, USA

Remote

Category
AI & Machine Learning
Requirements
  • Love of open source and a passion for making complex technology more accessible
  • Desire to contribute to one of the fastest-growing machine learning ecosystems
  • Willingness to work with open-source libraries such as Transformers, Datasets, or Accelerate
  • Interest in interacting with users and contributors of the open-source machine learning ecosystem
  • Ability to interact with researchers, ML practitioners, and data scientists daily via GitHub, forums, or Slack
  • Interest in fostering a vibrant machine learning community and helping users contribute to and use open-source tools
Responsibilities
  • Improve the open-source machine learning ecosystem by working with existing open-source libraries such as Transformers, Datasets, or Accelerate
  • Interact with users and contributors of the open-source machine learning ecosystem
  • Foster one of the most active machine learning communities by helping users contribute to and use the tools you build
  • Interact with researchers, ML practitioners, and data scientists daily through GitHub, forums, or Slack
  • Brainstorm with the team to determine meaningful, impactful work for you in the role

Hugging Face provides tools and platforms for building and sharing machine learning applications. Its core offering is the Hugging Face Hub, where developers and researchers share, discover, and collaborate on models, datasets, and applications; users access pre-trained models via the Transformers library and deploy them with services like Inference Endpoints or Private Hub. The company stands out through its large open-source community, vast collections of models and datasets, and tight integrations with cloud providers. Its goal is to democratize machine learning by making advanced AI accessible to individuals and organizations alike.
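
In practice, pulling a pre-trained model from the Hub takes only a few lines with Transformers. A minimal sketch (the checkpoint shown is one public example among thousands):

```python
# Load a pre-trained model from the Hugging Face Hub and run inference
# with the Transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Hugging Face makes sharing models easy."))
# [{'label': 'POSITIVE', 'score': 0.99...}]
```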

Company Size

501-1,000

Company Stage

Series D

Total Funding

$395.7M

Headquarters

New York City, New York

Founded

2016

Simplify Jobs

Simplify's Take

What believers are saying

  • Novita partnership enables 5M developers to deploy models with 50ms latency, 50% cost savings.
  • Reachy Mini robotics platform expands addressable market beyond software with 10K units sold.
  • Cohere-transcribe ranks #1 on Open ASR Leaderboard across 14 languages with vLLM integration.

What critics are saying

  • EU AI Act Level 3 classification forces takedowns of high-risk open models by 2027.
  • NVIDIA NeMo platform leverages CUDA exclusivity to pull GPU-dependent enterprises away.
  • Pollen Robotics acquisition drains $10M+ in R&D with zero app monetization, <1% repeat sales.

What makes Hugging Face unique

  • ml-intern autonomously optimizes LLM post-training, achieving 32% GPQA in under 10 hours.
  • 2.4M models and 730K datasets on Hub create network effects competitors cannot replicate.
  • Granite 4.0 3B Vision achieves 92.1 TEDS on table extraction with modular LoRA design.

Benefits

Flexible Work Environment

Health Insurance

Unlimited PTO

Equity

Growth, Training, & Conferences

Generous Parental Leave

Growth & Insights and Company News

Headcount

  • 6 month growth: -2%
  • 1 year growth: -2%
  • 2 year growth: 1%

PR Newswire
Apr 14th, 2026
Novita AI partners with Hugging Face to enable instant AI model deployment for 5M developers

Novita AI has partnered with Hugging Face to provide inference services for over five million developers on the platform. The collaboration introduces a "Deploy on Novita" feature, enabling developers to instantly deploy models as production-ready APIs without managing infrastructure or configuration. The partnership launched with day-zero support for Google's Gemma 4 model. Novita AI claims to offer time-to-first-token as low as 50 milliseconds and cost savings up to 50% compared to most inference endpoints. The platform supports over 120 large language models and multimodal models through a single API. According to COO Junyu Huang, the service eliminates complex deployment steps including downloading model weights, configuring environments and provisioning GPU infrastructure, allowing developers to focus on building products rather than managing infrastructure.
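
The announcement doesn't include code, but if the deployed endpoints follow the OpenAI-compatible pattern common among hosted inference providers, calling one would look roughly like this (base URL and model id are illustrative assumptions, not details from the article):

```python
# Hypothetical sketch: calling a model deployed via "Deploy on Novita",
# assuming an OpenAI-compatible chat completions endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed endpoint URL
    api_key="YOUR_NOVITA_API_KEY",
)

response = client.chat.completions.create(
    model="google/gemma-4",  # illustrative model id
    messages=[{"role": "user", "content": "Hello from a freshly deployed model!"}],
)
print(response.choices[0].message.content)
```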

Hugging Face
Apr 7th, 2026
How we OCR'ed 30,000 papers using Codex, open OCR models, and Jobs.

On the Hub, we index arXiv papers whenever someone mentions an arXiv abstract or PDF link in the README of a model, dataset, or Space. In addition, any researcher can submit their work to Daily Papers at hf.co/papers/submit, up to 14 days after the publication date on arXiv. This lets researchers promote their work by claiming papers with their Hugging Face account (simply click on your name to feature the paper on your account) and by linking the corresponding Hugging Face models, datasets and Spaces, GitHub URL, and project page. People can upvote and comment on papers in a Reddit-like way, and it is now also possible to tag papers with organizations, so that all research papers from a given organization, such as NVIDIA or Google, appear on its page. The @HuggingPapers account on X also frequently shares the top trending research on the Hub.

Each Hugging Face paper page now features a "chat with paper" functionality powered by HuggingChat. Behind the scenes, this uses the HTML web page of the arXiv paper (e.g. https://arxiv.org/abs/2603.26599 can be viewed at https://arxiv.org/html/2603.26599). The HTML gets turned into Markdown, which is then fed to the LLM as context. As it turned out, however, about 27,000 papers indexed on Hugging Face have no corresponding HTML web page on arXiv, making it impossible to chat with those papers. Hence, the idea was pretty simple: use an open Optical Character Recognition (OCR) model to convert those papers to Markdown.

Using a state-of-the-art open OCR model. With so many open OCR models around, it can be hard to know which one to use. Luckily, the team is working on a new feature called Evaluation results, which turns Hugging Face datasets into native leaderboards on the Hub. Evaluation results are added by opening pull requests on model repositories and show up on the respective dataset. Find the current leaderboards here. For now, OlmOCRBench by AllenAI is the go-to benchmark for OCR: a good place to find which open models are best at converting documents into Markdown, interleaved with HTML for the images and tables contained in them. We simply decided to use the best model at the time of writing, Chandra-OCR 2 by Datalab. As the model is openly available under an OpenRAIL license, we can freely use it for commercial purposes with frameworks like Transformers and vLLM.

Using Hugging Face Jobs. To run a model like Chandra at scale across thousands of papers, it's recommended to leverage vLLM on GPU infrastructure. In our case, we used Jobs, Hugging Face's serverless compute platform, to run the model. Jobs supports both CPUs and GPUs, from a single Nvidia T4 all the way up to 8x Nvidia H200s, with pay-as-you-go pricing where you only pay for the seconds you use. We could have written the vLLM script ourselves, but as it's 2026, you can nowadays simply point a coding agent such as Claude Code, Cursor, or Codex at a set of URLs and it will figure things out by itself. So that's exactly what we did.
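
For a sense of what the core of such a script looks like, here is a hedged sketch of batch OCR with vLLM; the repo id and prompt are assumptions (the exact prompt template is model-specific and documented on the model card):

```python
# Illustrative sketch: run an open OCR vision-language model with vLLM over
# rendered PDF pages. Repo id and prompt format are assumptions, not the
# actual script Codex produced.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="datalab-to/chandra-ocr-2", trust_remote_code=True)  # assumed repo id
params = SamplingParams(temperature=0.0, max_tokens=4096)

# Parse at most 30 pages per paper, mirroring the limit mentioned above.
pages = [Image.open(f"pages/page_{i:03d}.png") for i in range(30)]
requests = [
    {
        "prompt": "Convert this page to Markdown.",  # placeholder prompt
        "multi_modal_data": {"image": page},
    }
    for page in pages
]

outputs = llm.generate(requests, params)
paper_markdown = "\n\n".join(o.outputs[0].text for o in outputs)
```
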
Codex and chill. We asked OpenAI's Codex (via the Codex desktop app) to implement a script that runs Chandra-OCR-2 on Jobs for the 27,000 arXiv IDs whose Markdown version is currently missing on the Hub. We pointed it to Chandra's model card so it knows how to run the model with vLLM, and provided it with the Hugging Face Jobs Skill so it knows how to use our serverless GPU infrastructure.

Which GPUs to use? As Jobs offers many GPU flavours, we first asked Codex to run comparisons at a small scale (120 papers) to decide which GPUs to use and to estimate their costs. It ran experiments on an Nvidia A10G-large as well as an Nvidia L40S by launching jobs in parallel, and concluded that the L40S was the better choice, as it processed papers faster (about 60/hour when parsing at most 30 pages per paper, compared to 32/hour on the A10G). It also recommended running 16 jobs in parallel, as processing all papers on a single L40S GPU would take multiple weeks; 16 parallel jobs would take about 29-30 hours, at an estimated cost of about $850. Interestingly, 16x A10G-large is cheaper per hour but slower overall, which would ultimately lead to a larger cost of about $1,350. For comparison, we also asked Codex how much this would cost with Chandra's own API: $1,841.07 for "fast/balanced" mode and $2,761.60 for "high-accuracy" mode. Codex then spun up the 16 jobs and monitored their performance. No jobs had to be restarted; they all worked on the first try. Some jobs took longer than others, mainly because they contained many papers with more pages to parse.

Mounted buckets. At first, we simply let the script write its results to a Hugging Face dataset. However, the team stores the Markdown version of each paper in Buckets, which are not versioned by git and are instead powered by Xet for fast, cheap, and mutable storage. As new papers get added every day, writing to a dataset would result in a huge number of git commits, so Buckets are better suited here. Moreover, the team just launched hf-mount, which mounts Hugging Face Buckets (as well as model, dataset, or Space repos) as local filesystems. This means we no longer need to write download/upload functionality: the script (or coding agents in general) can just write to the bucket as if it were local. So we simply prompted Codex to write to a mounted bucket instead of a Hugging Face dataset, which made the scripts even faster.

The results. During the run, we frequently asked Codex the same thing: "Great. Can you check the progress?" It would get back to us with how many of the 16 parallel jobs had already finished. After about a day, all 16 jobs were done. We then asked Codex to merge the 16 buckets into a single one. Finally, Mishig integrated them into Paper Pages, so you can now chat with any paper on the Hub, not just the ones with an HTML version on arXiv! Try it for instance at https://huggingface.co/papers/2603.15031.

Resources. Find the code here and the resulting bucket here.
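
Because hf-mount exposes the bucket as a local filesystem, the write path inside such a script reduces to plain file I/O. A minimal sketch, assuming a hypothetical mount point and file layout:

```python
# Sketch only: once the bucket is mounted (e.g. via hf-mount), results can be
# written with ordinary file operations; no upload code is needed.
from pathlib import Path

MOUNT = Path("/mnt/papers-md")  # hypothetical mount point for the bucket

def save_markdown(arxiv_id: str, markdown: str) -> None:
    """Write one paper's Markdown output into the mounted bucket."""
    out = MOUNT / f"{arxiv_id}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(markdown, encoding="utf-8")

save_markdown("2603.15031", "# Example paper\n\n...")
```
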

Hugging Face
Mar 31st, 2026
Granite 4.0 3B Vision: compact multimodal intelligence for enterprise documents.

Today we're excited to announce Granite 4.0 3B Vision, a compact vision-language model (VLM) designed for enterprise document understanding. It's purpose-built for reliable information extraction from complex documents, forms, and structured visuals. Granite 4.0 3B Vision excels at the following capabilities:
  • Table Extraction: accurately parsing complex table structures (e.g., multi-row, multi-column) from document images
  • Chart Understanding: converting charts and figures into structured machine-readable formats, summaries, or executable code
  • Semantic Key-Value Pair (KVP) Extraction: identifying and grounding semantically meaningful key-value field pairs across diverse document layouts

The model ships as a LoRA adapter on top of Granite 4.0 Micro, our dense language model, keeping vision and language modular for text-only fallbacks and seamless integration into mixed pipelines. It continues to support vision-language tasks such as producing detailed natural-language descriptions from images (e.g., "Describe this image in detail"). The model can be used standalone or in tandem with Docling to enhance document processing pipelines with deep visual understanding capabilities.

How Granite 4.0 3B Vision was built. Its performance is the result of three key investments: a purpose-built chart understanding dataset constructed via a novel code-guided data augmentation approach, a novel variant of the DeepStack architecture that enables high-detail visual feature injection, and a modular design that keeps the model practical for enterprise deployment.

ChartNet: teaching models to truly understand charts. Charts present a challenge for vision-language models because understanding them requires jointly reasoning over visual patterns, numerical data, and natural language, a combination most VLMs cannot handle well, especially when spatial precision matters, such as reading exact values off a line chart. To close this gap, we developed ChartNet: a million-scale multimodal dataset purpose-built for chart interpretation and reasoning, described in detail in our upcoming CVPR 2026 paper. ChartNet uses a code-guided synthesis pipeline to generate 1.7 million diverse chart samples spanning 24 chart types and 6 plotting libraries [see Figure 1]. What makes it so distinctive is that each sample consists of five aligned components (plotting code, rendered image, data table, natural-language summary, and QA pairs), providing models a deeply cross-modal view of what a chart means, not just what it looks like. The dataset also includes human-annotated and real-world subsets, filtered for visual fidelity, semantic accuracy, and diversity. The result is a training resource that moves VLMs from merely describing charts to genuinely understanding the structured information they encode, with consistent gains across model sizes, architectures, and tasks. Figure 1: ChartNet's synthetic data generation pipeline.

DeepStack: smarter visual feature injection. Most VLMs inject visual information into their language model at a single point, which forces the model to handle both high-level semantics and fine-grained spatial detail simultaneously.
Granite 4.0 3B Vision takes a different approach with DeepStack Injection: abstract visual features are routed into earlier layers for semantic understanding, while high-resolution spatial features are fed into later layers to preserve detail. The result is a model that understands both what is in a document and where, which is critical for tasks like table extraction, chart understanding, and KVP parsing, where layout matters as much as content. For a full technical breakdown, see the Model Architecture section of the model card.

Modularity: one model, two modes. Granite 4.0 3B Vision is packaged as a LoRA adapter on top of Granite 4.0 Micro rather than as a standalone model. In practice, this means the same deployment can serve both multimodal and text-only workloads, automatically falling back to the base model when vision isn't required. This keeps enterprise integration straightforward without sacrificing performance.

How it performs.

Charts: evaluated on the human-verified ChartNet benchmark using LLM-as-a-judge, Granite 4.0 3B Vision achieves the highest Chart2Summary score (86.4%) among all evaluated models, including significantly larger ones [see Figure 2]. It also ranks second on Chart2CSV (62.1%), behind only Qwen3.5-9B (63.4%), a model more than double its size. Figure 2: Granite 4.0 3B Vision performance on chart2csv and chart2summary, compared against peer vision-language models using LLM-as-a-judge.

Tables: we evaluate table extraction in two settings: cropped tables (isolated regions) and full-page documents (tables embedded in complex layouts) [see Figure 3]. The benchmark suite includes TableVQA-extract (cropped table images), OmniDocBench-tables (full-page documents), and PubTables-v2 (both cropped and full-page settings). Models are tasked with extracting tables in HTML format and scored using TEDS, a metric that captures both structural and content accuracy. Granite 4.0 3B Vision achieves the strongest performance across benchmarks among all evaluated models, leading on PubTables-v2 in both the cropped (92.1) and full-page (79.3) settings, on OmniDocBench (64.0), and on TableVQA (88.1). Figure 3: Granite 4.0 3B Vision's table extraction performance across cropped and full-page benchmarks (TableVQA-extract, PubTables-v2, OmniDocBench-tables), measured by TEDS.

Semantic KVP: VAREX is a benchmark specifically designed to discriminate between small extraction models, comprising 1,777 U.S. government forms spanning simple flat layouts to complex nested and tabular structures. Models are evaluated using exact match (EM), a strict metric that requires the model's extracted key-value pairs to match the ground truth. Granite 4.0 3B Vision achieves 85.5% EM accuracy zero-shot.

How to use it. Granite 4.0 3B Vision can operate either as a stand-alone visual information extraction engine or as part of a fully automated document-processing pipeline with Docling. The model is designed to support scalable, accurate extraction across diverse document types and visual formats.

1. Stand-Alone Image Understanding. Granite 4.0 3B Vision can run directly on individual images, which is useful for applications with existing workflows that need targeted visual extraction without modifying upstream systems. This offers easy integration into existing automation workflows and suits lightweight, task-specific tools (e.g., form parsers, chart analyzers).
2. Integrated Document Understanding Pipeline with Docling. Granite 4.0 3B Vision can also be integrated seamlessly with Docling to support complete end-to-end document understanding. This mode offers:
  • Large-scale processing of multi-page PDFs
  • Automated detection, segmentation, and cropping of figures, tables, and other visual elements with Docling, with clean crops redirected to the Granite Vision model for fine-grained extraction
  • An efficient workflow with lower overall computational cost and faster throughput
  • Higher accuracy, more reliable extraction, and significantly improved efficiency across large document collections

Example use cases:
  • Form Processing: extract structured fields from invoices, forms, and receipts using the KVP capabilities, or generate natural-language descriptions of figures using the image2text feature (e.g., "Describe this image in detail").
  • Financial Report Analysis: use Docling to parse reports, detect figures, and crop visual elements. Process charts with Granite Vision's chart2csv and chart2code capabilities, and tables with tables_json, to convert them into structured, machine-readable data that enables actionable insights.
  • Research Document Intelligence: use Docling to handle OCR and layout parsing across dense academic PDFs, then pass extracted figures to chart2summary and table crops to tables_html, making visual content discoverable alongside free-form text in a single pipeline.

Try it today. Granite 4.0 3B Vision is available now on Hugging Face, released under the Apache 2.0 license. Full technical details, training methodology, and benchmark results are available in the model card. We'd love to hear what you build with it; share your feedback in the community tab.
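
As a rough illustration of the stand-alone mode described above, loading the model through Transformers' image-text-to-text pipeline might look like the following; the repo id and prompt are assumptions, and the model card has the authoritative usage:

```python
# Hedged sketch: stand-alone document image understanding with a compact VLM.
# The repo id below is illustrative, not confirmed by the announcement.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="ibm-granite/granite-4.0-3b-vision",  # assumed repo id
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/invoice.png"},
        {"type": "text", "text": "Extract all key-value pairs as JSON."},
    ],
}]

result = pipe(text=messages, max_new_tokens=512)
print(result[0]["generated_text"])  # the assistant reply is the last message
```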

Just AI News
Mar 30th, 2026
Huskeys brings agentic AI to edge security with $8M seed.

Key points:
  • Huskeys raised $8M in seed funding to fix outdated WAF technology using agentic AI at the edge.
  • Investors include 10D, SV Angel, toDay Ventures, CCL, Alumni Ventures, 30-plus CISOs, and athlete angels Götze, Beachum, and Fitzgerald.
  • Founded by Unit 8200 alumni, Huskeys works with TikTok and Hugging Face to automate edge security management.

Clusters Media
Mar 26th, 2026
LM Studio: run any AI model on your computer with a beautiful GUI.

Not everyone wants to live in a terminal. For developers, researchers, and curious users who prefer a point-and-click experience, LM Studio is the gold standard for running AI models locally. It combines a polished desktop application with serious technical capabilities, making local LLMs accessible to anyone, regardless of their command-line comfort level. Released as a free desktop app for macOS, Windows, and Linux, LM Studio has quietly become one of the most-used tools in the local AI space. As of 2026, it supports thousands of models from Hugging Face, features a built-in chat interface, and offers an OpenAI-compatible local server, all wrapped in one of the cleanest UIs in open source software.

What is LM Studio? LM Studio is a desktop application that lets you discover, download, and run open source language models entirely on your local machine. It acts as a friendly frontend for llama.cpp and other inference backends, handling all the technical complexity behind the scenes. Where Ollama focuses on simplicity and developer-first CLI usage, LM Studio prioritizes visual accessibility. You can browse models, read their descriptions, check hardware compatibility warnings, download with a progress bar, and start chatting, all without writing a single line of code.

Key features.

Hugging Face model browser: LM Studio integrates directly with Hugging Face, giving you access to tens of thousands of models from a searchable in-app directory. Filters help you narrow by model type, size, quantization format, and hardware compatibility.

GGUF model support: LM Studio runs models in GGUF format, the standard quantized format for consumer-grade local inference. Quantization shrinks model size by representing weights in lower precision (e.g., 4-bit instead of 32-bit), making large models runnable on everyday hardware with minimal quality loss.

Built-in chat interface: switch between models mid-conversation, adjust system prompts, and tweak generation parameters (temperature, top-p, context length), all from the UI, with no config files required.

Local inference server: LM Studio can run as a local server that mimics the OpenAI API. This means tools like Cursor, Continue, or any custom application built against OpenAI's SDK can be pointed at LM Studio with minimal changes.

Multi-model sessions: recent versions allow running multiple models simultaneously and routing between them, useful for comparing outputs or building multi-agent workflows.

How to get started.

Step 1 - Download LM Studio. Visit lmstudio.ai and download the installer for your platform. It is a standard application installer: no dependencies, no terminal required.

Step 2 - Browse and Download a Model. Open LM Studio and navigate to the Discover tab. Search for a model; try "Mistral 7B" or "Llama 3.2" to start. LM Studio will show you compatible quantized versions and flag whether they fit in your available RAM. Click Download; a progress bar shows the status. Most 7B models are 4-6GB depending on quantization.

Step 3 - Load the Model and Chat. Go to the Chat tab, select your downloaded model from the dropdown, and start typing. The model loads into memory (usually 5-20 seconds depending on size and hardware) and you are ready to go.

Step 4 - Start the Local Server. Navigate to the Local Server tab, select a model, and click Start Server. LM Studio will run an OpenAI-compatible API at http://localhost:1234/v1. You can now use it with any compatible tool or SDK.
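
To make Step 4 concrete, pointing the official OpenAI Python SDK at the local server takes only a base-URL swap; the server doesn't check API keys, so any placeholder string works:

```python
# Chat with a model served by LM Studio's local OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="lm-studio",  # placeholder; the local server ignores it
)

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # the model you loaded in LM Studio
    messages=[{"role": "user", "content": "Hello from my own machine!"}],
)
print(response.choices[0].message.content)
```
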
Understanding quantization: what do those letters mean? When browsing models in LM Studio, you will see file names like mistral-7b-instruct.Q4_K_M.gguf. The quantization suffix tells you the quality/size trade-off: lower numbers (Q2, Q3) mean smaller files and lower fidelity, while higher numbers (Q6, Q8) are larger and closer to the full-precision original. For most use cases, Q4_K_M is the sweet spot: it fits comfortably in 8GB of RAM and produces output that is nearly indistinguishable from the full-precision model.

LM Studio vs Ollama: which should you use? Both tools are excellent; the right choice depends on your workflow. Choose LM Studio if: you prefer a GUI, you want to browse and discover models visually, you are not comfortable with the command line, or you want to quickly compare multiple models side-by-side. Choose Ollama if: you prefer CLI tools, you are building scripts or automated pipelines, you want to integrate with Docker or server environments, or you need the lightest possible footprint. Many practitioners use both: LM Studio for exploration and experimentation, Ollama for integration into development workflows.

Privacy: the real selling point. It is worth stepping back and appreciating what LM Studio actually gives you from a privacy perspective. When you use ChatGPT, Claude, or Gemini, every prompt you send travels over the internet to a remote server. Your conversations may be used to improve models, reviewed by human trainers under certain conditions, or stored for extended periods. For many consumer use cases this is fine. For sensitive work (legal documents, medical notes, confidential business strategy, personal journaling) it is a meaningful concern. LM Studio eliminates this entirely. Your prompts never leave your machine. There is no account required (you can use it completely anonymously), no usage data sent to a server, and no terms of service governing what you can say. What you type stays on your computer.

The bottom line. LM Studio is the most accessible entry point into local AI for users who want power without complexity. Its clean interface, deep model library, and seamless server functionality make it genuinely useful for both beginners and experienced practitioners. If you have been curious about running AI locally but were put off by command-line tools, LM Studio removes every barrier. Download it, pull a model, and have your first fully private AI conversation today.
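
The 4-6GB figure above follows from simple arithmetic: parameter count times bits per weight. A back-of-envelope sketch (the effective bits per weight shown are rough figures; real GGUF files add a little metadata and mixed-precision overhead):

```python
# Rough estimate of quantized model file size: params * bits_per_weight / 8.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for label, bits in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q8_0", 8.5)]:
    print(f"7B at {label}: ~{approx_size_gb(7, bits):.1f} GB")
# 7B at Q4_K_M: ~3.9 GB -- consistent with the 4-6GB range quoted above
```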