Full-Time
Confirmed live in the last 24 hours
AI platform for model development and management
$156k - $255k Annually
Senior
Remote in USA
Remote-first culture with in-office flexibility in San Francisco.
Weights & Biases provides a platform designed to help AI developers create more efficient machine learning models quickly. The platform includes tools for tracking experiments, managing workflows, evaluating model performance, and reproducing models, as well as features for version control and dataset iteration. This allows AI teams to conduct more experiments and improve their productivity. Weights & Biases operates on a software-as-a-service (SaaS) model, where clients pay a subscription fee to access its features, ensuring a steady revenue stream. The company stands out in the growing AI and machine learning market by offering a comprehensive solution that simplifies the development process for machine learning teams.
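The experiment-tracking core of such a platform reduces to a simple pattern: a run pins its configuration for reproducibility and logs per-step metrics for later comparison. The `Run` class below is a hypothetical, stdlib-only stand-in to illustrate that pattern, not the actual Weights & Biases SDK.

```python
import json


class Run:
    """Hypothetical stand-in for an experiment-tracking run: it records a
    config (so the experiment can be reproduced) and per-step metrics
    (so experiments can be compared)."""

    def __init__(self, config):
        self.config = dict(config)  # hyperparameters, pinned for reproducibility
        self.history = []           # one dict of metrics per logged step

    def log(self, metrics, step):
        self.history.append({"step": step, **metrics})

    def summary(self):
        # Last logged value of each metric, as dashboards typically show.
        out = {}
        for row in self.history:
            out.update({k: v for k, v in row.items() if k != "step"})
        return out


run = Run({"lr": 1e-3, "batch_size": 32})
for step in range(3):
    run.log({"loss": 1.0 / (step + 1)}, step=step)

print(json.dumps(run.summary()))  # final value of each logged metric
```

A real tracking client adds persistence, dashboards, and artifact/version control on top of this loop, but the config-plus-history shape is the essence of what gets recorded per experiment.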
Company Size
201-500
Company Stage
Late Stage VC
Total Funding
$243.2M
Headquarters
San Francisco, California
Founded
2017
🏝️ Unlimited vacation time
🩺 100% Medical, Dental, and Vision coverage for employees and their families
🏠 Remote first culture with in-office flexibility in San Francisco
💵 $1000 home office budget with new high-powered laptop
🥇 Truly competitive salary and equity
🚼 12 weeks of parental leave
📈 401(k)
The industry’s push into agentic AI continues, with Nvidia announcing several new services and models to facilitate the creation and deployment of AI agents. Today, Nvidia launched Nemotron, a family of models based on Meta’s Llama and trained on the company’s techniques and datasets. The company also announced new AI orchestration blueprints to guide AI agents. These latest releases bring Nvidia, a company better known for the hardware that powers the generative AI revolution, to the forefront of agentic AI development. Nemotron comes in three sizes: Nano, Super and Ultra. It also comes in two flavors: Llama Nemotron for language tasks and the Cosmos Nemotron vision model for physical AI projects.
The software development world is experiencing its biggest transformation since the advent of open-source coding. Artificial intelligence assistants, once viewed with skepticism by professional developers, have become indispensable tools in the $736.96 billion global software development market. One of the products leading this seismic shift is Anthropic’s Claude. Claude is an AI model that has captured the attention of developers worldwide and sparked a fierce battle among tech giants for dominance in AI-powered coding. Claude’s adoption has skyrocketed this year, with the company telling VentureBeat its coding-related revenue surged 1,000% over just the last three months. Software development now accounts for more than 10% of all Claude interactions, making it the model’s most popular use case.
Artificial intelligence company Cohere unveiled significant updates to its fine-tuning service on Thursday, aiming to accelerate enterprise adoption of large language models. The enhancements support Cohere’s latest Command R 08-2024 model and provide businesses with greater control and visibility into the process of customizing AI models for specific tasks. The updated offering introduces several new features designed to make fine-tuning more flexible and transparent for enterprise customers. Cohere now supports fine-tuning for its Command R 08-2024 model, which the company claims offers faster response times and higher throughput than larger models. This could translate to meaningful cost savings for high-volume enterprise deployments, as businesses may achieve better performance on specific tasks with fewer compute resources. A comparison of AI model performance on financial question-answering tasks shows Cohere’s fine-tuned Command R model achieving competitive accuracy, highlighting the potential of customized language models for specialized applications.
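Fine-tuning services of this kind typically ingest chat-formatted training examples as JSONL (one JSON record per line). The sketch below builds such a file in memory; the role names and field layout are assumptions for illustration, and the provider's own documentation defines the exact schema.

```python
import json

# Illustrative chat-format fine-tuning records. The "System"/"User"/"Chatbot"
# role names and the overall field layout are assumptions, not a guaranteed
# schema; the example answer is fabricated placeholder data.
examples = [
    {"messages": [
        {"role": "System", "content": "You answer financial questions concisely."},
        {"role": "User", "content": "What was Q3 operating margin?"},
        {"role": "Chatbot", "content": "Q3 operating margin was 18.2%."},
    ]},
]

# JSONL: serialize each record onto its own line.
jsonl = "\n".join(json.dumps(rec) for rec in examples)
print(jsonl)
```

Keeping each example as one self-contained conversation per line is what lets a fine-tuning service stream and shuffle the dataset without parsing the whole file.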
Weights & Biases, Inc. has also revamped the Weave docs, released several new Weave cookbooks, and added a new in-depth Weave course to its curriculum.
Aimpoint Digital partners with AI21 Labs and Weights & Biases to revolutionize AI offerings.
SAN FRANCISCO--(BUSINESS WIRE)--Fully Connected -- Weights & Biases, the AI developer platform, today announced an expanded integration with NVIDIA NIM microservices to enable enterprises to build custom AI applications and optimize inference for production. Building on Weights & Biases’ initial NIM integration announced last month at NVIDIA GTC, the additional customization capabilities announced today – currently in private preview – provide a more comprehensive and accessible way for developers using the Weights & Biases platform to customize and deploy domain-specific, enterprise-grade AI applications. “Enterprises today want to use LLMs to deploy custom applications trained on their own data, such as a customer support agent that quickly and correctly answers customers' questions,” said Lukas Biewald, CEO at Weights & Biases. “Our expanded integration with NVIDIA NIM will give our customers the higher LLM performance they want with models that have been fine-tuned on their business data and optimized for performance and low latency. Enterprise AI also becomes faster to deploy since we’re closing the operational gap between training and inference.” “Across industries, businesses are seeking an engine to supercharge their generative AI strategies,” said Manuvir Das, vice president of enterprise computing at NVIDIA.
SAN FRANCISCO--(BUSINESS WIRE)--Weights & Biases, the AI developer platform, today announced it has received the 2024 Google Cloud Technology Partner of the Year Award for Generative AI - Overall Impact. Weights & Biases is being recognized for achievements in the Google Cloud ecosystem, helping joint customers track, analyze, evaluate, and deploy generative AI applications powered by LLMs at enterprise scale. “Google Cloud's Partner Awards celebrate the transformative impact and value that partners have delivered for customers," said Kevin Ichhpurani, Corporate Vice President, Global Ecosystem and Channels at Google Cloud. "We're proud to announce Weights & Biases as a 2024 Google Cloud Partner Award winner and recognize their achievements enabling customer success from the past year.” Weights & Biases provides AI developers with the tools needed to build and deploy generative AI applications for enterprises of all sizes and in any vertical. The company is trusted by some of the most advanced enterprises and leaders in the generative AI industry. “We’re committed to building the best tools for machine learning practitioners. That means providing access to compute that is both scalable and price efficient, the right infrastructure to operationalize ML activities, and guardrails for orchestration and deployment to production,” said Seann Gardiner, VP, Business Development at Weights & Biases. “Many of our customers rely on Google Cloud for their generative AI needs, and we’re honored to be recognized as a Google Cloud Partner of the Year for our commitment to the adoption and advancement of generative AI.” Weights & Biases integrates with all machine learning workflows on Google Cloud. Developers leveraging both the scalable infrastructure provided by Google Cloud Storage, Google Compute Engine, and Google Kubernetes Engine and ML development platforms such as Vertex AI can take advantage of the Weights & Biases platform.
SAN FRANCISCO--(BUSINESS WIRE)--Weights & Biases, the AI developer platform, today announced multiple new platform integrations at NVIDIA GTC, a global AI conference running from March 18 to 21. The integrations include support for NVIDIA DGX systems, which allows customers to use Weights & Biases software to easily access NVIDIA accelerated computing resources both in the cloud and on-premises, as well as support for NVIDIA NIM and other microservices, part of the NVIDIA AI Enterprise software platform. These new platform integrations add to the existing Weights & Biases integrations with other software included with NVIDIA AI Enterprise, such as the NeMo framework for large language model (LLM) development, NVIDIA MONAI for training medical imaging models for healthcare AI applications, and the NVIDIA TAO Toolkit. Weights & Biases is a DGX-Ready Software partner, an NVIDIA AI Enterprise partner, and a member of NVIDIA Inception. W&B Launch, part of the Weights & Biases AI developer platform, provides seamless portability of AI workloads across compute clusters, allowing users to scale training up and out without infrastructure friction or permission sprawl. W&B Launch already supports access to the high-performance compute required for building and deploying AI models on AWS, Google Cloud, and Microsoft Azure. “W&B Launch gives our machine learning (ML) engineers easy access to compute so they can dramatically scale our training workloads for our computer vision models,” said Jayden Elliott, software engineer at VisualCortex, a video intelligence platform.
“The intuitive platform enables our team to concentrate on our fundamental responsibilities of model training and evaluation, alleviating concerns regarding infrastructure management.” W&B Launch now supports NVIDIA DGX systems, allowing Weights & Biases customers to easily scale up ML experimentation and hyperparameter tuning activities by leveraging the power of NVIDIA DGX AI supercomputing. Weights & Biases is also unveiling one of the first integrations with NVIDIA NIM inference microservices, designed to bridge the gap between the complex world of AI development and the operational needs of enterprise environments.
NVIDIA NIM microservices optimize inference on more than two dozen popular AI models from NVIDIA and its partner ecosystem to accelerate production AI: a new catalog of GPU-accelerated NVIDIA NIM microservices and cloud endpoints for pretrained AI models, optimized to run on hundreds of millions of CUDA-enabled GPUs across clouds, data centers, workstations and PCs. Enterprises can use the microservices to accelerate data processing, LLM customization, inference, retrieval-augmented generation and guardrails; they have been adopted by a broad AI ecosystem, including leading application platform providers Cadence, CrowdStrike, SAP, ServiceNow and more. SAN JOSE, Calif., March 18, 2024 (GLOBE NEWSWIRE) -- NVIDIA today launched dozens of enterprise-grade generative AI microservices that businesses can use to create and deploy custom applications on their own platforms while retaining full ownership and control of their intellectual property. Built on top of the NVIDIA CUDA® platform, the catalog of cloud-native microservices includes NVIDIA NIM™ microservices for optimized inference on more than two dozen popular AI models from NVIDIA and its partner ecosystem. In addition, NVIDIA accelerated software development kits, libraries and tools can now be accessed as NVIDIA CUDA-X™ microservices for retrieval-augmented generation (RAG), guardrails, data processing, HPC and more. NVIDIA also separately announced over two dozen healthcare NIM and CUDA-X microservices. The curated selection of microservices adds a new layer to NVIDIA’s full-stack computing platform.
This layer connects the AI ecosystem of model developers, platform providers and enterprises with a standardized path to run custom AI models optimized for NVIDIA’s CUDA installed base of hundreds of millions of GPUs across clouds, data centers, workstations and PCs. Among the first to access the new NVIDIA generative AI microservices available in NVIDIA AI Enterprise 5.0 are leading application, data and cybersecurity platform providers including Adobe, Cadence, CrowdStrike, Getty Images, SAP, ServiceNow, and Shutterstock. “Established enterprise platforms are sitting on a goldmine of data that can be transformed into generative AI copilots,” said Jensen Huang, founder and CEO of NVIDIA. “Created with our partner ecosystem, these containerized AI microservices are the building blocks for enterprises in every industry to become AI companies.” NIM inference microservices provide pre-built containers powered by NVIDIA inference software — including Triton Inference Server™ and TensorRT™-LLM — which enable developers to reduce deployment times from weeks to minutes. They provide industry-standard APIs for domains such as language, speech and drug discovery to enable developers to quickly build AI applications using their proprietary data hosted securely in their own infrastructure.
These applications can scale on demand, providing flexibility and performance for running generative AI in production on NVIDIA-accelerated computing platforms. NIM microservices provide the fastest and highest-performing production AI containers for deploying models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, as well as open models from Google, Hugging Face, Meta, Microsoft, Mistral AI and Stability AI. ServiceNow today announced that it is using NIM to develop and deploy new domain-specific copilots and other generative AI applications faster and more cost-effectively. Customers will be able to access NIM microservices from Amazon SageMaker, Google Kubernetes Engine and Microsoft Azure AI, and integrate with popular AI frameworks like Deepset, LangChain and LlamaIndex. CUDA-X microservices provide end-to-end building blocks for data preparation, customization and training to speed production AI development across industries, covering RAG, data processing, guardrails and HPC. To accelerate AI adoption, enterprises may use CUDA-X microservices including NVIDIA Riva for customizable speech and translation AI, NVIDIA cuOpt™ for routing optimization, as well as NVIDIA Earth-2 for high-resolution climate and weather simulations. NeMo Retriever™ microservices let developers link their AI applications to their business data — including text, images and visualizations such as bar graphs, line plots and pie charts — to generate highly accurate, contextually relevant responses.
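Because NIM exposes industry-standard APIs, a deployed language model is typically called over HTTP with an OpenAI-style chat-completions payload. The sketch below only builds and prints such a payload; the endpoint URL and model name are placeholders, and the actual request (left commented out) assumes a running NIM service.

```python
import json

# Placeholder endpoint and model name -- the real values come from the
# deployed NIM container or the NVIDIA API catalog.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize retrieval-augmented generation in one sentence."}
    ],
    "max_tokens": 64,
}

# Sending the request would look roughly like this (needs a live service):
# import urllib.request
# req = urllib.request.Request(
#     NIM_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())

print(json.dumps(payload, indent=2))
```

The value of the standardized API is that swapping one hosted model for another is, at the client, just a change to the `model` field and the endpoint URL.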
SAN FRANCISCO--(BUSINESS WIRE)--Weights & Biases, the AI developer platform, today announced its participation in the National Artificial Intelligence Research Resource (NAIRR) pilot program. This program is a first step toward realizing the vision of a shared research infrastructure that will strengthen and democratize access to the critical resources needed to power responsible AI discovery and innovation. The NAIRR pilot is a collaboration between the academic, industry, nonprofit and government sectors and is intended to promote cross-sector partnerships. The pilot will initially support AI research to advance safe, secure, and trustworthy AI, as well as the application of AI to challenges in healthcare and in environmental and infrastructure sustainability. The NAIRR pilot will also provide infrastructure support to educators to enable training on AI technologies and responsible approaches to them. “By contributing licenses for their AI Developer Platform, Weights & Biases is providing NAIRR Pilot users access to critical software and tools needed to improve experimental rigor and model reproducibility, key elements for ensuring a responsible AI innovation ecosystem,” said Katie Antypas, office director for the National Science Foundation’s Office of Advanced Cyberinfrastructure.