Full-Time
Confirmed live in the last 24 hours
Platform for scaling AI workloads
$180.2k - $214k annually
Mid, Senior
Palo Alto, CA, USA + 1 more
More locations: San Francisco, CA, USA
This is a hybrid role, requiring some in-office presence.
Anyscale provides a platform designed to scale and productionize artificial intelligence (AI) and machine learning (ML) workloads. Its core technology, Ray, is an open-source framework that helps developers manage and scale AI applications across fields including generative AI, large language models (LLMs), and computer vision. Ray lets companies improve the scalability, latency, and cost-efficiency of their AI operations; some users report improvements of over 90% on these metrics. Anyscale serves clients such as OpenAI and Ant Group, who rely on Ray to train large models and improve their ML platforms. The company operates on a software-as-a-service (SaaS) model, charging clients a subscription fee for its managed platform built on Ray. Anyscale's goal is to empower organizations to scale their AI workloads efficiently and optimize operational performance.
Company Size
501-1,000
Company Stage
Series C
Total Funding
$252.5M
Headquarters
San Francisco, California
Founded
2019
Medical, Dental, and Vision insurance
401(k) retirement savings
Flexible time off
FSA and Commuter benefits
Parental and family leave
Office & phone plan reimbursement
Anyscale's partnership with Astronomer allows organizations to effectively manage and scale their ML workflows by integrating Astronomer's workflow-management capabilities with Anyscale's distributed computing power.
Anyscale unveils new products and AI Platform enhancements at Ray Summit 2024.
SAN FRANCISCO, July 31, 2024 (GLOBE NEWSWIRE) - Anyscale, the company behind Ray, the open source framework for scalable AI, named industry veteran Keerti Melkote as chief executive officer following a year of 4x revenue growth and explosive open source adoption.
Anyscale and deepsense.ai develop a scalable cross-modal image retrieval system for e-commerce.
Thousands of companies use the Ray framework to scale and run highly complex, compute-intensive AI workloads; in fact, you'd be hard-pressed to find a large language model (LLM) that hasn't been built on Ray. Those workloads contain loads of sensitive data which, researchers have found, could be highly exposed through a critical vulnerability (CVE) in the open-source unified compute framework. For the last seven months, this flaw has allowed attackers to compromise thousands of companies' AI production workloads, computing power, credentials, passwords, keys, tokens and "a trove" of other sensitive information, according to new research from Oligo Security. The vulnerability is formally disputed, meaning it is not officially considered a risk and has no patch. That makes it a "shadow vulnerability," one that doesn't appear in scans. Fittingly, researchers have dubbed it "ShadowRay."
With generative AI increasingly becoming table stakes, the big question facing many organizations is how to scale usage in a cost-efficient manner. That's a question Robert Nishihara, CEO and co-founder of Anyscale, is looking to answer. Anyscale is the lead commercial vendor behind the widely deployed open-source Ray framework for distributed machine learning training and inference. This week at the Ray conference, running Sept. 18-19 in San Francisco, Nishihara is outlining the success and growth of Ray to date and revealing what's next. Among the big pieces of news announced today is the general availability of Anyscale Endpoints, which enables organizations to easily fine-tune and deploy open-source large language models (LLMs). Anyscale is also announcing an expanded partnership with Nvidia that will see Nvidia's software for inference and training optimized for the Anyscale Platform. "If you took an Uber ride, ordered something on Instacart, listened to something on Spotify, or watched Netflix or TikTok, or use OpenAI's ChatGPT, you're interacting with models built with Ray," Nishihara told VentureBeat.
Anyscale launches new service Anyscale Endpoints, 10X more cost-effective for most popular open-source LLMs.
Seattle-based startup OctoML today released its new OctoAI self-optimizing infrastructure service to help organizations build and deploy generative AI applications. OctoML got its start in 2019 as a spinout from the University of Washington, with the foundation of the company's technology stack relying on the open-source Apache TVM machine learning (ML) compiler framework. Its original focus was helping organizations optimize ML models for deployment, an effort that has helped the company raise a total of $131.9 million to date, including an $85 million Series C round in 2021. In June 2022, OctoML added technology to help transform ML models into software functions. Now, the company is going a step further with its OctoAI service, which is all about optimizing the deployment of ML on infrastructure to improve performance and manage costs. "The demand for compute is just absurd," Luis Ceze, OctoML CEO, told VentureBeat.
SAN FRANCISCO, CA, Anyscale, the company enabling instant scaling of AI applications, has secured $100 million in Series C funding at a $1 billion valuation.
Anyscale, the AI infrastructure company built by the creators of Ray, the world’s fastest growing open source unified framework for scalable computing, launched Aviary, a new open source project designed to help developers simplify the painstaking process of choosing and integrating the best open source large language models (LLMs) into their applications.