Full-Time
Confirmed live in the last 24 hours
Provides AI models and APIs for search
$60k - $90k Annually
Entry, Junior
San Francisco, CA, USA
The position is onsite in San Francisco.
Hive offers advanced artificial intelligence models through APIs that enhance search capabilities, such as visual similarity and text-to-image search. Its deep learning models can accurately label and classify content, as well as generate images and text from prompts, making them valuable for industries like sports and marketing. Hive differentiates itself with high standards of information security, holding ISO 27001:2022 and SOC 2 Type 2 certifications, and serves a wide range of clients by providing AI solutions that improve operational efficiency. The company's goal is to empower businesses with AI technology that enhances their decision-making and content management.
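As a rough illustration of what an API-driven visual similarity search looks like in practice, the sketch below posts a query image to a search endpoint and reads back ranked matches. The endpoint URL, authentication header, and response fields are placeholder assumptions for illustration, not Hive's documented interface; consult the provider's API reference for the real details.

```python
# Minimal sketch of an API-driven visual similarity search.
# NOTE: the endpoint URL, header names, and response fields are hypothetical
# placeholders, not a specific vendor's documented API.
import requests

API_KEY = "YOUR_API_KEY"                                 # assumed bearer-token auth
ENDPOINT = "https://api.example.com/v1/visual-search"    # placeholder URL

def find_similar_images(image_path: str, top_k: int = 5) -> list[dict]:
    """Upload an image and return the top_k most visually similar matches."""
    with open(image_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            data={"limit": top_k},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"matches": [{"url": ..., "score": ...}, ...]}
    return response.json()["matches"]

if __name__ == "__main__":
    for match in find_similar_images("query.jpg"):
        print(f"{match['score']:.3f}  {match['url']}")
```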
Company Size
201-500
Company Stage
Series D
Total Funding
$138.3M
Headquarters
San Francisco, California
Founded
2013
Competitive Pay
Equity
Comprehensive Insurance
Catered Meals
Corporate Gym Membership
Virgin Money has collaborated with smart home energy specialist Hive to launch The Retrofit Boost, a mortgage product designed to help customers improve their property's energy efficiency.
Seattle-based nonprofit TrueMedia released a free AI-powered media verification tool Tuesday to help journalists and fact-checkers detect deepfakes and combat misinformation ahead of upcoming U.S. and international elections. The non-partisan organization, which launched in January, is led by Oren Etzioni, University of Washington professor and longtime AI specialist, and funded by Uber co-founder Garrett Camp through his Camp.org nonprofit foundation.

Although the tool isn't perfect, its ability to identify deepfakes is "extremely high," with about 90% accuracy across images, video, and audio, Etzioni said. TrueMedia uses a combination of internally developed technology and AI detection tools from its partners to analyze media and come up with a probability that content is fake. For example, the tool automatically labeled as "highly suspicious" a known fake video that purported to show Ukraine's top security official claiming responsibility for the March 22 terrorist attack at a Russian concert hall. The tool stated with 100% confidence that the video contained AI-generated imagery. "If it's a deepfake, we're very likely to catch it," Etzioni said.

In addition to launching the new tool Tuesday morning, TrueMedia reached a memorandum of understanding with Microsoft to share data and resources, collaborating on different AI models and approaches. Other partners of TrueMedia include Hive, Clarity, Reality Defender, OctoAI, AIorNot.com, and Sensity. The New York Times covered the launch of the tool Tuesday, citing examples including a fake image of Etzioni in the hospital that he generated using an AI tool.
A new nonprofit, nonpartisan technology organization called TrueMedia is developing an AI-powered tool to detect deepfake videos, photos, and audio, aiming to combat political disinformation in the leadup to the 2024 elections. Founded and led by Oren Etzioni, University of Washington professor and former CEO of the Allen Institute for AI, the Seattle-based group is backed by Uber co-founder Garrett Camp through his Camp.org nonprofit foundation. The plan, in essence, is to use AI to fight AI.

"Disinformation, transmitted virally over social networks, has emerged as the Achilles heel of democracy in the 21st Century," the group said in its announcement Wednesday morning, predicting "a tsunami of disinformation" in the 2024 election due to a sharp decline in the cost of using AI to create deceptive media.

TrueMedia plans to release a free, web-based tool in the first quarter of this year that combines advances from TrueMedia with existing deepfake detection tools in areas including computer vision and audio analysis. It will be available initially for use by journalists, fact-checkers, and online influencers before broader public release later in the year. The group is far from the first to take on this challenge, but Etzioni said he believes TrueMedia will be in a strong position to address it. "We think that both the particular focus on political deepfakes, and the particular expertise that we're bringing together, is going to allow us to go further, faster than has been done in the past," Etzioni said in an interview with GeekWire this week.

TrueMedia's technology will analyze media uploaded by users and indicate the likelihood that the content is manipulated by artificial intelligence, along with an explanation of its assessment. In the meantime, the organization is taking signups for a waitlist and encouraging visitors to its website to submit examples of political deepfake content that they discover online, to help develop its tools.

Etzioni said he was inspired to start looking into the problem after taking part in a meeting President Biden held with tech leaders this summer. At the same time, he emphasized the nonpartisan nature of the project. TrueMedia's AI tools will make a technical assessment about uploaded media, not a political judgment about the underlying content. "This wasn't what I was focused on [before the meeting]. I just realized how potentially horrific this can be in a narrowly divided election," he said.
OpenAI said companies can use its latest large language model (LLM), GPT-4, to develop artificial intelligence (AI)-assisted content moderation systems. Using GPT-4, companies can perform content moderation with more accurate and consistent labels, a faster feedback loop for policy refinement and a reduced need for human intervention, the company said in a Tuesday (Aug. 15) blog post. “We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators,” OpenAI said in the post. “Anyone with OpenAI API [application programming interface] access can implement this approach to create their own AI-assisted moderation system.”
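For context, here is a minimal sketch of the kind of AI-assisted moderation OpenAI describes, using the OpenAI Python SDK's chat completions endpoint to ask GPT-4 to label content against a platform policy. The policy text, label set, and prompt wording are illustrative assumptions, not OpenAI's published moderation prompts; real deployments would substitute their own platform-specific policy and taxonomy.

```python
# Illustrative sketch of GPT-4-assisted content moderation via the OpenAI API.
# The policy text and label set below are made-up examples for demonstration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label the user content with exactly one of:
ALLOW  - content complies with the policy
REVIEW - borderline content that needs a human moderator
REMOVE - content that clearly violates the policy (hate, threats, spam)
Respond with the label only."""

def moderate(content: str) -> str:
    """Return a policy label for a piece of user-generated content."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic labels make policy refinement easier
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("Totally ordinary comment about the weather."))
```

Keeping the policy in the system prompt is what enables the "faster feedback loop" the post mentions: refining moderation behavior becomes a matter of editing the policy text rather than retraining a model or re-briefing human reviewers.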
Now, Hive is unveiling the most ambitious of its Intelligent Search services yet: Web Search, which performs visual comparisons against content on the open web.