Employment Type: Full-Time
Company Tagline: Advances artificial intelligence for public benefit
Salary: No salary listed
Experience Level: Senior, Expert
Location: Mountain View, CA, USA
DeepMind focuses on advancing artificial intelligence through a collaborative team of scientists, engineers, and machine learning experts. Their technologies are designed for public benefit and scientific discovery, with a strong emphasis on safety and ethics. DeepMind aims to develop artificial general intelligence (AGI), which refers to systems that can solve a wide range of problems. They have achieved significant milestones in AI research, such as creating programs that can diagnose eye diseases as accurately as top doctors, reduce energy consumption in data centers, and predict the 3D shapes of proteins, which may revolutionize drug development. Their goal is to leverage AI to address some of the most pressing scientific challenges facing society.
Company Size: 1,001-5,000
Company Stage: Acquired
Total Funding: $533M
Headquarters: London, United Kingdom
Founded: 2010
Benefits: Performance Bonus
Seven of the eight authors of the landmark ‘Attention is All You Need’ paper, which introduced Transformers, gathered for the first time as a group for a chat with Nvidia CEO Jensen Huang in a packed ballroom at the GTC conference today. They included Noam Shazeer, co-founder and CEO of Character.ai; Aidan Gomez, co-founder and CEO of Cohere; Ashish Vaswani, co-founder and CEO of Essential AI; Llion Jones, co-founder and CTO of Sakana AI; Illia Polosukhin, co-founder of NEAR Protocol; Jakob Uszkoreit, co-founder and CEO of Inceptive; and Lukasz Kaiser, member of the technical staff at OpenAI. Niki Parmar, co-founder of Essential AI, was unable to attend. In 2017, the eight-person team at Google Brain struck gold with Transformers, a neural network NLP breakthrough that captured the context and meaning of words more accurately than its predecessors, the recurrent neural network and the long short-term memory network.
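For context on what that breakthrough computes: the paper's core operation is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, which lets every token weigh every other token directly rather than passing state step by step as RNNs and LSTMs do. Below is a minimal NumPy sketch for illustration only; the function name, toy shapes, and data are assumptions made here, not code from the paper or the article.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each output row is a weighted
    average of the value vectors, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                            # (n_queries, d_v)

# Toy self-attention over 3 "token" embeddings of width 4 (shapes are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```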
All weekend, it seemed like my social media feed was little more than screenshots and memes and links to headlines that either poked fun or took painful stabs at Google’s so-called ‘woke’ Gemini AI model. Days after Google said it had “missed the mark” by outputting ahistorical and inaccurate Gemini images, X (formerly Twitter) had a field day with screenshots of Gemini output that claimed “it is not possible to definitely say who negatively impacted society more, Elon tweeting memes or Hitler.”

In particular, VC Marc Andreessen spent the weekend gleefully re-posting inaccurate and offensive outputs that he claimed were “deliberately programmed with the list of people and ideas its creators hate.”

This whiplash-inducing shift from the positive response Google received after Gemini’s release in December — with its “Google-will-finally-take-on-GPT-4” vibes — is especially notable because just a little over a year ago, the New York Times reported that Google had declared a “code red” as ChatGPT’s release in November 2022 set off a generative AI boom, potentially leaving the search engine giant in the dust.

Even though its researchers had helped build the technology underpinning ChatGPT, Google had long been wary of damaging its brand, the New York Times article said — while new companies like OpenAI “may be more willing to take their chances with complaints in exchange for growth.” But with ChatGPT booming, according to a memo and audio recording, Google CEO Sundar Pichai had “been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses.”
In a new post this morning, Meta announced it will identify and label AI-generated content on Facebook, Instagram and Threads — though it cautioned it is “not yet possible to identify all AI-generated content.” The announcement comes two weeks after pornographic AI-generated deepfakes of singer Taylor Swift went viral on Twitter, leading to condemnation from fans and lawmakers, as well as global headlines. It also comes as Meta faces pressure to deal with AI-generated images and doctored videos in advance of the 2024 US elections.

Nick Clegg, president of global affairs at Meta, wrote that “these are early days for the spread of AI-generated content,” adding that as it becomes more common, “there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.” The company would “continue to watch and learn, and we’ll keep our approach under review as we do. We’ll keep collaborating with our industry peers. And we’ll remain in a dialogue with governments and civil society.”

The post emphasized that Meta is working with industry organizations like the Partnership on AI (PAI) to develop common standards for identifying AI-generated content. It said the invisible markers used for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices.