Internship

Research Scientist Intern

Machine Perception for Input and Interaction, PhD

Meta

10,001+ employees

Social media platforms and virtual reality solutions

No salary listed

Company Historically Provides H1B Sponsorship

London, UK

Category
Applied Machine Learning
Deep Learning
Computer Vision
AI & Machine Learning
Required Skills
Python
TensorFlow
PyTorch
Machine Learning
Natural Language Processing (NLP)
C/C++
Computer Vision
Requirements
  • Currently has, or is in the process of obtaining, a PhD degree in EE/CS, Applied Math or a related STEM field
  • Experience in one or more of the following: computer vision (e.g. tracking, pose estimation, action/emotion recognition), computer graphics (e.g. appearance, geometry, physically-based modeling), robotics (state estimation, optimal control), machine learning (e.g. efficient deep learning, domain adaptation, transfer learning), natural language processing, or human-computer interaction
  • Excellent communication skills
  • High levels of creativity and problem solving capabilities
  • Experience with C++ or Python
  • Experience with ML frameworks such as PyTorch, TensorFlow, etc.
  • Must obtain work authorization in country of employment at the time of hire, and maintain ongoing work authorization during employment
Responsibilities
  • Design and execution of algorithms in the domain of computer vision, machine learning, computer graphics, sensor fusion, or HCI software and hardware prototyping
  • Design of user studies and experiments
  • Collaboration with other researchers across various disciplines
  • Communication of research agenda, progress, and results
Desired Qualifications
  • Proven track record of achieving significant results as demonstrated by grants, fellowships, patents, or first-authored publications at leading journals or conferences such as CVPR, ECCV/ICCV, BMVC, NeurIPS, ICML, ICLR, CHI, SIGGRAPH/SIGGRAPH Asia, ICRA, IROS, RSS, TPAMI, IJCV, etc.
  • Demonstrated software engineering experience via an internship, work experience, coding competitions, or widely used contributions in open source repositories (e.g. GitHub)
  • Experience with high-volume, multi-sensor streaming data processing and real-time multi-sensor fusion (in particular cameras) for computer vision and machine learning systems
  • Experience optimizing and deploying deep learning models on mobile-device SoCs for high-performance real-time applications
  • Experience with Machine Learning for 3D data (such as meshes, point clouds, Gaussian splatting, and voxels)
  • Experience with Machine Learning for Visual Synthesis
  • Intent to return to degree program after the completion of the internship/co-op
  • Experience working and communicating cross functionally in a team environment

Meta Platforms Inc. focuses on social media, communication tools, and virtual reality. It operates popular platforms like Facebook, Instagram, and WhatsApp, allowing users to connect, share content, and engage in communities. The Oculus division provides virtual reality hardware and experiences. Meta's primary revenue comes from advertising, offering businesses tools to target specific audiences using insights from its large user base. This advertising service is scalable and allows for tailored audience segmentation. Additionally, Meta explores revenue through virtual reality product sales and the metaverse, where it looks to monetize virtual goods and services. The company also invests in artificial intelligence and augmented reality to enhance its offerings, aiming to combine user engagement with advanced marketing tools.

Company Size

10,001+

Company Stage

IPO

Headquarters

Menlo Park, California

Founded

2004

Simplify's Take

What believers are saying

  • Llama 4 models outperform competitors like GPT-4o, attracting developers to Meta's platforms.
  • Ray-Ban smart glasses could open new revenue streams in consumer electronics.
  • Mocha AI system enhances interactive digital content, boosting user engagement on social media.

What critics are saying

  • Criticism of Llama 4's performance may harm Meta's AI reputation.
  • Joelle Pineau's departure could disrupt Meta's AI development projects.
  • Privacy concerns over smart glasses and neural wristband may lead to regulatory scrutiny.

What makes Meta unique

  • Meta's Llama 4 Behemoth is among the smartest large language models available.
  • Meta's Ray-Ban smart glasses integrate augmented reality into everyday wearables.
  • Meta's neural wristband offers intuitive control for smart glasses using hand gestures.

Benefits

Stock Options

Company Equity

Mental Health Support

Flexible Work Hours

Growth & Insights and Company News

Headcount

6 month growth

4%

1 year growth

7%

2 year growth

4%
VentureBeat
Apr 8th, 2025
Meta Defends Llama 4 Release Against ‘Reports Of Mixed Quality,’ Blames Bugs

Meta's new flagship AI language model Llama 4 arrived suddenly over the weekend, with the parent company of Facebook, Instagram, WhatsApp and Quest VR (among other services and products) revealing not one, not two, but three versions, all upgraded to be more powerful and performant using the popular "Mixture-of-Experts" architecture and a new training method involving fixed hyperparameters, known as MetaP. All three are also equipped with massive context windows: the amount of information that an AI language model can handle in one input/output exchange with a user or tool. But following the surprise announcement and public release of two of those models for download and use on Saturday (the lower-parameter Llama 4 Scout and the mid-tier Llama 4 Maverick), the response from the AI community on social media has been less than adoring.

Llama 4 sparks confusion and criticism among AI users: an unverified post on the North American Chinese-language community forum 1point3acres made its way to the r/LocalLlama subreddit on Reddit, alleging to be from a researcher at Meta's GenAI organization who claimed that the model performed poorly on third-party benchmarks internally, and that company leadership "suggested blending test sets from various benchmarks during the post-training process, aiming to meet the targets across various metrics and produce a 'presentable' result." The post was met with skepticism from the community regarding its authenticity, and a VentureBeat email to a Meta spokesperson has not yet received a reply. But other users found reasons to doubt the benchmarks regardless

NDTV
Apr 6th, 2025
Meta Launches Llama 4: All About The Latest Open-Source AI Model

Additionally, Meta introduced Llama 4 Behemoth, describing it as one of the smartest large language models (LLMs) yet, and the most powerful version they've developed.

Azmotech
Apr 6th, 2025
Meta Launches Llama 4 AI Models; Beats GPT-4o and Grok 3 in LMArena

After a four-month hiatus, Meta has unveiled a new lineup of Llama 4 open-weight models.

Latest Nigerian News
Apr 6th, 2025
Mark Zuckerberg says Meta will share news about a Llama 4 Reasoning model "in the next month" (Cheyenne MacDonald/Engadget)

Meta has released the first two models from its multimodal Llama 4 suite: Llama 4 Scout and Llama 4 Maverick.

Govindh Tech
Apr 5th, 2025
LLaMA 3.3 70B Multilingual AI Model Redefines Performance

Meta has released a new, state-of-the-art 70B model that performs comparably to the Llama 3.1 405B model.