We are the Generative AI team under Monetization Technology. Our team develops cutting-edge Generative AI technologies across all modalities, including text, images, videos, and landing pages, and creates industry-leading technical solutions to improve creative efficiency for advertisers, agencies, and creators. We are committed to automating creative workflows with Generative AI technologies to increase overall revenue for advertisers, agencies, and creators.
We aim to drive and lead generative AI in the ads tech and creative industry, powering products and delivering value for our clients, creators, and the whole ecosystem. We are looking for infrastructure engineers who are excited to grow their business understanding, build highly scalable and reliable software and infrastructure, partner cross-functionally with global teams, and make a big impact. If you are someone who welcomes challenges, we are eager to have you on the team!
We are looking for talented individuals to join us for an internship in 2025. Internships at TikTok aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance.
Applications will be reviewed on a rolling basis. We encourage you to apply early. Candidates can apply to a maximum of TWO positions and will be considered for jobs in the order they apply. The application limit is applicable to ByteDance and its affiliates' jobs globally.
Internships at ByteDance aim to provide students with hands-on experience in developing fundamental skills and exploring potential career paths. A vibrant blend of social events and enriching development workshops will be available for you to explore. Here, you will apply your knowledge in real-world scenarios while laying a strong foundation for personal and professional growth. The internship runs for 12-24 weeks and begins in May/June 2025 or August/September 2025. Successful candidates must be able to commit to one of the start dates below:
Summer start dates:
- Monday, May 12
- Monday, May 19
- Tuesday, May 27 (Monday, May 26 is Memorial Day)
- Monday, June 9
- Monday, June 23
Fall start dates:
- Monday, August 11
- Monday, August 25
- Monday, September 8
- Monday, September 22
Please state your availability (start date and end date) clearly in your resume.
Responsibilities:
- Work closely with infrastructure architects and SREs to enhance the Generative AI Platform's availability, scalability, and cost-efficiency.
- Engineer robust, high-performance data processing and large language model training/inference pipelines; drive engineering excellence and optimization initiatives, including cost optimization and performance tuning of the ML platform, to ensure the most effective use of resources.
- Provide a cutting-edge platform to model researchers and data pipeline engineers, accelerating the development and deployment of innovative ML models.
- Stay abreast of the latest advancements in machine learning infrastructure to implement solutions that enhance platform efficiency and performance.
Minimum Qualifications:
1. B.S./M.S. in Computer Science, Computer Engineering, or a related field.
2. Proficiency in Python and familiarity with deep learning frameworks such as PyTorch. Strong skills in Linux, Docker, Kubernetes, Infrastructure as Code (IaC), and high-performance computing principles.
3. Demonstrated expertise in scaling and optimizing generative AI engineering tasks in GPU-intensive environments.
4. Expertise in scaling generative AI models using sequence-parallel, model-parallel, and pipeline-parallel techniques across multiple GPUs.
5. Proven ability to guide and automate the acceleration of model deployment efficiently, enhancing platform capabilities and reducing time-to-market for new features.
Preferred Qualifications:
1. Technical Skills: Strong preference for candidates with hands-on experience in CUDA and FP8/FP4 optimization for training and inference.
2. Deep understanding of cloud infrastructure platforms like GCP/Azure, and experience collaborating with DevOps/SRE teams for large-scale ML project deployments.
3. Scheduling Services: Familiarity with schedulers such as Slurm, Kubernetes Volcano, or third-party tools such as Run.AI.
4. Distributed Computing: Experience in large-scale ML training and deployment, and familiarity with distributed computing frameworks such as Ray.io.
5. Strong problem-solving skills and proficiency in communication, collaboration, and project management.