Job Description
The Advanced Technology Group (ATG) at ServiceNow is a customer-focused innovation group that builds intelligent software and smart user experiences, applying both established and cutting-edge technologies to enable end-to-end, industry-leading work experiences for customers. We are a group of researchers, applied scientists, engineers, and product managers with a dual mission: we build and evolve the AI platform and partner with teams to create products and end-to-end AI-powered work experiences. In equal measure, we lay the foundations — researching, experimenting with, and de-risking AI technologies that will unlock new work experiences in the future.
You will play a major part in driving significant innovations in our Large Language Models (LLMs) for Enterprise Language Generation, which will power the Now Platform with AI experiences in our customers' day-to-day work. We are just getting started with our early-adopter customers, and we need your help in driving new innovations and techniques that can leapfrog existing methods and create next-generation models that compete with the very best. This work will power an amazing range of solutions for our 9,000+ enterprise customers around the world.
What you get to do in this role:
- Confronted with real-world challenges and datasets, use your AI/ML expertise and creativity to apply existing methods — and develop new ones — to solve these problems in a practical, scalable way.
- Research and propose advanced new techniques that can leapfrog existing ones. Work with other researchers and applied research scientists to validate and drive these innovations, ultimately producing vastly improved models.
- Contribute to the design, implementation, and scaling of LLMs as a key AI-first platform offering in ServiceNow's portfolio.
- Collaborate with a team of like-minded developers, research scientists, product managers, and engineers to produce top-quality research.
Qualifications
To be successful in this role you have:
- Currently pursuing a PhD or advanced Master's degree in Computer Science or a relevant field (graduating December 2024 or later)
- Publication record at top conferences (e.g., NeurIPS, ICML, ICLR, CVPR, ACL, EMNLP)
- Experience pretraining an LLM (preferred but not mandatory)
- Experience with instruction fine-tuning and other fine-tuning techniques (required)
- Experience with reinforcement learning (preferred but not mandatory)
- Experience with various transformer architectures (e.g., auto-regressive, sequence-to-sequence)
- A habit of staying current with the latest advances and research papers
Expected Outcomes:
- Publication of work at a top conference.
- Patent (if applicable).
- Working proof of concepts.
For positions in the Bay Area, we offer a base pay of $73.56/hour, plus equity (when applicable), variable/incentive compensation and benefits. Sales positions generally offer a competitive On Target Earnings (OTE) incentive compensation structure. Please note that the base pay shown is a guideline, and individual total compensation will vary based on factors such as qualifications, skill level, competencies and work location. We also offer health plans, including flexible spending accounts, a 401(k) Plan with company match, ESPP, matching donations, a flexible time away plan and family leave programs (subject to eligibility requirements). Compensation is based on the geographic location in which the role is located, and is subject to change based on work location.