Full-Time

Principal Software Engineer

AI Inference

NVIDIA

10,001+ employees

Designs GPUs and AI HPC platforms

Compensation Overview

$272k - $431.3k/yr

+ Equity

Company Historically Provides H1B Sponsorship

Remote in USA

More locations: Santa Clara, CA, USA | South Carolina, USA

Hybrid

Category
Software Engineering
Required Skills
Rust
Python
CUDA
Observability
Requirements
  • 15+ years building production software with significant depth in systems engineering; strong track record of owning ambiguous, high-impact technical problems end-to-end.
  • Demonstrated expertise in LLM inference/serving systems (e.g., vLLM, SGLang) and the tradeoffs that drive real production performance.
  • Strong programming skills in Rust, C++, Python, CUDA; ability to read, modify, and optimize performance-critical code across layers.
  • Experience with GPU performance analysis tools and methodologies (profiling, microbenchmarking, memory/comms analysis) and a strong measurement culture.
  • Solid foundation in distributed systems and concurrency: queues/schedulers, RPC/streaming, multi-process/multi-threaded runtime behavior, and scaling patterns across nodes.
  • Excellent communication skills; ability to influence across teams and represent NVIDIA well in open-source technical forums.
  • BS/MS in Computer Science, Computer Engineering, or related field (or equivalent experience).
Responsibilities
  • Drive upstream-first engineering in vLLM/SGLang: author and land PRs, engage in development discussions, help shape roadmaps, and build durable maintainer relationships.
  • Build and implement inference-runtime features that improve efficiency, latency, and tail behavior: request scheduling, batching policies, KV-cache management (paging/sharding), memory planning, and streaming.
  • Optimize core hot paths across the stack—from Python orchestration down to C++/CUDA kernels—using profiling and measurement to guide decisions.
  • Improve multi-GPU and multi-node inference: communication patterns, parallelism strategies (tensor/sequence/pipeline), and system-level scaling/efficiency.
  • Strengthen correctness, robustness, and operability: determinism where needed, graceful degradation, backpressure, observability hooks, and performance regression testing.
  • Collaborate across NVIDIA to integrate upstream advances with production needs (deployment patterns, compatibility, security posture) while keeping changes broadly adoptable by the community.
  • Mentor senior engineers, raise the technical bar through design and code reviews, and establish guidelines for performance engineering and upstream contribution workflows.
Desired Qualifications
  • Substantial open-source contributions to vLLM, SGLang, PyTorch, Triton, NCCL, or related GPU/inference infrastructure; prior maintainer experience is a plus.
  • Shipped performance features such as paged attention/KV paging, speculative decoding, advanced scheduling, quantization-aware serving, or low-latency streaming optimizations.
  • Experience optimizing inference across the full stack: tokenizer and Python runtime overheads, kernel fusion, memory bandwidth, PCIe/NVLink effects, and network fabrics (e.g., InfiniBand).
  • Built robust benchmarking and regression infrastructure for latency and efficiency, including dataset selection, load modeling, and reproducible performance tracking.

NVIDIA designs graphics processing units (GPUs) and computing platforms used for gaming, data centers, and artificial intelligence. These products use parallel processing to handle complex mathematical calculations much faster than standard computer processors, supported by a software ecosystem that allows developers to build and run AI models. Unlike competitors that may focus solely on hardware, NVIDIA integrates its chips with specialized software and cloud services to create a complete environment for high-performance tasks. The company’s goal is to provide the underlying technology necessary to power advanced computing, from realistic video game graphics to autonomous vehicles and large-scale data analysis.

Company Size

10,001+

Company Stage

IPO

Headquarters

Santa Clara, California

Founded

1993

Simplify Jobs

Simplify's Take

What believers are saying

  • Toyota adopts NVIDIA DRIVE AGX Orin, boosting automotive revenue 103% in Q4 FY2025.
  • SoftBank plans NVIDIA AI servers in Japan by 2030; IREN deploys 5GW infrastructure.
  • NVIDIA reaches $5.5T market cap with $216B FY revenue and $400B projected FCF.

What critics are saying

  • Broadcom supplies custom chips to Google through 2031, to Anthropic from 2027, and to OpenAI.
  • China revenue hits zero from $17B due to US restrictions, $4.5B Q1 2026 charge.
  • B200 GPU rentals drop 30% as sentiment flips bearish, cooling FY2027 $78B guidance.

What makes NVIDIA unique

  • NVIDIA invented the GPU in 1999, pioneering accelerated computing.
  • CUDA platform from 2006 enables GPUs for AI and parallel computing.
  • Full-stack AI infrastructure powers 80% of AI training GPUs in 2025.

Benefits

Company Equity

401(k) Company Match

Growth & Insights and Company News

Headcount

6 month growth

-1%

1 year growth

-3%

2 year growth

-2%

The Associated Press
Apr 15th, 2026
Matlantis integrates NVIDIA ALCHEMI Toolkit for 10x faster materials simulation

Matlantis has integrated NVIDIA's ALCHEMI Toolkit into its materials simulation platform to accelerate industrial materials discovery. The company previously incorporated NVIDIA Warp-optimised kernels, achieving up to 10x speed improvements in atomistic calculations. The integration includes LightPFP, Matlantis' lightweight potential for large-scale simulations, which uses a server-based architecture with NVIDIA ALCHEMI Toolkit-Ops to reduce communication bottlenecks. Matlantis plans to integrate its flagship Universal Machine-Learning Interatomic Potential with the toolkit to further enhance GPU efficiency. Launched in 2021, Matlantis is a cloud-based atomistic simulator jointly developed by PFN and ENEOS. The platform uses deep learning to increase simulation speeds by tens of thousands of times and serves over 150 companies discovering materials including catalysts, batteries and semiconductors.

CNBC
Apr 14th, 2026
Nvidia stock surges 18% on 10-day winning streak fuelled by $1T GPU orders through 2027

Nvidia shares have climbed 18% over a ten-day winning streak, the longest since 2023. The stock is trading about 8% below its October all-time high of $212.19. CEO Jensen Huang revealed at last month's GTC conference that Nvidia has over $1 trillion in GPU orders through 2027, including Blackwell and next-generation Vera Rubin chips. Data centre revenue surged 75% year-over-year and now comprises 88% of the business, a dramatic shift from five years ago when gaming dominated. The rally follows major deals including Meta's February commitment to deploy millions of Nvidia chips across its global data centres. On Monday, Nvidia denied rumours it was pursuing acquisitions of PC makers Dell or HP. The company also unveiled Ising, a new family of open-source models for quantum computing.

Yahoo Finance
Apr 14th, 2026
D-Wave CEO claims quantum computers could challenge Nvidia's AI dominance with superior power efficiency

D-Wave Quantum CEO Alan Baratz claims quantum computing poses a threat to Nvidia, citing superior energy efficiency. Speaking at the Semafor World Economy Summit, Baratz said D-Wave's quantum computer uses just 10 kilowatts of power—equivalent to five or 10 GPUs—whilst solving problems that would take GPU systems nearly a million years. D-Wave shares rose nearly 16% on Tuesday, part of a 140% gain over the past year. The company reported $2.75 million in Q4 revenue, missing analyst estimates, but bookings surged 471% to $13.4 million. The $5.3 billion company recently secured a $20 million agreement with Florida Atlantic University and acquired Quantum Circuits for $550 million. However, quantum machines remain specialised tools, unable to run large language models that drive Nvidia's dominance.

Yahoo Finance
Apr 14th, 2026
Vertiv partners with Nvidia on AI data centre infrastructure as analysts raise price target to $300

Vertiv Holdings has been reaffirmed with a Buy rating by Evercore ISI, setting a price target of $280, whilst Barclays raised its target from $281 to $300 with an Overweight rating. The electrical equipment company is partnering with Nvidia on AI infrastructure development. On 16th March, Nvidia introduced its Vera Rubin DSX AI Factory reference design, with Vertiv providing critical power and cooling solutions for AI data centres. The partnership integrates Vertiv's infrastructure expertise with Nvidia's AI systems to enhance energy efficiency and performance. Vertiv is developing Vertiv OneCore Rubin DSX, a prefabricated system designed to accelerate AI factory deployment. The Brussels-headquartered company specialises in critical digital infrastructure technologies for data centres and communication networks.

Yahoo Finance
Apr 14th, 2026
Nvidia and Dell: AI infrastructure stocks to buy ahead of May earnings

Nvidia and Dell Technologies are positioned as attractive AI infrastructure investments ahead of their May earnings reports, according to recent analysis. Both companies supply critical hardware for AI computing, with demand for AI capacity continuing to outpace available resources across major cloud services. Nvidia shares have remained flat for six months despite strong fundamentals. Last quarter, its data centre business generated $62 billion in revenue, up 75% year over year, with a 75% gross margin. The company expects over $1 trillion in cumulative orders for its Blackwell and upcoming Rubin chips through 2027. Trading at 17 times next year's expected earnings, Nvidia's valuation appears discounted relative to its 66% revenue growth in fiscal year 2026. Dell Technologies similarly stands to benefit from the AI infrastructure build-out. Both companies report earnings in May.