Full-Time

Director - Product Marketing

Posted on 11/30/2025

Marvell

10,001+ employees

High-performance semiconductor solutions for data infrastructure

Compensation Overview

$170.9k - $256k/yr

+ Bonus + Equity

Company Historically Provides H1B Sponsorship

Santa Clara, CA, USA

In Person

Category
Product
Requirements
  • B.S. in Electrical or Computer Engineering (or related) required, M.S.E.E. and/or MBA preferred
  • 10 years of relevant semiconductor experience with a solid understanding of optical communications and components
  • Proven track record of managing and leading a team and developing the organization
  • Excellent communication, interpersonal, and presentation skills for engaging all levels of internal, partner, and customer organizations
  • Can-do self-starter with strong cross-functional leadership skills
  • Strategic, analytical thinker with a record of success in conceiving and launching new products
  • Demonstrated product life-cycle management across the entire semiconductor new-product-introduction process
  • Proven ability to create and drive a funnel with the Sales team
  • Proven ability to gain respect and work effectively with Engineering organizations
  • Experience in a customer-facing role, with the communication skills needed to interface effectively and manage product expectations at customers
  • Experience in data center switching, product marketing, and business analysis; working experience with hyperscale cloud and AI factory customers is a strong plus
  • Comprehensive background in semiconductor design, sufficient to evaluate product tradeoffs in performance, manufacturing cost, power, and total development cost; familiarity with key system elements of data center connectivity, including Network Interface Cards, Data Processing Units, switching, and optics
Responsibilities
  • Lead the organization in defining a winning go-to-market strategy for the Data Center market
  • Build and execute a Data Center ecosystem strategy resulting in a full system solution with other Marvell business units and external partners
  • Collaborate with the Sales team to cultivate deep customer relationships across all levels of the organization
  • Enable sales engagement specific to switch product line and solution approaches tailored to key cloud & hyperscale customers
  • Collaborate with inbound Product Line Managers to define differentiated products and compelling solutions with clear value propositions that are validated and endorsed by leading Data Center customers
  • Build the sales funnel, working closely with customer Product Line Managers on their product roadmap requirements and pitching Marvell solutions that meet those requirements
  • Present product roadmaps and generate excitement at all levels of the organization, from entry-level engineer to CEO
  • Develop and drive the sales funnel through the team; own revenue generation as committed in product ROIs
  • Lead products and solutions business planning activities: market/technology trends, market sizing (TAM, SAM, SOM), key customers to win, competitive analysis, product positioning and pricing
  • Work closely with Original Design Manufacturer and Original Equipment Manufacturer partners to enable switch product lines
  • Work closely with the Software team to enable an out-of-box experience on System-on-a-Chip Network Interface Controller switch platforms
  • Contribute towards all product and solution requirement documents (Market Requirements Document / Product Requirements Document) activities to ensure engineering and cross-functional teams are all in-sync to execute what is needed to win key designs
  • Manage key ecosystem and technology alliances for product and solution success
  • Partner with sales to develop key customer account plans that cover key programs, supply chain partners, decision making tree, organization structure/contacts, and technology roadmap plans
  • Collaborate with engineering to develop reference design solutions for leading use cases and architecture engagements with leading customers and their platform partners
  • Deliver Sales and Field Application Engineer training on market and product line plans
  • Help corporate marketing develop product line digital marketing and social media messaging
Desired Qualifications
  • M.S.E.E. and/or MBA preferred

Marvell Technology, Inc. creates high-performance semiconductor products that power data infrastructure for telecommunications operators, data centers, and enterprises. Its offerings span computing, storage, and networking to enable efficient, secure data transmission, storage, and processing. The products are programmable and scalable platforms designed for high bandwidth and strong security, supporting 5G networks and the broader digital economy. Revenue comes from designing, manufacturing, licensing, and providing related services to other businesses that integrate these components into their own products. Unlike many peers, Marvell emphasizes programmable, scalable platforms tailored to data infrastructure needs and long-term partnerships with enterprise and telecom customers. The company aims to help customers upgrade their networks and data systems to increase capacity, performance, and efficiency while expanding its own business in the data infrastructure space.

Company Size

10,001+

Company Stage

IPO

Headquarters

Santa Clara, California

Founded

1995

Simplify Jobs

Simplify's Take

What believers are saying

  • Google TPU partnership could deliver $1-2 billion in custom chip revenue.
  • Bloomberg projects Marvell could capture 20-25% of the $118B custom ASIC market by the early 2030s.
  • Amazon's $200B Anthropic deal drives AWS custom silicon and interconnect demand.

What critics are saying

  • Broadcom's 2031 Google TPU agreement locks Marvell out of primary customer.
  • TSMC production yield issues halt Google test production, costing $1-2B revenue.
  • Hyperscalers shift inference to commoditized ARM chips, reducing custom silicon demand.

What makes Marvell unique

  • Custom silicon expertise positions Marvell as preferred partner for hyperscalers beyond Nvidia.
  • Polariton acquisition enables 3.2T optical interconnects with ultra-low energy consumption.
  • 18 active cloud-provider design wins across AWS, Google, and others demonstrate scale.

Benefits

Health Insurance

401(k) Retirement Plan

401(k) Company Match

Flexible Work Hours

Paid Vacation

Hybrid Work Options

Company News

BendWebs
Apr 21st, 2026
Google partners with Marvell on new AI chips to challenge Nvidia.

The partnership

Alphabet Inc.'s Google is currently in talks with Marvell Technology to develop two new chips aimed at running AI models more efficiently. This collaboration, reported by The Information on Sunday citing two people with knowledge of the discussions, marks a strategic shift in how the tech giant approaches its hardware infrastructure. Google has long relied on internal custom silicon, but external partnerships are becoming increasingly common as the demand for AI compute scales beyond what internal labs can easily manage. The partnership is not merely about securing manufacturing capacity. It is a direct effort to improve performance metrics that define the modern AI landscape. Efficiency is the metric that matters now. Training and inference models consume massive amounts of power and memory bandwidth. By integrating Marvell's expertise, Google aims to address these bottlenecks before they become critical failures in production environments. The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production. While Reuters could not immediately verify the report, the context suggests a necessary evolution. Google faces a crowded market where hardware compatibility often dictates software deployment. If Google's Tensor Processing Units (TPUs) remain proprietary, adoption is limited to Google Cloud customers. If these new chips can run on broader hardware stacks, they could expand the addressable market for Google's infrastructure services. The report indicates the companies are focused on efficiency. In high-compute environments, efficiency directly translates to cost savings. For cloud providers, every watt of power consumed during inference reduces the margin available for growth.

Hardware strategy
One chip is a memory processing unit designed to work with Google's tensor processing unit (TPU), and the other chip is a new TPU built specifically for running AI models. This distinction is vital for understanding Google's hardware roadmap. The memory processing unit addresses a specific weakness in current architectures. AI models often starve for data movement rather than raw compute power. Memory bandwidth is the primary constraint in modern large language model inference. The second chip, a new TPU, represents a direct competitor to Nvidia's dominant GPUs. Nvidia currently controls the vast majority of the market for AI training and inference. Google has been pushing to make its TPUs a viable alternative to Nvidia's GPUs. This is not a secondary goal. TPU sales have become a key driver of growth in Google's cloud revenue. The company needs to diversify its revenue streams beyond general-purpose computing. The architecture of these new chips matters. Nvidia's GPUs rely on CUDA, a proprietary software stack that developers have built over decades. Google's TPUs rely on JAX and other frameworks. By developing a new TPU specifically for running AI models, Google is attempting to bridge the gap between software frameworks and hardware acceleration. If the new TPU can match Nvidia's performance-per-watt while running on a compatible software stack, it could disrupt the current ecosystem. However, the strategy requires balancing performance with compatibility. Developers prefer hardware that supports their existing workflows. If the new TPU requires significant code rewriting, adoption will be slow. The memory processing unit helps here by optimizing data transfer between the memory and the compute core. This reduces latency without increasing clock speeds. In practical terms, this means faster model loading times and reduced inference costs for enterprise customers. The financial stakes are high. 
Nvidia's dominance is built on a moat of software and hardware integration. Google cannot simply match Nvidia's raw compute power. It must offer better economics. The new chips aim to run AI models more efficiently. Efficiency implies lower operating costs for data centers. For Google, this means higher margins on cloud infrastructure sales. For customers, it means lower bills for running large language models.

Financial goals

TPU sales have become a key driver of growth in Google's cloud revenue as the company aims to show investors that its AI investments are generating returns. This is the primary motivation behind the partnership. Investors scrutinize capital expenditure on hardware. If Google spends billions on custom silicon but cannot sell the hardware effectively, returns on investment suffer. The new partnership with Marvell provides a pathway to externalize these assets. Google's internal TPU usage is well documented. It powers the search engine and the recommendation systems that drive ad revenue. Now, the goal is to monetize that silicon outside of Google Cloud's internal use. Selling TPUs to third-party enterprises is difficult. Most enterprises rely on Nvidia GPUs because the software ecosystem is mature. Google needs to change that perception. The new TPU built specifically for running AI models is designed to compete with Nvidia's dominant GPUs. Competition forces innovation. If Google can offer a product that is cheaper or faster than Nvidia's offering, it will capture market share. This is not about beating Nvidia in a single benchmark. It is about winning the customer base. Large enterprises are looking for alternatives to Nvidia to avoid vendor lock-in. Google's entry into this space provides that option. Revenue growth is tied to hardware sales. If TPU sales grow, Google can justify further investment in AI research. This creates a virtuous cycle. Revenue funds research, which improves hardware, which generates more revenue.
The partnership with Marvell accelerates this cycle. Marvell brings established manufacturing relationships. This reduces the risk of production delays. Production delays cost money in downtime and lost sales. The companies aim to finalize the design of the memory processing unit as soon as next year. This timeline suggests urgency. Google's cloud division is under pressure to grow. Hardware sales are a way to accelerate that growth. Investors want to see revenue diversification. If cloud revenue relies solely on compute power, it is vulnerable to cyclical downturns. Selling specialized hardware like TPUs provides a more stable revenue stream. Google and Marvell did not immediately respond to a request for comment. This is standard for the industry. Companies rarely comment on early-stage negotiations. However, the leak itself confirms the direction of the partnership. Analysts have noted that Google needs to improve its hardware margins. If the TPU can be sold at a profit, it changes the financial equation for the entire company.

Development timeline

The companies aim to finalize the design of the memory processing unit as soon as next year before handing it off for test production. This schedule is aggressive but realistic for a collaboration of this size. Google has a long history of internal chip design. Marvell has a long history of external chip design. Combining these capabilities reduces the learning curve. Test production is the next step. This involves manufacturing small batches to validate yield and performance. Yield rates are critical. If the new chips have a high defect rate, they will be unsellable. Google's internal use of TPUs is forgiving. A production chip must be flawless for external customers. The design finalization precedes handing off for test production. This sequence ensures that the architecture is stable before silicon fabrication begins. Reuters could not immediately verify the report.
Verification is difficult in the semiconductor industry. Supply chain details are often confidential. However, the timing aligns with Google's broader strategy. The company has been reducing its reliance on Nvidia GPUs for internal tasks. The new chips are the culmination of that work. If the partnership fails, Google can still rely on internal design. If it succeeds, it opens new revenue channels. Google and Marvell did not immediately respond to a request for comment. This lack of comment does not negate the report. It simply means the companies are in the early stages of disclosure. In the tech industry, leaks often precede official announcements by weeks or months. The design timeline is public knowledge in the industry. The specific partnership is the variable. This partnership signals a shift in the AI hardware market. Nvidia's dominance is being challenged not just by new players, but by established ones like Google. The focus on efficiency and revenue generation indicates a mature understanding of the market. The new chips are not just faster processors. They are economic tools. By reducing the cost of running AI models, they make AI accessible to more applications. The design finalization precedes handing off for test production, ensuring a path to market. This is a significant development for the industry.

Yahoo Finance
Apr 14th, 2026
Google TPU talks and $2B Nvidia deal position Marvell to capture 20-25% of $118B custom ASIC market

Marvell Technology achieved record data centre revenue of $6.1 billion in fiscal 2026, with custom silicon scaling to a $1.5 billion annual run-rate across 18 cloud-provider design wins. Google is now in active negotiations with Marvell for TPU development and LLM inference chip design services, according to FundaAI. The talks follow Nvidia's recent $2 billion strategic partnership with Marvell to develop custom XPUs and NVLink-compatible networking. Google's discussions aim to diversify suppliers and leverage Marvell's expertise in high-speed interconnects. Bloomberg projects Marvell could capture 20-25% of the $118 billion custom ASIC market by the early 2030s, potentially delivering $23.6-29.5 billion in annual revenue from this segment alone — more than triple its current total revenue.

Dealroom.co
Apr 1st, 2026
Marvell Technology company information, funding & investors

Marvell Technology develops and produces semiconductors and related technology. Here you'll find information about their funding, investors, and team.

Suno
Mar 31st, 2026
Nvidia (NVDC34) announces billion-dollar deal with Marvell and shares soar.

The American giant Nvidia (NVDC34) announced on Tuesday (31st) a billion-dollar investment in semiconductor company Marvell Technology (NASDAQ: MRVL). The investment amount is US$2 billion (approximately R$10.4 billion at the current exchange rate). With the news release, shares of both companies are soaring in the U.S. market. Around 4 PM (Brasília time), Nvidia's shares jumped 5.33% to US$173.98, while Marvell's assets gained 12.90% to US$99.14.

Understand the agreement between Nvidia (NVDC34) and Marvell

The agreement announced by Nvidia involves a strategic collaboration between the two companies for developing solutions focused on artificial intelligence (AI) infrastructure, including advanced optical interconnection technologies, custom chips, and integration with high-performance computing platforms. The partnership aims to expand processing capacity and connectivity in data centers and next-generation telecommunications networks. One of the pillars of the collaboration will be Marvell's integration into Nvidia's AI platform, focusing on NVLink technology, a system developed by the chipmaker to connect multiple processors in high-performance computing architectures. This initiative will allow customers to create customized AI infrastructures capable of scaling according to data processing demand. In this context, Marvell is expected to contribute with custom processing chips and scalable networking solutions compatible with the NVLink architecture, while Nvidia will provide expertise in processing and software dedicated to artificial intelligence. "We have reached an inflection point in inference. The demand for data generation is increasing, and the world is racing to build AI centers. Together with Marvell, we are enabling customers to leverage Nvidia's AI infrastructure ecosystem and scale it to create specialized computing systems," declared Nvidia's (NVDC34) CEO Jensen Huang.

Channel NewsAsia
Mar 31st, 2026
Nvidia bets $2 billion on Marvell as rising AI adoption fuels competition.

Nvidia has invested $2 billion in Marvell Technology as part of efforts to make it easier for customers to use the custom artificial intelligence chips that the smaller company designs with Nvidia's networking gear and central processors. Shares of Marvell rose about 7 per cent on Tuesday, while Nvidia shares were up 2.7 per cent. Through the deal, Nvidia aims to ensure it remains central to meeting the growing computing needs required by AI tools at a time when some companies are opting for custom processors instead of its pricey processors. "Nvidia gains access to Marvell's semi-custom silicon and advanced optical interconnect capabilities to help scale data center-level AI systems where bandwidth and power efficiency are key bottlenecks," said Jacob Bourne, analyst at EMarketer. "It also broadens Nvidia's ecosystem to include more specialized silicon, which helps Nvidia remain a key access point for increasingly diverse AI workloads." "Investors will likely see this deal as reducing friction as it allows AI chips from other suppliers to operate within Nvidia-dominated data centers. So Nvidia can maintain its dominant position while also expanding the scope and utility of the AI semiconductor sector," Bourne added. The companies will work on advanced networking solutions for AI, focusing on optical interconnects and silicon photonics technology, which enables high-speed, energy-efficient data transmission. Marvell will contribute custom chips and networking solutions compatible with Nvidia's NVLink Fusion, while the AI chip bellwether will supply supporting technologies including central processing units, network interface cards and interconnects.
Big Tech firms including Alphabet and Meta are expected to spend at least $630 billion to build AI infrastructure this year, lifting demand for chips used in servers and networking equipment, and benefiting companies such as Marvell. Marvell has said it expects revenue to grow nearly 40 per cent and approach $15 billion in fiscal 2028.

INACTIVE