Full-Time
Posted on 8/1/2025
Delivers IPU-powered AI cloud services
No salary listed
Bristol, UK
In Person
Applicants for this position must hold the right to work in the UK. Unfortunately at this time, we are unable to provide visa sponsorship or support for visa applications.
Graphcore.ai focuses on building and delivering Intelligence Processing Units (IPUs) and related AI services for enterprises. It provides IPU-powered cloud services, pre-trained AI models, optimized inference engines, and APIs on a subscription basis, with on-demand pricing and free tiers available through cloud partners. IPUs accelerate AI workloads and enable scalable, cost-efficient AI deployment in enterprise applications. The company differentiates itself through its specialized IPU hardware and an optimized software stack, plus partnerships with cloud providers to reach customers, rather than relying on generic hardware. The goal is to help businesses scale AI-powered products and operations efficiently by offering flexible, scalable compute and ready-to-use AI components.
Company Size
501-1,000
Company Stage
Series E
Total Funding
$682M
Headquarters
Bristol, United Kingdom
Founded
2016
Health Insurance
Dental Insurance
Life Insurance
401(k) Retirement Plan
401(k) Company Match
Flexible Work Hours
Paid Vacation
Mental Health Support
Parental Leave
AI News Today | New AI Chips News: Performance Boost Claims

The AI industry is constantly pushing the boundaries of hardware capability, and recent announcements of new AI chips highlight this ongoing race. Several companies are claiming significant performance improvements in their latest chip designs, promising faster processing, lower energy consumption, and enhanced capabilities for AI applications. If validated, these claims could substantially improve the speed and efficiency of everything from cloud computing to edge devices, accelerating the deployment of AI across sectors. The industry is watching closely to see which of these performance boosts translate into real-world advantages.

Understanding the latest AI chip performance claims

Performance claims for new AI chips usually revolve around a few specific metrics. Companies typically highlight improvements in teraflops (trillions of floating-point operations per second), power efficiency (performance per watt), and latency (the delay in processing information). These metrics directly affect an AI system's ability to handle complex tasks like natural language processing, image recognition, and machine learning model training. A higher teraflop count generally means a chip can perform more calculations in a given time, leading to faster processing. Better power efficiency translates to lower operating costs and a smaller environmental footprint, making AI more sustainable. Lower latency is crucial for real-time applications such as autonomous vehicles and robotics, where immediate responses are essential.

Key players in the AI chip market

The AI chip market is dominated by a mix of established tech giants and specialized startups, all vying for a piece of this rapidly growing sector.
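The three metrics discussed above are related by simple textbook formulas. The sketch below makes that concrete; all numbers are made-up illustrative values for a hypothetical accelerator, not any vendor's published specs.

```python
# How the headline AI chip metrics relate (illustrative only).

def teraflops(ops_per_cycle: float, clock_ghz: float, cores: int) -> float:
    """Peak throughput in TFLOPS = ops/cycle * cycles/sec * cores / 1e12."""
    return ops_per_cycle * clock_ghz * 1e9 * cores / 1e12

def perf_per_watt(tflops: float, power_watts: float) -> float:
    """Power efficiency: throughput delivered per watt drawn."""
    return tflops / power_watts

# Hypothetical chip: 128 FP16 ops/cycle/core, 1.5 GHz clock, 1000 cores, 300 W
peak = teraflops(128, 1.5, 1000)   # 192.0 TFLOPS peak
eff = perf_per_watt(peak, 300)     # 0.64 TFLOPS per watt
print(f"peak={peak:.1f} TFLOPS, efficiency={eff:.2f} TFLOPS/W")
```

Note that these are peak figures; as the article discusses later, sustained real-world performance depends heavily on workload, software stack, and system configuration.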
Companies like NVIDIA, Intel, and AMD continue to innovate in traditional GPU and CPU architectures, adapting them for AI workloads. At the same time, newer players like Graphcore and Cerebras Systems are developing entirely new chip architectures designed specifically for AI, such as wafer-scale integration and massively parallel processing. Each company brings its own approach to the challenge of accelerating AI, leading to a diverse range of hardware solutions.

Examining the impact of enhanced AI chip performance

The performance improvements promised in these announcements have far-reaching implications across industries. In cloud computing, faster and more efficient AI chips let providers offer more powerful AI services to their customers, supporting everything from data analytics to AI-powered applications. In edge computing, improved chip performance brings AI capabilities closer to the data source, enabling real-time processing and reducing reliance on cloud connectivity. This is particularly important for applications like autonomous vehicles, industrial automation, and smart cities, where low latency and reliable performance are critical.

AI tools and the need for faster processing

The development and deployment of AI tools relies heavily on computational power. As AI models become more complex and data sets grow larger, the demand for faster processing continues to increase. Whether it's training a new natural language model or running inference on a large dataset, AI chips play a crucial role in determining the speed and efficiency of these tasks. A prompt generation tool, for example, needs to rapidly process user input and produce relevant output, which requires significant computational resources.
How new chips are reshaping AI strategy

These advancements are not just incremental improvements; they are fundamentally reshaping AI strategy. Businesses can now tackle more complex AI problems, develop more sophisticated applications, and deploy AI solutions in new and innovative ways. The availability of faster and more efficient AI chips is also driving down the cost of AI, making it accessible to a wider range of organizations. This democratization of AI is fostering innovation and driving adoption across sectors.

Challenges in validating AI chip performance claims

While the performance claims surrounding new AI chips are often impressive, it's important to approach them with a degree of skepticism. Validating these claims is difficult, because performance can vary significantly with the specific workload, software environment, and system configuration. Companies often use benchmark tests to demonstrate the performance of their chips, but these tests do not always reflect real-world performance. Furthermore, power consumption and latency are often glossed over in marketing materials, even though they are critical factors in many applications. Independent testing and validation are essential to ensure these claims are accurate and reliable.

The role of software optimization

The performance of an AI chip is not determined solely by its hardware. Software optimization plays a critical role in unlocking the chip's full potential.
Optimized compilers, libraries, and frameworks can significantly improve the performance of AI applications, allowing them to run more efficiently on the underlying hardware. Companies like NVIDIA invest heavily in software optimization, providing developers with tools and resources to maximize the performance of their GPUs. This highlights the importance of a holistic approach to AI hardware and software development.

Future implications for users, developers, and businesses

The ongoing advances in AI chip technology have significant implications for users, developers, and businesses. Users can expect more powerful and intelligent AI in the products and services they use every day, from smartphones to autonomous vehicles. Developers will have access to more powerful tools and resources for building and deploying AI models. Businesses will be able to leverage AI to improve their operations, create new products and services, and gain a competitive advantage. The OpenAI API, for example, is regularly updated to take advantage of the latest hardware advances, enabling developers to build more sophisticated AI applications.

Navigating the evolving landscape of AI hardware

The AI hardware landscape is constantly evolving, with new architectures, technologies, and companies emerging all the time. Businesses and developers need to stay informed about these developments and carefully evaluate the options available to them. Factors to consider include performance, power efficiency, cost, software support, and long-term roadmap. Engaging with industry experts, attending conferences, and reading industry publications can help organizations navigate this complex landscape and make informed decisions about their AI hardware investments.
The continuous stream of performance claims underscores the relentless innovation in AI hardware, and it matters because it directly affects the capabilities and accessibility of AI across all sectors. As chip technology advances, we can anticipate more sophisticated AI applications, greater efficiency, and wider adoption. Readers should watch developments in chip architecture, power efficiency, and software optimization closely to grasp both the potential and the limitations of these new technologies as they shape the future of artificial intelligence.
Graphcore to invest GBP1 billion in Bengaluru AI engineering campus, creating 500 semiconductor jobs

Graphcore, a wholly owned subsidiary of SoftBank, announced plans to establish a new AI Engineering Campus in Bengaluru, India, with an investment of up to GBP1 billion (approx. US$1.26 billion) over the next decade. The project is expected to create 500 new semiconductor jobs.
Graphcore to invest £1 billion in India; 500 semiconductor jobs to be created in Bengaluru

Graphcore, a subsidiary of SoftBank Group, has announced plans to set up a new AI Engineering Campus in Bengaluru, with an investment commitment of up to £1 billion over the next 10 years. The initiative will generate 500 new jobs in India's semiconductor sector. The Bengaluru facility will serve as a central hub for Graphcore's efforts to develop next-generation AI computing technologies. The move supports SoftBank Group's broader ambition to build the world's leading platform for Artificial Super Intelligence.

Graphcore is starting immediate recruitment for its first 100 engineering positions in India, including roles in silicon logical and physical design, verification, chip characterization, and system bring-up. The expansion in India is part of a larger investment strategy by SoftBank Group, which acquired Graphcore in 2024. Funding into Graphcore is expected to scale to £1 billion annually over the coming years. The company also plans to double its workforce in the UK, increasing headcount to about 750, primarily in silicon, software, and AI engineering. This announcement follows SoftBank's other recent AI infrastructure initiatives, including the $500 billion Stargate project, developed in partnership with OpenAI and Oracle.

The Indian team in Bengaluru will focus on designing advanced semiconductor solutions intended for use by global AI companies, with expected applications across drug research, healthcare, climate science, and enterprise automation. Graphcore's decision to establish its engineering base in Bengaluru recognizes the city's well-established technology ecosystem: with its combination of top-tier technical universities, fast-growing startups, and global tech firms, Bengaluru continues to hold its reputation as India's Silicon Valley.
India's central government, led by Prime Minister Narendra Modi, has taken significant steps to strengthen the country's semiconductor capabilities. Through national programs aimed at developing specialized skills and attracting global investment, the Indian semiconductor sector is gaining momentum. Graphcore aims to contribute to this national effort by developing local talent and technical expertise.

Founded in Bristol, UK in 2016, Graphcore has focused on building a comprehensive AI computing stack, from silicon to data center systems to AI software. Since becoming part of SoftBank Group in 2024, Graphcore has gained access to a broader network of AI technology companies and infrastructure partners. Despite its growth, the company remains committed to an agile, innovation-led culture in which new ideas are encouraged and teams work at pace to respond to evolving opportunities in the AI and semiconductor space.
SoftBank's Graphcore picks Bengaluru for £1bn AI campus, 500 jobs incoming

Graphcore, now a wholly owned subsidiary of SoftBank Group, is making a major move in India with the launch of a new AI Engineering Campus in Bengaluru. Here's a quick breakdown of the announcement:

Graphcore's £1bn India expansion
* Location: Bengaluru, India
* Investment: Up to £1 billion over the next 10 years
* Jobs created: 500 new semiconductor roles
* Immediate hiring: First 100 roles already open, spanning:
  * Silicon logical design
  * Physical design
  * Verification
  * Characterization
  * Bring-up

This campus will be central to Graphcore's efforts to build next-generation AI computing infrastructure, aligning with SoftBank's broader ambition to lead in Artificial Super Intelligence platforms.

Graphcore: AI chip innovator
* Founded: 2016 in Bristol, UK, by Nigel Toon and Simon Knowles
* Industry: Semiconductors and AI hardware
* Core product: Intelligence Processing Unit (IPU), a novel processor architecture designed specifically for machine learning workloads
* Mission: To enable innovators to build next-generation AI applications and democratize access to machine intelligence
* Ownership: Now a wholly owned subsidiary of SoftBank Group Corp, continuing to operate under the Graphcore name

Graphcore competes with companies like Nvidia in the AI compute space, and its IPU architecture is known for holding entire ML models inside the processor, a departure from traditional GPU-based systems.

AI accelerator comparison (2025)
| Feature / Chip | Graphcore IPU (Bow-200) | Nvidia Blackwell B200 GPU | Google TPU v6e (Trillium) | AMD MI350 GPU |
| --- | --- | --- | --- | --- |
| Architecture | Massively parallel IPU tiles | GPU with Transformer Engine | Custom ASIC for ML workloads | GPU with unified memory |
| Memory | 900 MB per IPU tile | 180 GB HBM3e per GPU | 32 GB HBM per chip | 128 GB HBM3e |
| Bandwidth | ~1.5 TB/s (system level) | Up to 8 TB/s | 1.6 TB/s per chip | ~5.2 TB/s |
| Compute (FP16) | ~350 TFLOPS (system level) | 4.5 PFLOPS | 918 TFLOPS BF16 | ~2.5 PFLOPS |
| Compute (INT8) | Not optimized | 9 PFLOPS | 1.836 PFLOPS | ~5 PFLOPS |
| Scalability | 3D wafer-scale IPU pods | DGX B200 clusters | 256-chip TPU pods | MI350 clusters |
| Target workloads | Sparse ML, graph networks | Transformer-based LLMs | Large-scale ML training | HPC + AI inference |
| Power efficiency | High for sparse workloads | Improved over H100 | Optimized for datacenter | Competitive with Nvidia |
| Deployment | Graphcore IPU systems | Nvidia DGX platforms | Google Cloud TPU pods | Enterprise GPU servers |
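To put the table's FP16 compute figures on a common scale, the sketch below converts them all to TFLOPS and expresses each as a multiple of the TPU v6e figure. The values are copied from the table as given; note the caveat that the Graphcore number is system-level while the others are per-chip, so the ratios are only rough.

```python
# FP16 compute figures from the comparison table, normalized to TFLOPS.
# Graphcore's figure is system-level; the others are per chip, so
# treat the resulting ratios as indicative only.

fp16_tflops = {
    "Graphcore IPU (Bow-200, system)": 350,
    "Nvidia Blackwell B200": 4500,   # 4.5 PFLOPS
    "Google TPU v6e (BF16)": 918,
    "AMD MI350": 2500,               # ~2.5 PFLOPS
}

baseline = fp16_tflops["Google TPU v6e (BF16)"]
for chip, tf in sorted(fp16_tflops.items(), key=lambda kv: -kv[1]):
    print(f"{chip:32s} {tf:5d} TFLOPS  ({tf / baseline:.2f}x TPU v6e)")
```

This kind of normalization is one way to sanity-check vendor claims quoted in mixed units, though, as the article stresses, peak FLOPS alone says little about sustained real-world performance.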
SoftBank's Graphcore plans $1.3 billion India investment

SoftBank-backed AI chipmaker Graphcore expands in India with a Bengaluru office and a long-term R&D and hiring roadmap. Graphcore, the UK-based artificial intelligence (AI) chipmaker acquired by SoftBank Group in 2024, is reportedly planning to invest USD 1.3 billion in India over the next 10 years as part of its long-term strategy to expand R&D and design capabilities in the country. The announcement comes amid India's growing focus on strengthening its domestic semiconductor and AI ecosystem.

Graphcore expands India presence with new Bengaluru office

According to some reports, Graphcore has opened a new office in Bengaluru, with plans to initially hire around 100 AI and semiconductor engineers, scaling up to 500 employees within the next two years. This expansion underscores the company's intent to tap into India's deep engineering talent pool and build advanced capabilities in AI chip design and development.

India as a strategic talent and innovation hub

Nigel Toon, CEO of Graphcore, emphasized India's strategic importance, highlighting the country's growing infrastructure and cost-effective environment for R&D. "India offers world-class engineering talent and a supportive innovation ecosystem, making it a key destination for our next phase of growth," he reportedly said. Graphcore is known for its proprietary Intelligence Processing Units (IPUs), chips designed to power complex machine learning and AI workloads. While manufacturing is currently outsourced to TSMC in Taiwan, the company is reportedly monitoring India's semiconductor policy developments and could explore local production in the future.

SoftBank's global AI and semiconductor strategy

SoftBank's acquisition of Graphcore in 2024 is part of a broader push by the Japanese conglomerate to double down on AI and semiconductor technologies globally.
The move complements SoftBank's strategy of building a portfolio of cutting-edge tech assets focused on the next generation of compute and AI infrastructure.

$1.3 billion investment: not yet independently verified

While the $1.3 billion investment plan signals strong ambition, it has not yet been confirmed by official press releases or regulatory filings from SoftBank or Graphcore, and other major media outlets had not corroborated the figure at the time of writing. Analysts note that the investment amount may include infrastructure, talent, R&D, and ecosystem development over the next decade.

Aligning with India's semiconductor and AI mission

The potential investment aligns with Indian government initiatives such as the Semicon India Programme, which aims to make India a global hub for chip design and manufacturing. Companies like Graphcore could benefit from incentives under this program as they expand local operations.