Pat Gelsinger Says Silicon Without Software Support Is a Bug

“People who are really serious about software should make their own hardware,” the renowned computer scientist Alan Kay once said. But according to Intel’s chief executive Pat Gelsinger, it works the other way around too: if you want your hardware to succeed, you have to put software first.

Broad software compatibility has long been a fundamental advantage of Intel’s processors over rival CPUs, both because of the x86 architecture and because Intel has always worked closely with software developers. But as the world changes, Gelsinger has to look at software differently than his predecessors did. On the one hand, Intel must work with a broader ecosystem of independent software vendors (ISVs) than before, and more closely than ever. On the other hand, Intel’s own software could open new revenue streams for the company.

“One of the things that I’ve learned in my 11-year ‘vacation’ [at VMware and EMC] is delivering silicon that isn’t supported by software is a bug,” said Pat Gelsinger in an interview with CRN. “We have to deliver the software capabilities, and then we have to empower it, accelerate it, make it more secure with hardware underneath it. And to me, this is the big bit flip that I need to drive at Intel.”

Making the Intel Software Ecosystem Broader

Intel has always tried to ensure that software takes advantage of its latest hardware, including new instruction set extensions and other technologies designed to speed up certain workloads. To a large degree, Intel assisted its partners in creating an ecosystem of software optimized for its processors.

That approach was instrumental in building Intel’s software ecosystem for many years, until accelerated computing emerged in the mid-2000s. Nvidia began to aggressively promote its CUDA platform, while other companies relied on various open or proprietary standards like OpenCL, Vulkan, and Metal to speed up performance-hungry workloads on specialized hardware. Companies like Apple and Nvidia created their own software ecosystems that were not as broad as Intel’s but were competitive enough to attract software developers.

Today, loads of artificial intelligence (AI) and high-performance computing (HPC) applications are developed for Nvidia’s CUDA platform and therefore require the company’s hardware and software stacks. This naturally presents a challenge for Intel and its datacenter CPUs and compute GPUs designed for AI and supercomputers: the company is now on the other side of the equation, having to compete against an already established ecosystem.

When Raja Koduri joined Intel in late 2017, one of his first initiatives at the chip giant was to build an open-standard, cross-platform application programming interface (API) that would let developers program CPUs, GPUs, FPGAs, and other accelerators without maintaining separate code bases and tools for each architecture. Intel calls this oneAPI.