“People who are really serious about software should make their own hardware,” the renowned computer scientist Alan Kay once said. But according to Intel chief executive Pat Gelsinger, it works the other way around, too: if you want your hardware to succeed, you have to put software first.
Broad software compatibility has long been a fundamental advantage of Intel’s processors over rival CPUs, both because of the x86 architecture and because Intel has always worked closely with software developers. But the world is changing, and Gelsinger has to look at software differently than his predecessors did. On the one hand, Intel must engage a broader ecosystem of independent software vendors (ISVs) than before, and work with them more closely. On the other hand, Intel’s own software could bring the company new revenue streams.
“One of the things that I’ve learned in my 11-year ‘vacation’ [at VMware and EMC] is delivering silicon that isn’t supported by software is a bug,” said Pat Gelsinger in an interview with CRN. “We have to deliver the software capabilities, and then we have to empower it, accelerate it, make it more secure with hardware underneath it. And to me, this is the big bit flip that I need to drive at Intel.”
Making the Intel Software Ecosystem Broader
Intel has always tried to ensure that software would take advantage of its latest hardware by properly supporting all the latest instruction set extensions and other technologies designed to speed up certain workloads. To a large degree, Intel assisted its partners in creating an ecosystem of software optimized for its processors.
That approach was instrumental in expanding Intel’s software ecosystem for many years, until accelerated computing emerged in the mid-2000s. Nvidia began to aggressively promote its CUDA platform, while other companies relied on various open or proprietary standards like OpenCL, Vulkan, and Metal to speed up performance-hungry workloads on specialized hardware. Companies like Apple and Nvidia created software ecosystems that were not as broad as Intel’s but were competitive enough to attract developers.
Today, loads of artificial intelligence (AI) and high-performance computing (HPC) applications are developed for Nvidia’s CUDA platform and therefore require the company’s hardware and software stacks. This naturally represents a challenge for Intel and its datacenter CPUs and compute GPUs designed for AI and supercomputers, as they are now on the other side of the equation: they have to compete against an already established ecosystem.
When Raja Koduri joined Intel in late 2017, one of his first initiatives at the chip giant was to build an open-standard, cross-platform application programming interface (API) that would let developers program CPUs, GPUs, FPGAs, and other accelerators without maintaining separate code bases and tools for each architecture. Intel calls this oneAPI.
To take advantage of oneAPI and ensure that ISVs will optimize their programs for Intel’s instruction set extensions like AMX (Advanced Matrix Extensions), XMX (Xe Matrix eXtensions), and Deep Learning Boost (AVX-512 VNNI, 4VNNIW, AVX-512 BF16, etc.), Intel will have to engage with more developers than ever before and work with them more closely than ever before, says Gelsinger.
AI and HPC are, of course, the megatrends that make headlines for technology companies like Intel, and here the blue company is evidently playing catch-up with Nvidia. But AI and HPC programs are not the only software Intel needs optimized for its hardware. Emerging applications for edge computing, datacenters, and even client PCs will rely on new types of Intel hardware that did not exist just a few years ago, and these will have to become part of Intel’s software ecosystem.
For example, Intel’s upcoming Alder Lake CPUs for client PCs will combine high-performance and energy-efficient cores with a special Intel Thread Director hardware unit that helps the operating system balance the load and assign each workload to the right type of core. To maximize the Thread Director’s efficiency, Intel will need to work closely with developers of operating systems and third-party programs.
Another example is Intel’s Atom systems-on-chip, which are based on energy-efficient cores and aimed at 5G and edge computing applications. Programs that run on these SoCs have to be optimized for them (and eventually for the Xe-HP GPUs destined for edge machines) rather than for Intel’s Xeon or AMD’s Epyc processors with their full-fat cores. That means Intel will have to engage with many developers of such software, since the number of potential edge computing applications is hard to overestimate. Nvidia is already there with its EGX platform, which includes easy-to-deploy machines running CUDA-accelerated software.
Intel Mulls Paid Software Services
Some of Intel’s partners believe the chip giant could consider Nvidia’s approach to datacenters and edge computing, which includes DGX systems for AI and HPC as well as EGX machines for edge applications, according to CRN. Value-added resellers can take those machines, which ship with a general software stack, and equip them with additional programs tailored to a particular client.
Since Intel is the world’s number one supplier of PC and server CPUs, the company is unlikely to compete against its own customers by offering machines of its own; doing so could undermine the semi-custom/custom x86 business on which Pat Gelsinger pins his hopes. But Intel could still capitalize not only on its hardware but also on its software. For example, Intel already offers its Intel Unite and Intel Data Center Manager software for additional fees and could expand its software offerings, says Greg Lavender, Intel’s new chief technology officer.
“I do expect that you’ll see more in that area: How do we leverage our software assets? How do we have unique monetized software assets and services that we’ll be delivering to the industry, that can stand in and of their own right? And yeah, that’s a piece of the business model that I do expect to do more of in the future,” Lavender told CRN.
Intel is not ready to talk about the exact kinds of paid software it might offer to clients alongside its CPUs, but it says that instead of developing its own programs, it could sell things like advanced platform telemetry to software makers and then share the revenue. Such data could make it significantly easier for security companies to detect malware or viruses on both client and server systems.
Traditionally, Intel has used its software as a value-add for its hardware. For example, Intel’s Quick Sync Video support for desktop PCs comes free with Intel’s drivers and takes advantage of the video encoding/decoding hardware built into Intel CPUs. It remains to be seen how Intel will manage to make money selling both hardware and software in a world where many companies now build their own homegrown SoCs tailored for particular applications, but this is an option that management is now mulling.