Why do we need dedicated AI engines in laptops, and especially in the ultrathin laptops that AMD, Qualcomm and Intel are all targeting with their latest processors? What’s the motivation behind Intel’s AI Everywhere and AI-PC programmes, and where is the desktop PC in all this? First, it’s important to remember that “AI” is much, much more than just chatbots, copilots and other intelligent assistants. It is a catch-all that covers a very broad spread of technologies, some of which have been in widespread use for many years. It’s also essential to recognise that the vast majority of AI “workloads” involve inferencing, which is the process of applying a trained model to a particular situation. As more and more personal and office applications incorporate AI techniques, this is where AI-enabled laptops come in, along with the AI-enabled smartphones and other mobile devices that are also coming down the line.
Inferencing is a complex task, but nowhere near as heavyweight as the job of training that model in the first place. Training remains the role of the hundreds of thousands of high-end GPUs from Nvidia and others that fill servers worldwide, and of course all the other specialist AI chips that currently sit inside AWS, Azure and Google data centres. Personal productivity examples of AI inferencing – or, more precisely, of its ML subset – include facial recognition for FaceID, and improving the real-time background blur and eye-tracking within videoconferencing applications. There are many other areas where inferencing can greatly improve performance, though, including graphic design, photo and video processing, 3D modelling, AR/VR, data science and much more.
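To make the training/inference split a little more concrete, here is a minimal sketch of what on-device inferencing can look like in code. It uses Python with ONNX Runtime purely as an illustrative stack; the model file name, input shape and task are hypothetical, and an AI PC would typically hand this work to an NPU-aware execution provider rather than the plain CPU provider shown here.

```python
import numpy as np
import onnxruntime as ort

# Load an already-trained model exported to ONNX. The expensive part
# (training) happened elsewhere, on data-centre GPUs; the laptop only
# runs the finished model. The file name here is a hypothetical example.
session = ort.InferenceSession(
    "face_recognition.onnx",
    providers=["CPUExecutionProvider"],  # an NPU-capable provider could be listed instead
)

input_name = session.get_inputs()[0].name

# Stand-in for one camera frame: a 224x224 RGB image in NCHW layout.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Inferencing is just a forward pass through the trained network,
# light enough to run locally, many times per second.
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```

The point of the sketch is the division of labour: all the heavy lifting is baked into the model file before it ever reaches the device, so the laptop, phone or tablet only needs enough dedicated silicon to run that forward pass quickly and efficiently.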