Most people get artificial intelligence wrong. The problem researchers are working on isn’t understanding how to make a machine think – it’s understanding how humans think, so we can create machines that do the same.
We hear a lot about AI’s potential, but plenty of apps and systems already use AI in limited ways: piloting autonomous vehicles, interpreting search-engine queries and powering web-based chatbots.
But those tasks are all limited by the amount of processing power needed to replicate the human brain. That’s a problem Intel has been working on, with partners like Ferrari, Microsoft and the Princeton Neuroscience Institute.
New problems, new solutions
The result? A range of technologies specifically created by Intel to bring power and speed to AI-related tasks:
- Xeon Scalable Processors: Silicon designed to handle all AI workloads, including deep learning.
- Nervana Neural Net Processors (NNP): Purpose-built processors for deep-learning training and inference.
- Field Programmable Gate Array (FPGA): Providing real-time programmable acceleration for deep-learning inference workloads.
- Movidius Myriad Vision Processing Unit (VPU): Ultra-low power solution for computer vision and on-device neural networks.
- Saffron AI Solutions: Associative learning systems using cognitive and machine reasoning.
It’s clear from these that ‘deep learning’ is emerging as a critical element of advanced AI systems. But what is it, and how is it different to AI, machine learning, neural networks and other related technologies?
Deep learning is a software technique that replicates human cognitive processes with increasing sophistication. As MIT Technology Review explains:
> Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 per cent of the brain where thinking occurs. The software learns, in a very real sense, to recognise patterns in digital representations of sounds, images and other data.
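The ‘layers of neurons’ idea is easier to see in miniature. Below is a minimal sketch, with hand-set illustrative weights (a real deep-learning system *learns* its weights from data rather than having them written in), showing how stacking just two layers lets a network recognise a pattern – here XOR – that no single neuron can detect on its own:

```python
import numpy as np

def step(z):
    """Threshold activation: a neuron 'fires' (1) when its input exceeds 0."""
    return (z > 0).astype(float)

# Inputs: all four binary pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Hidden layer: neuron 1 detects OR, neuron 2 detects AND.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output layer: fires when OR is on but AND is off, i.e. XOR.
W2 = np.array([[1.0], [-1.0]])
b2 = np.array([-0.5])

hidden = step(X @ W1 + b1)       # first layer of 'neurons'
output = step(hidden @ W2 + b2)  # second layer reads the first

print(output.ravel())  # -> [0. 1. 1. 0.]
```

Each layer transforms the previous layer’s output into a more useful representation – exactly the pattern-building the quote describes, just repeated across many more layers and millions of learned weights in a real deep-learning system.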
But of course all software needs to run on hardware, which is where Intel comes in. With hardware optimised for specific tasks and workloads, researchers are making fast progress on the problems that consume them. Here are a few real-world examples that show how Intel’s hardware is helping create new AI solutions:
Microsoft: We ask a lot from our search engines, and the team behind Microsoft’s ‘Project Brainwave’ is using Intel Xeon CPUs and Arria 10 FPGAs to help Bing provide ‘multi-perspective answers’ to queries. This requires machine reading comprehension, running searches over a huge number of pages and aggregating the results – in real time.
Ferrari: Ferrari is using the race track to intelligently integrate big data into the driving experience. By combining in-car video, live telemetry, drone footage and more, AI systems can access and process far more data than any driver ever could – data about the car, the track, other cars, the ‘shape’ of a lap and a race, and more – and use it to assist the driver in real time.
Princeton Neuroscience Institute: It may be ironic or even poetic, but AI is playing a crucial role in helping us understand how the human mind works. Functional MRI scanners give us unprecedented insights into the brain’s workings, but it takes massive computing power – like that provided by the Brain Imaging Analysis Kit created in partnership between Intel and Princeton – to learn what the images are telling us.
And these are just the start – there are plenty more examples of advanced AI programs powered by Intel, from personal finance to analysing whale health. It seems we’re getting to the point where the only limit on what we can do will be our imagination. That’s a ‘human software’ problem – Intel hardware is one of the tools that will get us there.