
Intel and Facebook Working on AI Inference Chip Called Nervana

Intel and Facebook are working together on a new artificial intelligence (AI) chip. Called the Nervana Neural Network Processor for Inference (NNP-I), it is an AI chip designed for inference workloads. The new chip is expected to be finished in the second half of 2019, Intel said on Monday during the Consumer Electronics Show in Las Vegas.

“Facebook is pleased to be partnering with Intel on a new generation of power-optimized, highly tuned AI inference chip that will be a leap in inference workload acceleration,” Facebook said in its own statement.

With this chip, Intel hopes to maintain a leading position in the fast-growing AI market. Similar chips are expected from competitors such as Nvidia and Amazon’s Web Services unit, Reuters reported.





According to Naveen Rao, corporate VP and general manager of AI at Intel, the chip is built on a 10-nanometer Intel process and will include Ice Lake cores to handle general operations and neural network acceleration.

Nervana is designed to help researchers with inference workloads. In AI, an inference engine is the component of a system that applies logical rules to a knowledge base to deduce new information. Inference is one of the two phases of machine learning, the other being training. Training takes a long time and is computationally heavier, while performing inference on new data is lighter and is the essential technology behind computer vision, voice recognition, and language processing tasks, experts explain.
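To illustrate the training/inference split described above, here is a minimal, hypothetical sketch in plain Python with NumPy. The model, the synthetic data, and the framework choice are illustrative assumptions only and do not reflect Intel’s or Facebook’s actual software stack:

```python
# Minimal sketch of the two machine learning phases: training (slow,
# iterative weight updates) and inference (a single cheap forward pass
# over new data). Plain NumPy logistic regression for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase: repeatedly adjust weights against labeled data ---
X_train = rng.normal(size=(1000, 16))            # 1,000 labeled examples
y_train = (X_train.sum(axis=1) > 0).astype(float)
w = np.zeros(16)
for _ in range(500):                             # many passes over the data
    logits = X_train @ w
    preds = 1.0 / (1.0 + np.exp(-logits))
    grad = X_train.T @ (preds - y_train) / len(y_train)
    w -= 0.1 * grad                              # gradient descent step

# --- Inference phase: one forward pass on previously unseen data ---
X_new = rng.normal(size=(4, 16))                 # e.g. new images or audio frames
scores = 1.0 / (1.0 + np.exp(-(X_new @ w)))
print(scores.round(3))                           # predicted probabilities for new inputs
```

An inference accelerator such as the NNP-I targets only the second phase: running an already-trained model over incoming data as cheaply as possible.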

Related: Intel Deploys Threat Detection Technology, Enables GPU Scanning (https://sensorstechforum.com/intel-threat-detection-technology-gpu-scanning/)

Nervana is optimized for image recognition, Intel explained during the presentation. The chip’s architecture differs from conventional processors in that it lacks a standard cache hierarchy, and its on-chip memory is managed directly by software, VentureBeat reported. In addition, the chip’s high-speed on- and off-chip interconnects allow it to distribute neural network parameters across multiple chips.
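The parameter-distribution idea can be sketched conceptually: a large weight matrix is split across several accelerators, each computes its slice of the output, and the partial results are gathered over the interconnect. The NumPy sketch below only mimics that concept, with arrays standing in for chips; it is an illustrative assumption, not Intel’s actual programming model or interconnect API:

```python
# Conceptual sketch of splitting one layer's parameters across several
# "chips" (here, separate NumPy arrays) and combining the partial outputs,
# similar in spirit to the cross-chip parameter distribution described above.
import numpy as np

rng = np.random.default_rng(1)
n_chips = 4
W = rng.normal(size=(256, 1024))                 # full layer weights
shards = np.split(W, n_chips, axis=1)            # each chip holds 1/4 of the columns

x = rng.normal(size=(1, 256))                    # one input activation vector
partial_outputs = [x @ shard for shard in shards]    # each chip computes its slice
y = np.concatenate(partial_outputs, axis=1)      # results gathered over the interconnect

assert np.allclose(y, x @ W)                     # matches the single-chip result
```

The point of the high-speed interconnects is to make that gather step cheap enough that a model too large for one chip’s on-chip memory can still be served efficiently.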

It is worth mentioning that Intel’s processors currently lead the market for machine learning inference. In September 2018 Nvidia launched its own inference processor, and Amazon is following in their footsteps. However, Amazon’s chip is not a direct competitor, as the company is not selling the chips; instead, Amazon plans to sell cloud services that use the chip. That also means Amazon would no longer need chips from Intel and Nvidia, costing both the loss of a major customer, Reuters noted.

Milena Dimitrova
