Intel® nGraph™

Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, Will Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, Tristan J. Webb

semanticscholar (2018)

Abstract
The Deep Learning (DL) community sees many novel topologies published each year. Achieving high performance on each new topology remains challenging, as each requires some level of manual effort. This issue is compounded by the proliferation of frameworks and hardware platforms. The current approach, which we call "direct optimization", requires deep changes within each framework to improve the training performance for each hardware backend (CPUs, GPUs, FPGAs, ASICs) and requires O(fp) effort, where f is the number of frameworks and p is the number of platforms. While optimized kernels for deep-learning primitives are provided via libraries like the Intel® Math Kernel Library for Deep Neural Networks (MKL-DNN), there are several compiler-inspired ways in which performance can be further optimized. Building on our experience creating neon (a fast deep learning library on GPUs), we developed Intel nGraph, a soon-to-be-open-sourced C++ library to simplify the realization of optimized deep learning performance across frameworks and hardware platforms. Initially supported frameworks include TensorFlow, MXNet, and the Intel® neon framework. Initial backends are Intel Architecture CPUs (CPU), the Intel® Nervana™ Neural Network Processor (NNP), and NVIDIA GPUs. Currently supported compiler optimizations include efficient memory management and data layout abstraction. In this paper, we describe our overall architecture and its core components. In the future, we envision extending nGraph API support to a wider range of frameworks, hardware (including FPGAs and ASICs), and compiler optimizations (training versus inference optimizations, multi-node and multi-device scaling via efficient sub-graph partitioning, and HW-specific compounding of operations).
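Below is a minimal sketch of what this looks like from the library side: a small computation is built once as a framework-neutral graph (a Function over Parameter nodes) and handed to a backend selected by name. It is written against the early (circa 2018) nGraph C++ API; the exact spellings (op::Add, runtime::Backend::create, the compile/call sequence, the tensor read/write signatures) shifted across releases, so treat the identifiers as indicative rather than definitive.

    // Minimal sketch (assumed early-2018 nGraph C++ API): build y = a*b + c
    // once as a framework-neutral graph, then compile it for a backend
    // chosen by name ("CPU" here; "NNP" or "GPU" would consume the same graph).
    #include <ngraph/ngraph.hpp>
    #include <iostream>
    #include <vector>

    using namespace ngraph;

    int main()
    {
        Shape shape{2, 2};

        // Parameters are the graph's inputs in the shared IR.
        auto a = std::make_shared<op::Parameter>(element::f32, shape);
        auto b = std::make_shared<op::Parameter>(element::f32, shape);
        auto c = std::make_shared<op::Parameter>(element::f32, shape);

        // Element-wise ops compose into a dataflow graph; a Function
        // bundles the result node with its parameters.
        auto mul = std::make_shared<op::Multiply>(a, b);
        auto sum = std::make_shared<op::Add>(mul, c);
        auto f   = std::make_shared<Function>(sum, ParameterVector{a, b, c});

        // Backend selection is a runtime string.
        auto backend = runtime::Backend::create("CPU");

        // Device-side tensors for the inputs and the result.
        auto ta = backend->create_tensor(element::f32, shape);
        auto tb = backend->create_tensor(element::f32, shape);
        auto tc = backend->create_tensor(element::f32, shape);
        auto tr = backend->create_tensor(element::f32, shape);

        std::vector<float> va{1, 2, 3, 4}, vb{5, 6, 7, 8}, vc{9, 10, 11, 12};
        ta->write(va.data(), 0, va.size() * sizeof(float));
        tb->write(vb.data(), 0, vb.size() * sizeof(float));
        tc->write(vc.data(), 0, vc.size() * sizeof(float));

        // Compile once, then execute; the compile/call spelling varied
        // across releases (earlier versions used backend->call_with_validate).
        auto exec = backend->compile(f);
        exec->call({tr}, {ta, tb, tc});

        std::vector<float> out(4);
        tr->read(out.data(), 0, out.size() * sizeof(float));
        for (float v : out)
            std::cout << v << ' ';  // expected: 14 22 32 44
        std::cout << '\n';
    }

The sketch illustrates the factorization the paper argues for: a framework bridge only has to emit this graph once, and each backend only has to consume it once, rather than every framework being hand-tuned for every platform.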