IEEE Organizations related to Deep Learning

No organizations are currently tagged "Deep Learning"



Conferences related to Deep Learning

No conferences are currently tagged "Deep Learning"


Periodicals related to Deep Learning

No periodicals are currently tagged "Deep Learning"


Most published Xplore authors for Deep Learning

Xplore Articles related to Deep Learning

Table of Contents

2018 International Congress on Big Data, Deep Learning and Fighting Cyber Terrorism (IBIGDELFT), 2018

The following topics are dealt with: learning (artificial intelligence); security of data; computer crime; cryptography; Internet; Big Data; support vector machines; text analysis; invasive software; Internet of Things.


A Disaggregated Memory System for Deep Learning

IEEE Micro, 2019

As the complexity of deep learning (DL) models scales up, computer architects are faced with a memory “capacity” wall, where the limited physical memory inside the accelerator device constrains the algorithms that can be trained and deployed. This article summarizes our recent work on designing an accelerator-centric, disaggregated memory system for DL.


Welcome from General Chair

2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), 2018

The following topics are dealt with: telecommunication traffic; Internet; cloud computing; optimisation; learning (artificial intelligence); computer centres; telecommunication network routing; mobile computing; video streaming; resource allocation.


ONNC-Based Software Development Platform for Configurable NVDLA Designs

2019 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), 2019

With the proliferation of deep learning and the increasing pressure to deploy inference applications at the edge, many AI chip makers integrate the open-source NVIDIA Deep Learning Accelerator (NVDLA) design into their AI solutions. The lack of open-source compiler support and the limited configurability of the software stacks erect a barrier for developers who want to freely explore the NVDLA design space at the system level. This paper presents an ONNC-based software development platform that includes the first open-source compiler for NVDLA-based designs, a virtual platform with various CPU models as well as configurable NVDLA models, and auxiliary tools for debugging. The platform is tightly coupled with the hardware design tradeoffs and is extensible to further compiler optimizations, more CPU types, and more NVDLA hardware configurations. It lifts many software-development restrictions for those who wish to leverage the NVDLA design in inference applications.


Prediction Based Sub-Task Offloading in Mobile Edge Computing

2019 International Conference on Information Networking (ICOIN), 2019

Mobile edge cloud computing has been developed to provide low-latency service in close proximity to users. In this environment, resource-constrained user equipment (UE) that cannot execute complex applications (e.g., VR/AR, deep learning, or image processing) can dynamically offload computationally demanding tasks to neighboring MEC nodes. To process tasks even faster, a task can be divided into several sub-tasks offloaded to multiple MEC nodes simultaneously, so that the sub-tasks are processed in parallel. In this paper, we predict the total processing duration of each task on each candidate MEC node using linear regression, and, according to the previously observed state of each MEC node, offload sub-tasks to their respective edge nodes. We also developed a monitoring module at the core cloud. The results show a decrease in execution duration when an entire application is offloaded to one edge node compared with local execution.
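The paper's core idea, predicting each node's processing duration and offloading to the fastest one, can be sketched with a one-feature least-squares fit. This is an illustrative reconstruction, not the authors' code; the node names, timing samples, and the `pick_node` helper are made up.

```python
# Illustrative sketch: fit a linear regression per MEC node mapping task size
# to observed processing duration, then offload to the node with the smallest
# predicted duration.

def fit_linear(xs, ys):
    """Least-squares fit of y = a * x + b for a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def pick_node(history, task_size):
    """history maps node -> list of (task_size, duration) observations."""
    best_node, best_time = None, float("inf")
    for node, samples in history.items():
        a, b = fit_linear([s for s, _ in samples], [d for _, d in samples])
        predicted = a * task_size + b  # predicted processing duration
        if predicted < best_time:
            best_node, best_time = node, predicted
    return best_node, best_time
```

For example, if a hypothetical `edge-2` has historically processed tasks faster than `edge-1`, `pick_node` selects it for the next task.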



Educational Resources on Deep Learning

IEEE.tv Videos

Enter Deep Learning
2019 ICRA Plenary- Challenges for Deep learning towards AI
Deep Learning & Machine Learning Inference - Ashish Sirasao - LPIRC 2019
Designing Reconfigurable Large-Scale Deep Learning Systems Using Stochastic Computing - Ao Ren: 2016 International Conference on Rebooting Computing
Overcoming the Static Learning Bottleneck - the Need for Adaptive Neural Learning - Craig Vineyard: 2016 International Conference on Rebooting Computing
Deep Learning and the Representation of Natural Data
Panel: ML & DL for 5G - B5GS 2019
Deep Graph Learning: Techniques and Applications - Haifeng Chen - IEEE Sarnoff Symposium, 2019
Deep Learning Cookbook - Sergey Serebryakov - ICRC San Mateo, 2019
Co-Design of Algorithms & Hardware for DNNs - Vivienne Sze - LPIRC 2019
Enhancing 5G+ Performance: ML & DL for 5G - Tim O'Shea - B5GS 2019
Q&A with Dillon Graham: IEEE Rebooting Computing Podcast, Episode 18
Non-Volatile Memory Array Based Quantization - Wen Ma - ICRC San Mateo, 2019
Deeper Neural Networks - Kurt Keutzer - LPIRC 2019
Intelligent Systems for Deep Space Exploration: Solutions and Challenges - Roberto Furfaro
LPIRC: A Facebook Approach to Benchmarking ML Workload
The Era of AI Hardware - 2018 IEEE Industry Summit on the Future of Computing
Neural Processor Design Enabled by Memristor Technology - Hai Li: 2016 International Conference on Rebooting Computing
A Conversation with…Richard Mallah: IEEE TechEthics
Spiking Network Algorithms for Scientific Computing - William Severa: 2016 International Conference on Rebooting Computing

IEEE-USA E-Books

  • A Knowledge Graph based Bidirectional Recurrent Neural Network Method for Literature-based Discovery

In this paper, we present a model that incorporates a biomedical knowledge graph, graph embeddings, and deep learning methods for literature-based discovery. First, relations between entities are extracted from biomedical abstracts, and a knowledge graph is constructed from these relations. Second, graph embedding techniques are applied to map the entities and relations of the knowledge graph into a low-dimensional vector space. Third, a bidirectional Long Short-Term Memory network is trained on the entity associations represented by the pre-trained graph embeddings. Finally, the learned model is used for open and closed literature-based discovery tasks. The experimental results show that our method can not only effectively discover hidden associations between entities but also reveal the corresponding mechanisms of interaction. This suggests that incorporating knowledge graphs and deep learning methods is an effective way to capture the complex associations between entities hidden in the literature.
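The open-discovery step can be illustrated with the classic ABC (Swanson) pattern: if entity A relates to B and B to C, but A and C never co-occur, then A-C is a candidate hidden association. The sketch below omits the paper's graph embeddings and BiLSTM ranking; the function names are hypothetical, and the example relations are Swanson's well-known fish-oil case, not results from this paper.

```python
# Simplified literature-based discovery: build an undirected knowledge graph
# from extracted relations, then surface two-hop neighbors that lack a direct
# edge as candidate hidden associations.
from collections import defaultdict

def build_graph(relations):
    """Build an undirected knowledge graph from (entity, entity) pairs."""
    graph = defaultdict(set)
    for a, b in relations:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def hidden_associations(graph, a):
    """Entities two hops from `a` that share no direct edge with it."""
    candidates = set()
    for b in graph[a]:
        for c in graph[b]:
            if c != a and c not in graph[a]:
                candidates.add(c)
    return candidates
```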

  • Design of Convolutional Neural Network for Classifying Depth Prediction Images from Overhead

We predict the depth of objects, such as people, chairs, and soccer balls, in overhead images with Fully Convolutional Residual Networks (FCRN) [1]. This network can predict depth from RGB images taken by a monocular camera. We then classify the predicted depth images, aiming to differentiate people from other objects.

  • Efficient Posit Multiply-Accumulate Unit Generator for Deep Learning Applications

The recently proposed posit number system is more accurate and can provide a wider dynamic range than conventional IEEE 754-2008 floating-point numbers. Its nonuniform data representation makes it suitable for deep learning applications. Posit adders and posit multipliers have been well developed in the recent literature; however, the use of posits in a fused arithmetic unit has not yet been investigated. To facilitate the use of the posit number format in deep learning applications, this paper proposes an efficient architecture for a posit multiply-accumulate (MAC) unit. Unlike IEEE 754-2008, which defines four standard binary formats, the posit format is more flexible: the total bitwidth and exponent bitwidth can be any number. Therefore, in the proposed design, the bitwidths of all datapaths are parameterized, and a posit MAC unit generator written in C is presented. The generator can produce Verilog HDL code for a posit MAC unit with any given total bitwidth and exponent bitwidth. The generated code is a combinational design; however, a 5-stage pipelining strategy is also presented and analyzed. The worst-case delay, area, and power consumption of the generated MAC units under an STM 28-nm library with different bitwidth choices are provided and analyzed.
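To make the posit format concrete, the following sketch decodes a posit word (sign, regime, optional exponent bits, fraction) and performs a software MAC step. It is a behavioral illustration only, assuming the es = 0 default used below, not the paper's generated hardware, which would accumulate in a wide fixed-point quire rather than a Python float.

```python
# Behavioral posit decode: sign | regime (run of identical bits) |
# es exponent bits | fraction with hidden bit 1. useed = 2**(2**es).

def decode_posit(word, nbits=8, es=0):
    """Decode an nbits-wide posit word to a Python float."""
    mask = (1 << nbits) - 1
    word &= mask
    if word == 0:
        return 0.0
    if word == 1 << (nbits - 1):
        return float("nan")                      # NaR (not a real)
    sign = 1
    if word >> (nbits - 1):                      # negative: 2's-complement negate
        sign = -1
        word = (-word) & mask
    bits = [(word >> i) & 1 for i in range(nbits - 2, -1, -1)]
    run = 1                                      # regime: run of identical bits
    while run < len(bits) and bits[run] == bits[0]:
        run += 1
    k = run - 1 if bits[0] == 1 else -run
    idx = run + 1                                # skip the regime terminator
    exp = 0
    for _ in range(es):                          # optional exponent bits
        exp <<= 1
        if idx < len(bits):
            exp |= bits[idx]
            idx += 1
    frac, weight = 1.0, 0.5                      # hidden bit is always 1
    while idx < len(bits):
        frac += bits[idx] * weight
        weight /= 2
        idx += 1
    return sign * (2.0 ** (k * (1 << es) + exp)) * frac

def posit_mac(acc, a, b, nbits=8, es=0):
    """One multiply-accumulate step on decoded posit operands."""
    return acc + decode_posit(a, nbits, es) * decode_posit(b, nbits, es)
```

For an 8-bit, es = 0 posit, `0b01000000` decodes to 1.0 and `0b01100000` to 2.0.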

  • Progressive Latent Models for Self-Learning Scene-Specific Pedestrian Detectors

The performance of offline-learned pedestrian detectors drops significantly when they are applied to video scenes with varying camera views, occlusions, and background structures. Learning a detector for each video scene can avoid the performance drop, but it requires repetitive human effort for data annotation. In this paper, a self-learning approach is proposed to specialize a pedestrian detector for each video scene without any human annotation. Object locations in video frames are treated as latent variables, and a progressive latent model (PLM) is proposed to solve for them. The PLM comprises components for object discovery, object enforcement, and label propagation, which learn the object locations in a progressive manner. With difference-of-convex (DC) objective functions, the PLM is optimized by a concave-convex programming algorithm. With specified network branches and loss functions, the PLM is integrated with deep feature learning and optimized in an end-to-end manner. Detailed optimization and learning-stability analyses of the proposed PLM are provided from the perspectives of convex regularization and error-rate estimation. Extensive experiments demonstrate that, even without any annotation, the proposed self-learning approach outperforms weakly supervised learning approaches while achieving performance comparable to transfer learning approaches.

  • Deep Learning Aided Fingerprint Based Beam Alignment for mmWave Vehicular Communication

Harnessing the substantial bandwidth available at millimeter-wave (mmWave) carrier frequencies has proved beneficial for accommodating a large number of users with increased data rates. However, owing to the high propagation losses at mmWave frequencies, directional transmission must be employed, which necessitates efficient beam alignment for successful transmission. Achieving perfect beam alignment is challenging, especially when vehicles move rapidly under ever-changing traffic density, which is governed by the topology of roads as well as the time of day. Therefore, in this paper, we take the approach of fingerprint-based beam alignment, where a set of beam pairs constitutes the fingerprint of a given location. Furthermore, given the time-varying traffic density, we propose a multi-fingerprint database for each location, where the base station (BS) intelligently adapts the fingerprints with the aid of learning. Additionally, we propose multi-functional beam transmission as an application of our design, where the beam pairs that satisfy the required received signal strength (RSS) are used to increase the spectral efficiency or otherwise improve end-to-end performance. Explicitly, the BS leverages the plurality of beam pairs to attain both multiplexing and diversity gains. Furthermore, if the number of suitable beam pairs exceeds the number of RF chains, the BS may also employ beam-index modulation to further improve the spectral efficiency. We demonstrate that multi-fingerprint beam alignment provides superior performance to single-fingerprint beam alignment, and that our learning-aided multi-fingerprint design provides better fidelity than a benchmark scheme that also employs multiple fingerprints but dispenses with learning. Additionally, our reduced-search, learning-aided beam alignment performs similarly to beam-sweeping-based alignment, even though the latter carries out an exhaustive beam search. More explicitly, our design is capable of maintaining the target performance in dense vehicular environments, while both single-fingerprint and line-of-sight (LOS) based beam alignment suffer from blockages.
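The fingerprint lookup itself reduces to selecting, for a quantized location, the stored beam pairs whose expected RSS clears a threshold. The sketch below is a simplified illustration of that selection step, with a hypothetical `select_beam_pairs` helper and made-up location bins and RSS values; the paper's learning-based fingerprint adaptation is not modeled.

```python
# Each quantized location stores (tx_beam, rx_beam, expected_rss) entries;
# the BS keeps the strongest pairs that clear an RSS threshold, e.g. to feed
# multiple RF chains for multiplexing or diversity.

def select_beam_pairs(fingerprint, location, rss_threshold, max_pairs):
    """Return up to max_pairs (tx, rx) beam pairs, strongest expected RSS first."""
    entries = sorted(fingerprint.get(location, ()),
                     key=lambda e: e[2], reverse=True)
    usable = [(tx, rx) for tx, rx, rss in entries if rss >= rss_threshold]
    return usable[:max_pairs]
```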



Standards related to Deep Learning

No standards are currently tagged "Deep Learning"