Conferences related to Multi-layer neural network


2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)

The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.


2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC)

The Conference focuses on all aspects of instrumentation and measurement science and technology research, development and applications. The list of program topics includes, but is not limited to: Measurement Science & Education, Measurement Systems, Measurement Data Acquisition, Measurements of Physical Quantities, and Measurement Applications.


ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions.


IECON 2020 - 46th Annual Conference of the IEEE Industrial Electronics Society

IECON focuses on industrial and manufacturing theory and applications of electronics, controls, communications, instrumentation and computational intelligence.


IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium

All fields of satellite, airborne and ground remote sensing.


More Conferences

Periodicals related to Multi-layer neural network


Aerospace and Electronic Systems Magazine, IEEE

The IEEE Aerospace and Electronic Systems Magazine publishes articles concerned with the various aspects of systems for space, air, ocean, or ground environments.


Antennas and Propagation, IEEE Transactions on

Experimental and theoretical advances in antennas including design and development, and in the propagation of electromagnetic waves including scattering, diffraction and interaction with continuous media; and applications pertinent to antennas and propagation, such as remote sensing, applied optics, and millimeter and submillimeter wave techniques.


Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on

Methods, algorithms, and human-machine interfaces for physical and logical design, including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, and documentation of integrated-circuit and systems designs of all complexities. Practical applications of aids resulting in producible analog, digital, optical, or microwave integrated circuits are emphasized.


Electron Devices, IEEE Transactions on

Publishes original and significant contributions relating to the theory, design, performance and reliability of electron devices, including optoelectronics devices, nanoscale devices, solid-state devices, integrated electronic devices, energy sources, power devices, displays, sensors, electro-mechanical devices, quantum devices and electron tubes.


Geoscience and Remote Sensing, IEEE Transactions on

Theory, concepts, and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.


More Periodicals


Xplore Articles related to Multi-layer neural network


Synchronous Machine Steady-State Stability Analysis Using an Artificial Neural Network

IEEE Power Engineering Review, 1991

None


Reverse modeling of microwave circuits with bidirectional neural network models

IEEE Transactions on Microwave Theory and Techniques, 1998

Neural networks have been developed into an alternative modeling approach for the microwave circuit-design process. In this paper, we describe a neural network-based microwave circuit-design approach that implements the solution-searching optimization routine by a modified neural network learning process. Both the development of a microwave circuit model and the searching of a design solution can thus take advantage ...


Optoelectronic neural networks: mapping multilayer architectures on to an optoelectronic demonstrator

2003 Conference on Lasers and Electro-Optics Europe (CLEO/Europe 2003) (IEEE Cat. No.03TH8666), 2003

In this paper we outline some of the changes needed to implement multilayer feed-forward neural networks using the demonstrator hardware, which was based around an array of vertical cavity surface emitting lasers. Network simulations show that the neural network demonstrator hardware can be used to implement two different classes of feed-forward network, the multilayer perceptron (MLP) and radial basis ...


Reduction of necessary precision for the learning of pattern recognition

[Proceedings] 1991 IEEE International Joint Conference on Neural Networks, 1991

The authors propose a novel learning algorithm with a weighted error function (WEF). They have reduced the necessary precision for the learning of multi-font alpha-numeric recognition to 10-bit fixed-point precision using the WEF. The WEF raises the recognition accuracy by more than 25% when the precision of all operations (including multiplication and addition) and the precision of all data ...


Unfully interconnected neural networks as associative memory

IEEE International Symposium on Circuits and Systems, 1990

Unfully interconnected neural networks (UINNs) are proposed as associative memory. The basic idea is to form compact internal representations of patterns in order to increase the storage efficiency of the interconnections. Several effective methods for designing UINNs as associative memory, including monolayered and multilayered neural networks, are presented. A maximum-interconnection-preserving method which forms a rectangular grid structure of local ...


More Xplore Articles

Educational Resources on Multi-layer neural network


IEEE.tv Videos

Lizhong Zheng's Globecom 2019 Keynote
Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware - Emre Neftci: 2016 International Conference on Rebooting Computing
FPGA demonstrator of a Programmable ML Inference Accelerator - Martin Foltin - ICRC San Mateo, 2019
IMS 2011 Microapps - IQ Mixer Measurements: Techniques for Complete Characterization of IQ Mixers Using a Multi-Port Vector Network Analyzer
802.1: Higher Layer LAN Protocols
ICASSP 2010 - Advances in Neural Engineering
Improved Deep Neural Network Hardware Accelerators Based on Non-Volatile-Memory: the Local Gains Technique: IEEE Rebooting Computing 2017
IEEE Themes - Efficient networking services underpin social networks
Keynote on Machine Learning: Andrea Goldsmith - B5GS 2019
Emergent Neural Network in reinforcement learning
High Throughput Neural Network based Embedded Streaming Multicore Processors - Tarek Taha: 2016 International Conference on Rebooting Computing
Learning to Dribble on a Real Robot by Success and Failure
Uncovering the Neural Code of Learning Control - Jennie Si - WCCI 2012 invited lecture
Adaptive Learning and Optimization for MI: From the Foundations to Complex Systems - Haibo He - WCCI 2016
Behind Artificial Neural Networks
RNSnet: In-Memory Neural Network Acceleration Using Residue Number System - Sahand Salamat - ICRC 2018
Spiking Network Algorithms for Scientific Computing - William Severa: 2016 International Conference on Rebooting Computing
Memory Centric Artificial Intelligence - Damien Querlioz at INC 2019
A Comparator Design Targeted Towards Neural Net - David Mountain - ICRC San Mateo, 2019
Network Resource Allocation: Technical Session 3 - HPSR 2020 Virtual Conference

IEEE-USA E-Books

  • Synchronous Machine Steady-State Stability Analysis Using an Artificial Neural Network

    None

  • Reverse modeling of microwave circuits with bidirectional neural network models

    Neural networks have been developed into an alternative modeling approach for the microwave circuit-design process. In this paper, we describe a neural network-based microwave circuit-design approach that implements the solution-searching optimization routine by a modified neural network learning process. Both the development of a microwave circuit model and the searching of a design solution can thus take advantage of a hardware neural network processor, which is significantly faster than a software simulation. In addition, a systematic simulation-based approach to convert conventional circuit models into neural network models for this design process is described. The development of a heterojunction bipolar transistor (HBT) amplifier model and its applications are demonstrated. (An illustrative model-plus-search sketch appears after this list.)

  • Optoelectronic neural networks: mapping multilayer architectures on to an optoelectronic demonstrator

    In this paper we outline some of the changes needed to implement multilayer feed-forward neural networks using the demonstrator hardware, which was based around an array of vertical cavity surface emitting lasers. Network simulations show that the neural network demonstrator hardware can be used to implement two different classes of feed-forward network, the multilayer perceptron (MLP) and radial basis function (RBF) networks. In both cases, the actual training of the networks is performed offline using hardware simulations and the weighted interconnections between neurons are fixed before application to the optoelectronic hardware. (A minimal MLP/RBF forward-pass sketch appears after this list.)

  • Reduction of necessary precision for the learning of pattern recognition

    The authors propose a novel learning algorithm with a weighted error function (WEF). They have reduced the necessary precision for the learning of multi-font alpha-numeric recognition to 10-bit fixed-point precision using the WEF. The WEF raises the recognition accuracy by more than 25% when the precision of all operations (including multiplication and addition) and the precision of all data (including weights and backpropagation signals) are limited to 10-bit fixed point. This improves the feasibility of analog implementation and lessens the data width of digital implementation. The performance of the WEF is high even with a small number of hidden neurons. This enables the reduction of weight memory. Furthermore, the WEF accelerates the learning and thus refines the adaptability of backpropagation. (A 10-bit fixed-point quantization sketch appears after this list.)

  • Unfully interconnected neural networks as associative memory

    Unfully interconnected neural networks (UINNs) are proposed as associative memory. The basic idea is to form compact internal representations of patterns in order to increase the storage efficiency of the interconnections. Several effective methods for designing UINNs as associative memory, including monolayered and multilayered neural networks, are presented. A maximum-interconnection-preserving method which forms a rectangular grid structure of local interconnections is proposed. Dynamical modeling almost doubles the average storage per interconnection weight of the neural network compared with the Hopfield model. Multilayered neural networks are of relatively high storage capacity. (A grid-masked associative-recall sketch appears after this list.)

  • Implementing backpropagation with analog hardware

    The main problem when implementing backpropagation with analog hardware is the offset present in the multipliers in the backward path of multilayer neural networks. Here this problem is further investigated and a solution is presented, at the cost of the speed of the backpropagation algorithm. (A numerical sketch of the offset effect appears after this list.)

  • A VLSI neuroprocessor for real-time image flow computing

    A locally connected multi-layer stochastic neural network and its associated VLSI array neuroprocessors have been developed for high-performance image flow computing systems. An extendable VLSI neural chip has been designed with a silicon area of 4.6 × 6.8 mm² in a MOSIS 2 µm scalable CMOS process. Mixed analog-digital design techniques are utilized to achieve compact and programmable synapses with gain-adjustable neurons and winner-take-all cells for massively parallel neural computation. Hardware annealing through control of the neurons' gain helps to search efficiently for optimal solutions. Computing of image flow using one 2 µm 72-neuron neural chip can be accelerated by a factor of 187 compared with a Sun-4/260 workstation. Real-time image flow processing on industrial images is practical using an extended array of VLSI neural chips. Actual examples on moving trucks are presented.

  • Analysis of incremental communication for multilayer neural networks on a field programmable gate array

    A neural network is a massively parallel distributed processor made up of simple processing units known as neurons. These neurons are organized in layers, and every neuron in each layer is connected to each neuron in the adjacent layers. This connection architecture makes for an enormous number of communication links between neurons. This is a major issue when considering a hardware implementation of a neural network, since communication links take up hardware space, and hardware space costs money. To overcome this space problem, incremental communication for multilayer neural networks has been proposed. Incremental communication works by only communicating the change in value between neurons, as opposed to the entire magnitude of the value. This allows the numbers to be represented with fewer bits, and thus communicated over narrower links. To validate the idea, an incremental communication neural network was designed and implemented, and then compared to a traditional neural network. From the implementation it is seen that, even though the incremental communication neural network saves design space through reduced communication links, the additional resources necessary to shape the data for transmission outweigh any design-space savings when targeting a modern FPGA. (A delta-encoding sketch appears after this list.)

  • An Efficient Scalable Parallel Hardware Architecture for Multilayer Spiking Neural Networks

    Artificial neural networks (ANNs) are processing models widely explored for their computational capabilities in solving problems. Recently, spiking neural networks (SNNs) have been studied as more biologically plausible models that resemble biological neurons more closely than classical ANNs do. Although SNNs offer richer dynamics, their full utilization in practical systems is still limited by the high computational demand of microprocessor-based software implementations. To overcome this drawback, an efficient, scalable parallel hardware architecture for SNNs is proposed to map the area-demanding and densely interconnected requirements of neural processing efficiently. SNN models have the advantage of reducing the bandwidth needed for exchanging information among neurons, making them more suitable for hardware implementation, thanks to their communication scheme based on digital spikes. The hardware implementation is divided into two main phases: recall and learning. Timing, hardware resources and performance comparisons are shown mainly for the recall phase in this paper. (A minimal spiking-layer sketch appears after this list.)

  • Finite precision error analysis of neural network electronic hardware implementations

    The high speed desired in the implementation of many neural network algorithms, such as backpropagation learning in multilayer perceptrons (MLPs), may be attained through the use of finite-precision hardware. This finite-precision hardware, however, is prone to errors. A method of theoretically deriving and statistically evaluating this error is presented. This could be used as a guide to the details of hardware design and algorithm implementation. The authors describe the derivation of the techniques involved, as well as the details of the backpropagation example. The intent is to provide a general framework by which most neural network algorithms under any set of hardware constraints may be evaluated. (A Monte Carlo precision-error sketch appears after this list.)
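
Illustrative sketch for "Reverse modeling of microwave circuits with bidirectional neural network models": the abstract describes running the design search as an optimization driven by a neural model of the circuit. The Python sketch below only illustrates that general idea under stated assumptions, not the paper's method: a tiny fixed-weight network with arbitrary weights stands in for a trained circuit model, and the input "design parameters" are adjusted by finite-difference gradient descent toward a made-up target response.

    import numpy as np

    # Hypothetical stand-in for a trained neural circuit model:
    # maps 2 design parameters to 1 response; all weights are arbitrary.
    W1 = np.array([[0.8, -0.5], [0.3, 0.9], [-0.6, 0.4]])
    b1 = np.array([0.1, -0.2, 0.05])
    W2 = np.array([[1.2, -0.7, 0.5]])
    b2 = np.array([0.0])

    def model(x):
        h = np.tanh(W1 @ x + b1)       # hidden layer
        return (W2 @ h + b2)[0]        # scalar response

    target = 0.3                       # desired response (made up)
    x = np.array([0.0, 0.0])           # initial design parameters
    lr, eps = 0.5, 1e-4

    # Design-solution search: gradient descent on the inputs of the fixed model.
    for _ in range(200):
        err = model(x) - target
        grad = np.array([(model(x + eps * np.eye(2)[i]) - model(x - eps * np.eye(2)[i])) / (2 * eps)
                         for i in range(2)])
        x = x - lr * err * grad        # minimise 0.5 * err**2 with respect to x

    print("design parameters:", x, "response:", model(x))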
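
Illustrative sketch for "Optoelectronic neural networks: mapping multilayer architectures on to an optoelectronic demonstrator": the entry maps two classes of feed-forward network, MLP and RBF, onto hardware with weights fixed after offline training. As a reminder of what distinguishes the two classes, here is a minimal forward-pass sketch with arbitrary pre-set parameters; none of the values relate to the demonstrator.

    import numpy as np

    x = np.array([0.4, -0.2])                      # example input

    # Multilayer perceptron: weighted sums passed through a sigmoid.
    W1 = np.array([[0.7, -0.3], [0.2, 0.9]])       # arbitrary fixed weights
    W2 = np.array([0.5, -0.8])
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    mlp_out = sigmoid(W2 @ sigmoid(W1 @ x))

    # Radial basis function network: distances to centres passed through Gaussians.
    centres = np.array([[0.5, 0.0], [-0.5, 0.5]])  # arbitrary fixed centres
    widths = np.array([0.6, 0.8])
    w_out = np.array([1.0, -0.4])
    phi = np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * widths ** 2))
    rbf_out = w_out @ phi

    print("MLP output:", mlp_out, "RBF output:", rbf_out)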
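
Illustrative sketch for "Reduction of necessary precision for the learning of pattern recognition": the entry reports learning with all operations and data limited to 10-bit fixed point. The weighted error function itself is not reproduced here; the sketch below only shows what a 10-bit fixed-point constraint looks like in a forward pass, with an assumed split of integer and fractional bits.

    import numpy as np

    FRAC_BITS = 7                      # assumed split: 1 sign bit, 2 integer bits, 7 fractional bits
    SCALE = 2 ** FRAC_BITS
    LO, HI = -(2 ** 9), 2 ** 9 - 1     # signed 10-bit integer codes

    def q10(x):
        """Round to the nearest representable 10-bit fixed-point value."""
        return np.clip(np.round(x * SCALE), LO, HI) / SCALE

    rng = np.random.default_rng(0)
    W = q10(rng.normal(0, 0.5, size=(4, 8)))    # quantized weights
    x = q10(rng.normal(0, 0.5, size=8))         # quantized input signal

    # Every multiply-accumulate result is re-quantized, mimicking limited-precision hardware.
    acc = np.zeros(4)
    for j in range(8):
        acc = q10(acc + q10(W[:, j] * x[j]))

    print("10-bit activations:", np.tanh(acc))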
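
Illustrative sketch for "Unfully interconnected neural networks as associative memory": the entry keeps only a rectangular grid of local interconnections to raise the storage per weight. The sketch below is a loose illustration using the generic Hopfield-style recipe the abstract compares against, not the paper's design method: patterns are stored by a Hebbian outer-product rule and recalled with the long-range weights masked out.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 16                                   # neurons arranged on a 4x4 grid
    patterns = rng.choice([-1, 1], size=(2, N))

    # Hebbian (outer-product) storage, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # Keep only "local" interconnections: neighbours within distance 1 on the grid.
    coords = np.array([(i // 4, i % 4) for i in range(N)])
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).max(axis=2)
    W *= (dist <= 1)                         # mask out long-range weights

    # Recall from a corrupted version of the first pattern.
    state = patterns[0].copy()
    state[:3] *= -1                          # flip a few bits
    for _ in range(10):
        state = np.sign(W @ state + 1e-9)    # synchronous sign updates

    print("recalled correctly:", np.array_equal(state, patterns[0]))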
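
Illustrative sketch for "Implementing backpropagation with analog hardware": the entry identifies offsets in the backward-path multipliers as the main problem. The sketch below is only a numerical illustration of that effect for a single sigmoid neuron; the offset value is arbitrary, and no solution from the paper is implemented.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=3)                    # input
    w = rng.normal(size=3)                    # weights of a single sigmoid neuron
    t = 1.0                                   # target
    OFFSET = 0.02                             # assumed multiplier offset in the backward path

    def mul(a, b, offset=0.0):
        """Analog-style multiply with an additive offset error."""
        return a * b + offset

    y = 1.0 / (1.0 + np.exp(-np.dot(w, x)))   # forward pass (assumed ideal)
    err = y - t

    # Backward path: delta = err * y * (1 - y), then gradient = delta * x.
    delta_ideal = mul(err, y * (1 - y))
    grad_ideal = np.array([mul(delta_ideal, xi) for xi in x])

    delta_offs = mul(err, y * (1 - y), OFFSET)
    grad_offs = np.array([mul(delta_offs, xi, OFFSET) for xi in x])

    print("ideal gradient: ", grad_ideal)
    print("offset gradient:", grad_offs)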
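
Illustrative sketch for "Analysis of incremental communication for multilayer neural networks on a field programmable gate array": the entry explains the mechanism directly, communicating only the change in a neuron's value so that fewer bits cross each link. The sketch below shows that mechanism in isolation; the 4-bit delta code and quantisation step are assumptions, and no FPGA mapping is attempted.

    import numpy as np

    STEP = 1.0 / 64                  # assumed quantisation step for the transmitted delta
    D_LO, D_HI = -8, 7               # assumed 4-bit signed range for the delta code

    def encode(prev, new):
        """Quantise the change in value to a narrow integer code."""
        return int(np.clip(round((new - prev) / STEP), D_LO, D_HI))

    rng = np.random.default_rng(3)
    activations = np.cumsum(rng.normal(0, 0.02, size=50))   # slowly varying neuron output

    recv = 0.0                       # receiver's reconstructed value
    max_err = 0.0
    for a in activations:
        code = encode(recv, a)       # only the narrow delta code crosses the link
        recv += code * STEP          # receiver accumulates the deltas
        max_err = max(max_err, abs(recv - a))

    print("worst reconstruction error:", max_err)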
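
Illustrative sketch for "An Efficient Scalable Parallel Hardware Architecture for Multilayer Spiking Neural Networks": the entry attributes the hardware-friendliness of SNNs to communication by digital spikes. The sketch below is a generic leaky integrate-and-fire layer, not the paper's architecture: membrane potentials integrate weighted binary input spikes with a leak, and a neuron emits a one-bit spike when it crosses a threshold; the leak and threshold values are assumptions.

    import numpy as np

    rng = np.random.default_rng(4)
    N_IN, N_OUT, T = 8, 4, 20
    W = rng.normal(0, 0.5, size=(N_OUT, N_IN))   # synaptic weights
    LEAK, THRESH = 0.9, 1.0                      # assumed leak factor and firing threshold

    v = np.zeros(N_OUT)                          # membrane potentials
    in_spikes = rng.random((T, N_IN)) < 0.3      # random binary input spike trains

    for t in range(T):
        v = LEAK * v + W @ in_spikes[t]          # integrate weighted incoming spikes
        out = v >= THRESH                        # one-bit spikes cross the "link"
        v[out] = 0.0                             # reset neurons that fired
        print(f"t={t:2d} output spikes:", out.astype(int))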
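
Illustrative sketch for "Finite precision error analysis of neural network electronic hardware implementations": the entry describes statistically evaluating the error introduced by limited-precision hardware. A minimal Monte Carlo version of that kind of check is sketched below; the network size, word length and input distribution are all assumptions.

    import numpy as np

    BITS, FRAC = 8, 5                            # assumed word length and fractional bits
    SCALE = 2 ** FRAC
    LIM = 2 ** (BITS - 1)

    def fx(x):
        """Round to the assumed fixed-point format."""
        return np.clip(np.round(x * SCALE), -LIM, LIM - 1) / SCALE

    rng = np.random.default_rng(5)
    W1 = rng.normal(0, 0.5, size=(6, 4))
    W2 = rng.normal(0, 0.5, size=(2, 6))

    def mlp(x, quant):
        q = fx if quant else (lambda v: v)
        h = np.tanh(q(W1) @ q(x))
        return q(np.tanh(q(W2) @ q(h)))

    errs = []
    for _ in range(1000):                        # Monte Carlo over random inputs
        x = rng.normal(0, 1, size=4)
        errs.append(np.abs(mlp(x, True) - mlp(x, False)).max())

    errs = np.asarray(errs)
    print(f"mean |error| = {errs.mean():.4f}, max |error| = {errs.max():.4f}")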



Standards related to Multi-layer neural network


No standards are currently tagged "Multi-layer neural network"

