Backpropagation
7,609 resources related to Backpropagation
IEEE Organizations related to Backpropagation
Conferences related to Backpropagation
2013 12th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)
Cognitive Informatics (CI) is a cutting-edge and multidisciplinary research field that tackles the fundamental problems shared by modern informatics, computing, AI, cybernetics, computational intelligence, cognitive science, intelligence science, neuropsychology, brain science, systems science, software engineering, knowledge engineering, cognitive robots, scientific philosophy, cognitive linguistics, life sciences, and cognitive computing.
2013 15th International Conference on Transparent Optical Networks (ICTON)
ICTON addresses applications of transparent and all-optical technologies in telecommunication networks, systems, and components. ICTON topics are well balanced between basic optics and network engineering, and interaction between those two groups of professionals is a valuable merit of the conference. ICTON combines high-level invited talks with carefully selected regular submissions.
2013 26th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)
This is a general Electrical and Computer Engineering Conference which encompasses all aspects of these fields.
2013 International Joint Conference on Neural Networks (IJCNN 2013, Dallas)
Neural Networks
Periodicals related to Backpropagation
Engineering in Medicine and Biology Magazine, IEEE
Both general and technical articles on current technologies and methods used in biomedical and clinical engineering; societal implications of medical technologies; current news items; book reviews; patent descriptions; and correspondence. Special-interest departments cover students, law, clinical engineering, ethics, new products, society news, historical features, and government.
Neural Networks, IEEE Transactions on
Devoted to the science and technology of neural networks, which disclose significant technical knowledge, exploratory developments, and applications of neural networks from biology to software to hardware. Emphasis is on artificial neural networks.
Xplore Articles related to Backpropagation
Online learning of dynamical systems in the presence of model mismatch and disturbances
Danchi Jiang; Jun Wang IEEE Transactions on Neural Networks, 2000
This paper is concerned with the online learning of unknown dynamical systems using a recurrent neural network. The unknown dynamic systems to be learned are subject to disturbances and are possibly unstable. The neural-network model used has a simple architecture with one layer of adaptive connection weights. Four learning rules are proposed for the cases where the system state is measurable ...
Guest Editorial Neural Networks Council Awards
M. H. Hassoun IEEE Transactions on Neural Networks, 1996
First Page of the Article
Power system harmonics prediction with an artificial neural network
H. Mori; S. Suga IEEE International Symposium on Circuits and Systems, 1991
Describes an artificial neural net (ANN) based approach to the prediction of power system harmonic voltages. The effectiveness of recurrent neural networks is examined. Recurrent neural networks have the advantage of being able to consider the dynamics of a time series, unlike the conventional feedforward ANN. Four recurrent neural networks are applied to prediction of the fifth harmonic voltage. A comparison ...
Optoelectronic Systems Trained With Backpropagation Through Time
Michiel Hermans; Joni Dambre; Peter Bienstman IEEE Transactions on Neural Networks and Learning Systems, 2015
Delay-coupled optoelectronic systems form promising candidates to act as powerful information processing devices. In this brief, we consider such a system that has been studied before in the context of reservoir computing (RC). Instead of viewing the system as a random dynamical system, we see it as a true machine-learning model, which can be fully optimized. We use a recently ...
Fast learning method of interval type-2 fuzzy neural networks
Damian Olczyk; Urszula Markowska-Kaczmar 2014 14th UK Workshop on Computational Intelligence (UKCI), 2014
The Fuzzy Set Parameter Estimation algorithm is proposed for fast learning of interval type-2 fuzzy neural networks applied to classification problems with disjoint classes. Learning consists of estimating appropriate values of the fuzzy set parameters in every rule. Estimation is based on statistical properties of the training data. The experimental study confirms that it is dozens of times quicker than the backpropagation method, ...
More Xplore Articles
Educational Resources on Backpropagation
eLearning
More eLearning Resources
IEEE-USA E-Books

This chapter contains sections titled: 11.1 Introduction, 11.2 The Perceptron, 11.3 Training a Perceptron, 11.4 Learning Boolean Functions, 11.5 Multilayer Perceptrons, 11.6 MLP as a Universal Approximator, 11.7 Backpropagation Algorithm, 11.8 Training Procedures, 11.9 Tuning the Network Size, 11.10 Bayesian View of Learning, 11.11 Dimensionality Reduction, 11.12 Learning Time, 11.13 Deep Learning, 11.14 Notes, 11.15 Exercises, 11.16 References

Appendix G: Thirty Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation
This chapter contains sections titled: Introduction, Fundamental Concepts, Adaptation: The Minimal Disturbance Principle, Error Correction Rules: Single Threshold Element, Error Correction Rules: Multi-Element Networks, Steepest-Descent Rules: Single Threshold Element, Steepest-Descent Rules: Multi-Element Networks, Summary, Acknowledgments, Bibliography

Chapter Six begins by introducing genetic algorithms by way of analogy with the biological processes at work in the evolution of organisms. The basic framework of a genetic algorithm is provided, including the three basic operators: selection, crossover, and mutation. A simple example of a genetic algorithm at work is examined, with each step explained and demonstrated. Next, modifications and enhancements from the literature are discussed, especially for the selection and crossover operators. Genetic algorithms for real-valued variables are discussed. The use of genetic algorithms as optimizers within a neural network is demonstrated, where the genetic algorithm replaces the backpropagation algorithm. Finally, an example of the use of WEKA for genetic algorithms is provided.
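The three operators named above can be sketched in a few lines. The following is a minimal illustrative genetic algorithm, not the chapter's own code: it maximizes a stand-in "OneMax" fitness (count of 1-bits), and the population size, tournament size, and mutation rate are arbitrary choices made for the example.

```python
import random

random.seed(0)

def fitness(bits):
    # OneMax: the number of 1-bits (a stand-in for any objective)
    return sum(bits)

def select(pop, k=3):
    # Tournament selection: fittest of k randomly sampled individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover of two parent bit strings
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.02):
    # Flip each bit independently with probability `rate`
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(n_bits=20, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Each child: select two parents, recombine, then mutate
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
```

Replacing `fitness` with a function that evaluates a neural network's error on training data turns this same loop into the weight optimizer the chapter describes.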

Multilayer Neural Networks and Backpropagation
A computationally effective method for training multilayer perceptrons is the backpropagation algorithm, which is regarded as a landmark in the development of neural networks. This chapter presents two different learning methods, batch learning and online learning, on the basis of how the supervised learning of the multilayer perceptron is actually performed. The essence of backpropagation learning is to encode an input-output mapping into the synaptic weights and thresholds of a multilayer perceptron. It is hoped that the network becomes well trained so that it learns enough about the past to generalize to the future. The chapter concludes with cross-validation and generalization. Cross-validation is particularly appealing when a large neural network must be designed with good generalization as the goal. Generalization assumes that the test data are drawn from the same population used to generate the training data.
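The online learning mode mentioned above updates the weights after every training pattern. A minimal pure-Python sketch of that idea, assuming a one-hidden-layer perceptron with sigmoid units trained on XOR (the layer sizes, learning rate, and epoch count are illustrative, not from the chapter):

```python
import math
import random

random.seed(1)

N_IN, N_HID, LR = 2, 4, 0.5
# One hidden layer of weights (+1 column for bias) and one output unit
w1 = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w2 = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    xb = x + [1.0]  # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w1]
    y = sigmoid(sum(w * v for w, v in zip(w2, h + [1.0])))
    return h, y

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

for _ in range(5000):            # online learning: update after each pattern
    for x, t in data:
        h, y = forward(x)
        xb = x + [1.0]
        d_out = (y - t) * y * (1.0 - y)   # output delta (squared error, sigmoid)
        d_hid = [d_out * w2[j] * h[j] * (1.0 - h[j]) for j in range(N_HID)]
        for j in range(N_HID):            # propagate error back through both layers
            w2[j] -= LR * d_out * h[j]
            for i in range(N_IN + 1):
                w1[j][i] -= LR * d_hid[j] * xb[i]
        w2[N_HID] -= LR * d_out           # output bias
```

Batch learning would instead accumulate the deltas over all four patterns and apply a single update per epoch.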

The logic of connectionist systems
A connectionist system is a cellular network of adaptable nodes that has a natural propensity for storing knowledge. This emergent property is a function of a training process and a pattern of connections. Most analyses of such systems first assume an idiosyncratic specification for the nodes (often based on neuron models) and a constrained method of interconnection (reciprocity, no feedback, etc.). In contrast, a general node model is assumed in this paper. It is based on a logic truth table with a probabilistic element. It is argued that this includes other definitions and leads to a general analysis of the class of connectionist systems. The analysis includes an explanation of the effect of training and testing techniques that involve the use of noise. Specifically, the paper describes a way of predicting and optimizing noise-based training by the definition of an ideal node logic which ensures the most rapid descent of the resulting probabilistic automaton into the trained stable states. 'Hard' learning is shown to be achievable on the notorious parity checking problem with a level of performance that is two orders of magnitude better than other well-known error backpropagation techniques demonstrated on the same topology. It is concluded that there are two main areas of advantage in this approach. The first is the direct probabilistic automaton model that covers and explains connectionist approaches in general, and the second is the potential for high-performance implementations of such systems.

Backpropagation in non-feedforward networks
Backpropagation is a powerful supervised learning rule for networks with hidden units. However, as originally introduced, and as described in Chapter 4, it is limited to feedforward networks. In this chapter we derive the generalization of backpropagation to non-feedforward networks. This generalization happens to take a very simple form: the error propagation network can be obtained simply by linearizing, and then transposing, the network to be trained. Networks with feedback necessarily raise the problem of stability. We prove that the error propagation network is always stable when training is performed. We also derive a sufficient condition for the stability of the non-feedforward neural network, and we discuss the problem of the possible existence of multiple stable states. Finally, we present some experimental results on the use of backpropagation in non-feedforward networks.

RadialBasis Function Networks
This chapter focuses on the radial-basis function (RBF) network as an alternative to multilayer perceptrons. It is interesting to find that in a multilayer perceptron the function approximation is defined by a nested set of weighted summations, while in an RBF network the approximation is defined by a single weighted sum. The chapter focuses on the use of a Gaussian function as the radial-basis function. The reason behind choosing the Gaussian function in building RBF networks is that it has many desirable properties, which become evident as the discussion progresses. It is important to point out that RBF networks and multilayer perceptrons can be trained in alternative ways besides those presented. For multilayer perceptrons, the backpropagation algorithm is simple to compute locally, and it performs stochastic gradient descent in weight space when implemented in an online learning mode.
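The "single weighted sum" contrast above is easy to see in code. A minimal sketch of an RBF network's forward pass with Gaussian basis functions follows; the centers, width, and weights here are arbitrary illustrative values, not fitted parameters from the chapter.

```python
import math

# Illustrative parameters: three Gaussian basis functions on the real line
centers = [0.0, 1.0, 2.0]
sigma = 0.5
weights = [1.0, -2.0, 0.5]

def gaussian(x, c, s):
    # Gaussian radial-basis function centered at c with width s
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def rbf(x):
    # The whole network output is one weighted sum over the basis functions,
    # unlike the nested summations of a multilayer perceptron
    return sum(w * gaussian(x, c, sigma) for w, c in zip(weights, centers))
```

Because the output is linear in `weights`, fitting them for fixed centers reduces to ordinary least squares, which is one reason RBF training can avoid backpropagation entirely.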

This chapter contains sections titled: 11.1 The Backpropagation Algorithm, 11.2 Derivation, 11.3 Practical Considerations, 11.4 NP-Completeness, 11.5 Comments, 11.6 Exercises, 11.7 Programming Projects

Gradient-Based Learning Applied to Document Recognition
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules, including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.

Overview of Designs and Capabilities
This chapter contains sections titled: Introduction, Supervised Control, Direct Inverse Control, Neural Adaptive Control, Backpropagation Through Time, Adaptive Critics