2,207 resources related to Unsupervised learning
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE.
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
The IEEE Aerospace and Electronic Systems Magazine publishes articles concerned with the various aspects of systems for space, air, ocean, or ground environments.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
IEEE Communications Magazine was the third most-cited journal in telecommunications and the eighteenth most-cited journal in electrical and electronics engineering in 2004, according to the annual Journal Citation Report (2004 edition) published by the Institute for Scientific Information. Read more at http://www.ieee.org/products/citations.html. This magazine covers all areas of communications such as lightwave telecommunications, high-speed data communications, personal communications ...
Specific topics of interest include, but are not limited to, sequence analysis, comparison and alignment methods; motif, gene and signal recognition; molecular evolution; phylogenetics and phylogenomics; determination or prediction of the structure of RNA and protein in two and three dimensions; DNA twisting and folding; gene expression and gene regulatory networks; deduction of metabolic pathways; microarray design and analysis; proteomics; ...
Serves as a compendium for papers on the technological advances in control engineering and as an archival publication which will bridge the gap between theory and practice. Papers will highlight the latest knowledge, exploratory developments, and practical applications in all aspects of the technology needed to implement control systems from analysis and design through simulation and hardware.
IEEE Access, 2016
This paper aims to highlight distinctive features of the SP theory of intelligence, realized in the SP computer model, and its apparent advantages compared with some AI-related alternatives. Perhaps most importantly, the theory simplifies and integrates observations and concepts in AI-related areas, and has the potential to simplify and integrate structures and processes in computing systems. Unlike most other AI-related ...
Journal of Communications and Networks, 2012
Reliable detection of primary user activity increases the opportunity to access temporarily unused bands and prevents harmful interference to the primary system. By extracting a global decision from local sensing results, cooperative sensing achieves high reliability against multipath fading. For the effective combining of sensing results, which is generalized by a likelihood ratio test, the fusion center should learn some ...
Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation
This chapter focuses on fuzzy clustering, with the fuzzy C-means (FCM) and the possibilistic C-means (PCM) as the primary examples. Clustering methods are used to look for structure in sets of unlabeled vectors. In many applications, we need to assign known labels to test data. In these cases, we assume that we have training sets of patterns that represent the various ...
IEEE Transactions on Neural Networks, 1994
Implementations of competitive learning often use input and weight vectors "normalized" based on the sum of weight vector components. While it is realized that some distortion of results can occur with this procedure, it is generally not appreciated how dramatic the distortion can be, and that it compromises the dot product as a similarity measure. We show here that in ...
2006 IEEE Workshop on Multimedia Signal Processing, 2006
Distributed compression is particularly attractive for stereoscopic images since it avoids communication between cameras. Since compression performance depends on exploiting the redundancy between images, knowing the disparity is important at the decoder. Unfortunately, distributed encoders cannot calculate this disparity and communicate it. We consider a toy problem, the compression of random dot stereograms, and propose an expectation maximization algorithm to ...
ICRA 2020 Keynote - Can Deep Reinforcement Learning from pixels be made as efficient as from state?
"What is Big Data Analytics and Why Should I Care?" - Big Data Analytics Tutorial Part 1
Honors 2020: Michael I. Jordan Wins the IEEE John von Neumann Medal
IEEE Themes - Learning about human behavior from mobile phone data
Enter Deep Learning
Machine Learning of Motor Skills for Robotics
Linear Regression: Intro to Machine Learning Workshop - IEEE Region 4 Presentation
Ensemble Approaches in Learning
Overcoming the Static Learning Bottleneck - the Need for Adaptive Neural Learning - Craig Vineyard: 2016 International Conference on Rebooting Computing
Federated Learning for Networking - Anwar Walid - IEEE Sarnoff Symposium, 2019
Perception-Action-Learning and Associative Skill Memories
IEEE Day Future Milestone: Machine Learning in the future
Deep Learning & Machine Learning Inference - Ashish Sirasao - LPIRC 2019
Signal Processing and Machine Learning
Brain-like Intelligence Inside - Towards Autonomously Interacting Systems
Panel Discussion - COVID-19, Deep Learning and Biomedical Imaging Panel
ICASSP 2011 Trends in Machine Learning for Signal Processing
Continuously Learning Neuromorphic Systems with High Biological Realism: IEEE Rebooting Computing 2017
Computer-Assisted Audiovisual Language Learning
This paper aims to highlight distinctive features of the SP theory of intelligence, realized in the SP computer model, and its apparent advantages compared with some AI-related alternatives. Perhaps most importantly, the theory simplifies and integrates observations and concepts in AI-related areas, and has the potential to simplify and integrate structures and processes in computing systems. Unlike most other AI-related theories, the SP theory is itself a theory of computing, which can be the basis for new architectures for computers. Fundamental in the theory is information compression via the matching and unification of patterns and, more specifically, via a concept of multiple alignment. The theory promotes transparency in the representation and processing of knowledge, and unsupervised learning of natural structures via information compression. It provides an interpretation of aspects of mathematics and an interpretation of phenomena in human perception and cognition. Concepts in the theory may be realized in terms of neurons and their inter-connections (SP-neural). These features and advantages of the SP system are discussed in relation to AI-related alternatives: the concept of minimum length encoding and related concepts, how computational and energy efficiency in computing may be achieved, deep learning in neural networks, unified theories of cognition and related research, universal search, Bayesian networks and some other models for AI, IBM's Watson, solving problems associated with big data and in the development of intelligence in autonomous robots, pattern recognition and vision, the learning and processing of natural language, exact and inexact forms of reasoning, representation and processing of diverse forms of knowledge, and software engineering. In conclusion, the SP system can provide a firm foundation for the long-term development of AI and related areas, and at the same time, it may deliver useful results on relatively short timescales.
Reliable detection of primary user activity increases the opportunity to access temporarily unused bands and prevents harmful interference to the primary system. By extracting a global decision from local sensing results, cooperative sensing achieves high reliability against multipath fading. For the effective combining of sensing results, which is generalized by a likelihood ratio test, the fusion center should learn some parameters, such as the probabilities of primary transmission, false alarm, and detection at the local sensors. During the training period in supervised learning, the on/off log of primary transmission serves as the output label of decision statistics from the local sensor. In this paper, we extend unsupervised learning techniques with an expectation maximization algorithm for cooperative spectrum sensing, which does not require an external primary transmission log. Local sensors report binary hard decisions to the fusion center and adjust their operating points to enhance learning performance. As the number of sensors increases, the joint expectation step classifies the primary transmission confidently, as in supervised learning. Thereby, the proposed scheme provides accurate parameter estimates and a fast convergence rate even in low signal-to-noise ratio regimes, where the primary signal is dominated by the noise at the local sensors.
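The unsupervised estimation described above can be sketched as a small EM loop. This is a minimal illustration, not the paper's exact scheme: it assumes a two-state Bernoulli-mixture model with M sensors reporting i.i.d. hard decisions under common detection and false-alarm probabilities, and all parameter values are invented for the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate sensor reports (true parameters are illustrative assumptions)
true_pi, true_pd, true_pfa = 0.4, 0.9, 0.1   # P(on), P(detect|on), P(alarm|off)
T, M = 2000, 10                               # time slots, local sensors
state = rng.random(T) < true_pi                      # primary on/off per slot
prob = np.where(state[:, None], true_pd, true_pfa)   # per-sensor report prob
reports = (rng.random((T, M)) < prob).astype(int)    # binary hard decisions

# EM: no primary transmission log is used, only the unlabeled reports
pi, pd, pfa = 0.5, 0.7, 0.3   # initial guesses
for _ in range(100):
    k = reports.sum(axis=1)   # number of '1' reports in each slot
    # E-step: posterior probability that the primary is ON in each slot
    log_on = np.log(pi) + k * np.log(pd) + (M - k) * np.log(1 - pd)
    log_off = np.log(1 - pi) + k * np.log(pfa) + (M - k) * np.log(1 - pfa)
    w = 1.0 / (1.0 + np.exp(log_off - log_on))
    # M-step: re-estimate the parameters from the soft assignments
    pi = w.mean()
    pd = (w * k).sum() / (w.sum() * M)
    pfa = ((1 - w) * k).sum() / ((1 - w).sum() * M)

print(round(pi, 2), round(pd, 2), round(pfa, 2))   # estimates near the truth
```

With enough sensors the posterior `w` becomes nearly hard, which is the "confident classification" behavior the abstract describes.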
This chapter focuses on fuzzy clustering, with the fuzzy C-means (FCM) and the possibilistic C-means (PCM) as the primary examples. Clustering methods are used to look for structure in sets of unlabeled vectors. In many applications, we need to assign known labels to test data. In these cases, we assume that we have training sets of patterns that represent the various classes under consideration. The chapter shows how both the distances and fuzzy labels are combined to create class labels for the test vector. Like its crisp counterpart, the fuzzy k-nearest neighbor (FKNN) algorithm is simple in concept. The concept of nearest neighbors normally refers to distance in a metric space. In Gader et al., the k-NN soft labels were used to drive a self-organizing feature map (SOFM), a neural-based clustering algorithm. It can be considered the extreme case of the multiprototype classification that is related to clustering.
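As a rough illustration of the FCM algorithm named above, the following is a minimal NumPy sketch of the standard alternating updates (fuzzy centroids, then memberships); the synthetic data and parameter choices are illustrative assumptions, not examples from the chapter.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, eps=1e-9, seed=0):
    """Plain fuzzy C-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                  # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                      # fuzzified memberships
        V = (W @ X) / W.sum(axis=1, keepdims=True)   # fuzzy centroids
        # distances from every centroid to every point
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + eps
        U = D ** (-2.0 / (m - 1))       # standard FCM membership update
        U /= U.sum(axis=0)
    return U, V

# Two well-separated synthetic blobs (illustrative data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, V = fuzzy_c_means(X)
labels = U.argmax(axis=0)   # hardened labels recover the two blobs
```

The membership matrix `U` gives the fuzzy labels; thresholding it (as with `argmax` here) is the step that turns fuzzy structure into crisp class labels for test vectors.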
Implementations of competitive learning often use input and weight vectors "normalized" based on the sum of weight vector components. While it is realized that some distortion of results can occur with this procedure, it is generally not appreciated how dramatic the distortion can be, and that it compromises the dot product as a similarity measure. We show here that in some cases an input vector identical to an existing output node weight vector can be classified as belonging to a different output node. This contradicts the generally accepted concept of weight vectors developing as prototypes during competitive learning. Ways to minimize this problem are also given.
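The failure mode described above is easy to reproduce numerically. In this sketch (the specific weight values are invented for illustration), an input identical to one node's sum-normalized weight vector produces a larger dot product with a different node, while Euclidean (unit-norm) normalization restores the expected winner:

```python
import numpy as np

# Two weight vectors whose COMPONENTS SUM to one (the criticized scheme)
w1 = np.array([0.6, 0.4])
w2 = np.array([0.9, 0.1])
x = w1.copy()                 # input identical to node 1's weight vector

s1, s2 = x @ w1, x @ w2       # s1 ≈ 0.52, s2 ≈ 0.58
print(s2 > s1)                # True: node 2 wins even though x == w1

# Euclidean normalization makes the dot product a cosine similarity,
# so the matching node wins again.
u1 = w1 / np.linalg.norm(w1)
u2 = w2 / np.linalg.norm(w2)
xn = x / np.linalg.norm(x)
print(xn @ u1 > xn @ u2)      # True: cos(x, w1) = 1 is maximal
```

This is exactly the contradiction the abstract points out: under sum normalization, weight vectors stop behaving as prototypes.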
Distributed compression is particularly attractive for stereoscopic images since it avoids communication between cameras. Since compression performance depends on exploiting the redundancy between images, knowing the disparity is important at the decoder. Unfortunately, distributed encoders cannot calculate this disparity and communicate it. We consider a toy problem, the compression of random dot stereograms, and propose an expectation maximization algorithm to perform unsupervised learning of disparity during the decoding procedure. Our experiments show that this can achieve compression twice as efficient as a system with no disparity compensation, and perform nearly as well as a system that knows the disparity through an oracle.
Self-organizing mapping, an unsupervised learning algorithm, is applied. The accuracy of this approach has been tested on psychological test data from 282 students. Through analyzing the weights of the networks, a suggestion for improving the inventory is provided.
Many large enterprises are establishing high-density logistics network points to improve customer satisfaction. From the standpoint of increasing logistics efficiency, this is an effective method, but it also results in redundant construction. To avoid increasing logistics costs and wasting social resources, a method based on the Self-Organizing Feature Map and the Baumol-Wolfe model is used. Compared to general location methods, its innovation is the combined use of cluster analysis, and its result can easily be obtained using Matlab and a successive-approximation algorithm. Finally, a practical calculation example is used to analyze the feasibility and superiority of this method.
Model selection in unsupervised learning is a hard problem. In this paper, a simple selection criterion for hyper-parameters in one-class classifiers (OCCs) is proposed. It makes use of the particular structure of the one-class problem. The main idea is that the complexity of the classifier is increased until the classifier becomes inconsistent on the target class. This defines the most complex classifier which can still reliably be trained on the data. Experiments indicated the usefulness of the approach.
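The paper does not specify a classifier, so as a hedged illustration of the criterion one can use a simple kernel-density one-class classifier: bandwidth shrinks (complexity grows) until the rejection rate on held-out target data drifts far from the design rejection rate, i.e., the classifier becomes inconsistent on the target class. All concrete choices below (the KDE classifier, the 5% design rejection rate, the bandwidth grid, the consistency threshold) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0, 1, (200, 2))   # target-class training data
val = rng.normal(0, 1, (200, 2))     # held-out target data

def kde_scores(points, data, h):
    """Gaussian-kernel density estimate of `points` under `data`."""
    d2 = ((points[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).mean(axis=1)

target_reject = 0.05                 # design rejection rate on the target class
for h in [2.0, 1.0, 0.5, 0.25, 0.1, 0.05]:   # complexity grows as h shrinks
    # threshold so that 5% of the TRAINING data is rejected
    thr = np.quantile(kde_scores(train, train, h), target_reject)
    # consistency check: rejection on held-out target data
    val_reject = (kde_scores(val, train, h) < thr).mean()
    consistent = val_reject < 2 * target_reject  # crude inconsistency test
    print(h, round(val_reject, 3), consistent)
```

At large bandwidths the held-out rejection stays near 5% (consistent); at small bandwidths the estimator memorizes the training points and rejects most held-out target data, marking the complexity at which training is no longer reliable.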
A multistage network that will reduce the translational uncertainty of a one-dimensional object is presented. To implement this network, novel network structures like multiple-valued outputs, competition between links instead of nodes, and cooperation of signals at the links are used. The number of nodes and links needed to implement the architecture is small. If the input field consists of n cells, then the total number of cells needed is only O(n). The total number of connections needed is O(n log n). It is shown that size-invariant recognition can also be achieved if the input to the architecture is provided by a scale-sensitive network called a masking field.
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based on the Hebbian learning rule is presented, and a simple neuron model is analyzed. The adopted neuron model is a dynamic neural model containing both feedforward and feedback connections between input and output. In fact, the proposed learning algorithm could more accurately be called self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule in which the modification of the synaptic strength is proportional not to pre- and post-synaptic activity, but to the pre-synaptic activity and the averaged value of the post-synaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The decaying terms usually added to stabilize the original Hebb rule are avoided: thanks to the adopted network structure, implementation of the basic Hebb scheme does not lead to unrealistic growth of the synaptic strengths.
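The paper's exact averaged-activity rule is not reproduced here; as a related, well-known stabilized Hebbian rule, Oja's rule also extracts the principal component from a stationary input stream, which the following sketch illustrates (the input covariance and learning rate are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D input stream whose principal axis is (1, 1)/sqrt(2)
C = np.array([[3.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(C)
X = rng.normal(size=(5000, 2)) @ L.T   # samples with covariance C

w = rng.normal(size=2)                 # synaptic weight vector
eta = 0.01                             # learning rate
for x in X:
    y = w @ x                          # post-synaptic activity
    w += eta * y * (x - y * w)         # Oja's rule: Hebb term + implicit decay

pc = np.array([1.0, 1.0]) / np.sqrt(2)  # principal eigenvector of C
print(abs(w @ pc))                       # near 1: w has aligned with the PC
```

Note how the `- y * w` term keeps the weight norm bounded without an explicit decaying term, the same stability property the abstract claims for its network structure.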
No standards are currently tagged "Unsupervised learning"