Conferences related to Distributed Parallel Architecture


2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)

The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields are increasingly important to the creation of intelligent environments in which technologies interact with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.


2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)

Cluster Computing, Grid Computing, Edge Computing, Cloud Computing, Parallel Computing, Distributed Computing


2019 28th International Conference on Parallel Architectures and Compilation Techniques (PACT)

PACT brings together researchers from architecture, compilers, applications, and languages to present and discuss innovative research of common interest.


2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)

The conference is the primary forum for cross-industry and multidisciplinary research in automation. Its goal is to provide a broad coverage and dissemination of foundational research in automation among researchers, academics, and practitioners.


2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

The IEEE ICCI*CC series is a flagship conference of its field. It not only synergizes theories of modern information science, computer science, communication theories, AI, cybernetics, computational intelligence, cognitive science, intelligence science, neuropsychology, brain science, systems science, software science, knowledge science, cognitive robots, cognitive linguistics, and life science, but also promotes novel applications in cognitive computers, cognitive communications, computational intelligence, cognitive robots, cognitive systems, and the AI, IT, and software industries.

  • 2018 IEEE 17th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Informatics models of the brain; Cognitive processes of the brain; The cognitive foundation of big data; Machine consciousness; Neuroscience foundations of information processing; Denotational mathematics (DM); Cognitive knowledge bases; Autonomous machine learning; Neural models of memory; Internal information processing; Cognitive sensors and networks; Cognitive linguistics; Abstract intelligence (aI); Cognitive information theory; Cognitive information fusion; Cognitive computers; Cognitive systems; Cognitive man-machine communication; Cognitive Internet; World-Wide Wisdoms (WWW+); Mathematical engineering for AI; Cognitive vehicle systems; Semantic computing; Distributed intelligence; Mathematical models of AI; Cognitive signal processing; Cognitive image processing; Artificial neural nets; Genetic computing; MATLAB models of AI; Brain-inspired systems; Neuroinformatics; Neurological foundations of the brain; Software simulations of the brain; Brain-system interfaces; Neurocomputing; eBrain models

  • 2017 IEEE 16th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive Informatics (CI) is a transdisciplinary field that studies the internal information processing mechanisms of the brain, the underlying abstract intelligence theories and denotational mathematics, and their engineering applications in cognitive computing, computational intelligence, and cognitive systems. Cognitive Computing (CC) is a cutting-edge paradigm of intelligent computing methodologies and systems based on CI, which implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. CI and CC not only synergize theories of modern information science, computer science, communication theories, AI, cybernetics, computational intelligence, cognitive science, intelligence science, neuropsychology, brain science, systems science, software science, knowledge science, cognitive robots, cognitive linguistics, and life science, but also reveal exciting applications in cognitive computers, cognitive robots, and computational intelligence.

  • 2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive Informatics (CI) is a transdisciplinary field that studies the internal information processing mechanisms of the brain, the underlying abstract intelligence (aI) theories and denotational mathematics, and their engineering applications in cognitive computing, computational intelligence, and cognitive systems. Cognitive Computing (CC) is a cutting-edge paradigm of intelligent computing methodologies and systems based on cognitive informatics, which implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain.

  • 2015 IEEE 14th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    The scope of the conference covers cognitive informatics, cognitive computing, cognitive communications, computational intelligence, and computational linguistics.

  • 2014 IEEE 13th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive informatics, cognitive computing, cognitive science, cognitive robots, artificial intelligence, computational intelligence

  • 2013 12th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive Informatics (CI) is a cutting-edge and multidisciplinary research field that tackles the fundamental problems shared by modern informatics, computing, AI, cybernetics, computational intelligence, cognitive science, intelligence science, neuropsychology, brain science, systems science, software engineering, knowledge engineering, cognitive robots, scientific philosophy, cognitive linguistics, life sciences, and cognitive computing.

  • 2012 11th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive Informatics and Cognitive Computing are a transdisciplinary enquiry into the internal information processing mechanisms and processes of the brain and their engineering applications in cognitive computers, computational intelligence, cognitive robots, cognitive systems, and the AI, IT, and software industries. The 11th IEEE Int'l Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC'12) focuses on the theme of e-Brain and Cognitive Computers.

  • 2011 10th IEEE International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)

    Cognitive Informatics and Cognitive Computing are a transdisciplinary enquiry into the internal information processing mechanisms and processes of the brain and their engineering applications in cognitive computers, computational intelligence, cognitive robots, cognitive systems, and the AI, IT, and software industries. The 10th IEEE Int'l Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC'11) focuses on the theme of Cognitive Computers and the e-Brain.

  • 2010 9th IEEE International Conference on Cognitive Informatics (ICCI)

    Cognitive Informatics (CI) is a cutting-edge and transdisciplinary research area that tackles the fundamental problems shared by modern informatics, computing, AI, cybernetics, computational intelligence, cognitive science, neuropsychology, medical science, systems science, software engineering, telecommunications, knowledge engineering, philosophy, linguistics, economics, management science, and life sciences.

  • 2009 8th IEEE International Conference on Cognitive Informatics (ICCI)

    The 8th IEEE International Conference on Cognitive Informatics (ICCI'09) focuses on the theme of Cognitive Computing and Semantic Mining. The objectives of ICCI'09 are to draw the attention of researchers, practitioners, and graduate students to the investigation of cognitive mechanisms and processes of human information processing, and to stimulate the international effort on cognitive informatics research and engineering applications.

  • 2008 7th IEEE International Conference on Cognitive Informatics (ICCI)

    The 7th IEEE International Conference on Cognitive Informatics (ICCI'08) focuses on the theme of Cognitive Computers and Computational Intelligence. The objectives of ICCI'08 are to draw the attention of researchers, practitioners, and graduate students to the investigation of cognitive mechanisms and processes of human information processing, and to stimulate the international effort on cognitive informatics research and engineering applications.

  • 2007 6th IEEE International Conference on Cognitive Informatics (ICCI)

  • 2006 5th IEEE International Conference on Cognitive Informatics (ICCI)

  • 2005 4th IEEE International Conference on Cognitive Informatics (ICCI)


More Conferences

Periodicals related to Distributed Parallel Architecture


Automatic Control, IEEE Transactions on

The theory, design and application of Control Systems. It shall encompass components, and the integration of these components, as are necessary for the construction of such systems. The word `systems' as used herein shall be interpreted to include physical, biological, organizational and other entities and combinations thereof, which can be represented through a mathematical symbolism. The Field of Interest: shall ...


Computer

Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.


Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on

Methods, algorithms, and human-machine interfaces for physical and logical design, including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, and documentation of integrated-circuit and systems designs of all complexities. Practical applications of aids resulting in producible analog, digital, optical, or microwave integrated circuits are emphasized.


Computers, IEEE Transactions on

Design and analysis of algorithms, computer systems, and digital networks; methods for specifying, measuring, and modeling the performance of computers and computer systems; design of computer components, such as arithmetic units, data storage devices, and interface devices; design of reliable and testable digital devices and systems; computer networks and distributed computer systems; new computer organizations and architectures; applications of VLSI ...


Computing in Science & Engineering

Physics, medicine, astronomy—these and other hard sciences share a common need for efficient algorithms, system software, and computer architecture to address large computational problems. And yet, useful advances in computational techniques that could benefit many researchers are rarely shared. To meet that need, Computing in Science & Engineering (CiSE) presents scientific and computational contributions in a clear and accessible format. ...


More Periodicals

Most published Xplore authors for Distributed Parallel Architecture


Xplore Articles related to Distributed Parallel Architecture


A simulation based optimization approach for scheduling of a semiconductor manufacturing system

2014 IEEE International Conference on System Science and Engineering (ICSSE), 2014

As an important and challenging problem, the scheduling of semiconductor manufacturing is a hot topic in both industry and academia. Its purpose is to satisfy production constraints on time, cost, and quality while optimizing performance indexes such as cycle time, movement, and WIP. However, due to the complexity of semiconductor manufacturing systems, conventional technologies and methods can hardly solve ...
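
The full abstract (reproduced under IEEE-USA E-Books below) couples a genetic algorithm with simulation runs evaluated on a distributed/parallel architecture. The following is a minimal, hypothetical Python sketch of that pattern only: the "simulation" is a toy single-tool tardiness model rather than a fab model, and every name and parameter here is illustrative, not taken from the paper.

    # Sketch of simulation-based optimization (SBO) scheduling: a genetic
    # algorithm whose fitness evaluations (simulation runs) are farmed out
    # to parallel worker processes.  The simulator is a toy stand-in.
    import random
    from multiprocessing import Pool

    random.seed(7)                 # keep workers' module-level data identical
    JOBS = 20                      # number of lots to sequence
    PROC = [random.uniform(1, 5) for _ in range(JOBS)]    # processing times
    DUE  = [random.uniform(10, 60) for _ in range(JOBS)]  # due dates

    def simulate(sequence):
        """Toy simulation: run jobs in the given order on one tool and
        return total tardiness (lower is better)."""
        t, tardiness = 0.0, 0.0
        for j in sequence:
            t += PROC[j]
            tardiness += max(0.0, t - DUE[j])
        return tardiness

    def crossover(a, b):
        """Order crossover for job permutations."""
        i, k = sorted(random.sample(range(JOBS), 2))
        child = a[i:k]
        child += [j for j in b if j not in child]
        return child

    def mutate(seq):
        i, k = random.sample(range(JOBS), 2)
        seq[i], seq[k] = seq[k], seq[i]
        return seq

    if __name__ == "__main__":
        pop = [random.sample(range(JOBS), JOBS) for _ in range(40)]
        with Pool() as pool:                       # parallel simulation runs
            for gen in range(50):
                fitness = pool.map(simulate, pop)  # one simulation per candidate
                ranked = [s for _, s in sorted(zip(fitness, pop))]
                elite = ranked[:10]
                pop = elite + [mutate(crossover(random.choice(elite),
                                                random.choice(elite)))
                               for _ in range(30)]
        print("best total tardiness:", simulate(ranked[0]))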


Distributed Network Resources Monitoring Based on Multi-agent and Matrix Grammar

2011 Fourth International Symposium on Parallel Architectures, Algorithms and Programming, 2011

Network resources monitoring and management is critical to ensuring the security and load balance of networks and information systems, especially in the increasingly widespread cloud computing and distributed parallel architectures. This paper presents a distributed network resources monitoring solution based on multi-agent techniques and a matrix grammar. A distributed multi-agent architecture for network resources monitoring is described. The paper proposes a generic ...
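
As a rough illustration of the multi-agent monitoring idea, the hypothetical sketch below spawns one agent process per simulated host, each reporting metrics to a central collector. The small MONITOR_MATRIX table is only a toy stand-in for the paper's matrix grammar, and real agents would probe devices via SNMP/WMI/CIM rather than generate random values.

    # Sketch of a multi-agent monitoring loop: each "agent" process samples
    # metrics for one (simulated) host and pushes them to a central collector.
    import random
    import time
    from multiprocessing import Process, Queue

    # metric name, sampling period (s), alarm threshold  (hypothetical rules)
    MONITOR_MATRIX = [
        ("cpu_load",  1.0, 0.90),
        ("mem_usage", 1.0, 0.85),
    ]

    def agent(host, queue, rounds=5):
        """One monitoring agent: sample every metric in the table and report."""
        for _ in range(rounds):
            for metric, period, threshold in MONITOR_MATRIX:
                value = random.random()          # stand-in for a real probe
                queue.put((host, metric, value, value > threshold))
                time.sleep(period / len(MONITOR_MATRIX))

    def collector(queue, expected):
        """Central node: gather reports from all agents and flag alarms."""
        for _ in range(expected):
            host, metric, value, alarm = queue.get()
            flag = "ALARM" if alarm else "ok"
            print(f"{host:>6} {metric:<10} {value:5.2f} {flag}")

    if __name__ == "__main__":
        q = Queue()
        hosts = ["web01", "db01", "app01"]
        agents = [Process(target=agent, args=(h, q)) for h in hosts]
        for p in agents:
            p.start()
        collector(q, expected=len(hosts) * 5 * len(MONITOR_MATRIX))
        for p in agents:
            p.join()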


An Image Retrieval System Based on Parallel Architecture

2010 Third International Conference on Knowledge Discovery and Data Mining, 2010

How to quickly find the images a user needs is a key question; it is important to build an index for the image database. It is possible to retrieve images from the database using a unique identifier assigned by a human operator as an index, but it is more reasonable to index images based on their contents. The principle of a Content-Based Image Retrieval system is ...
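
The full abstract (below) attributes the speed-up to parallelising the visual-feature extraction step. The following hypothetical Python sketch shows that structure with a process pool and a simple intensity-histogram feature; the images, feature, and distance measure are placeholders, not the paper's.

    # Sketch of content-based image retrieval with the feature-extraction
    # step parallelised across worker processes.  Images are random arrays
    # standing in for decoded pictures.
    import numpy as np
    from multiprocessing import Pool

    def extract_feature(image):
        """Per-image visual feature: a normalised 32-bin intensity histogram."""
        hist, _ = np.histogram(image, bins=32, range=(0, 256))
        return hist / hist.sum()

    def make_image(seed):
        return np.random.default_rng(seed).integers(0, 256, size=(64, 64))

    if __name__ == "__main__":
        database = [make_image(i) for i in range(1000)]

        # Feature extraction is the expensive step, so fan it out to workers.
        with Pool() as pool:
            index = np.array(pool.map(extract_feature, database))

        # Query: nearest neighbour in feature space (Euclidean distance).
        query = extract_feature(make_image(42))
        distances = np.linalg.norm(index - query, axis=1)
        print("best match: image", int(np.argmin(distances)))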


A high performance face recognition system based on a huge face database

2005 International Conference on Machine Learning and Cybernetics, 2005

This paper presents a high performance face recognition system in which the face database holds 2.5 million faces. With a database this large, conventional recognition approaches meet great difficulties: the identification rate of most algorithms may decline significantly, and querying such a large-scale database may be quite time-consuming. In our system, ...
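
One way to picture the distributed matching described here is to shard the gallery of projected face vectors across workers and merge their local best matches. The sketch below does exactly that with random vectors and plain Euclidean distance; these stand in for the paper's MMP-PCA features and PC-server cluster and are purely illustrative.

    # Sketch of sharded nearest-neighbour matching over a large gallery of
    # (already projected) face feature vectors.
    import numpy as np
    from multiprocessing import Pool

    DIM = 128                                  # length of a projected face vector
    rng = np.random.default_rng(0)
    GALLERY = rng.standard_normal((100_000, DIM)).astype(np.float32)

    def best_in_shard(args):
        """Search one shard of the gallery; return (distance, global index)."""
        start, stop, probe = args
        shard = GALLERY[start:stop]
        d = np.linalg.norm(shard - probe, axis=1)
        i = int(np.argmin(d))
        return float(d[i]), start + i

    if __name__ == "__main__":
        # probe: a slightly perturbed copy of one enrolled face vector
        probe = GALLERY[12345] + 0.01 * rng.standard_normal(DIM).astype(np.float32)
        shards = 8
        bounds = np.linspace(0, len(GALLERY), shards + 1, dtype=int)
        tasks = [(int(bounds[k]), int(bounds[k + 1]), probe) for k in range(shards)]

        with Pool(shards) as pool:             # one process per gallery shard
            dist, idx = min(pool.map(best_in_shard, tasks))
        print(f"identified gallery entry {idx} at distance {dist:.3f}")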


Design and analysis of a parallel architecture for distributed radar simulation

2011 IEEE International Conference on Computer Science and Automation Engineering, 2011

Here we introduce a distributed parallel architecture for radar system simulation based on radar system theory. In our system, we adopt a modular design, put forward a system model with distributed and expandable characteristics, introduce a pipelining technique based on two levels of time policies (scheduling interval and frame unit) into our design, compare the time ...
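
A minimal sketch of the pipelining idea, under the assumption that the simulation decomposes into an echo-generation stage and a signal-processing stage exchanging per-frame data: the two stages below run as separate processes connected by a bounded queue, so frame k+1 is generated while frame k is still being processed. The stage names and frame contents are illustrative only, not the paper's model.

    # Two-stage pipelined simulation: generator and processor overlap in time
    # instead of running sequentially.
    import numpy as np
    from multiprocessing import Process, Queue

    FRAMES = 32          # frame unit: one coherent processing interval
    SAMPLES = 4096       # samples per frame

    def echo_generator(out_q):
        """Stage 1: produce one simulated echo frame per scheduling interval."""
        rng = np.random.default_rng(1)
        for k in range(FRAMES):
            frame = rng.standard_normal(SAMPLES)
            out_q.put((k, frame))
        out_q.put(None)                          # end-of-simulation marker

    def signal_processor(in_q):
        """Stage 2: consume frames as soon as they arrive (pipelining)."""
        while True:
            item = in_q.get()
            if item is None:
                break
            k, frame = item
            power = float(np.mean(frame ** 2))   # stand-in for pulse compression etc.
            print(f"frame {k:2d}: mean power {power:.3f}")

    if __name__ == "__main__":
        q = Queue(maxsize=4)                     # bounded buffer between stages
        stages = [Process(target=echo_generator, args=(q,)),
                  Process(target=signal_processor, args=(q,))]
        for p in stages:
            p.start()
        for p in stages:
            p.join()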


More Xplore Articles

Educational Resources on Distributed Parallel Architecture


IEEE.tv Videos

Introduction to Chip Multiprocessor Architecture
IEEE GLOBECOM'13 Keynote Address - Lew Tucker, CTO, Cisco Systems
Future of Computing: Memory/Storage - Steve Pawlowski - ICRC San Mateo, 2019
Parallel Quantum Computing Emulation - Brian La Cour - ICRC 2018
IMS 2012 Microapps - Practical Electromagnetic Modeling of Parallel Plate Capacitors at High Frequency
Shaping the Future of Quantum Computing - Suhare Nur - ICRC San Mateo, 2019
Reconfigurable Distributed MIMO for Physical-layer Security - Zygmunt Haas - IEEE Sarnoff Symposium, 2019
IMS2013 Micro-Apps 2013: Parallel Processing Options for EM Simulation
Neural Processor Design Enabled by Memristor Technology - Hai Li: 2016 International Conference on Rebooting Computing
SOC DESIGN METHODOLOGY FOR IMPROVED ROBUSTNESS
An Integrated Optical Parallel Multiplier Exploiting Approximate Binary Logarithms - Jun Shiomi - ICRC 2018
Mobile Transport for 5G RAN - Rajesh Chundury - IEEE Sarnoff Symposium, 2019
Keynote 2: Exploring New Technologies for 5G/Future Networks - Dilip Krishnaswamy - India Mobile Congress, 2018
IMS MicroApps: The Finite-Element Method
Multiple Sensor Fault Detection and Isolation in Complex Distributed Dynamical Systems
Patrizio Vinciarelli, Newell Award: APEC 2019
IMS MicroApps: AWR's iFilter
A 32GHz 20dBm-PSAT Transformer-Based Doherty Power Amplifier for MultiGb/s 5G Applications in 28nm Bulk CMOS: RFIC Interactive Forum 2017
APEC Speaker Highlights: Ron Van Dell
A Spike-Timing Neuromorphic Architecture: IEEE Rebooting Computing 2017

IEEE-USA E-Books

  • A simulation based optimization approach for scheduling of a semiconductor manufacturing system

    As an important and challenging problem, the scheduling of semiconductor manufacturing is a hot topic in both industry and academia. Its purpose is to satisfy production constraints on time, cost, and quality while optimizing performance indexes such as cycle time, movement, and WIP. However, due to the complexity of semiconductor manufacturing systems, conventional technologies and methods can hardly solve this kind of scheduling problem. This paper proposes a new scheduling approach based on simulation-based optimization (SBO). To address the high computational cost, in both CPU time and memory space, that could hinder the application of SBO scheduling in practice, a distributed/parallel architecture is discussed. With a genetic algorithm as the optimization algorithm, the proposed SBO-based scheduling approach for semiconductor manufacturing systems is tested for feasibility and effectiveness.

  • Distributed Network Resources Monitoring Based on Multi-agent and Matrix Grammar

    Network resources monitoring and management is critical to ensuring the security and load balance of networks and information systems, especially in the increasingly widespread cloud computing and distributed parallel architectures. This paper presents a distributed network resources monitoring solution based on multi-agent techniques and a matrix grammar. A distributed multi-agent architecture for network resources monitoring is described. The paper proposes a generic matrix grammar that uses WMI, CIM, and SNMP to remotely collect and manage data from network components. The matrix grammar provides a generic mechanism to describe what is to be monitored and how to collect and process the data. A monitoring automation engine consisting of a matrix analyzer and a recipe processor is described. The proposed solution has good extensibility and scalability, and enables monitoring automation and software reusability.

  • An Image Retrieval System Based on Parallel Architecture

    How to quickly find the images a user needs is a key question; it is important to build an index for the image database. It is possible to retrieve images from the database using a unique identifier assigned by a human operator as an index, but it is more reasonable to index images based on their contents. The principle of a Content-Based Image Retrieval system is to retrieve images based on the content of the images. One of the important components of such a system is extracting the visual features of the images for more abstract analysis. However, some of these features are computationally expensive. To solve this issue, a flexible distributed parallel architecture is proposed to improve the feature extraction time. This architecture also gives the software system the flexibility to add and remove visual features.

  • A high performance face recognition system based on a huge face database

    This paper presents a high performance face recognition system in which the face database holds 2.5 million faces. With a database this large, conventional recognition approaches meet great difficulties: the identification rate of most algorithms may decline significantly, and querying such a large-scale database may be quite time-consuming. In our system, a special distributed parallel architecture is proposed to speed up the computation. Furthermore, a multimodal part face recognition method based on principal component analysis (MMP-PCA) is adopted to perform the recognition task, and MMX technology is introduced to accelerate the matching procedure. Practical results show that this system has excellent recognition performance: when searching among 2,560,000 faces on 6 PC servers with Xeon 2.4 GHz CPUs, the querying time is only 1.094 s and the identification rate is above 85% in most cases. Moreover, the greatest advantage of this system is that it not only increases recognition speed but also removes the upper limit on face data capacity; consequently, the face data capacity of the system can be extended to an arbitrarily large amount.

  • Design and analysis of a parallel architecture for distributed radar simulation

    Here we introduce a distributed parallel architecture for radar system simulation based on radar system theory. In our system, we adopt a modular design, put forward a system model with distributed and expandable characteristics, introduce a pipelining technique based on two levels of time policies (scheduling interval and frame unit) into our design, compare the simulation time of the sequential system and the parallel system, and finally give our conclusions.

  • Training a Probabilistic Graphical Model With Resistive Switching Electronic Synapses

    Current large-scale implementations of deep learning and data mining require thousands of processors, massive amounts of off-chip memory, and consume gigajoules of energy. New memory technologies, such as nanoscale two-terminal resistive switching memory devices, offer a compact, scalable, and low-power alternative that permits on-chip colocated processing and memory in a fine-grain distributed parallel architecture. Here, we report the first use of resistive memory devices for implementing and training a restricted Boltzmann machine (RBM), a generative probabilistic graphical model that is a key component of unsupervised learning in deep networks. We experimentally demonstrate a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bioinspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight updates. The resistive PCM devices show a twofold to tenfold reduction in error rate in a missing-pixel pattern completion task trained over 30 epochs, compared with the untrained case. Measured programming energy consumption is 6.1 nJ per epoch with the PCM devices, a factor of about 150 lower than in conventional processor-memory systems. We analyze and discuss the dependence of learning performance on cycle-to-cycle variations and the number of gradual levels in the PCM analog memory devices. (A toy software sketch of the contrastive-divergence update appears after this list.)

  • A high-performance universal miniature radar system

    This paper proposes the design and realization of a high-performance universal miniature radar system. It presents a sound solution to the main challenges of such radar systems, including the extremely large data flow and computational burden, the traditional custom-built pattern of radar systems, and the strict limitations on the size, weight, and power consumption of airborne or space-borne real-time Synthetic Aperture Radar (SAR) signal processing systems. The system shows the virtues of standardization, modularization, stability, reconfiguration, and good adaptability owing to the combined application of a distributed parallel architecture and the latest interconnection standards and processors. The successful application cases of airborne SAR/GMTI and space-borne imaging adequately demonstrate its high performance, universality, and miniaturization.

  • Practical distributed computation of maximal exact matches in the cloud

    Computation of maximal exact matches (MEMs) is an important problem in comparing genomic sequences. Optimal sequential algorithms for computing MEMs have already been introduced and integrated into a number of software tools. To cope with large data and exploit new computing paradigms like cloud computing, it is important to develop efficient and ready-to-use solutions running on distributed parallel architectures. In a previous work, we introduced a distributed algorithm, running on a computer cluster, for computing MEMs. In this paper, we extend this work in two directions: first, we introduce new variants of this algorithm, one of which has a better time complexity than the published one; these variants, as we demonstrate by experiments, are faster in practice. Second, we introduce a cloud-based implementation in which we automate the process of creating and configuring the cluster, submitting the jobs, and finally collecting the results and terminating the cloud machines. (A toy sketch of the partition-and-merge structure of such a computation appears after this list.)
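
The restricted Boltzmann machine item above trains with a contrastive-divergence variant mapped onto Hebbian and anti-Hebbian device updates. For readers unfamiliar with the rule, here is a toy numpy sketch of a standard CD-1 update on a small RBM; it is software-only, uses made-up sizes and data, and does not model the phase-change-memory hardware or the paper's bioinspired variant.

    # Toy CD-1 (contrastive divergence) update for a small RBM.
    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 16, 8, 0.1
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0):
        """One contrastive-divergence step on a batch of visible vectors v0."""
        global W, b_v, b_h
        # positive phase (data-driven, "Hebbian")
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # one step of Gibbs sampling: reconstruct, then re-infer hidden units
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # negative phase (model-driven, "anti-Hebbian"), then weight update
        W   += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        b_v += lr * np.mean(v0 - p_v1, axis=0)
        b_h += lr * np.mean(p_h0 - p_h1, axis=0)
        return float(np.mean((v0 - p_v1) ** 2))   # reconstruction error

    if __name__ == "__main__":
        data = (rng.random((64, n_visible)) < 0.3).astype(float)   # toy patterns
        for epoch in range(30):
            err = cd1_step(data)
        print(f"reconstruction error after training: {err:.4f}")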
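
The maximal-exact-matches item above distributes MEM computation over a cluster or cloud machines. The toy sketch below illustrates only the partition-and-merge structure: query positions are split across local worker processes, each extends k-mer seed hits into maximal matches, and the master merges the results. Real tools rely on suffix arrays or compressed indexes; the sequences, minimum length, and function names here are made up for illustration.

    # Toy distributed MEM computation: partition the query, extend seeds,
    # merge the per-chunk match sets.
    from collections import defaultdict
    from multiprocessing import Pool

    MIN_LEN = 5                                   # report MEMs at least this long
    REF   = "ACGTACGTTTACGGATCCACGTACGTA"
    QUERY = "TTACGGATCCACGTACGTACGTTT"

    # index every MIN_LEN-mer of the reference (done once, shared by workers)
    KMER_INDEX = defaultdict(list)
    for i in range(len(REF) - MIN_LEN + 1):
        KMER_INDEX[REF[i:i + MIN_LEN]].append(i)

    def mems_in_range(bounds):
        """Find MEMs whose query-side seed starts inside [start, stop)."""
        start, stop = bounds
        found = set()
        for j in range(start, min(stop, len(QUERY) - MIN_LEN + 1)):
            for i in KMER_INDEX.get(QUERY[j:j + MIN_LEN], ()):
                # extend the seed maximally to the left ...
                a, b = i, j
                while a > 0 and b > 0 and REF[a - 1] == QUERY[b - 1]:
                    a, b = a - 1, b - 1
                # ... and to the right
                e = MIN_LEN + (i - a)
                while a + e < len(REF) and b + e < len(QUERY) and REF[a + e] == QUERY[b + e]:
                    e += 1
                found.add((a, b, e))              # (ref pos, query pos, length)
        return found

    if __name__ == "__main__":
        workers = 4
        step = max(1, len(QUERY) // workers)
        chunks = [(k, k + step) for k in range(0, len(QUERY), step)]
        with Pool(workers) as pool:
            mems = set().union(*pool.map(mems_in_range, chunks))
        for a, b, length in sorted(mems):
            print(f"REF[{a}:{a+length}] == QUERY[{b}:{b+length}]  ({REF[a:a+length]})")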



Standards related to Distributed Parallel Architecture


No standards are currently tagged "Distributed Parallel Architecture"


Jobs related to Distributed Parallel Architecture
