Conferences related to Data preprocessing


2015 IEEE International Parallel and Distributed Processing Symposium (IPDPS)

IPDPS is an international forum for engineers and scientists from around the world to present their latest research findings in all aspects of parallel processing.

  • 2013 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)

    Parallel and distributed algorithms, focusing on stability, scalability, and fault-tolerance. Applications of parallel and distributed computing, including web, peer-to-peer, cloud, grid, scientific, and mobile computing. Parallel and distributed architectures including instruction-level and thread-level parallelism; petascale and exascale systems designs. Parallel and distributed software, including parallel and multicore programming languages, compilers, runtime systems, operating systems, and middleware for grids and clouds.

  • 2011 IEEE International Parallel & Distributed Processing Symposium (IPDPS)

    IPDPS is an international forum for engineers and scientists from around the world to present their latest research findings in all aspects of parallel computation. In addition to technical sessions of submitted paper presentations, the meeting offers workshops, tutorials, and commercial presentations & exhibits. IPDPS represents a unique international gathering of computer scientists from around the world.

  • 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)


2013 IEEE 11th International Conference on Industrial Informatics (INDIN)

The aim of the conference is to bring together researchers and practitioners from industry and academia and provide them with a platform to report on recent developments, deployments, technology trends and research results, as well as initiatives related to industrial informatics and their application.


2012 4th Conference on Data Mining and Optimization (DMO)

The scope of the conference includes, but is not limited to, the following subjects: Parallel and distributed data mining algorithms, Data streams mining, Graph mining, Spatial data mining, Text & multimedia mining, Web mining, Pre-processing techniques, etc. Linear/Nonlinear Optimization, Integer/Combinatorial Optimization, Metaheuristics, Network Optimization, Scheduling Problems and Stochastic Optimization.

  • 2011 3rd Conference on Data Mining and Optimization (DMO)

    Data and text mining tasks such as classification, prediction, clustering, association rules mining, etc. Data mining techniques such as neural networks, genetic algorithms, artificial immune systems, etc. Automated scheduling and planning models, heuristics and algorithms. Optimization problems including scheduling, timetabling, manufacturing, logistics, space allocation, anomaly detection, bioinformatics, etc.

  • 2009 2nd Conference on Data Mining and Optimization (DMO 2009)

    Data & text mining tasks such as classification, prediction, clustering, etc. Data & text mining techniques such as neural networks, genetic algorithms, and other soft computing techniques. Data & text mining applications in medicine, healthcare, and other fields. Optimization techniques for data & text mining. Optimization algorithms such as local search, metaheuristic search, heuristic search, and others. Applications of optimization techniques such as shop-floor scheduling, sports scheduling, and timetabling.


2012 4th International Conference on Intelligent & Advanced Systems (ICIAS)

Sensor Technology - Nonlinear Circuits & Systems - Signal Processing - Instrumentation & Control Systems - Communications Systems - Image Processing & Multimedia Systems - Biomedical Systems - VLSI & Embedded Systems - Power Electronics & Power Systems - Computational & Artificial Intelligence

  • 2010 International Conference on Intelligent & Advanced Systems (ICIAS)

    Theory & Systems - Neural Networks & Systems - Artificial Intelligence - Computational Methods - Non-linear Circuits & Systems - Signal Processing - Wavelet & Filter Banks - Analog & Digital Systems - Sensory & Control Systems - Communication Systems - Image Processing & Multimedia Systems - VLSI & Embedded Systems - Biomedical Systems - Power Electronics & Power Systems

  • 2007 International Conference on Intelligent & Advanced Systems (ICIAS)

    ICIAS 2007 aims at bringing together experts and researchers working in the area of advanced and intelligent systems. The last few decades have seen a proliferation of many kinds of systems, due mainly to advances in the theory, analysis, and design techniques of circuits and systems. These systems have found applications in biomedicine, communication engineering, giga-scale systems, nanotechnology, and power electronics.


2012 IEEE 13th International Conference on Information Reuse & Integration (IRI)

Given the volumes of information in digital form, we are constantly faced with new challenges in using it efficiently and extracting useful knowledge from it. Information reuse and integration (IRI) seeks to maximally exploit such available information to create new knowledge and to reuse it for addressing newer challenges. It plays a pivotal role in the capture, maintenance, integration, validation, extrapolation, and application of knowledge to augment human decision-making capabilities.

  • 2011 IEEE International Conference on Information Reuse & Integration (IRI)


  • 2010 IEEE International Conference on Information Reuse & Integration (2010 IRI)



More Conferences

Periodicals related to Data preprocessing


Knowledge and Data Engineering, IEEE Transactions on

Artificial intelligence techniques, including speech, voice, graphics, images, and documents; knowledge and data engineering tools and techniques; parallel and distributed processing; real-time distributed processing; system architectures, integration, and modeling; database design, modeling, and management; query design and implementation languages; distributed database control; statistical databases; algorithms for data and knowledge management; performance evaluation of algorithms and systems; data communications aspects; system ...


Parallel and Distributed Systems, IEEE Transactions on

IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. Topic areas include, but are not limited to, the following: a) architectures: design, analysis, and implementation of multiple-processor systems (including multiprocessors, multicomputers, and networks); impact of VLSI on system design; interprocessor communications; b) software: parallel languages and compilers; scheduling and task partitioning; databases, operating systems, and programming environments for ...


Potentials, IEEE

This award-winning magazine for technology professionals explores career strategies, the latest research and important technical developments. IEEE Potentials covers theories to practical applications and highlights technology's global impact.




Xplore Articles related to Data preprocessing


Pre-processing stereo transparent images: extraction of non-transparent regions by variable length pattern correspondence

R. E. Frye; R. S. Ledley. Proceedings of the 1996 Fifteenth Southern Biomedical Engineering Conference, 1996

One of the hallmarks of a transparent image is the superimposition of structures in the image. This gives the image its "see-through" character. However, portions of a transparent image can be considered non-transparent if no superimposed structures are present. By defining a new type of pixel uniqueness, which the authors call "pattern uniqueness", the non-transparent portions of transparent images ...


The study of CDM-BSC-based data mining driven fishbone applied for data processing

Zhang Yun. 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), 2015

Data Mining Driven Fishbone (DMDF), a newly coined term, enhances the abstract concept of a multidimensional, fishbone-shaped data flow applied to data processing, in order to optimize the process and structure of data management and data mining. CDM-BSC (CRISP-DM combined with the Balanced Scorecard) is developed from a combination of traditional data processing methodology and the BSC used in performance measurement systems. End-to-end DMDF ...


Original TV coding scheme for 34 Mbit/s

J. Ronsin. 1989 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-89), 1989

The author describes a coding scheme for color television signals over 34 Mb/s channels. Spatio-temporal blocks are used to reduce both the spatial and temporal redundancies within the blocks; the blocks are independent of one another and of fixed size (5×3×4). Spatial interpolation and temporal prediction are used, along with luminance masking on itself as well as on the chrominance ...


Motion Segmentation through Incremental Hierarchical Clustering

Syed Asim Ali Shah; M. Usman Naseem; Saif-ur-Rehman; Asim Karim. 2006 IEEE International Multitopic Conference, 2006

Motion segmentation is a key step in many applications such as video surveillance, medical decision support, and target tracking. Motion segmentation is challenging because of the large amounts of data to be processed and the real-time requirements of the applications. The k-means clustering algorithm has often been used for motion segmentation. However, the k-means algorithm is computationally expensive and requires ...
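The k-means step the abstract refers to can be illustrated with a minimal sketch; the `kmeans` helper and the synthetic motion vectors below are illustrative assumptions, not taken from the paper.

```python
def kmeans(points, k, iters=10):
    """Plain k-means on 2-D points: assign each point to the nearest
    centroid, recompute centroids as cluster means, and repeat."""
    n = len(points)
    # deterministic initialisation for the demo: evenly spaced data points
    centroids = [points[i * (n - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                      + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        # keep the old centroid if a cluster goes empty
        centroids = [(sum(x for x, _ in c) / len(c),
                      sum(y for _, y in c) / len(c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# two synthetic groups of motion vectors: near-static background vs. a moving object
background = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.0)]
moving = [(5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = kmeans(background + moving, k=2)
print([len(c) for c in clusters])  # → [3, 3]
```

With well-separated motion vectors the two clusters recover the background and the moving region; the paper's point is that this repeated full-batch pass is costly for real-time video, motivating incremental hierarchical clustering instead.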


Adaptive estimation of noise covariance matrices in real-time preprocessing of geophysical data

G. Noriega; S. Pasupathy. IEEE Transactions on Geoscience and Remote Sensing, 1997

Modern data acquisition systems record large volumes of data that are often not suitable for direct computer processing: a first stage of preprocessing (or data "editing") is usually needed. In earlier work (G. Noriega et al., 1992), the authors developed an algorithm for multichannel data preprocessing, based on Kalman filtering and suitable for real-time geophysical data collection applications. The present ...
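The innovation-based data "editing" the abstract describes can be sketched with a scalar random-walk Kalman filter that flags samples whose innovation exceeds a few standard deviations; the `kalman_edit` helper, its noise parameters, and the toy stream are illustrative assumptions, not the authors' multichannel algorithm.

```python
def kalman_edit(samples, q=1e-3, r=0.1, nsigma=3.0):
    """Scalar Kalman filter with a random-walk state model (process
    variance q, measurement variance r). Samples whose innovation
    exceeds nsigma standard deviations are flagged as outliers."""
    x, p = samples[0], 1.0        # state estimate and its variance
    flagged = []
    for i, z in enumerate(samples[1:], start=1):
        p_pred = p + q            # predict step (random walk)
        s = p_pred + r            # innovation variance
        innov = z - x             # innovation (measurement residual)
        if innov * innov > (nsigma ** 2) * s:
            flagged.append(i)     # reject the sample, keep the prediction
            p = p_pred
            continue
        k = p_pred / s            # Kalman gain
        x = x + k * innov         # update step
        p = (1.0 - k) * p_pred
    return flagged

stream = [1.0, 1.1, 0.9, 1.05, 9.0, 1.0, 0.95]  # one spike at index 4
print(kalman_edit(stream))  # → [4]
```

In the authors' method the noise covariances (held fixed as q and r here) are themselves estimated adaptively, which is what makes such editing workable on changing field data.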


More Xplore Articles

Educational Resources on Data preprocessing


eLearning



More eLearning Resources

IEEE.tv Videos

No IEEE.tv Videos are currently tagged "Data preprocessing"

IEEE-USA E-Books

  • References

    The growing interest in data mining is motivated by a common problem across disciplines: how does one store, access, model, and ultimately describe and understand very large data sets? Historically, different aspects of data mining have been addressed independently by different disciplines. This is the first truly interdisciplinary text on data mining, blending the contributions of information science, computer science, and statistics. The book consists of three sections. The first, foundations, provides a tutorial overview of the principles underlying data mining algorithms and their application. The presentation emphasizes intuition rather than rigor. The second section, data mining algorithms, shows how algorithms are constructed to solve specific problems in a principled manner. The algorithms covered include trees and rules for classification and regression, association rules, belief networks, classical statistical models, nonlinear models such as neural networks, and local "memory-based" models. The third section shows how all of the preceding analysis fits together when applied to real-world data mining problems. Topics include the role of metadata, how to handle missing data, and data preprocessing.

  • Index


  • Generalized Maximum Margin Clustering and Unsupervised Kernel Learning

    Maximum margin clustering was proposed recently and has shown promising performance in recent studies [1, 2]. It extends the theory of support vector machines to unsupervised learning. Despite its good performance, there are three major problems with maximum margin clustering that question its efficiency for real-world applications. First, it is computationally expensive and difficult to scale to large-scale datasets, because the number of parameters in maximum margin clustering is quadratic in the number of examples. Second, it requires data preprocessing to ensure that any clustering boundary will pass through the origin, which makes it unsuitable for clustering unbalanced datasets. Third, it is sensitive to the choice of kernel functions, and requires an external procedure to determine appropriate values for the parameters of kernel functions. In this paper, we propose a "generalized maximum margin clustering" framework that addresses the above three problems simultaneously. The new framework generalizes the maximum margin clustering algorithm by allowing any clustering boundaries, including those not passing through the origin. It significantly improves computational efficiency by reducing the number of parameters. Furthermore, the new framework is able to automatically determine an appropriate kernel matrix without any labeled data. Finally, we show a formal connection between maximum margin clustering and spectral clustering. We demonstrate the efficiency of the generalized maximum margin clustering algorithm using both synthetic datasets and real datasets from the UCI repository.

  • Discovery of Patterns in Earth Science Data Using Data Mining

    This chapter contains sections titled: Introduction; Data Description and Data Sources; Data Preprocessing; Clustering; Association Analysis; Query Processing; Other Techniques; Conclusions; Acknowledgments; References

  • Class Imbalance Learning Methods for Support Vector Machines

    The support vector machine (SVM) is a very popular machine learning technique that has been successfully applied to many real-world classification problems from various domains. Despite all their theoretical and practical advantages, SVMs can produce suboptimal results on imbalanced datasets. This chapter briefly reviews the learning algorithm of SVMs and discusses why SVMs are sensitive to imbalance in datasets. The chapter also reviews methods found in the literature to handle the class imbalance problem for SVMs. These methods have been developed as both data preprocessing methods (called external methods) and algorithmic modifications to the SVM algorithm (called internal methods). Fuzzy SVMs for Class Imbalance Learning (FSVM-CIL) settings have produced better classification results than existing CIL methods applied to standard SVMs, namely random oversampling, random undersampling, the synthetic minority oversampling technique (SMOTE), different error costs (DEC), and zSVM.

  • A Comparison of RBF and MLP Networks for Classification of Biomagnetic Fields

    This chapter contains sections titled: Introduction, The Problem, Model Assumptions, Production of Training Data, Preprocessing, Probabilistic Background, The Neural Network Topologies, Knowledge Extraction, Conclusion

  • Algorithmic Methods for the Analysis of Gene Expression Data

    The traditional approach to molecular biology consists of studying a small number of genes or proteins that are related to a single biochemical process or pathway. A major paradigm shift recently occurred with the introduction of gene-expression microarrays that measure the expression levels of thousands of genes at once. These comprehensive snapshots of gene activity can be used to investigate metabolic pathways, identify drug targets, and improve disease diagnosis. However, the sheer amount of data obtained using high-throughput microarray experiments and the complexity of the existing relevant biological knowledge is beyond the scope of manual analysis. Thus, the bioinformatics algorithms that help analyze such data are a very valuable tool for biomedical science. First, a brief overview of the microarray technology and the concepts important for understanding the remaining sections is given. Second, microarray data preprocessing, an important topic that has drawn as much attention from the research community as the data analysis itself, is discussed. Finally, some of the more important methods for microarray data analysis are described and illustrated with examples and case studies.
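Among the external (data-preprocessing) rebalancing methods named in the class imbalance entry above, random oversampling is the simplest. The sketch below is a minimal illustration; the `random_oversample` helper and the toy data are illustrative assumptions, not from the chapter.

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate randomly chosen minority-class examples until every
    class matches the majority-class count (external rebalancing)."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        # draw the shortfall (with replacement) from this class's rows
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for row in rows + extra:
            X_out.append(row)
            y_out.append(label)
    return X_out, y_out

X = [[0.0], [0.1], [0.2], [0.9], [1.0], [1.1], [1.2], [1.3]]
y = [1, 1, 1, 0, 0, 0, 0, 0]          # 3 positives vs. 5 negatives
Xb, yb = random_oversample(X, y)
print(yb.count(0), yb.count(1))       # → 5 5
```

SMOTE differs in that it synthesizes new minority points between neighbors rather than duplicating existing ones; internal methods such as different error costs instead modify the SVM objective itself.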



Standards related to Data preprocessing


No standards are currently tagged "Data preprocessing"


Jobs related to Data preprocessing
