Statistics

Statistics is the study of the collection, organization, and interpretation of data. (Wikipedia.org)






Conferences related to Statistics


2013 21st International Conference on Geoinformatics

GIS in Regional Economic Development and Environmental Protection under Globalization


2013 IEEE International Conference on Data Engineering (ICDE 2013)

The annual IEEE International Conference on Data Engineering (ICDE) addresses research issues in designing, building, managing, and evaluating advanced data-intensive systems and applications. It is a leading forum for researchers, practitioners, developers, and users to explore cutting-edge ideas and to exchange techniques, tools, and experiences.



Periodicals related to Statistics


Education, IEEE Transactions on

Educational methods, technology, and programs; history of technology; impact of evolving research on education.


Electromagnetic Compatibility, IEEE Transactions on

EMC standards; measurement technology; undesired sources; cable/grounding; filters/shielding; equipment EMC; systems EMC; antennas and propagation; spectrum utilization; electromagnetic pulses; lightning; radiation hazards; and Walsh functions.


Geoscience and Remote Sensing, IEEE Transactions on

Theory, concepts, and techniques of science and engineering as applied to sensing the earth, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.


Knowledge and Data Engineering, IEEE Transactions on

Artificial intelligence techniques, including speech, voice, graphics, images, and documents; knowledge and data engineering tools and techniques; parallel and distributed processing; real-time distributed processing; system architectures, integration, and modeling; database design, modeling, and management; query design and implementation languages; distributed database control; statistical databases; algorithms for data and knowledge management; performance evaluation of algorithms and systems; data communications aspects; system ...




Xplore Articles related to Statistics


A flat spectral photon flux source for single photon detector quantum efficiency calibration

Haiyong Gan; Ruoduan Sun; Nan Xu; Jianwei Li; Yanfei Wang; Guojin Feng; Chundi Zheng; Chong Ma; Yandong Lin. 2015 11th Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015

A flat spectral photon flux source is proposed to facilitate single photon detector quantum efficiency calibration over an extended wavelength range (400-800 nm). The absolute quantum efficiency at certain wavelengths (e.g., 633 nm and 807 nm) of the photon counter under test can be measured via the correlated-photon method and used to evaluate the photon statistics of the flat spectral photon flux ...


Preliminary test and evaluation of non-destructive examination for ITER First Wall development in Korea

Suk-Kwon Kim; Eo Hwak Lee; Jae-Sung Yoon; Hyun-Kyu Jung; Dong Won Lee; Byoung-Yoon Kim. 2011 IEEE/NPSS 24th Symposium on Fusion Engineering, 2011

The ITER First Wall (FW) includes beryllium armour joined to a Cu heat sink with a stainless steel back plate. These first wall panels are among the critical components in the ITER tokamak, with a maximum surface heat flux of 5 MW/m². A qualification test therefore needs to be performed with the goal of qualifying the joining technologies required for ...


Histogram cloning and CuSum: An experimental comparison between different approaches to Anomaly Detection

Christian Callegari; Stefano Giordano; Michele Pagano. 2015 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS), 2015

Due to the proliferation of new threats from spammers, attackers, and criminal enterprises, Anomaly-based Intrusion Detection Systems have emerged as a key element in network security and different statistical approaches have been considered in the literature. To cope with scalability issues, random aggregation through the use of sketches seems to be a powerful prefiltering stage that can be applied to ...
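
The CuSum test compared in this paper is a classical sequential change-point statistic: it accumulates the positive deviations of the observations from their anomaly-free mean and raises an alarm once that sum crosses a threshold. As a rough illustration of the idea only, here is a minimal one-sided CuSum detector; the reference mean, the drift term k, the threshold h, and the synthetic traffic counts are assumptions made for this sketch, not values taken from the paper.

    # Minimal one-sided CuSum change detector (illustrative sketch, Python).
    def cusum_detect(samples, mean_ref, k, h):
        """Return the index of the first alarm, or None if no alarm is raised.

        samples  -- observed values (e.g. per-bucket packet counts)
        mean_ref -- expected value under normal, anomaly-free conditions
        k        -- drift/slack term that absorbs normal fluctuations
        h        -- decision threshold; a larger h means fewer false alarms
        """
        g = 0.0
        for i, x in enumerate(samples):
            g = max(0.0, g + (x - mean_ref - k))   # accumulate positive deviations
            if g > h:
                return i
        return None

    # Hypothetical usage: traffic counts that jump from ~100 to ~130 per interval.
    normal = [101, 98, 103, 99, 102, 100]
    attack = [128, 131, 127, 133]
    print("first alarm at sample index:", cusum_detect(normal + attack, mean_ref=100.0, k=5.0, h=30.0))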


New effective analytic representation based on the time-varying Schur coefficients for underwater signals analysis

M. Lopatka; O. Adam; C. Laplanche; J.-F. Motsch; J. Zarzycki. Europe Oceans 2005, 2005

The paper proposes a new effective analytic signal representation dedicated to underwater signal analysis. The proposed approach is based on the recursive normalized exact least-square ladder estimation algorithm because of its excellent convergence behaviour, extremely fast start-up performance and its capability to quickly track parameter changes. The linear orthogonal parameterization procedure considered in this paper is numerically efficient and stable ...


Higher order statistics for laser-extinction measurements

B. Lacaze; M. Chabert. 2007 15th European Signal Processing Conference, 2007

Recent laser technology provides accurate measures of the dynamics of fluids and embedded particles. For instance, the laser-extinction measurements (LEM) use a laser beam passing across the fluid and measure the residual laser light intensity at the fluid output. Some particle and fluid properties are estimated from these measurements such as concentration or velocity. However, the flow is submitted to ...


More Xplore Articles

Educational Resources on Statistics


eLearning


More eLearning Resources

IEEE-USA E-Books

  • Learning Under Covariate Shift

    As the power of computing has grown over the past few decades, the field of machine learning has advanced rapidly in both theory and practice. Machine learning methods are usually based on the assumption that the data generation mechanism does not change over time. Yet real-world applications of machine learning, including image recognition, natural language processing, speech recognition, robot control, and bioinformatics, often violate this common assumption. Dealing with non-stationarity is one of modern machine learning's greatest challenges. This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (queries) change but the conditional distribution of outputs (answers) is unchanged, and presents machine learning theory, algorithms, and applications to overcome this variety of non-stationarity. After reviewing the state-of-the-art research in the field, the authors discuss topics that include learning under covariate shift, model selection, importance estimation, and active learning. They describe such real-world applications of covariate shift adaptation as brain-computer interfaces, speaker identification, and age prediction from facial images. With this book, they aim to encourage future research in machine learning, statistics, and engineering that strives to create truly autonomous learning machines able to learn under non-stationarity. (A minimal importance-weighting sketch illustrating the covariate-shift setting appears after this list.)

  • The Problem of Internet Governance

    For years, the world saw the Internet as a creature of the U.S. Department of Defense. Now some claim that the Internet is a self-governing organism controlled by no one and needing no oversight. Although the National Science Foundation and other government agencies continue to support and oversee critical administrative and coordinating functions, the Internet is remarkably decentralized and uninstitutionalized. As it grows in scope, bandwidth, and functionality, the Internet will require greater coordination, but it is not yet clear what kind of coordinating mechanisms will evolve. The essays in this volume clarify these issues and suggest possible models for governing the Internet. The topics addressed range from settlements and statistics collection to the sprawling problem of domain names, which affects the commercial interests of millions of companies around the world. One recurrent theme is the inseparability of technical and policy issues in any discussion involving the Internet. Contributors: Guy Almes, Ashley Andeen, Joseph P. Bailey, Steven M. Bellovin, Scott Bradner, Richard Cawley, Che-Hoo Cheng, Bilal Chinoy, K Claffy, Maria Farnon, William Foster, Alexander Gigante, Sharon Eisner Gillett, Mark Gould, Eric Hoffman, Scott Huddle, Joseph Y. Hui, David R. Johnson, Mitchell Kapor, John Lesley King, Lee W. McKnight, Don Mitchell, Tracie Monk, Milton Mueller, Carl Oppedahl, David G. Post, Yakov Rekhter, Paul Resnick, A. M. Rutkowski, Timothy J. Salo, Philip L. Sbarbaro, Robert Shaw. A publication of the Harvard Information Infrastructure Project.

  • References

    An unprecedented wealth of data is being generated by genome sequencing projects and other experimental efforts to determine the structure and function of biological molecules. The demands and opportunities for interpreting these data are expanding rapidly. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. Machine learning approaches (e.g., neural networks, hidden Markov models, and belief networks) are ideally suited for areas where there is a lot of data but little theory, which is the situation in molecular biology. The goal in machine learning is to extract useful information from a body of data by building good probabilistic models--and to automate the process as much as possible. In this book Pierre Baldi and Søren Brunak present the key machine learning approaches and apply them to the computational problems encountered in the analysis of biological data. The book is aimed both at biologists and biochemists who need to understand new data-driven algorithms and at those with a primary background in physics, mathematics, statistics, or computer science who need to know more about applications in molecular biology. This new second edition contains expanded coverage of probabilistic graphical models and of the applications of neural networks, as well as a new chapter on microarrays and gene expression. The entire text has been extensively revised.

  • Appendix A: Analysis of the Moments of the Decision Statistics for the FH CDMA Communication System

    This appendix estimates the moments of the decision statistics for the FH CDMA communication system.

  • Computational Models in the Spatial Domain

    This chapter describes the computational visual attention models in the spatial domain, based on the bottom-up mechanism. Although there have been a large number of bottom-up computational models in the spatial domain since 1998, this chapter only discusses a few typical computational models: the baseline saliency (BS) model, models based on neural networks, and models based on statistical signal processing theory, such as information theory (the AIM model), decision theory (the DISC model), natural statistics (the SUN model), and Bayesian theory (the surprise detection model). Section 3.1 introduces the major parts of the BS system, while Section 3.2 addresses the issues related to visual attention for video. These two sections aim to give the reader the most important ideas for modelling bottom-up visual attention in the spatial domain. Section 3.3 presents more details and variations of the BS model, to give the reader more insight and choices within the topic. Section 3.4 introduces an alternative solution, a graph-based approach, for determining visual attention, and we also demonstrate and discuss its differences from the BS model. Section 3.5 presents a new filter basis bank, learned from natural images to extract features of the input image, which is based on information maximization and called the AIM model. Another model, referred to as DISC, which performs centre-surround inhibition based on optimal decision theory, is introduced in Section 3.6. Then Section 3.7 presents a paradigm shift in visual attention modelling by introducing a new methodology based on comprehensive statistics from a large number of natural images, rather than the current test image (as used in the models in Sections 3.1 to 3.6). Section 3.8 presents a surprise detection model to test the saliency location, based on Bayesian theory.

  • References

    Research in systems biology requires the collaboration of researchers from diverse backgrounds, including biology, computer science, mathematics, statistics, physics, and biochemistry. These collaborations, necessary because of the enormous breadth of background needed for research in this field, can be hindered by differing understandings of the limitations and applicability of techniques and concerns from different disciplines. This comprehensive introduction and overview of system modeling in biology makes the relevant background material from all pertinent fields accessible to researchers with different backgrounds. The emerging area of systems level modeling in cellular biology has lacked a critical and thorough overview. This book fills that gap. It is the first to provide the necessary critical comparison of concepts and approaches, with an emphasis on their possible applications. It presents key concepts and their theoretical background, including the concepts of robustness and modularity and their exploitation to study biological systems; the best-known modeling approaches, and their advantages and disadvantages; lessons from the application of mathematical models to the study of cellular biology; and available modeling tools and datasets, along with their computational limitations.

  • Functional Yield Modeling

    This chapter contains sections titled: Introduction; Basic Yield Statistics: Random Defects; Classes of Yield Models; Yield Model Components; Applications of Functional Yield Models; Summary; Acknowledgments; Exercises and Solutions; References.

  • Backmatter

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical. This book, written by the inventors of the method, brings together, organizes, simplifies, and substantially extends two decades of research on boosting, presenting both theory and applications in a way that is accessible to readers from diverse backgrounds while also providing an authoritative reference for advanced researchers. With its introductory treatment of all material and its inclusion of exercises in every chapter, the book is appropriate for course use as well. The book begins with a general introduction to machine learning algorithms and their analysis; then explores the core theory of boosting, especially its ability to generalize; examines some of the myriad other theoretical viewpoints that help to explain and understand boosting; provides practical extensions of boosting for more complex learning problems; and finally presents a number of advanced theoretical topics. Numerous applications and practical illustrations are offered throughout. (A minimal AdaBoost-style sketch of the boosting idea appears after this list.)

  • References

    A comprehensive introduction to the exploding field of data mining. We are surrounded by data, numerical and otherwise, which must be analyzed and processed to convert it into information that informs, instructs, answers, or otherwise aids understanding and decision-making. Due to the ever-increasing complexity and size of today's data sets, a new term, data mining, was created to describe the indirect, automatic data analysis techniques that utilize more complex and sophisticated tools than those which analysts used in the past to do mere data analysis. Data Mining: Concepts, Models, Methods, and Algorithms discusses data mining principles and then describes representative state-of-the-art methods and algorithms originating from different disciplines such as statistics, machine learning, neural networks, fuzzy logic, and evolutionary computation. Detailed algorithms are provided with necessary explanations and illustrative examples. This text offers guidance on how and when to use a particular software tool (with its companion data sets) from among the hundreds offered when faced with a data set to mine. This allows analysts to create and perform their own data mining experiments using their knowledge of the methodologies and techniques provided. This book emphasizes the selection of appropriate methodologies and data analysis software, as well as parameter tuning. These critically important, qualitative decisions can only be made with the deeper understanding of parameter meaning and its role in the technique that is offered here. Data mining is an exploding field, and this book offers much-needed guidance on selecting among the numerous analysis programs that are available.

  • Index

    An unprecedented wealth of data is being generated by genome sequencing projects and other experimental efforts to determine the structure and function of biological molecules. The demands and opportunities for interpreting these data are expanding rapidly. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. Machine learning approaches (e.g., neural networks, hidden Markov models, and belief networks) are ideally suited for areas where there is a lot of data but little theory, which is the situation in molecular biology. The goal in machine learning is to extract useful information from a body of data by building good probabilistic models--and to automate the process as much as possible. In this book Pierre Baldi and Søren Brunak present the key machine learning approaches and apply them to the computational problems encountered in the analysis of biological data. The book is aimed both at biologists and biochemists who need to understand new data-driven algorithms and at those with a primary background in physics, mathematics, statistics, or computer science who need to know more about applications in molecular biology. This new second edition contains expanded coverage of probabilistic graphical models and of the applications of neural networks, as well as a new chapter on microarrays and gene expression. The entire text has been extensively revised.
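
The covariate-shift setting described in the "Learning Under Covariate Shift" entry above can be made concrete with a small example: the input distribution differs between training and test while p(y|x) is unchanged, and a standard remedy is to weight each training example by the density ratio p_test(x)/p_train(x) before fitting. The sketch below is a minimal importance-weighted least-squares fit in which both input densities are assumed to be known Gaussians; in practice the ratio must be estimated from data (importance estimation is one of the book's topics), and the data, densities, and model here are purely illustrative assumptions.

    # Illustrative importance-weighted least squares under covariate shift (Python).
    # Assumes the training and test input densities are known Gaussians; in real
    # applications the ratio p_test(x)/p_train(x) has to be estimated from samples.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training inputs centred near x = 1; the (unseen) test inputs sit near x = 2.
    x_train = rng.normal(loc=1.0, scale=0.5, size=200)
    y_train = np.sin(np.pi * x_train / 4) + rng.normal(scale=0.1, size=200)

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Importance weights w(x) = p_test(x) / p_train(x), evaluated at training points.
    w = gauss_pdf(x_train, 2.0, 0.3) / gauss_pdf(x_train, 1.0, 0.5)

    # Weighted linear least squares: minimise sum_i w_i * (y_i - a*x_i - b)^2.
    X = np.column_stack([x_train, np.ones_like(x_train)])
    W = np.diag(w)
    slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    print("importance-weighted fit: slope=%.3f intercept=%.3f" % (slope, intercept))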

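The boosting entry above describes building an accurate predictor by combining many weak "rules of thumb"; AdaBoost with threshold stumps is the textbook instance of that idea. The from-scratch sketch below is illustrative only: the one-dimensional toy data, the number of rounds, and the stump learner are assumptions made for this example, not material reproduced from the book.

    # Minimal from-scratch AdaBoost with one-dimensional threshold stumps (Python).
    # Each round fits the stump with lowest weighted error, then upweights the
    # examples it misclassified so the next stump concentrates on them.
    import numpy as np

    def fit_stump(x, y, w):
        """Return (weighted error, threshold, sign) of the best stump for labels in {-1, +1}."""
        best = None
        for thr in np.unique(x):
            for sign in (+1, -1):
                pred = np.where(x >= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        return best

    def adaboost(x, y, rounds=20):
        w = np.full(len(x), 1.0 / len(x))        # example weights, start uniform
        ensemble = []                            # list of (alpha, threshold, sign)
        for _ in range(rounds):
            err, thr, sign = fit_stump(x, y, w)
            err = max(err, 1e-12)                # guard against a perfect stump
            alpha = 0.5 * np.log((1 - err) / err)
            pred = np.where(x >= thr, sign, -sign)
            w *= np.exp(-alpha * y * pred)       # upweight misclassified points
            w /= w.sum()
            ensemble.append((alpha, thr, sign))
        return ensemble

    def predict(ensemble, x):
        score = sum(a * np.where(x >= thr, s, -s) for a, thr, s in ensemble)
        return np.sign(score)

    # Hypothetical toy data: positives cluster at larger x values.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
    y = np.concatenate([-np.ones(100), np.ones(100)])
    model = adaboost(x, y)
    print("training accuracy:", np.mean(predict(model, x) == y))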


Standards related to Statistics


No standards are currently tagged "Statistics"


Jobs related to Statistics
