158 resources related to Data Curation
The conference program will consist of plenary lectures, symposia, workshops, and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields are increasingly important in the creation of intelligent environments in which technologies interact with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
All fields of satellite, airborne and ground remote sensing.
JCDL encompasses the many meanings of the term "digital libraries", including (but not limited to) new forms of information institutions; operational information systems with all manner of digital content; new means of selecting, collecting, organizing, and distributing digital content; and theoretical models of information media, including document genres and electronic publishing. Digital libraries are distinguished from information retrieval systems because they include more types of media, provide additional functionality and services, and include other stages of the information life cycle, from creation through use. Digital libraries also can be viewed as a new form of information institution or as an extension of the services libraries currently provide.
CBMS 2019 will provide an international forum to discuss the latest developments in computational medicine, biomedical informatics, and related fields. During the CBMS symposium, there will be regular and special track (ST) sessions with technical contributions reviewed and selected by an international programme committee, as well as keynote talks and tutorials given by leading experts in their fields. Regular and ST presentations will cover a broad range of issues in areas such as medical informatics, e-Health, computer vision, healthcare games, software systems in medicine, big data analytics in healthcare, cognitive computing in healthcare, telemedicine systems, medical education, HCI in healthcare, web-based medical information, active and healthy aging systems, and technology in clinical and healthcare research, among others.
Specific topics of interest include, but are not limited to, sequence analysis, comparison, and alignment methods; motif, gene, and signal recognition; molecular evolution; phylogenetics and phylogenomics; determination or prediction of the structure of RNA and protein in two and three dimensions; DNA twisting and folding; gene expression and gene regulatory networks; deduction of metabolic pathways; microarray design and analysis; proteomics; ...
The IEEE Computational Intelligence Magazine (CIM) publishes peer-reviewed articles that present emerging novel discoveries, important insights, or tutorial surveys in all areas of computational intelligence design and applications.
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
Both general and technical articles on current technologies and methods used in biomedical and clinical engineering; societal implications of medical technologies; current news items; book reviews; patent descriptions; and correspondence. Special interest departments cover students, law, clinical engineering, ethics, new products, society news, historical features, and government.
2014 4th International Symposium ISKO-Maghreb: Concepts and Tools for knowledge Management (ISKO-Maghreb), 2014
2016 International Conference on Progress in Informatics and Computing (PIC), 2016
2016 IEEE International Conference on Services Computing (SCC), 2016
2006 International Conference of the IEEE Engineering in Medicine and Biology Society, 2006
2017 13th International Conference on Semantics, Knowledge and Grids (SKG), 2017
Big Data Analytics: Tools and Technologies - Big Data Analytics Tutorial Part 2
Explorations in BIG Data and sMall Data with a Fuzzy Perspective
Estimating Sparse Eigenstructure for High Dimensional Data
Q&A with Dr. Sorel Reisman: IEEE Big Data Podcast, Episode 2
Mahmoud Daneshmand on IoT and Big Data Analytics: IoT: Even Bigger Data
Concept of Arrays
Big Data and Analytics at Verizon
Human-Guided Video Data Collection in Marine Environment
Deep Learning and the Representation of Natural Data
TechNews: Big Data
Temporal Pattern Mining in Symbolic Time Point and Time Interval Data
Q&A with Dejan Milojicic: IEEE Big Data Podcast, Episode 7
Computational Intelligence in (e)Healthcare - Challenges and Opportunities
Q&A with Dave Belanger & Kathy Grise, Part 1: IEEE Big Data Podcast, Episode 12
Q&A with Dr. Ling Liu: IEEE Big Data Podcast, Episode 8
Consequences of Big Data on the Individual
Q&A with Dr. Iqbal Ahamed: IEEE Big Data Podcast, Episode 3
Time-series Workloads and Implications for Time-series Databases - Michael Freedman - IEEE Sarnoff Symposium, 2019
Q&A with Dr. Mahmoud Daneshmand: IEEE Big Data Podcast, Episode 1
This article evaluates the usefulness of the Wicri network, a network of semantic wikis, both as a repository of curation rules for enriching corpus metadata and as a tool for parameterizing and supporting instructions for the creation of corpus exploration servers. Starting from the analysis of a bibliographic corpus extracted from different documentary databases, the experiments we have conducted in this context rely on the use of wiki technology as a central tool in a process of knowledge construction.
This paper analyzes the roles of the different participants in data curation, including the government, scientific research institutes and researchers, IT and network centers, data centers, and STI agencies, to define their specific functions. It points out that, owing to a "social stereotype", the engagement of STI agencies in data curation is taken for granted; these agencies have accumulated a fully fledged infrastructure, the professional human resources necessary for data curation, and a degree of theoretical and practical experience, but improvements are still needed in their privileges for accessing and managing scientific data and in the organization and processing of scientific data. The paper proposes that, within the curation system for scientific data, STI agencies should focus mainly on curating long-tail data, participating in the preparation of policies, regulations, and standards for data curation, archiving and long-term storage of scientific data, providing embedded curation services throughout the data lifecycle, and education and training in data-curation competencies.
Log-based business operation analysis is receiving increasing attention from enterprise decision makers. However, at the very first step of an analysis service, two primary obstacles are routinely encountered: how to process a wide variety of local event log formats, and how to handle personally identifiable information woven into an event log. Because of these obstacles, business analysts without programming skills have lost business opportunities at the early stages. We propose a privacy-preserving data curation specification language, BELAS, for such analysts and present experimental results showing that most of a real-life event log could be processed in process analysis services.
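The two obstacles the abstract names, heterogeneous local log formats and embedded personally identifiable information, can be illustrated with a short sketch. This is not the BELAS language itself, whose syntax the abstract does not describe; the field names, the choice of PII columns, and the pseudonymization scheme below are all illustrative assumptions.

```python
import csv
import hashlib
import io
import json

# Illustrative choice of fields treated as personally identifiable information.
PII_FIELDS = {"user", "email", "ip"}

def pseudonymize(value: str) -> str:
    """Replace a PII value with a stable pseudonym so joins across events still work."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def curate_event(event: dict) -> dict:
    """Return a copy of the event with PII fields pseudonymized."""
    return {k: (pseudonymize(v) if k in PII_FIELDS else v) for k, v in event.items()}

def load_events(raw: str, fmt: str) -> list[dict]:
    """Normalize two common local log formats (CSV, JSON lines) to a common dict shape."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    if fmt == "jsonl":
        return [json.loads(line) for line in raw.splitlines() if line.strip()]
    raise ValueError(f"unsupported format: {fmt}")

log = "timestamp,user,activity\n2016-01-01T09:00,alice@example.com,login\n"
events = [curate_event(e) for e in load_events(log, "csv")]
```

After curation the event stream keeps its analytical structure (timestamps, activities) while the user identity is replaced by a stable pseudonym, which is the usual trade-off such privacy-preserving curation aims for.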
In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, adds two components, a Checking Filter and a Curation Engine, between the User Interface and the Database; this architecture supports basic PDB data curation. The second, the PDB Data Curation System with XCML, is designed for further curation and adds four more components, PDB-XML, PDB, OODB, and Protein-OODB, to the first. This architecture uses the XCML language to automatically check errors in PDB data, making the data more consistent and accurate. The two tools can be used both for cleaning existing PDB files and for creating new ones. We also outline how constraints and assertions can be added with XCML to obtain better data, and we discuss data provenance issues that may affect data accuracy and consistency.
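The idea of a Checking Filter sitting between the user interface and the database can be sketched as a validator that rejects malformed records before they are stored. The abstract does not give the filter's rules, so the check below is a minimal assumption-laden example; the coordinate column positions, however, do follow the standard fixed-column PDB ATOM record layout.

```python
def check_atom_record(line: str) -> list[str]:
    """Return a list of problems found in one PDB record (empty if clean).

    A minimal stand-in for the paper's Checking Filter: only the record
    type and the numeric x, y, z coordinate fields are validated here.
    """
    problems = []
    if not line.startswith(("ATOM", "HETATM")):
        problems.append("not an ATOM/HETATM record")
        return problems
    # In the fixed-column PDB format, x, y, z occupy columns 31-38, 39-46, 47-54.
    for name, start, end in (("x", 30, 38), ("y", 38, 46), ("z", 46, 54)):
        try:
            float(line[start:end])
        except ValueError:
            problems.append(f"coordinate {name} is not numeric")
    return problems

good = "ATOM      1  N   MET A   1      38.428  13.104   6.364  1.00 54.69           N"
```

A record passing `check_atom_record` would be forwarded to the Curation Engine; a record with problems would be held back for correction, which is the filtering behavior the architecture describes.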
This paper concisely introduces data curation and service innovation, analyzes libraries' motivation for providing services based on data curation, and, drawing an analogy with biological organisms and combining theories such as the Data Curation Lifecycle Model and the Four-Dimensional Model of Service Innovation, builds a data curation-based innovative library service system from the perspective of an individual library institution.
In the era of Big Data, the curation of data has become increasingly important, especially for handling high-volume and complex data systems. With data volumes growing exponentially, along with the increasing variety and heterogeneity of data sources, acquiring the data needed for analysis has become a costly and time-consuming process. Multiple data sets from various sources must first be processed and connected before they can be used by big data analytics tools, and the publication and presentation of data analytics are also very important. However, traditional data curation systems are not designed for this purpose and give no consideration to chronological value. Another limitation is that they are usually designed for programmers rather than ordinary users. In this paper, we propose a Chronological Big Data Curation system, in which the acquisition and care of data are organized around the relations between specific topics and their chronological order, ensuring that data maintains its value over time. We implemented the system, and experimental results demonstrate its effectiveness.
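The core organizing principle the abstract describes, relating records to topics and keeping them in chronological order, can be shown in a few lines. The record schema (`topic`, `ts`, `value`) is an assumption for illustration, not the paper's actual data model.

```python
from collections import defaultdict
from datetime import datetime

def curate_chronologically(records: list[dict]) -> dict[str, list[dict]]:
    """Group records by topic and order each group by timestamp,
    so that related data can be read as a timeline."""
    by_topic: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        by_topic[rec["topic"]].append(rec)
    for recs in by_topic.values():
        recs.sort(key=lambda r: datetime.fromisoformat(r["ts"]))
    return dict(by_topic)

records = [
    {"topic": "flu", "ts": "2016-03-01", "value": 12},
    {"topic": "flu", "ts": "2016-01-15", "value": 7},
    {"topic": "air", "ts": "2016-02-10", "value": 40},
]
timeline = curate_chronologically(records)
```

Keeping each topic's records ordered in time is what lets a non-programmer consumer read the curated data as a timeline rather than an unordered dump.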
Frequently-asked-question (FAQ) systems are effective in operating IT services and in reducing their costs. FAQ data preparation requires curating available heterogeneous question-and-answer (QA) data sets and creating FAQ clusters. We identified the labor intensiveness of data curation as a major problem that strongly affects the final FAQ output quality. To address it, we designed a FAQ creation system with a strong focus on the effectiveness of its data-curation component. We conducted a field study on two sources: incident reports and a QA forum. The first source, incident reports, showed a high F-score of 89.9% (precision: 82.5%, recall: 100%). We also applied the same set of parameters to 300 entries of the QA forum and achieved an F-score of 94.3% (precision: 94.9%, recall: 93.8%).
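The F-scores quoted here are the standard harmonic mean of precision and recall; a one-function sketch makes the relation explicit (the function name is ours, not from the paper).

```python
def f_score(precision: float, recall: float) -> float:
    """F1 score: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The QA-forum figures: precision 94.9% and recall 93.8% combine to about 94.3%.
qa_forum_f = round(f_score(0.949, 0.938), 3)
```

The harmonic mean punishes imbalance between the two components, which is why F-score is preferred over a plain average when either false positives or false negatives dominate.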
The Green Computing Observatory (GCO) addresses these issues within the framework of a production infrastructure dedicated to e-science, providing a unique facility for the Computer Science and Engineering community. The first barrier to improved energy efficiency is the difficulty of collecting data on the energy consumption of individual components of data centers, and the lack of overall data collection. GCO collects monitoring data on the energy consumption of a large computing center and publishes them through the Grid Observatory. These data include detailed monitoring of the processors and motherboards, as well as global site information such as overall consumption and overall cooling, since optimizing at the global level is a promising research direction. A second barrier is making the collected data usable. The difficulty is to make the data readily consistent and complete, as well as understandable for further exploitation. For this purpose, GCO opts for an ontological approach in order to rigorously define the semantics of the data (what is measured) and the context of their production (how they are acquired and/or calculated).
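The ontological approach, recording what is measured and how it was acquired alongside the raw value, can be sketched as a self-describing measurement record. GCO's actual ontology is not given here; every field name and the completeness rule below are illustrative assumptions.

```python
# A measurement carries explicit semantic context, not just a number:
# "quantity" says what is measured, "method" says how it was acquired.
measurement = {
    "value": 215.4,
    "unit": "watt",
    "quantity": "power_consumption",
    "subject": "node-042/motherboard",
    "method": "ipmi_sensor_poll",
    "timestamp": "2012-06-01T12:00:00Z",
}

def is_complete(record: dict) -> bool:
    """Treat a record as usable only if its semantic context is fully specified."""
    required = {"value", "unit", "quantity", "subject", "method", "timestamp"}
    return required <= record.keys()
```

Rejecting records whose context is missing is one simple way to get the consistency and completeness the GCO text identifies as the second barrier.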
We find that software frameworks for digital content management and access may be used for capturing certain data provenance information, particularly for data that has already been created and archived at a repository center. One of the key enabling factors is the abstraction of a digital object augmented with semantic relationships. One set of frameworks, Fedora Repository and Drupal CMS with the Islandora connector, holds great promise for provenance applications as well as for long-term curation of Geoscience datasets.
We present the eGor digital laboratory assistant platform, which improves the experience of characterizing materials, devices, and processes. Conceived to address challenges in biosensor and biomedical system development, eGor is a highly flexible platform for 1) automation of data acquisition with precise timing control, 2) production of results objects that are rich with information defining the measurement setup and the provenance of instruments and datasets, and 3) curation of results objects and processed child datasets throughout the data life cycle. eGor packs three tools into a user-friendly browser interface: Designer, to manage a digital inventory of instruments and digitally capture measurement project scheme details including instrument layout and test procedures; Executer, to monitor real-time measurements, search and run project schemes, and schedule future automated project runs; and Analyzer, to search, view, and annotate results objects containing a digital description of the project scheme and the results data it generated, filter and process datasets, and trace provenance across all instruments, schemes, and datasets. These eGor services interface with a measurement workbench through the Instrument Manager tool, which runs on local workbenches to collect data from and manage communication with physical instruments. Together, these eGor tools enable biomedical research with improved accuracy and precision through timing-controlled automation, and with greater productivity through intuitive user-friendly interfaces capable of scheduling and running measurements without the user being present. Moreover, eGor can have a groundbreaking impact on research reproducibility by generating metadata-rich results objects that permit exact repetition of measurements and collaborative sharing of both data and detailed project schemes.