Document image processing
231 resources related to Document image processing
- Topics related to Document image processing
- IEEE Organizations related to Document image processing
- Conferences related to Document image processing
- Periodicals related to Document image processing
- Most published Xplore authors for Document image processing
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The ICASSP meeting is the world's largest and most comprehensive technical conference focused on signal processing and its applications. The conference will feature world-class speakers, tutorials, exhibits, and over 50 lecture and poster sessions.
All fields of satellite, airborne and ground remote sensing.
The IEEE Reviews in Biomedical Engineering will review the state-of-the-art and trends in the emerging field of biomedical engineering. This includes scholarly works, ranging from historic and modern development in biomedical engineering to the life sciences and medicine enabled by technologies covered by the various IEEE societies.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
This publication covers the theory, design, fabrication, manufacturing and application of information displays and aspects of display technology that emphasize the progress in device engineering, device design, materials, electronics, physics and reliability aspects of displays and the application of displays.
IEE European Workshop on Handwriting Analysis and Recognition: A European Perspective, 1994
Proceedings of the Fifth International Conference on Document Analysis and Recognition. ICDAR '99 (Cat. No.PR00318), 1999
International Conference on Professional Communication, Communication Across the Sea: North American and European Practices, 1990
Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing. ISIMP 2001 (IEEE Cat. No.01EX489), 2001
Proceedings of 3rd International Conference on Document Analysis and Recognition, 1995
Zohara Cohen AMA EMBS Individualized Health
ICASSP 2010 - New Signal Processing Application Areas
Robotics History: Narratives and Networks Oral Histories: Ray Jarvis
Robotics History: Narratives and Networks Oral Histories: Minoru Asada
Lunar Industrialization: The First Step to the Solar System
2011 IEEE Jack S. Kilby Signal Processing Medal - Ingrid Daubechies
Noise Enhanced Information Systems: Denoising Noisy Signals with Noise
Martin Vetterli accepts the IEEE Jack S. Kilby Signal Processing Medal - Honors Ceremony 2017
ICASSP 2010 - Science and Technology of DSP
"Approximation: Beyond the Tyranny of Digital Computing" (Rebooting Computing)
Wireless Transceiver System Design for Modern Communication Standards
ICASSP 2010 - Advances in Neural Engineering
Engineering in Medicine and Biology: Segment 3
IMS 2015: Robert H. Caverly - Aspects of Magnetic Resonance Imaging
Neural Processor Design Enabled by Memristor Technology - Hai Li: 2016 International Conference on Rebooting Computing
How Facial Analysis Technology Can Help Children with Genetic Disorders - IEEE Region 4 Technical Presentation
Quantization Without Fine-Tuning - Tijmen Blankevoort - LPIRC 2019
ICASSP 2010 - Radar Imaging of Building Interiors
IEEE Xplore Celebrates Two Million Documents
We present a memory-efficient method for transposing a run-length encoded bi-level image. Image transposing is a commonly used operation for affine transformations such as document image deskewing. The best existing method for transposing a run-length image is the pxy-table-based method. For images of typical engineering drawings, which are large, crowded and noisy, this method requires an exorbitant amount of memory. The proposed method uses a very compact representation of run-length encoded images and bypasses certain steps of the pxy-table-based method. Consequently, the saving in memory use is proportional to the number of horizontal runs and the number of vertical (transposed) runs. The computation time for both methods is almost identical.
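The basic operation the abstract describes, turning horizontal runs into vertical runs, can be sketched in pure Python. This is a deliberately simple illustration of the data representation, not the paper's compact pxy-table-avoiding method: it materializes one entry per black pixel, which is exactly the cost the paper is designed to avoid. The function name and (start, length) run encoding are assumptions for the example.

```python
def transpose_rle(runs_per_row, width):
    """Transpose a run-length encoded bi-level image.

    runs_per_row: for each row y, a list of (start, length) black runs.
    width: image width in pixels.
    Returns the black runs of each column of the transposed image.
    Naive sketch: collects per-column pixel rows, then re-encodes them
    as vertical runs.
    """
    cols = [[] for _ in range(width)]
    for y, runs in enumerate(runs_per_row):
        for start, length in runs:
            for x in range(start, start + length):
                cols[x].append(y)  # rows arrive in increasing y order
    out = []
    for rows in cols:
        runs = []
        for y in rows:
            # Extend the current vertical run if y is contiguous with it.
            if runs and runs[-1][0] + runs[-1][1] == y:
                runs[-1] = (runs[-1][0], runs[-1][1] + 1)
            else:
                runs.append((y, 1))
        out.append(runs)
    return out
```

For a 3-pixel-wide image with runs [(0, 2)] in row 0 and [(1, 2)] in row 1, the transpose yields column runs [(0, 1)], [(0, 2)] and [(1, 1)].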
The DIP (document image processing) system, and its configurations and functions are described, and the standards issue facing the DIP system is discussed. DIP involves scanning and electronically storing documents so they can be retrieved and used without requiring paper copies or file cabinets. DIP systems are most useful when integrated into a local area network that provides shared access to devices such as laser printers, scanners, and optical disks.
Binarization of a gray scale document image is one of the most important steps for automatic document processing. The paper presents a two-stage document image binarization approach. The approach applies a region based binarization technique first to the whole image and utilizes a neural network based binarization technique to those text blocks in which a good character segmentation cannot be achieved at the first stage. Experimental results on a number of document images show that our two-stage binarization approach performs better than other binarization techniques in terms of character segmentation quality and computing time.
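A common choice for the kind of global, first-stage binarization the abstract describes is Otsu's threshold. The sketch below uses Otsu as a stand-in for the paper's region-based technique (which, like the second, neural-network stage, is not reproduced here); the function names and flat-list image representation are assumptions for the example.

```python
def otsu_threshold(gray):
    """Global Otsu threshold for a flat list of 0-255 gray values.

    Picks the threshold that maximizes between-class variance.
    """
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0   # running sum of intensities in the background class
    w_b = 0       # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray, t):
    """Map values <= t to 0 (black) and the rest to 255 (white)."""
    return [0 if v <= t else 255 for v in gray]
```

On a strongly bimodal input (e.g. half the pixels at 10 and half at 200) the threshold lands between the two modes, separating text from background cleanly.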
A syntactic rule learning method is presented for analyzing document images and constructing a database from them. This method is used in a digital library system named CyberMagazine, where document images are sequentially converted into database tuples by block segmentation, rough classification, and syntactic analysis. The syntactic rules can analyze symbols located in a two-dimensional plane and have a syntax similar to an ordinary context-free grammar, except for the concatenation of symbols. In the presented learning method, the syntactic rules are generated from a set of parse trees by decomposing the trees according to nonterminal symbols, generalizing the decomposed trees into a syntactic rule, and merging them.
This paper provides an overview of a new software framework TABS, which has been designed to support the rapid development of image processing and image analysis systems and components. Compared to other image manipulation software frameworks, TABS has a number of novel features which make it particularly suitable for use in applications where hypotheses rather than single "hard" results are generated by system components, and symbolic data are manipulated.
Morphological operators have proven to be useful for many image processing tasks. However, the design of an adequate operator for a given task is not simple in general. A possible approach to deal with this difficulty is to design operators using training based methods. This work shows the application of trained morphological operators for several document processing tasks including character recognition, text segmentation and graphics processing.
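The building blocks that such trained operators compose are binary erosion and dilation. The sketch below shows these two primitives in pure Python; the image representation (lists of 0/1 rows) and the structuring element given as (dy, dx) offsets are assumptions for the example, and the training procedure itself is not shown.

```python
def dilate(img, se):
    """Binary dilation of img (list of 0/1 rows) by structuring
    element se, a list of (dy, dx) offsets."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                # Stamp the structuring element at every set pixel.
                for dy, dx in se:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        out[yy][xx] = 1
    return out

def erode(img, se):
    """Binary erosion: a pixel survives only if the whole
    structuring element fits inside the foreground."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ok = True
            for dy, dx in se:
                yy, xx = y + dy, x + dx
                if not (0 <= yy < h and 0 <= xx < w and img[yy][xx]):
                    ok = False
                    break
            out[y][x] = 1 if ok else 0
    return out
```

Dilating a single pixel by a cross-shaped element grows it into a cross; eroding that cross by the same element recovers the single pixel, illustrating how the two operators are near-inverses on simple shapes.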
Automatic cheque image processing is an important task in document image processing. Although many methods have been developed, it is still a challenge to find effective methods for some of its subtasks, one of which is extracting the filled-in strokes from a given cheque image. The problem becomes more difficult when a cheque has a complicated background. Most current methods are only applicable to simple binary images. This paper presents a two-step method for processing complicated grey images, covering both the extraction of reference lines and of filled-in strokes. The method is based on the construction of a pseudo-2D wavelet with adjustable rectangular supports. Experimental results show good performance; the method is also effective for slightly skewed cheque images.
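For the reference-line subtask on an already-binarized, unskewed image, a much simpler baseline is a row projection profile. The sketch below is that baseline only, NOT the paper's pseudo-2D wavelet construction (which is what handles complicated backgrounds and slight skew); the function name and the `min_fill` parameter are assumptions for the example.

```python
def find_reference_lines(img, min_fill=0.8):
    """Locate horizontal reference lines in a binary image
    (list of 0/1 rows) by row projection profile: a row whose
    foreground fill ratio exceeds min_fill is flagged as a line."""
    w = len(img[0])
    return [y for y, row in enumerate(img) if sum(row) >= min_fill * w]
```

A fully set row is reported as a line, while a row containing only isolated strokes is not.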
A multichannel filtering-based texture segmentation method is applied to a variety of document image processing problems: text-graphics separation, address-block location, and bar code localization. In each of these segmentation problems, the text content or bar code in the image is considered to define a unique texture. Thus, all three document analysis problems can be posed as texture segmentation problems. Two-dimensional Gabor filters are used to compute texture features. Both supervised and unsupervised methods are used to identify regions of text or bar code in the document images. The performance of the segmentation and classification scheme for a variety of document images demonstrates the generality and effectiveness of the approach.
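The texture features come from convolving the image with a bank of 2-D Gabor filters at several orientations and wavelengths. The sketch below generates the real part of one such kernel (a Gaussian envelope times an oriented cosine carrier); the parameter names are generic, not the paper's, and the filter bank and convolution step are omitted.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a (size x size) 2-D Gabor kernel.

    wavelength: period of the cosine carrier, in pixels.
    theta: orientation of the carrier, in radians.
    sigma: standard deviation of the Gaussian envelope.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

The kernel peaks at 1.0 in the center and is symmetric across the carrier axis; convolving an image with several such kernels at different (wavelength, theta) pairs yields a feature vector per pixel for the supervised or unsupervised classifier.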
When trying to enhance the quality of a gray-level digital image obtained by acquiring a document containing text and/or graphics, relevant problems arise if the original is not highly contrasted and is spoiled by noise. In the paper, a nonlinear preprocessing method is suggested for this application. It is based on a two-dimensional (2-D) complete quadratic filter, i.e. an operator composed of a linear FIR component and a quadratic component acting, in our case, on the same support. In the proposed operator, the linear component is a conventional low-pass filter having a noise-smoothing effect; the quadratic component, on the other hand, being sensitive to abrupt changes in the luminance levels of the input image, compensates for the blurring effect of the former term and can even enhance the edges of objects. The resulting image is thus less noisy and sharper than the original.
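The structure of such an operator, a linear FIR term plus a quadratic (Volterra) term on the same support, can be illustrated in one dimension. This is a simplified 1-D sketch of the filter class, not the paper's 2-D operator or its coefficients; the function name and coefficient layout are assumptions for the example.

```python
def quadratic_filter(x, lin, quad, alpha=1.0):
    """1-D Volterra-type quadratic filter:

        y[n] = sum_i lin[i]*x[n-i]
             + alpha * sum_{i,j} quad[i][j]*x[n-i]*x[n-j]

    lin: linear FIR taps; quad: quadratic kernel on the same support.
    Samples before the start of the signal are treated as zero.
    """
    m = len(lin)
    y = []
    for n in range(len(x)):
        window = [x[n - i] if n - i >= 0 else 0 for i in range(m)]
        linear = sum(lin[i] * window[i] for i in range(m))
        quad_term = sum(quad[i][j] * window[i] * window[j]
                        for i in range(m) for j in range(m))
        y.append(linear + alpha * quad_term)
    return y
```

With lin = [0.5, 0.5] (a smoothing average) and quad = [[1, -1], [-1, 1]], the quadratic term equals (x[n] - x[n-1])**2, so the output follows the smoothed signal in flat regions but spikes at luminance steps, mirroring the edge-enhancing role of the quadratic component described above.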
No standards are currently tagged "Document image processing"