This work focuses on the High Efficiency Video Coding (HEVC) standard as a compression method to be potentially adopted by the Digital Imaging and Communications in Medicine (DICOM) standard. We are particularly interested in improving the lossless compression efficiency of the intra coding process for grayscale anatomical medical images. We focus on intra coding due to its low complexity and outstanding...
This article describes lossless compression algorithms for multisets of sequences, taking advantage of the multiset's unordered structure. Multisets are a generalisation of sets where members are allowed to occur multiple times. A multiset can be encoded naïvely by storing its elements in some sequential order, but then information is wasted on the ordering. We propose a technique that...
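To make the abstract's point concrete: the information a naive sequential encoding wastes on ordering is the log of the number of distinct permutations of the multiset. The sketch below (an illustration of that quantity, not the authors' coding technique) computes those wasted bits.

```python
from math import lgamma, log
from collections import Counter

def ordering_bits(multiset):
    """Bits a naive sequential encoding wastes on ordering:
    log2 of the number of distinct permutations, n! / (m_1! * m_2! * ...),
    where m_i are the multiplicities of the repeated elements."""
    n = len(multiset)
    counts = Counter(multiset)
    # lgamma(k + 1) == ln(k!), so this is ln of the multinomial coefficient
    log_perms = lgamma(n + 1) - sum(lgamma(m + 1) for m in counts.values())
    return log_perms / log(2)  # convert natural log to bits
```

For a 3-element multiset with all elements distinct this gives log2(3!) ≈ 2.58 bits; when all elements are equal there is only one ordering and nothing is wasted.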
We investigate a method to improve the compression ratio for hyperspectral data compression by use of a pre-processing step that gathers together correlated pixels before the transform is applied in a KLT-JPEG2000 based compression. Using a k-means clustering algorithm, the pixels can be grouped together before the application of the transform. Some similar methods have been studied, but k-means has...
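As a minimal sketch of the pre-processing step described here (not the paper's full KLT-JPEG2000 chain), the snippet below clusters spectral pixel vectors with a tiny k-means and groups them by cluster before any decorrelating transform would be applied; the deterministic initialization is an assumption made for clarity.

```python
def kmeans(pixels, k, iters=10):
    """Tiny k-means over spectral vectors (lists of floats)."""
    centers = [list(p) for p in pixels[:k]]  # deterministic init for clarity
    labels = [0] * len(pixels)
    for _ in range(iters):
        # assign each pixel to its nearest center (squared Euclidean distance)
        for i, p in enumerate(pixels):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # move each center to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

def group_by_cluster(pixels, labels, k):
    """Gather correlated pixels together, one group per cluster."""
    return [[p for p, l in zip(pixels, labels) if l == c] for c in range(k)]
```

Each group then holds spectrally similar pixels, which is what makes the subsequent transform more effective.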
We propose a compression algorithm for the quality scores contained in FASTQ files which are generated in large volumes during high throughput sequencing. The proposed algorithm is a context dependent arithmetic coder which is based on observations of the structure of quality scores in FASTQ files. Simulation results indicate a significantly superior performance of the algorithm to the current state...
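To illustrate why context-dependent coding pays off for quality scores (this is a model-cost calculation, not the authors' arithmetic coder), the sketch below compares the bits an ideal coder would spend under an order-0 model versus an order-1 model where each score is coded in the context of its predecessor.

```python
from collections import Counter, defaultdict
from math import log2

def entropy(counts):
    """Shannon entropy in bits/symbol of an empirical distribution."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def order0_bits(q):
    """Ideal cost with a single model for the whole stream."""
    return entropy(Counter(q)) * len(q)

def context_bits(q):
    """Ideal cost with one model per preceding-symbol context,
    as a context-dependent arithmetic coder would maintain."""
    ctx = defaultdict(Counter)
    for prev, cur in zip(q, q[1:]):
        ctx[prev][cur] += 1
    return sum(entropy(c) * sum(c.values()) for c in ctx.values())
```

On a perfectly alternating stream the order-1 cost drops to zero while the order-0 cost stays at one bit per symbol, which is the structure such coders exploit.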
Although text compression can be successfully applied to markup languages, it does so without any semantic knowledge of the data types present within the markup. In this paper we illustrate how this added knowledge can be used to develop a hybrid tool which combines traditional text compression with markup-awareness to improve compression relative to existing well-known text compression tools. Our...
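One simple form markup-awareness can take (a hedged sketch, not the authors' tool) is to split the tag stream from the character data and compress the two streams separately, so each stream is more self-similar than the interleaved original.

```python
import re
import zlib

def hybrid_compress(xml_text):
    """Markup-aware sketch: compress tags and character data as
    two separate, more homogeneous streams."""
    tags = "".join(re.findall(r"<[^>]*>", xml_text))
    # a placeholder byte marks where each tag was, so positions are recoverable
    data = re.sub(r"<[^>]*>", "\x00", xml_text)
    return zlib.compress(tags.encode()), zlib.compress(data.encode())
```

A real hybrid tool would go further and model typed values (numbers, dates) inside the data stream, which is the semantic knowledge the abstract refers to.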
In this paper, a compressive detection approach for detecting the presence of multiple Frequency Hopping Spread Spectrum (FHSS) signals is introduced. The large bandwidth produced due to hopping between frequency channels makes interception of FHSS signals challenging. Conventional FHSS detection approaches use channelized radiometers which reduce the sampling rate by dividing the entire FHSS hopping...
Upon the completion of the single-layer H.265/HEVC, scalable extensions of the H.265/HEVC standard, called Scalable High Efficiency Video Coding (SHVC), are currently under development. Compared to the simulcast solution that simply compresses each layer separately, SHVC offers higher coding efficiency by means of inter-layer prediction which is implemented by inserting inter-layer reference (ILR)...
Semantics of communicated data can lead to conclusions with varying degrees of priorities. Depending on the interests of the communicating parties, some facts lead to conclusions that carry a high risk when ignored, and others may not be worth the resources to share the facts leading to those uninteresting conclusions. This paper studies the worst-case semantic data compression problem for sharing...
MPEG is currently developing a standard titled Compact Descriptors for Visual Search (CDVS) for descriptor extraction and compression. In this work, we report comprehensive patch-level experiments for a direct comparison of low bitrate descriptors for visual search. For evaluating different compression schemes, we propose a dataset of matching pairs of image patches from the MPEG-CDVS image-level...
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
This paper presents an approach for using hierarchically structured multi-view features for mobile visual search. We utilize a graph model to describe the feature correspondences between multi-view images. To add features of images from new viewpoints, we design a level-raising algorithm and the associated multi-view geometric verification, which are based on the properties of the hierarchical structure...
Key point features are very effective tools in image matching and key point feature aggregation is an effective scheme for creating a compact representation of the images for visual search. This solution not only achieves compression, but also offers the benefits of better accuracy in matching and indexing efficiency. Research is active in this area and recent results on Fisher Vector based aggregation...
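To show what "aggregation into a compact representation" means in practice, here is a VLAD-style residual aggregation sketch (a simpler relative of the Fisher Vector scheme the abstract mentions, not the cited method): all of an image's keypoint descriptors are pooled into one fixed-length vector.

```python
def aggregate(descriptors, centroids):
    """VLAD-style aggregation: sum each descriptor's residual to its
    nearest centroid, yielding one fixed-length signature per image."""
    k, d = len(centroids), len(centroids[0])
    out = [[0.0] * d for _ in range(k)]
    for x in descriptors:
        c = min(range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centroids[i])))
        for j in range(d):
            out[c][j] += x[j] - centroids[c][j]
    return [v for row in out for v in row]  # flatten to a compact signature
```

The signature's length is fixed at k*d regardless of how many keypoints the image has, which is what enables both compression and fast indexing.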
Medical images are captured in a 16-bit high-resolution grayscale format and are large, frequently reaching MBs per image and PBs for the archive. Regulatory compliance requirements make deploying new full image compression techniques difficult. Instead of forcing applications and end users to deal with the deployment complexity, we show that image data can be effectively and transparently compressed...
The computation of a peeling order in a randomly generated hypergraph is the most time-consuming step in a number of constructions, such as perfect hashing schemes, random r-SAT solvers, error-correcting codes, and approximate set encodings. While there exists a straightforward linear-time algorithm, its poor I/O performance makes it impractical for hypergraphs whose size exceeds the available internal...
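For reference, the straightforward in-memory linear-time algorithm the abstract contrasts against looks like this sketch: repeatedly remove a hyperedge that contains a degree-1 vertex, recording the removal order. (Its random accesses into the incidence structure are exactly what ruins I/O performance at external-memory scale.)

```python
from collections import defaultdict

def peel(edges):
    """Compute a peeling order of a hypergraph given as a list of
    vertex tuples. Returns the (vertex, edge) removal order, or None
    if a non-empty 2-core remains (the hypergraph is not peelable)."""
    incident = defaultdict(set)
    for e, verts in enumerate(edges):
        for v in verts:
            incident[v].add(e)
    alive = set(range(len(edges)))
    stack = [v for v in incident if len(incident[v]) == 1]
    order = []
    while stack:
        v = stack.pop()
        if len(incident[v]) != 1:
            continue  # degree changed since v was pushed
        (e,) = incident[v]
        order.append((v, e))
        alive.discard(e)
        for u in edges[e]:
            incident[u].discard(e)
            if len(incident[u]) == 1:
                stack.append(u)
    return order if not alive else None
```

Reversing the returned order gives the assignment order used by perfect-hashing and set-encoding constructions built on peelable hypergraphs.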
Accelerometer data collected from moving vehicles can be modeled as a self-similar random process. A Renyi entropy measure computed over the Wigner-Ville distribution of this non-stationary process is used to select the most relevant data samples for compression. Wavelet transform based transform coding is applied to compress the accelerometer data with minimal distortion and accurate inferences (detection...
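As a minimal sketch of the transform-coding step (a plain Haar transform with coefficient thresholding, standing in for the paper's wavelet codec), small detail coefficients are zeroed so the retained ones can be coded cheaply with little distortion.

```python
def haar(signal):
    """Full in-place Haar decomposition (length must be a power of two):
    each pass replaces pairs with their averages and half-differences."""
    out = list(signal)
    n = len(out)
    while n > 1:
        half = n // 2
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avgs + diffs
        n = half
    return out

def compress(signal, threshold):
    """Transform-coding sketch: zero out small detail coefficients."""
    return [c if abs(c) >= threshold else 0.0 for c in haar(signal)]
```

The surviving coefficients concentrate the signal's energy, which is what preserves the detection-relevant structure after thresholding.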
Recent advances in adaptive filter theory and the hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent with today's real world problems. Algorithms that process the data must keep pace with the advances in signal acquisition. Recently kernel adaptive (online) filtering...
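To ground the term "kernel adaptive (online) filtering", here is a kernel least-mean-squares (KLMS) sketch with a Gaussian kernel; the step size and kernel width are illustrative assumptions, not values from the cited work.

```python
from math import exp

class KLMS:
    """Kernel LMS: an online nonlinear filter that grows a dictionary
    of past inputs and predicts via kernel evaluations against it."""
    def __init__(self, step=0.5, width=1.0):
        self.step, self.width = step, width
        self.centers, self.weights = [], []

    def kernel(self, x, y):
        return exp(-sum((a - b) ** 2 for a, b in zip(x, y))
                   / (2 * self.width ** 2))

    def predict(self, x):
        return sum(w * self.kernel(x, c)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, d):
        """One online step: predict, then store the input with a weight
        proportional to the prediction error."""
        err = d - self.predict(x)
        self.centers.append(x)
        self.weights.append(self.step * err)
        return err
```

Because each update appends a center, practical variants add sparsification (e.g. novelty criteria) to keep the dictionary from growing without bound.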
Typical greedy algorithms for sparse reconstruction problems, such as orthogonal matching pursuit and iterative thresholding, seek strictly sparse solutions. Recent work in the literature suggests that given a priori knowledge of the distribution of the sparse signal coefficients, better results can be obtained by a weighted averaging of several sparse solutions. Such a combination of solutions, while...
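A minimal sketch of the combination idea (one plausible weighting scheme, not the cited work's estimator): candidate sparse solutions are averaged with weights proportional to how well each one explains the measurements.

```python
from math import exp

def residual_sq(A, x, y):
    """Squared residual ||Ax - y||^2 for a dense matrix given as rows."""
    return sum((sum(a * xi for a, xi in zip(row, x)) - yi) ** 2
               for row, yi in zip(A, y))

def combine(A, y, candidates, beta=1.0):
    """Weighted average of candidate sparse solutions, each weighted by
    exp(-beta * squared residual). The combination is generally no
    longer strictly sparse, matching the abstract's observation."""
    w = [exp(-beta * residual_sq(A, x, y)) for x in candidates]
    total = sum(w)
    n = len(candidates[0])
    return [sum(wi * x[j] for wi, x in zip(w, candidates)) / total
            for j in range(n)]
```

A better-fitting candidate dominates the average, while poor candidates contribute exponentially little.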
Rate-Distortion Optimization in High Efficiency Video Coding improves coding efficiency, but also imposes intensive computation on the encoder, because the complex Syntax-based context-adaptive Binary Arithmetic Coding is performed for each candidate coding configuration. We develop a classification-based regression method to derive rate models, which quickly estimate the bit cost of quantization...