## Data Mining: Multimedia, Soft Computing, and Bioinformatics

*A primer on traditional hard and emerging soft computing approaches for mining multimedia data.*

While the digital revolution has made huge volumes of high-dimensional multimedia data available, it has also challenged users to extract the information they seek from heretofore unthinkably large datasets. Traditional hard computing data mining techniques have concentrated on flat-file applications. Soft computing tools, such as fuzzy sets, artificial neural networks, genetic algorithms, and rough sets, offer the opportunity to mine a wide range of data types for a variety of vital functions by handling real-life uncertainty with low-cost solutions.

Data Mining: Multimedia, Soft Computing, and Bioinformatics provides an accessible introduction to fundamental and advanced data mining technologies. This readable survey describes data mining strategies for a slew of data types, including numeric and alphanumeric formats, text, images, video, graphics, and mixed representations thereof. Along with traditional concepts and functions of data mining, such as classification, clustering, and rule mining, the authors highlight topical issues in multimedia applications and bioinformatics. Principal topics discussed throughout the text include:

- The role of soft computing and its principles in data mining
- Principles and classical algorithms of string matching and their role in data (mainly text) mining
- Data compression principles for both lossless and lossy techniques, including their scope in data mining
- Access of data using matching pursuits, in both raw and compressed data domains
- Applications in mining biological databases

### From inside the book


able event (**symbol**) gives us more surprise and hence we expect that it might carry more information. On the contrary, the more probable event (**symbol**) will carry less information because it was expected more. Note an analogy to the concept ...
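The relation between probability and "surprise" described in this snippet is Shannon's self-information, I(s) = -log2 p(s). A minimal sketch (the function name is illustrative, not from the book):

```python
import math

def self_information(p: float) -> float:
    """Information content, in bits, of an event with probability p.

    A less probable event carries more information (more surprise);
    a more probable event carries less, because it was expected.
    """
    if not 0.0 < p <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A rare symbol carries more bits than a common one.
print(self_information(0.01))  # about 6.64 bits
print(self_information(0.5))   # exactly 1 bit
```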

n-extended source, it can be proven that E(A^n) = nE(A), where E(A) is the entropy of the original source A. Let us now consider encoding blocks of n source **symbols** at a time into binary codewords. For any e > 0, it is possible to construct a ...
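For a memoryless source, the identity E(A^n) = nE(A) can be checked numerically by enumerating all length-n blocks of i.i.d. symbols. A sketch under that assumption (function names and the example distribution are illustrative):

```python
import itertools
import math

def entropy(probs):
    """Shannon entropy E(A) = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def extended_entropy(probs, n):
    """Entropy of the n-extended source: blocks of n i.i.d. symbols.

    Each block's probability is the product of its symbols' probabilities.
    """
    block_probs = [math.prod(block) for block in itertools.product(probs, repeat=n)]
    return entropy(block_probs)

source = [0.5, 0.25, 0.25]          # an illustrative memoryless source A
print(entropy(source))              # E(A) = 1.5 bits
print(extended_entropy(source, 3))  # E(A^3) = 3 * E(A) = 4.5 bits
```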

The decoder also dynamically builds a dictionary, which is the same as that built by the encoder. Initially the dictionary contains nothing. Since the first input pair to the decoder is < 0, C(b) >, it first decodes the **symbol** b from the codeword C(b).
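The decoding scheme in this snippet is the LZ78 idea: each input pair ⟨index, symbol⟩ names a previously built dictionary phrase (index 0 being the empty phrase) plus one new symbol, and the decoder rebuilds the encoder's dictionary as it goes. A minimal sketch, assuming pairs carry already-decoded symbols rather than raw codewords C(·):

```python
def lz78_decode(pairs):
    """Decode an LZ78 stream of (index, symbol) pairs.

    The decoder rebuilds the same dictionary the encoder built:
    entry 0 is the empty phrase, and each pair (i, c) emits the
    phrase dictionary[i] + c, then appends that phrase to the
    dictionary for later pairs to reference.
    """
    dictionary = [""]  # initially empty: only the null phrase
    out = []
    for index, symbol in pairs:
        phrase = dictionary[index] + symbol
        out.append(phrase)
        dictionary.append(phrase)
    return "".join(out)

# LZ78 encoding of "babbab" yields these pairs; decoding recovers it.
print(lz78_decode([(0, "b"), (0, "a"), (1, "b"), (2, "b")]))  # → "babbab"
```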


### Contents

| Section | Page |
| --- | --- |
| Soft Computing | 35 |
| Multimedia Data Compression | 89 |
| standard | 129 |
| Copyright | |

9 other sections not shown

### Other editions - View all

- Data Mining: Multimedia, Soft Computing, and Bioinformatics, Sushmita Mitra, Tinku Acharya (Limited preview, 2005)
- Data Mining: Multimedia, Soft Computing, and Bioinformatics, Sushmita Mitra, Tinku Acharya (No preview available, 2005)