Full metadata
Title
Incorporating auditory models in speech/audio applications
Description
Following the success of perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimizing an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes solutions to the high-complexity issues that arise in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency-domain representation from its equivalent auditory model output. The first problem addresses the high computational complexity of solving perceptual objective functions, which require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for sinusoidal signals; it employs the proposed auditory-pattern-combining technique together with a look-up table that stores representative auditory patterns.
The second problem concerns obtaining an estimate of the auditory representation that minimizes a perceptual objective function and transforming the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency representation corresponding to the estimated auditory representation can be obtained. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
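The complexity bottleneck the abstract describes (every candidate solution must pass through the full auditory model before the perceptual objective function can score it) can be illustrated with a toy sketch. Everything here is hypothetical and not from the dissertation: the stand-in `auditory_model` (a magnitude spectrum with a crude compressive nonlinearity) only mimics the shape of a real filterbank/loudness pipeline, and the brute-force candidate loop is a placeholder for an actual optimizer.

```python
import numpy as np

def auditory_model(x):
    # Hypothetical stand-in for a multi-stage auditory model
    # (filterbank -> excitation -> specific loudness). The real
    # model in the dissertation is far more elaborate; this only
    # mimics its compressive character.
    spectrum = np.abs(np.fft.rfft(x))
    return np.log1p(spectrum)

def perceptual_distance(candidate, reference_pattern):
    # Perceptual objective: note that every single evaluation
    # re-runs the auditory model on the candidate signal. This
    # repeated application is the cost that pruned model stages
    # (or an inverse auditory-to-time/frequency mapping) avoid.
    return np.sum((auditory_model(candidate) - reference_pattern) ** 2)

rng = np.random.default_rng(0)
reference = rng.standard_normal(64)
reference_pattern = auditory_model(reference)

# Direct search: one full model evaluation per candidate.
candidates = [reference + 0.1 * rng.standard_normal(64) for _ in range(10)]
best = min(candidates, key=lambda c: perceptual_distance(c, reference_pattern))
```

With N candidates the model runs N times per search step; the dissertation's two strategies attack this either by making each model run cheaper (pruning) or by estimating the optimal auditory pattern once and mapping it back, so the model need not be re-applied per candidate.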
Date Created
2011
Contributors
- Krishnamoorthi, Harish (Author)
- Spanias, Andreas (Thesis advisor)
- Papandreou-Suppappola, Antonia (Committee member)
- Tepedelenlioğlu, Cihan (Committee member)
- Tsakalis, Konstantinos (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
xv, 144 p. : ill. (some col.)
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.9163
Statement of Responsibility
by Harish Krishnamoorthi
Description Source
Viewed on March 27, 2012
Level of coding
full
Note
thesis
Partial requirement for: Ph.D., Arizona State University, 2011
bibliography
Includes bibliographical references (p. 119-128)
Field of study: Electrical engineering
System Created
- 2011-08-12 04:34:29
System Modified
- 2021-08-30 01:53:16