Author: Smit, Peter
Title: Stacked transformations for foreign accented speech recognition
Publication type: Master's thesis
Publication year: 2011
Pages: vii + 54      Language: eng
Department/School: Tietotekniikan laitos (Department of Computer Science)
Main subject: Informaatiotekniikka (Information Technology) (T-61)
Supervisor: Kurimo, Mikko
Instructor: Pylkkönen, Janne
Electronic version URL: http://urn.fi/URN:NBN:fi:aalto-201207022663
OEVS: Electronic archive copy is available via Aalto Thesis Database.
Instructions

Reading digital theses in the closed network of the Aalto University Harald Herlin Learning Centre

In the closed network of the Learning Centre you can read digital and digitized theses that are not available in the open network.

The Learning Centre contact details and opening hours: https://learningcentre.aalto.fi/en/harald-herlin-learning-centre/

You can read theses on the Learning Centre customer computers, which are available on all floors.

Logging on to the customer computers

  • Aalto University staff members log on to the customer computers using their Aalto username and password.
  • Other customers log on using a shared username and password.

Opening a thesis

  • On the desktop of the customer computers, you will find an icon titled:

    Aalto Thesis Database

  • Click the icon to search for and open the thesis you are looking for in the Aaltodoc database. You can find the thesis file by clicking the link in the OEV or OEVS field.

Reading the thesis

  • You can either print the thesis or read it on the customer computer screen.
  • You cannot save the thesis file on a flash drive or email it.
  • You cannot copy text or images from the file.
  • You cannot edit the file.

Printing the thesis

  • You can print the thesis for your personal study or research use.
  • Aalto University students and staff members may print black-and-white prints on the PrintingPoint devices when using a computer with a personal Aalto username and password. Color printing is possible using the printer u90203-psc3, which is located near the customer service desk. Color printing is subject to a charge for Aalto University students and staff members.
  • Other customers can use the printer u90203-psc3. All printing is subject to a charge for non-University members.
Location: P1 Ark Aalto | Archive
Keywords: automatic speech recognition
foreign accent recognition
linear transformation
stacked transformations
Abstract (eng): Nowadays, large vocabulary speech recognizers exist that perform reasonably well under specific conditions and in specific environments.
When the conditions change, however, performance degrades quickly.
For example, when the person to be recognized has a foreign accent, the conditions may not match the model, resulting in high error rates.

The problem in recognizing foreign accented speech is the lack of sufficient training data.
If enough data of the same accent, from numerous different speakers, were available, a well-performing accented speech model could be built.

Besides the lack of speech data, there are further problems with training a completely new model.
Training a new model costs a lot of computational resources and storage space.
If speakers with different accents must be recognized, these costs multiply, as every accent requires retraining.
A common solution that avoids retraining is to adapt (transform) an existing model so that it better matches the recognition conditions.
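To make the idea of transforming an existing model concrete, the following is a minimal sketch (not code from the thesis) of an MLLR-style mean adaptation, in which every Gaussian mean vector of an already trained model is replaced by A·mu + b instead of retraining the model; the function name and the toy data are assumptions used only for illustration.

# Illustrative sketch only (not from the thesis): an MLLR-style mean
# adaptation, where every Gaussian mean of an existing acoustic model is
# replaced by A @ mu + b instead of retraining the whole model.
import numpy as np

def adapt_means(means, A, b):
    # means: (n_gaussians, dim) original model means
    # A:     (dim, dim) matrix estimated from a small amount of adaptation data
    # b:     (dim,) bias vector estimated from the same data
    return means @ A.T + b

# Toy usage; random numbers stand in for a real acoustic model.
rng = np.random.default_rng(0)
means = rng.normal(size=(4, 3))                # 4 Gaussians, 3-dim features
A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
b = 0.05 * rng.normal(size=3)
adapted = adapt_means(means, A, b)
print(adapted.shape)                           # (4, 3): same model size, shifted means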

In this thesis, multiple different adaptation transformations are considered.
Speaker Transformations use speech data from the target speaker; Accent Transformations use speech data from different speakers who have the same accent as the speech that needs to be recognized.
Neighbour Transformations are estimated with speech from different speakers who are automatically determined to be similar to the target speaker.

The novelty in this work is the stack-wise combination of these adaptations.
Instead of using a single transformation, multiple transformations are 'stacked together'.
Because all adaptations except the speaker-specific one can be precomputed, no extra computational cost arises at recognition time compared to normal speaker adaptation, and the precomputed adaptations are much more refined because they can use more and better adaptation data.
In addition, they need only a very small amount of storage space compared to a retrained model.
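As a rough illustration of the stacking idea (a sketch under the assumption that each adaptation can be written as an affine transform; the names accent, neighbour and speaker follow the thesis terminology, the rest is hypothetical), composing affine transforms yields another affine transform, so the precomputed accent and neighbour transforms can be collapsed offline and only the small speaker-specific transform has to be added at recognition time:

# Hedged sketch: stacking several affine transforms (A, b) by composition.
import numpy as np

def compose(outer, inner):
    # Apply 'inner' first, then 'outer': outer(inner(x)) is again affine.
    A2, b2 = outer
    A1, b1 = inner
    return A2 @ A1, A2 @ b1 + b2

dim = 3
rng = np.random.default_rng(1)

def random_transform():
    # Toy stand-in for a transform estimated from adaptation data.
    return np.eye(dim) + 0.05 * rng.normal(size=(dim, dim)), 0.01 * rng.normal(size=dim)

accent, neighbour, speaker = random_transform(), random_transform(), random_transform()

precomputed = compose(neighbour, accent)   # stacked offline, shared across speakers
stacked = compose(speaker, precomputed)    # speaker-specific part added at run time

x = rng.normal(size=dim)                   # one feature vector
A, b = stacked
print(A @ x + b)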

The effect of Stacked Transformations is that the models better fit the recognition utterances.
Compared to no adaptation, improvements of up to 30% in Word Error Rate can be achieved.
When adapting with only a small number (5) of sentences, improvements of up to 15% are gained.
ED: 2011-06-29
INSSI record number: 42159