Improving Speaker Recognition by Biometric Voice Deconstruction.

Mazaira-Fernandez LM, Álvarez-Marquina A, Gómez-Vilda P - Front Bioeng Biotechnol (2015)

Bottom Line: The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates compared to classical approaches. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions and on one recorded over a mobile phone network under non-controlled acoustic conditions.


Affiliation: Neuromorphic Voice Processing Laboratory, Center for Biomedical Technology, Universidad Politécnica de Madrid, Madrid, Spain.

ABSTRACT
Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have been forcibly replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved in recent years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates compared to classical approaches. A general description of the main hypothesis and of the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions and on one recorded over a mobile phone network under non-controlled acoustic conditions.



Influence of the GSE configuration on the results achieved, in terms of EER, for both male and female speakers on the development set.

Figure 7: Influence of the GSE configuration on the results achieved, in terms of EER, for both male and female speakers on the development set.

Mentions: We have also verified that the improvement derived from incorporating GSE information into the feature vector is obtained systematically and is not the result of an isolated, specific configuration. Figure 7 (upper) shows, as a solid green line, the minimum EER (y-axis) obtained when GSE is incorporated into the feature vector in the form of MFCCs for male speakers. Different numbers of MFCCs, MFCCG = {2, 4, 6, 8, 10}, have been tested, computed by applying a filter bank with different numbers of filters, FG = [4…23] (x-axis). Each point on the x-axis represents the minimum EER obtained for a specific value of FG, regardless of the MFCCG value. Figure 7 (lower) provides the same information for female speakers. Clearly, from the depicted results, the use of GSE systematically improves recognition rates regardless of the gender of the speakers.
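The evaluation metric behind Figure 7, the equal error rate (EER), can be sketched in a few lines. The code below is not from the paper; it is a minimal, illustrative implementation that sweeps candidate decision thresholds over genuine and impostor scores and returns the error rate at the point where the false-acceptance and false-rejection rates are closest to balancing:

```python
def compute_eer(genuine, impostor):
    """Equal Error Rate: the operating point at which the false-acceptance
    rate (impostors accepted) equals the false-rejection rate (genuine
    speakers rejected). Every observed score is tried as a threshold, and
    the mean of the two rates at the closest crossing is returned."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Perfectly separated score distributions yield an EER of 0
print(compute_eer([0.9, 0.8, 0.7], [0.1, 0.2, 0.3]))  # 0.0
```

In the grid search described above, a score of this kind would be computed once per (FG, MFCCG) configuration, and the minimum over the tested MFCCG values would then be plotted for each FG.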
