TY - GEN
T1 - Robust detection of voiced segments in samples of everyday conversations using unsupervised HMMs
AU - Asgari, Meysam
AU - Shafran, Izhak
AU - Bayestehtashk, Alireza
PY - 2012
Y1 - 2012
N2 - We investigate methods for detecting voiced segments in everyday conversations from ambient recordings. Such recordings contain a high diversity of background noise, making it difficult or infeasible to collect representative labelled samples for estimating noise-specific HMM models. The popular utility get-f0 and its derivatives compute normalized cross-correlation to detect voiced segments, which unfortunately is sensitive to different types of noise. Exploiting the fact that voiced speech is not just periodic but also rich in harmonics, we model voiced segments with harmonic models, which have recently gained considerable attention. In previous work, the parameters of the model were estimated independently for each frame using a maximum likelihood criterion. However, since the distribution of harmonic coefficients depends on the speaker's articulators, we estimate the model parameters more robustly using a maximum a posteriori criterion. We use the likelihood of voicing, computed from the harmonic model, as the observation probability of an HMM and detect speech using this unsupervised HMM. One caveat of harmonic models is that they fail to distinguish speech from other stationary harmonic noise. We rectify this weakness by exploiting the non-stationary nature of speech. We evaluate our models empirically on the task of detecting speech in a large corpus of everyday speech and demonstrate that they perform significantly better than the standard voice-detection algorithms employed in popular tools.
KW - life log
KW - speech detection
KW - voice detection
UR - http://www.scopus.com/inward/record.url?scp=84874223263&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84874223263&partnerID=8YFLogxK
U2 - 10.1109/SLT.2012.6424264
DO - 10.1109/SLT.2012.6424264
M3 - Conference contribution
AN - SCOPUS:84874223263
SN - 9781467351263
T3 - 2012 IEEE Workshop on Spoken Language Technology, SLT 2012 - Proceedings
SP - 438
EP - 442
BT - 2012 IEEE Workshop on Spoken Language Technology, SLT 2012 - Proceedings
T2 - 2012 IEEE Workshop on Spoken Language Technology, SLT 2012
Y2 - 2 December 2012 through 5 December 2012
ER -