Microsoft demonstrates live speech translation
We’ve seen it in the movies: instant speech translation, something we’ve more or less assumed might not be possible in our lifetime. Well, that time might soon be over.
Researchers at Microsoft Research and the University of Toronto have made a breakthrough using “a technique called Deep Neural Networks, which is patterned after human brain behavior,” with which “researchers were able to train more discriminative and better speech recognizers than previous methods,” Microsoft chief research officer Rick Rashid said.
Rashid also goes on to say the demo is the result of 60 years of work, and not only by Microsoft. The latest breakthroughs have taken place in the last two years and have reduced the software’s error rates by as much as 25 percent.
“We have been able to reduce the word error rate for speech by over 30 percent compared to previous methods. This means that rather than having one word in four or five incorrect, now the error rate is one word in seven or eight,” said Rashid.
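As a quick sanity check of Rashid’s figures: the quoted “30 percent” is a *relative* reduction in word error rate, not an absolute one. The short sketch below (an illustration, not anything from Microsoft’s system) computes the relative reduction implied by going from one error in four-to-five words to one in seven-to-eight, which does indeed come out above 30 percent.

```python
# Sanity check of the quoted figures: moving from one error in 4-5
# words to one in 7-8 implies a relative reduction in word error rate.
def relative_reduction(before, after):
    """Relative reduction when an error rate falls from `before` to `after`."""
    return 1 - after / before

for before_n, after_n in ((4, 7), (5, 8)):
    r = relative_reduction(1 / before_n, 1 / after_n)
    # e.g. 1-in-4 -> 1-in-7 is about a 43% relative reduction
    print(f"1-in-{before_n} -> 1-in-{after_n}: {r:.0%} relative reduction")
```

Both cases (roughly 43 percent and 37.5 percent) exceed the “over 30 percent” Rashid cites, so the two halves of the quote are consistent.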
Most impressive was the demonstration in which Rashid spoke English while the system simultaneously translated his words into Chinese.
“When I spoke in English, the system automatically combined all the underlying technologies to deliver a robust speech to speech experience – my voice speaking Chinese. The results are still not perfect, and there is still much work to be done, but the technology is very promising, and we hope that in a few years we will have systems that can completely break down language barriers.”