This presentation contrasts automated evaluation with human evaluation of machine translation output. We explain how automated evaluation is useful in developing MT systems and then describe the automated metrics BLEU, TER, GTM, and Meteor.
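The metrics named above can be computed with off-the-shelf libraries. As a rough illustration, here is a minimal sketch of scoring a single hypothesis with BLEU, assuming the nltk package is available; the example sentences are hypothetical and are not drawn from the presentation.

```python
# Minimal BLEU sketch (hypothetical example sentences, assuming nltk is installed).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One human reference translation (BLEU accepts several) and one MT hypothesis,
# both pre-tokenized into lists of words.
reference = [["the", "cat", "is", "on", "the", "mat"]]
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]

# Smoothing avoids a zero score when a higher-order n-gram has no match.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```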
The presenter describes the "best practices" as of 2014, when the video was recorded. Slate Desktop™ uses updated best practices based on lessons learned in the intervening years.
Published on Jul 7, 2014