This presentation contrasts automated and human evaluation of machine translation (MT) output. We explain how automated evaluation supports the development of MT systems and then describe the automated metrics BLEU, TER, GTM, and Meteor.
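To make the idea of automated evaluation concrete, here is a minimal, self-contained sketch of sentence-level BLEU, the first metric named above. It is an illustration only, not the presenter's implementation: it uses simple +1 smoothing on the n-gram precisions, whereas production toolkits offer several smoothing schemes and corpus-level variants.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (with +1 smoothing) times a brevity penalty."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped matches: each hypothesis n-gram is credited at most
        # as many times as it occurs in the reference.
        matches = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # +1 smoothing so one zero precision does not zero the whole score.
        log_prec_sum += math.log((matches + 1) / (total + 1))
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * math.exp(log_prec_sum / max_n)

reference = "the cat sat on the mat".split()
hypothesis = "the cat is on the mat".split()
print(round(bleu(reference, hypothesis), 3))
```

A hypothesis identical to the reference scores 1.0, and scores fall as n-gram overlap drops; this is the basic behavior all of the metrics above refine in different ways (TER via edit operations, GTM via precision/recall matching, Meteor via stem and synonym matches).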
The presenter discusses the “best practices” of 2014, when the video was recorded. Slate Desktop™ uses newer best practices based on lessons learned since this video was created.
Published on Jul 7, 2014