A Slate™ Desktop engine has three performance criteria in production: speed, linguistic quality and return on investment.
A SlateMT engine’s speed per segment depends on the number of words in the segment and the computer’s hardware specifications. When serving as your CAT tool’s MT provider, SlateMT returns a draft segment in less than one second. Pretranslating supported files through the dashboard is 3 to 7 times faster.
Your time is valuable. You’re probably considering SlateMT as a productivity tool. Traditional human and automatic MT evaluations are time-consuming and confusing. SlateMT has a unique Engine Summary Report that shows an engine’s “vital signs”: fast, easy indicators of the engine’s health.
You can calculate your return on investment in SlateMT. Track your vital sign scores as you work with your CAT tool. Compare your scores with and without using your SlateMT engine and you can see how SlateMT impacts your productivity.
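The comparison above reduces to simple arithmetic. The sketch below is a hypothetical illustration, not SlateMT's own ROI formula, and the throughput figures are made up for the example:

```python
# Hypothetical ROI sketch: compare translated words per hour with and
# without an MT engine. The function and the numbers are illustrative
# assumptions, not part of SlateMT.

def productivity_gain(words_per_hour_baseline, words_per_hour_with_mt):
    """Percentage change in translation throughput."""
    return 100.0 * (words_per_hour_with_mt - words_per_hour_baseline) \
        / words_per_hour_baseline

# e.g. 400 words/hour translating unaided vs. 550 words/hour
# post-editing MT drafts
print(f"{productivity_gain(400, 550):.1f}% faster")  # prints "37.5% faster"
```

Tracking the same two numbers over several projects smooths out differences between texts and gives a more trustworthy estimate of the engine's real impact.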
Human evaluations are the only reliable method to determine the linguistic quality of a translation, whether human- or machine-generated. Human evaluations, however, are slow, labor-intensive and expensive.
Several automated evaluation methods report how closely machine-translated target segments match known human-translated target segments in an evaluation set. These closeness scores are similar to CAT tool fuzzy scores, which report how closely new source segments match previously translated source segments in a translation memory.
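To make the analogy concrete, here is a minimal sketch of one way such a closeness score can be computed: word-level edit distance normalized into a percentage. This is an illustration of the general idea only, not SlateMT's or any CAT tool's actual scoring method:

```python
# Illustrative "closeness" score: word-level Levenshtein distance
# normalized to a 0-100 scale, loosely analogous to a fuzzy-match score.
# This is an assumed example, not a specific tool's algorithm.

def edit_distance(a, b):
    """Classic Levenshtein distance over token lists."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def closeness(candidate, reference):
    """Score in [0, 100]; 100 means an exact match."""
    c, r = candidate.split(), reference.split()
    if not c and not r:
        return 100.0
    dist = edit_distance(c, r)
    return 100.0 * (1 - dist / max(len(c), len(r)))

# One word differs out of six, so the score is 100 * (5/6) ≈ 83.3
print(round(closeness("the cat sat on the mat",
                      "the cat sat on a mat"), 1))
```

A fuzzy score compares a new source segment against stored source segments; an automatic MT metric applies the same kind of comparison to the target side, scoring the machine output against a reference human translation.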