A Slate Rocks customer (a translator) created a translation engine from his or her translation memories (TMs) using a personal computer. This page describes the engine and compares the translator’s Slate Desktop experience to an experience using Google’s new-and-improved neural machine translation (NMT) technology. You can read the entire report, with thirty (30) more customer experiences, by downloading the full report, Study of Machine Translated Segment Pairs.
Slate Desktop Engine Details
The customer started with translation memories in the language pair and industry of his or her work, totaling the number of segments shown as the estimated corpus size below. Slate Desktop cleaned the TMs, prepared a training corpus and built the engine. Note that these processes typically run overnight. During that processing, Slate Desktop also extracted a representative set of randomly selected segments from the training corpus.
| Engine detail | Value |
| --- | --- |
| estimated corpus size | 180,000 |
| segments per representative set | 2,372 |
| words per source segment | 22 |
| words per target segment | 25 |
Benchmark Score Comparison
The segment pairs in the representative set are representative of the translator’s daily work. Because the set reflects one translator’s experience, these scores indicate, with 95% confidence, the level of work reduction this customer will likely experience in his or her daily work using the respective MT system (Google or Slate Desktop).
| Benchmark | Google NMT | Slate Desktop |
| --- | --- | --- |
| words per MT segment | 24 | 25 |
| exact MT match (count) | 136 | 731 |
| exact MT match (percent) | 5.7% | 30.8% |
| words per exact MT match (count) | 10 | 16 |
| filtered BLEU score (no exact MT matches) | 46.44 | 70.15 |
| segments requiring edit (count) | 2,236 | 1,641 |
| character edits per segment | 47 | 40 |
| total character edits | 105,092 | 65,640 |
These scores indicate that this customer, using Slate Desktop, will likely spend significantly less time editing MT suggestions than if he or she were using Google for this work. This is because Slate Desktop creates engines with the customer’s translation memories and optimizes them to predict how the customer translates. Google, on the other hand, optimizes its NMT service for millions of customers with countless demands.
Google’s three (3) longest exact MT matches
The exact MT match (count) in the Benchmark Scores table (above) is the number of segments that Google NMT successfully matched to the translator’s actual work, i.e. Google successfully predicted the translator’s actions. The three segments in this table are the longest exact MT match segments. This translator can expect these kinds of Google NMT results while translating these kinds of projects.
| sv | en (Google and translator) |
| --- | --- |
| Texterna till förordningarna (EU) nr 550/2010, (EU) nr 574/2010, (EU) nr 632/2010, (EU) nr 633/2010 och (EU) nr 662/2010 på isländska och norska, som ska offentliggöras i EES-supplementet till Europeiska unionens officiella tidning, ska vara giltiga. | The texts of Regulations (EU) No 550/2010, (EU) No 574/2010, (EU) No 632/2010, (EU) No 633/2010 and (EU) No 662/2010 in the Icelandic and Norwegian languages, to be published in the EEA Supplement to the Official Journal of the European Union, shall be authentic. |
| Kommissionens förordning (EG) nr 327/98 av den 10 februari 1998 om öppnande och förvaltning av vissa tullkvoter för import av ris och brutet ris har ändrats flera gånger på ett väsentligt sätt. | Commission Regulation (EC) No 327/98 of 10 February 1998 opening and providing for the administration of certain tariff quotas for imports of rice and broken rice has been substantially amended several times. |
| Medlemsstaterna ska senast den 31 december 2011 anta och offentliggöra de lagar och andra författningar som är nödvändiga för att följa detta direktiv. | Member States shall adopt and publish, by 31 December 2011 at the latest, the laws, regulations and administrative provisions necessary to comply with this Directive. |
Slate’s three (3) longest exact MT matches
The exact MT match (count) in the Benchmark Scores table (above) is the number of segments that this translator’s Slate Desktop engine successfully matched to the translator’s actual work, i.e. Slate Desktop successfully predicted the translator’s actions. The three segments in this table are the longest exact MT match segments. This translator can expect these kinds of Slate Desktop results while translating these kinds of projects.
| sv | en (Slate and translator) |
| --- | --- |
| Kommissionen har i enlighet med artikel 9.1 första stycket i förordning (EG) nr 510/2006 granskat Förenade kungarikets ansökan om godkännande av ändringar av produktspecifikationen för den skyddade geografiska beteckningen ”Welsh Beef”, vilken registrerades i enlighet med kommissionens förordning (EG) nr 2400/96 i dess ändrade lydelse enligt förordning (EG) nr 2066/2002. | In accordance with the first subparagraph of Article 9(1) of Regulation (EC) No 510/2006, the Commission has examined the United Kingdom’s application for the approval of amendments to the specification for the protected geographical indication ‘Welsh Beef’ registered in accordance with Commission Regulation (EC) No 2400/96, as amended by Regulation (EC) No 2066/2002. |
| I det avseendet noterar övervakningsmyndigheten att de norska myndigheterna bara lämnat allmänna kommentarer om hur de måste genomföra åtgärderna på grund av Hurtigrutens svaga ekonomiska ställning för att se till att företaget skulle fortsätta att fullgöra den allmänna trafikplikten, eftersom det vore svårt för de norska myndigheterna att hitta ett annat företag som kan tillhandahålla tjänsten (åtminstone på kort till medellång sikt). | In that regard, the Authority notes that the Norwegian authorities have only made general remarks on how they had to implement the measures due to Hurtigruten’s weak financial position in order to ensure that it would continue to provide the public service, as it would be difficult, for the Norwegian authorities, to find another undertaking to provide the service (at least in the short to medium term). |
| Rådets förordning (EG) nr 1275/2005 av den 26 juli 2005 om ändring av förordning (EG) nr 2268/2004 om införande av en slutgiltig antidumpningstull på import av volframkarbid och smält volframkarbid med ursprung i Folkrepubliken Kina (EUT L 202, 3.8.2005, s. 1). | Council Regulation (EC) No 1275/2005 of 26 July 2005 amending Regulation (EC) No 2268/2004 imposing a definitive anti-dumping duty on imports of tungsten carbide and fused tungsten carbide originating in the People’s Republic of China (OJ L 202, 3.8.2005, p. 1). |
Glossary of Benchmark Score Terms
Our customer experience studies use these terms.
representative set
A set of segment pairs that are representative of the translator’s daily work. Segment pairs are randomly selected and removed from a translator’s translation memories. The set is not edited or manipulated to prioritize any particular kind of segment.
source language
The source (authored) language 2-letter code of the evaluation set (e.g. de, en, es, fr, sv).
target language
The target (translated) language 2-letter code of the evaluation set (e.g. ar, bg, cs, de, en, es, fr, ga, hr, it, nl, pl, ru).
subject
The subject or industry covered in the evaluation set.
estimated corpus size
The estimated number of segment pairs in the training corpus (all TMs) used to create the Slate Desktop engine.
segments per representative set
The number of segments in the representative set. This number represents a 95% confidence level relative to the total corpus size.
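The report does not show how the set size relates to the 95% confidence level. One common approach is the standard sample-size formula with a finite-population correction; the sketch below assumes a 2% margin of error and maximum variability (p = 0.5), neither of which is stated in the report.

```python
import math

def sample_size(population, z=1.96, p=0.5, margin=0.02):
    """Sample size for a given confidence level (z = 1.96 for 95%)
    and margin of error, with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2            # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # correct for finite corpus

print(sample_size(180_000))  # 2370
```

With a 180,000-segment corpus this yields roughly 2,370 segments, close to the 2,372 reported, which suggests a margin of error near 2%.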
words per source segment
The average number of words per source segment in the representative set. A low number indicates the representative set likely has a disproportionately high number of short segments (terminology or glossary entries).
words per target segment
The average number of words per target segment in the representative set. A low number indicates the representative set likely has a disproportionately high number of short segments (terminology or glossary entries).
words per MT segment
The average number of words per segment generated by the respective MT system. This indicates how closely the MT system matches the number of words per human-translated segment.
cumulative BLEU score
The BLEU score is a “likeness” match (similar to a TM fuzzy match) between all of the MT segments and the human reference translations in a set. Higher scores are better, and 100 is an exact MT match.
To create the cumulative score, the BLEU algorithm first scores the “likeness” between an MT segment and its reference segment. “Likeness” is based on the preponderance of the same words in the same order. A score of “0” means no likeness. A score of “100” means the MT segment exactly matches its reference segment (below).
The algorithm then consolidates all segment scores into a cumulative BLEU score representing the “likeness” of the entire set. This cumulative BLEU score is conceptually similar to an average of all segment BLEU scores in the set, but computationally it is different.
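As an illustration of how a single segment’s “likeness” is scored, here is a toy sentence-level BLEU in Python. This is a simplified sketch, not the production algorithm: real corpus-level BLEU sums n-gram counts across the whole set before dividing, and standard tools add smoothing and tokenization rules.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..4) times a brevity penalty, scaled to 0-100."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)   # clip counts by the reference
        precisions.append(overlap / max(sum(c.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return 100 * bp * geo

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 100.0
```

An identical segment pair scores 100, matching the definition of an exact MT match above.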
exact MT match (count)
An exact MT match segment exactly matches the reference human-translated segment, i.e. a BLEU score of 100 and an edit distance (Levenshtein) of 0.
The number of MT segments that exactly match their respective reference segment in the representative set, i.e. segments with a BLEU score of 100 and an edit-distance (Levenshtein) score of 0. Segments in this category represent pure cognitive effort for the translator to identify them as correct, without the need for mechanical work such as typing or dictation to edit them.
exact MT match (percent)
The percentage of exact MT match segments in a set, or the average percent across all representative sets in the summary. A high percentage score represents less work for a translator. Based on all customer experience studies, Google NMT scores range from 1.8% to 11.2%. Slate Desktop scores range from 20.0% to 53.7%.
words per exact MT match (count)
The average number of words per segment generated by the respective MT system, counting only the exact MT match segments. MT technologies have a reputation for performing poorly on long sentences. The difference (delta) between the words per MT segment and words per exact MT match scores shows how much the MT system degrades with long segments. A smaller delta is good and indicates the MT system performs better with longer segments.
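The three exact-match metrics above can be computed together from parallel lists of MT output and reference translations. A minimal sketch (the function name and list-based input are illustrative, not part of Slate Desktop):

```python
def exact_match_stats(mt_segments, reference_segments):
    """Return (count, percent, average words) for MT segments that are
    character-for-character identical to the translator's reference,
    i.e. BLEU score 100 and edit distance 0."""
    matches = [mt for mt, ref in zip(mt_segments, reference_segments) if mt == ref]
    count = len(matches)
    percent = 100 * count / len(mt_segments)
    avg_words = sum(len(m.split()) for m in matches) / count if count else 0
    return count, percent, avg_words

mt = ["Exact match.", "Needs an edit."]
ref = ["Exact match.", "Needs one edit."]
print(exact_match_stats(mt, ref))  # (1, 50.0, 2.0)
```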
filtered BLEU score (no exact MT matches)
Imagine a set of 10 sentences with BLEU scores (100, 100, 100, 100, 90, 65, 70, 80, 75, 40). The cumulative BLEU score (like an average) is 73. A high cumulative BLEU score is considered good, but it poorly represents the amount of editing work for a translator. Therefore, we divide scoring into two systems.
First, we report the percentage of segments that require zero editing, i.e. the exact MT match (percent) value above. In this case, 4 of the 10 sentences (40%) require no editing. Clearly, higher percents are better.
Then, we remove these segments and recalculate the filtered BLEU score using only the 6 segments that require editing. In this case, the BLEU drops from 73 (for 10 sentences) to 70 (for 6 sentences).
The filtered BLEU score is always equal to or lower than the cumulative BLEU score. This score represents the necessary editing work. Although higher BLEU scores are good, you also need to consider the delta between the cumulative BLEU score and the filtered BLEU score.
A small delta with a low percentage of exact MT match segments signals virtually every segment represents editing work for the translator.
A small delta with a high percentage of exact MT match segments signals the engine will likely serve the translator well.
A large delta with many exact MT match segments results in a lower filtered BLEU score. This signals more editing work concentrated in a smaller number of segments, so it is not as serious as a low cumulative BLEU score.
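The two-system scoring above can be sketched with the worked example’s scores. Note that a plain average stands in here for the cumulative BLEU score, which, as described above, is computed differently:

```python
def split_scores(segment_bleu_scores):
    """Two-system scoring: report the share of exact matches (BLEU 100),
    then rescore only the segments that still need editing.
    A plain average approximates the filtered BLEU score here."""
    exact = [s for s in segment_bleu_scores if s == 100]
    editing = [s for s in segment_bleu_scores if s < 100]
    exact_pct = 100 * len(exact) / len(segment_bleu_scores)
    filtered = sum(editing) / len(editing) if editing else 100.0
    return exact_pct, filtered

scores = [100, 100, 100, 100, 90, 65, 70, 80, 75, 40]
print(split_scores(scores))  # (40.0, 70.0)
```

As in the worked example, 4 of the 10 segments (40%) need no editing, and the remaining 6 score 70 on their own.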
segments requiring edit (count)
The difference between the segments per representative set and the exact MT match (count), i.e. the inverse of the exact MT match (count). A higher number indicates more work.
character edits per segment (Levenshtein)
The average edit-distance (Levenshtein) score per segment requiring editing. The edit-distance (Levenshtein) score is the number of character edits needed to transform an MT segment into the reference segment. Therefore, this number represents the average number of character edits per segment to “fix” the MT segment. Higher scores indicate more edit work is required. A score of zero (0) means the segment is an exact MT match and no edit work is required.
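The edit-distance (Levenshtein) score can be computed with the standard dynamic-programming algorithm; a minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Character edit distance: the minimum number of insertions,
    deletions, and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))            # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete a character from a
                            curr[j - 1] + 1,            # insert a character into a
                            prev[j - 1] + (ca != cb)))  # substitute (0 if equal)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

A distance of 0 means the MT segment already matches the reference, i.e. an exact MT match.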
total character edits
The sum of the edit-distance (Levenshtein) scores across a set. A higher number indicates more edit work is required.