How does Language I/O evaluate quality?
Language I/O’s core mission is to provide a leading environment for language translation, and translation quality is at the root of that mission. Language I/O continuously refines its translation methods to deliver steadily better translation quality.
- Language I/O is integrated with the world’s best neural machine translation (NMT) platforms. When a translation request reaches Language I/O servers, they intelligently select the engine with the best fluency for that particular language pair, based on your domain, profile, and preferences.
- With curated glossaries, Language I/O imposes your preferred translations for problematic terms and phrases in real time. This is executed on top of the selected NMT engine from the very start of the translation process.
- Language I/O has numerous accuracy feedback loops. Known as supervised learning feedback, these loops allow Language I/O to adjust engine selection and glossary imposition to improve accuracy. Other tools, such as Quality Rating Flags and the Translation Quality Estimate (TQE)*, let you and your agents provide direct feedback to improve future translations. And with the new Self-Improving Glossary (SIGLO, see below), Language I/O can proactively detect new terms and phrases that require a special translation for you or your industry.
(* Some tools are only available in certain integrations.)
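The flow described above — pick the best engine for the language pair, then impose glossary terms — can be sketched roughly as follows. All names, data structures, and scores here are hypothetical illustrations; Language I/O's actual engine-selection logic and APIs are not described in this article.

```python
def select_engine(language_pair, profile):
    """Pick the engine with the highest fluency score for this
    language pair and customer profile (hypothetical data shape)."""
    scores = profile.get("engine_scores", {}).get(language_pair, {})
    return max(scores, key=scores.get) if scores else "default-engine"

def apply_glossary(text, glossary):
    """Impose preferred translations for known problematic terms."""
    for term, preferred in glossary.items():
        text = text.replace(term, preferred)
    return text

# Toy profile: engine-b has performed better for English->French.
profile = {"engine_scores": {("en", "fr"): {"engine-a": 0.82, "engine-b": 0.91}}}
engine = select_engine(("en", "fr"), profile)
translated = apply_glossary("start a chat", {"chat": "conversation"})
```

In this sketch, feedback loops would update the per-engine scores over time, which in turn changes which engine `select_engine` picks.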
What are Quality Estimates?
The Quality Estimate is a range of new features that allow organizations to define quality thresholds for Translation Optimization and Machine Translation, and then evaluate their content against those thresholds.
Language I/O also evaluates quality through customer feedback, specifically the use of retranslations and flag ratings.
Why does it matter?
By defining a quality threshold and evaluating outgoing messages against it, the Language I/O QE processes help make sure that your agents and end users get the best translation quality available at that time, whether through translation optimization or by guaranteeing that the translation meets a predefined standard of quality.
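The threshold check itself is conceptually simple; a minimal sketch, assuming a quality score in the range 0–1 and a customer-defined threshold (both values below are hypothetical):

```python
TQE_THRESHOLD = 0.75  # hypothetical value; thresholds are customer-defined

def passes_quality_gate(quality_score, threshold=TQE_THRESHOLD):
    """An outgoing translation is released only if its quality
    estimate meets the configured threshold."""
    return quality_score >= threshold

passes_quality_gate(0.88)  # True: released to the agent or end user
passes_quality_gate(0.55)  # False: a fallback (e.g. another engine) is needed
```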
Do quality evaluations impact translation turnaround times?
Because Quality Estimate runs on top of the translation processes, there might be an increased latency cost associated with using this feature.
What is "quality"?
It is important to specify what exactly is evaluated when talking about Quality Estimate. Language I/O QE processes evaluate quality in terms of contextual similarity.
Here's an example with a message that contains an accidental typo that is also a real word (your/you're):
- Original message: "Hi, this is John. I'm going to look at you're chat with our chatbot so I can understand what's going on. Please give me a few seconds and I'll be right back."
- MT1: "Bonjour, c'est John. Je vais voir si tu parles avec notre chatbot pour que je puisse comprendre ce qui se passe. Donnez-moi quelques secondes et je reviens tout de suite."
The first translation engine is thrown off by the typo and misses the intent. It also mixes formal and informal styles, so the resulting translation falls below the TQE threshold. As a result, Quality Failover kicks in and requests a second translation:
- MT2: "Bonjour, c'est John. Je vais regarder votre chat avec notre chatbot pour comprendre ce qui se passe. Accordez-moi quelques secondes et je reviens tout de suite."
The second translation engine detects the meaning behind the typo and fixes the translation accordingly, so its score is higher. This is the translation that is returned to the agent.
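The failover flow in this example can be sketched as follows. The engine stand-ins, scores, and scoring function are purely illustrative; the real TQE scoring model is Language I/O's own and is not documented here.

```python
def quality_failover(message, engines, score_fn, threshold=0.75):
    """Try engines in order; return the first translation whose
    quality estimate meets the threshold, else the best-scoring one."""
    best = None
    for engine in engines:
        translation = engine(message)
        score = score_fn(message, translation)
        if score >= threshold:
            return translation  # good enough: no further engines tried
        if best is None or score > best[0]:
            best = (score, translation)
    return best[1]  # nothing met the threshold: return the best attempt

# Toy stand-ins for MT1/MT2 and the scorer in the your/you're example:
mt1 = lambda m: "translation missing the intent"
mt2 = lambda m: "translation preserving the intent"
scores = {"translation missing the intent": 0.55,
          "translation preserving the intent": 0.88}
result = quality_failover("source text", [mt1, mt2],
                          lambda src, tgt: scores[tgt])
```

Here MT1 falls below the threshold, so a second translation is requested; MT2 meets the threshold and is returned, matching the walkthrough above.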
To summarize, Language I/O's approach prioritizes the intent of the text and the preservation of meaning in the translation, over word choice or style.