In this newsletter, we share how Insights enhanced their customer experience across 11 markets using Intento Language Hub. We introduce Translation Storage for centralized translation management and a simple TM clean-up process with the Intento LQA metric. We also analyze Google’s latest translation models and explore GenAI in localization through our webinars – comparing LLMs with traditional MT and reviewing 2024 developments while looking ahead to 2025.
How Insights created a multilingual customer experience for their global expert network
75% of buyers are more likely to buy again from brands that offer customer care in their native language, stressing the importance of multilingual support. Discover how one of our clients, Insights, used the Intento Language Hub to provide their customers with a consistent language experience across 11 markets, leading to a 59% increase in user engagement on their platform. Read the full story.
Product updates
Centralize and amplify your translation management with Translation Storage
Managing translations across platforms is complex and costly. Intento Translation Storage centralizes your translations, making it easy to reuse, evaluate, and improve your content. It seamlessly connects with your translation management, customer, and employee systems to ensure faster delivery, consistent messaging, and significant cost savings. Learn more about how it works.
Ensure translations always meet your standards with clean translation memory (TM)
Translation memories can accumulate errors over time, degrading future translation quality. Keep your translation memory accurate with our simple clean-up process: export content from your TMS, run it through the Intento LQA metric to flag low-quality segments, and import the cleaned version back. It’s that easy to maintain high-quality translations! Learn more about this simple TM clean-up.
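The export–score–reimport loop above can be sketched in a few lines. This is a minimal illustration, not the actual Intento API: the `lqa_score` function below is a crude stand-in for the real LQA metric, and the threshold and in-memory segment format are assumptions for the sake of a runnable example.

```python
# Sketch of the TM clean-up loop: score each segment pair, keep only
# pairs above a quality threshold. Replace lqa_score with a real call
# to your quality-estimation service (e.g. the Intento LQA metric).

QUALITY_THRESHOLD = 0.5  # assumed cutoff; tune for your content


def lqa_score(source: str, target: str) -> float:
    """Placeholder scorer (NOT the Intento LQA metric): a crude
    length-ratio heuristic so this sketch runs standalone."""
    if not source or not target:
        return 0.0
    return min(len(source), len(target)) / max(len(source), len(target))


def clean_tm(segments):
    """Split exported TM segment pairs into kept and dropped lists."""
    kept, dropped = [], []
    for src, tgt in segments:
        bucket = kept if lqa_score(src, tgt) >= QUALITY_THRESHOLD else dropped
        bucket.append((src, tgt))
    return kept, dropped


# Toy TM export: one plausible pair, one with an empty target.
tm_export = [
    ("Add to cart", "In den Warenkorb"),
    ("Checkout", ""),  # empty target: clearly low quality
]
kept, dropped = clean_tm(tm_export)
print(len(kept), len(dropped))  # 1 kept, 1 dropped
```

In practice you would export TMX from your TMS, score each translation unit, and reimport only the `kept` pairs.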
GenAI for enterprise localization
LLMs or Traditional MT? Quality and cost insights from Intento and XTM
In our recent webinar with XTM International, we compared LLMs and Traditional MT systems, exploring their quality, adaptability, and cost. Watch the recording to learn about:
- Quality, analyzing where LLMs outperform traditional MT and how both handle errors
- Adaptability, comparing how well LLMs and traditional MT handle specialized translations
- Cost differences and when LLMs become more cost-effective
- Speed, comparing LLMs and traditional MT on large-scale tasks, along with current limitations and factors affecting performance
GenAI in Localization: 2024 in review and what to expect for 2025
Balázs Kis, Chief Evangelist at memoQ, and Konstantin Savenkov, CEO and Co-Founder at Intento, shared their vision of how GenAI is transforming enterprise localization and what’s next for the industry. While we couldn’t address all questions during the live session, our speakers have now answered the remaining ones—from evaluation methods to regulatory compliance. Read their insights and watch the full webinar recording.
A deep dive into Google’s new Translation AI models
As a company automating localization for Fortune 500 clients, we closely monitor and evaluate every significant change in translation technology.
We’ve analyzed Google’s latest translation model updates and expanded language options to understand their impact:
- What the updated Gemini models and Google MT are capable of
- Where the updated Translation LLM fits in the current AI translation landscape
- How to build an optimal automatic translation workflow using different AI models
- The cost-effectiveness of LLMs compared to traditional translation models
Keeping you updated on the latest in AI
- OpenAI announced early evaluations of their next model, o3 (there will be no o2)
- Andrew Ng and DeepLearning.AI published a new short course (just 1 hour) on the o1 model
- Google announced Gemini 2.0 Flash Thinking Mode, an experimental model that’s trained to generate the “thinking process” the model goes through as part of its response.
- Another interesting reasoning model is QwQ, from Alibaba Cloud’s Qwen team.
- Microsoft released Phi-4, the latest small language model (SLM) in the Phi family. In addition to conventional language processing, it excels at complex reasoning in areas such as math.
- NVIDIA CEO Jensen Huang, at his CES 2025 keynote, included Agentic AI among the three types of robots they are targeting. The other two are self-driving cars and humanoid robots.
- The Economist published an article: “Machine translation is almost a solved problem”