In this newsletter, we cover:
- How Strava scaled localization in 6 weeks with Intento, even with no training data
- Why requirements are the key to predictable quality
- How LLM workflows lift the NMT quality ceiling
- What you get with the Phrase + Intento integration, including requirements-driven automation, automated LQA, and ready-to-use output delivered into Phrase
- Market watch: what the latest model news means for localization teams
How Strava built an AI-driven localization program—from zero to global—in 6 weeks
With Intento, Strava went from an English-only app to global-ready in just 6 weeks—and within 90 days saw gains in revenue and active users.
Global growth shouldn’t mean slower releases. With the right localization setup, you can ship faster and keep the product experience consistent across markets.
Here’s what Strava made possible with Intento Language Hub:
- Shipped 2M+ words across 7 languages
- Built an AI-first localization workflow in 2 weeks, even with no training data
- Used AI agents to enforce glossary, tone, and formatting rules, so linguists focused on governance and brand fit instead of reworking raw output
- Ran internationalization and localization (i18n/l10n) in parallel to save months
Maximize your TMS: Build translation workflows that meet your business requirements
A translation can look “better” on generic quality metrics—and still create more work for your business.
In our blog post (based on a webinar with Daria Sinitsyna, Lead AI Engineer at Intento, and Jure Dernovšek, Solution Engineer Coordinator at memoQ), we explain how requirements-driven workflows help you:
- Understand why generic scores can reward the wrong outcomes
- Turn post-editing patterns into clear, enforceable requirements (see the sketch after this list)
- Reduce rework and make quality more predictable
Key takeaway: requirements aren’t “extra”—they’re quality.
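To make the second point above concrete, here is what an enforceable requirement can look like once a recurring post-editing pattern (a glossary term that keeps getting fixed, UI strings that grow too long) is written down as a machine-checkable rule. This is a minimal Python sketch; the glossary entry, length ratio, and example segment are hypothetical, not Intento or memoQ configuration.

```python
# Minimal sketch: recurring post-editing fixes expressed as checkable requirements.
# The glossary entry, length ratio, and example segment are hypothetical.

GLOSSARY = {"ride": "Tour"}   # source term -> required target term (EN -> DE)
MAX_LENGTH_RATIO = 1.4        # UI strings must not expand beyond this ratio

def check_segment(source: str, target: str) -> list[str]:
    """Return the list of requirement violations for one translated segment."""
    issues = []
    for src_term, tgt_term in GLOSSARY.items():
        if src_term in source.lower() and tgt_term.lower() not in target.lower():
            issues.append(f"glossary: expected '{tgt_term}' for '{src_term}'")
    if len(target) > MAX_LENGTH_RATIO * len(source):
        issues.append("length: target exceeds the allowed expansion for UI strings")
    return issues

print(check_segment("Start your ride", "Starte deine Fahrt"))
# ["glossary: expected 'Tour' for 'ride'"]
```

Once requirements are written this way, quality stops being a matter of taste: a segment either meets them or it doesn't, and the same rules can drive both automation and review.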
Translation quality with LLMs
Neural machine translation (NMT) has given localization a big productivity boost—but it also has an intrinsic quality ceiling. How do LLMs lift that ceiling, and what can you expect?
In our blog post on translation quality with LLMs, we explain:
- Why NMT alone hits a ceiling: How far you can push NMT quality with better models, data cleaning, and customization — and why, beyond a certain point, extra effort brings only small improvements.
- What changes with LLMs: How LLM-based multi-agent systems remove the built-in quality cap and let you enforce style, terminology, and compliance through prompts and system design (a minimal sketch follows this list).
- How LLM systems change economics and scope: When it makes sense to invest in multi-agent LLM workflows, how they let you move more content streams to full automation, and why they can reach levels that MT plus post-editing alone can’t.
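To illustrate the multi-agent idea from the second bullet above, here is the pattern in miniature: one step translates with the requirements in the prompt, a second reviews the draft against those requirements, and a third fixes only the flagged issues. The prompts, the requirements string, and the call_llm placeholder are assumptions for the sketch, not Intento's implementation; plug in whatever LLM client you use.

```python
# Minimal sketch of a multi-agent translation workflow: translate -> review -> repair.
# `call_llm` is a placeholder for your LLM client; prompts and requirements are illustrative.

REQUIREMENTS = "Use 'Tour' for 'ride'. Informal tone (du). Keep {placeholders} intact."

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def translate_with_review(source: str, target_lang: str) -> str:
    draft = call_llm(
        f"Translate to {target_lang}:\n{source}\n\nFollow these requirements:\n{REQUIREMENTS}"
    )
    review = call_llm(
        "List violations of the requirements in this translation, or say 'no violations'.\n"
        f"Requirements:\n{REQUIREMENTS}\nSource: {source}\nTranslation: {draft}"
    )
    if "no violations" in review.lower():
        return draft
    return call_llm(
        f"Fix only the listed issues; change nothing else.\nIssues: {review}\nTranslation: {draft}"
    )
```

The point is not the specific prompts: because requirements live in the system design rather than in the model weights, you can tighten them, add checks, or swap the underlying model without retraining anything.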
The Phrase + Intento integration
Enterprise localization teams don’t just need fluent output—they need translations that match brand voice, approved terminology, and language rules. Intento Language Hub for Phrase delivers ready-to-use translations into Phrase—translated and automatically post-edited by AI agents, and checked against your requirements with Intento LQA—so teams can scale automation without sacrificing control.
What you get with Intento Language Hub for Phrase:
- Phrase stays the system of record — manage projects, workflows, linguists/vendors, TMs, and handoffs as you do today.
- Requirements-driven automation inside Phrase — Intento runs its AI workflow around your terminology, style, and compliance rules and delivers ready-to-use output into Phrase projects.
- Automated LQA that guides improvement — Intento LQA checks output against your business and language requirements and flags what doesn’t meet them, so linguists focus on what needs attention instead of reviewing everything (see the routing sketch after this section).
- Less post-editing, more predictable quality — reduce manual effort by automating pre-translation cleanup, post-translation fixes, and quality checks, so human review is reserved for what needs it.
- Ongoing tuning from Intento experts — workflows are maintained and optimized over time as your content, languages, and expectations evolve.
If your goal is to scale automation and keep translations consistent across languages, the Phrase + Intento integration helps you move faster with predictable quality that matches your standards.
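As an illustration of the selective-review bullet above, the routing logic is simple once automated LQA attaches findings to each segment: clean segments flow straight to delivery, flagged ones go to a linguist. The segment format and queue names below are hypothetical, not the Phrase or Intento APIs.

```python
# Minimal sketch of LQA-driven review routing. Segment format and queue names
# are hypothetical, not the Phrase or Intento APIs.

def route_segments(segments: list[dict]) -> dict:
    """Split LQA-checked segments into delivery and human-review queues."""
    queues = {"deliver": [], "human_review": []}
    for seg in segments:
        # seg["issues"] is assumed to hold the LQA findings for that segment
        queues["human_review" if seg["issues"] else "deliver"].append(seg["id"])
    return queues

lqa_results = [
    {"id": "s1", "issues": []},
    {"id": "s2", "issues": ["glossary: expected 'Tour' for 'ride'"]},
]
print(route_segments(lqa_results))
# {'deliver': ['s1'], 'human_review': ['s2']}
```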
Market watch: what the latest AI news means for language AI teams
Chinese reasoning models are now credible alternatives—but only if they fit your data policy. Kimi-K2.5 (open) from Moonshot AI and Qwen3-Max-Thinking (commercial) from Alibaba point to a broader shift: strong models are coming from China, not just the usual vendors—so it’s worth comparing them on production behaviors (style, terminology, consistency), not only overall MT scores.
Translation expectations are rising. ChatGPT Translate raises the bar for how easy it should be to refine a translation, but it doesn’t solve production control. It makes iteration easy—add context, adjust tone, rerun—yet localization still needs consistent terminology, tag handling, traceability, and repeatable quality across teams and content types. That gap is why a chat UI doesn’t replace a governed translation workflow—you still need a layer that enforces consistent requirements across models and your TMS.
Top-tier models are being used more selectively. Claude Opus 4.6 handles longer tasks and adds a 1M-token context window (beta), making it attractive for revision and QA steps. But models change fast—so quality at scale comes from consistent requirements and checks, not from picking one “best” model.
Other updates
- TranslateGemma (open MT models): Google launched TranslateGemma, open models focused on multilingual and MT use cases.
- OpenAI model retirements (ChatGPT): OpenAI plans to retire GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini in ChatGPT (API availability unchanged).