In this newsletter, we cover:
- How Strava built an AI-driven localization program to scale globally in just six weeks
- Recognition from Nimdzi Insights as “Tech of the Week”
- How an agentic approach can cut post-editing by up to 95%
- Our enhanced Trados Enterprise integration, which preserves tags
- A Language Hub for MetricStream that supports 650+ languages for global risk management
- The latest AI developments in the market
How Strava and Runna built a brand-new AI-driven localization program and went global in under six weeks
At LocWorld54 Monterey, we showed how Strava took Runna from a single market to global availability in six weeks—and grew revenue and active users within three months.
Read this practical case study on fast market entry, created in collaboration with Welocalize and Cornelius Communications. You’ll learn how Strava:
- Shipped 2M+ words across 7 languages in six weeks, starting with no TMs or workflows.
- Replaced slow handoffs with parallel, cross-team work to meet the deadline.
- Used specialized AI agents to meet translation requirements without historical data.
- Kept humans focused on language governance, brand fit, and final decisions—not output.
Intento named “Tech of the Week” by Nimdzi Insights
Nimdzi Insights has named Intento “Tech of the Week,” recognizing our advances in AI integration for enterprise localization, including modules for terminology, tone, compliance, and post-editing.
This recognition highlights how AI orchestration tools are shaping the next stage of multilingual communication—moving beyond simple translation delivery to providing holistic control of content workflows across the enterprise.
AI agents for enterprise localization
Only 1–2% of your content may need human post-editing, but it can eat up 95–98% of your localization budget. Read our blog post on how to fix that with AI agents. We explain why machine translation alone can’t meet enterprise business and language requirements, even when it’s customized and “good enough,” and why teams still pay heavily for manual fixes.
You’ll see:
- The gap between data-driven translation (what MT can learn) and requirements-based translation (what your business actually needs).
- How AI agents handle instructions like style, tone, terminology, compliance, and source cleanup—so translators don’t spend time on mechanical edits.
- How an agentic approach can cut post-editing by up to 95% and let translators focus on work where human judgment matters most.
70 new DeepL languages now available in Intento Language Hub
DeepL’s expansion—adding around 70 new languages in beta—is now fully integrated into Intento Language Hub, so all newly supported DeepL languages are available to you for translation and workflows.
Protect your tags with the Intento Language Hub for Trados Enterprise
We’ve enhanced the Intento Language Hub integration with Trados Enterprise, giving you precise control over translations while keeping every tag in place.
What this means for you:
- Non-translatable items stay untouched: Serial numbers, codes, and placeholders are automatically recognized and left as they are.
- Original tags return intact: Layout and formatting markers are preserved in their original positions, so your document looks correct the first time.
- Consistent handling of complex content: The Language Hub for Trados Enterprise processes each segment according to its type, ensuring reliable results even in mixed or heavily tagged files.
The Intento Language Hub allows you to access multi-agent translation workflows directly from Trados Enterprise. Use the best MT and GenAI models for each step of your multi-agent workflow across all your projects.
Translation for MetricStream in over 650 languages
We were proud to sponsor GRC Summit Las Vegas 2025 and meet with the GRC community to discuss how to manage global risk in local languages.
Our Language Hub for MetricStream transforms how you manage risk across borders by making your enterprise risk dashboards and analytics, operational risk intelligence, and business continuity plans available in over 650 languages with real-time AI translation.
Keeping you updated on the latest AI developments in the market
New models:
- OpenAI has released GPT‑5.1, introducing two variants (Instant & Thinking) with new personality styles and adaptive reasoning time.
- Google has released Gemini 3, which it describes as its most intelligent AI model, making the Gemini 3 Pro family available in the Gemini app, AI Studio, Vertex AI and the Gemini API (gemini-3.0-pro and related models) for advanced multimodal tasks.
- xAI announced Grok 4.1, an updated model now live on grok.com, X and the mobile apps with improved creative, emotional and collaborative capabilities while maintaining Grok 4’s intelligence.
- Anthropic introduced Claude Opus 4.5, its newest flagship model, describing it as the best model in the world for coding, agents and computer use and making it available across its apps, API and major clouds.
- Mistral AI introduced Mistral 3, the next generation of its open multimodal and multilingual models, highlighted as combining frontier performance with fully open access.
- DeepSeek released DeepSeek-V3.2 and DeepSeek-V3.2-Speciale as new reasoning-first models built for agents.
- Moonshot AI released the open-weight Kimi K2 Thinking model and says it outperforms some closed models on community benchmarks, with some observers dubbing it a “DeepSeek Moment 2.0.”
Other updates:
- Meta’s FAIR team introduced Omnilingual ASR, a suite of speech-to-text models covering 1,600+ languages—including ~500 low-resource languages reportedly transcribed by AI for the first time.
- In “Visualizing Research: How I Use Gemini 3.0 to Turn Papers into Comics,” Grigory Sapunov (CTO & Co-Founder at Intento) shows how he uses the Gemini 3.0 Pro Image (“Nano Banana Pro”) model to turn research papers from arXiv into graphic-novel-style visuals. Building on this approach, in “NeurIPS 2025 Best Papers in Comics” he publishes an auto-review of the award-winning papers from the NeurIPS 2025 deep learning conference in a comics-style format.
- The Model Context Protocol team marked MCP’s first anniversary with a November 25, 2025 spec update adding task-based workflows, simpler authorization, new security and enterprise features, and improved support for agentic tool servers.
- “Compute as Teacher” proposes turning extra inference compute into reference-free supervision by generating many rollouts, synthesizing a single answer (even if all rollouts are wrong), and using auto-generated rubrics to score them.
- The “ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration” paper by NVIDIA argues that the future isn’t a single 10-trillion-parameter model that does everything; it is a Compound AI System where a lightweight “Manager” routes work to specialized “Workers.” The paper provides a blueprint for this architecture, showing that an 8B model can outperform a frontier model (GPT-5) while cutting inference costs by roughly 70%.
- Also handy: the “Agentic Design Patterns” Google Docs book.
- A UC Berkeley and Project CETI study finds that sperm whale codas exhibit structured vowel- and diphthong-like patterns that are produced, combined and exchanged in ways comparable to aspects of human speech. The researchers use generative adversarial networks (GANs) to analyze the acoustic properties of the calls.
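The Manager/Worker routing idea from the ToolOrchestra item above can be sketched in a few lines. This is an illustrative toy, not the paper’s implementation: the `Worker` type, the cost values, and the difficulty heuristic are all made-up stand-ins for the paper’s learned routing policy.

```python
# Hedged sketch of a Manager/Worker compound system: a cheap routing step
# sends easy tasks to a small, low-cost worker and hard tasks to an
# expensive frontier worker. All names and numbers are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Worker:
    name: str
    cost_per_call: float          # relative inference cost
    handle: Callable[[str], str]  # the worker's capability

def manager_route(task: str, workers: list[Worker],
                  is_hard: Callable[[str], bool]) -> tuple[str, float]:
    """Route easy tasks to the cheapest worker, hard ones to the costliest
    (standing in for 'most capable'), and return (answer, cost)."""
    chosen = (max(workers, key=lambda w: w.cost_per_call) if is_hard(task)
              else min(workers, key=lambda w: w.cost_per_call))
    return chosen.handle(task), chosen.cost_per_call

# Toy workers: a small model for routine tasks, a frontier model for hard ones.
small = Worker("small-8b", cost_per_call=1.0, handle=lambda t: f"small:{t}")
frontier = Worker("frontier", cost_per_call=30.0, handle=lambda t: f"frontier:{t}")

# Toy difficulty heuristic: long prompts count as "hard".
answer, cost = manager_route("translate this sentence", [small, frontier],
                             lambda t: len(t) > 50)
```

In a real system the `is_hard` heuristic would be replaced by a learned manager model, which is where the paper’s reported cost savings come from: most calls never reach the expensive worker.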


