Intento

Blog/Monthly Digest

April 2025: Subway’s translation ROI, requirements-driven approach, and GenAI for translation in 2025

Marta García

Product Marketing Manager at Intento

In this newsletter, we share how Subway implemented AI in their localization workflows, explain the difference between data-driven and requirements-driven approaches, compare 12 GenAI models for translation quality in 2025, introduce Gemini 2.5 Pro Preview to Language Hub, and more.

Maximize translation ROI with AI: Lessons from Subway

Read our blog post with key insights from our webinar with Carrie Fischer (Subway), John Weisgerber (XTM), and Vann Maxson (Intento) on AI implementation in enterprise localization workflows.

You’ll discover how to maximize ROI with these five practical tips:

  • Measuring success beyond cost savings
  • Selecting content that benefits most from AI
  • Overcoming security hurdles
  • Scaling for growth
  • Staying ahead with continuous technology evaluation

Data-driven vs requirements-driven translation

In our latest blog post, we examine three critical limitations of traditional data-driven translation:

  • Quality plateaus despite more data
  • Inability to capture specific requirements
  • Poor adaptation to changing standards

As an alternative, we introduce requirements-driven translation, which starts with defined translation requirements, implements automatic checks, and deploys AI agents to fix the gaps. Our approach enables fully automated translation of high-volume content while meeting all requirements—with several successful implementations already in production!
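The loop described above — translate, run automatic checks against explicit requirements, and let an agent repair any failures — can be sketched in a few lines. This is a minimal, hypothetical illustration: the glossary check and the "agent" step are toy stand-ins, not the Intento API.

```python
def glossary_check(text, glossary):
    """Return glossary entries whose required target term is missing."""
    return [(src, tgt) for src, tgt in glossary.items() if tgt not in text]

def agent_fix(text, failures):
    """Toy 'AI agent' repair step: enforce each missing glossary term.

    A real agent would rewrite the sentence; here we just append the term.
    """
    for _src, tgt in failures:
        text += f" ({tgt})"
    return text

def requirements_driven_translate(raw_translation, glossary, max_passes=3):
    """Accept a translation only once all automatic checks pass."""
    text = raw_translation
    for _ in range(max_passes):
        failures = glossary_check(text, glossary)
        if not failures:
            return text  # all requirements met: safe to auto-publish
        text = agent_fix(text, failures)
    raise RuntimeError("Checks still failing; route to human review")

# Usage: require the product name to survive translation untouched.
glossary = {"Language Hub": "Language Hub"}
result = requirements_driven_translate("Willkommen bei LangHub", glossary)
```

The key design point is that quality is defined by explicit, checkable requirements rather than inferred from training data, so failures are detected and repaired automatically instead of surfacing downstream.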

Generative AI for Translation in 2025

Explore our detailed comparison of the 12 newest LLMs for translation quality.

We tested the latest models from OpenAI, Google, Anthropic, and DeepSeek across general, legal, and healthcare content.

Key findings:

  • GPT-4.5 and o1 consistently outperform competitors
  • Reasoning models have the highest translation latency
  • Legal content remains challenging for all models
  • More detailed prompts impact performance differently across models

See the complete data and discover which model best fits your translation requirements.

Try Gemini 2.5 Pro Preview, now in Language Hub

We’ve added Gemini 2.5 Pro Preview to Language Hub as our newest MT provider. This model delivers exceptional results for enterprise localization, bringing powerful reasoning capabilities to meet specific requirements for precision and consistency.

Gemini 2.5 models reason through a problem before responding, delivering significantly improved accuracy and performance.

Deliver global service in local languages with Language Hub for ServiceNow

Integrated with Language Hub, ServiceNow speaks every user’s language – from IT support to HR policies and customer service. Through real-time translation of every interaction, you create an authentic language experience across departments while keeping operations simple.

Everyone connects in their native language:

📌 Global IT teams instantly responding to worldwide support requests
📌 HR teams managing policies for employees across all regions
📌 Support agents helping customers across continents

All requests and responses happen in each user’s language, which drives higher satisfaction rates, faster service delivery, and efficient global operations – without growing your team or adding complexity.

Keeping you updated on the latest AI developments in the market

Major AI providers continue to release closed models with significant performance improvements and expanded capabilities:

  • Google announced Gemini 2.5 Flash in addition to Pro
  • OpenAI announced o3 and o4-mini
  • OpenAI announced the GPT-4.1 family of API-only models, outperforming GPT‑4o and GPT‑4o mini across the board
  • OpenAI has begun deprecating GPT‑4.5 Preview in the API, as GPT‑4.1 offers improved or similar performance on many key capabilities at much lower cost and latency. GPT‑4.5 Preview will be turned off on July 14, 2025.
  • Tencent’s new Hunyuan-T1 model claims performance on par with o1 and DeepSeek-R1

The open-source AI landscape is expanding rapidly with increasingly multilingual capabilities and broader language support across multiple providers:

  • Meta announced their Llama 4 family of models, which includes the multimodal Llama 4 Scout, Llama 4 Maverick, and the upcoming Llama 4 Behemoth. Llama 4 enables open-source fine-tuning efforts by pre-training on 200 languages, including over 100 with over 1 billion tokens each and 10x more multilingual tokens overall than Llama 3.
  • DeepSeek updated their V3 model with the improved DeepSeek-V3-0324 version
  • Google’s Gemma 3 family of open models offers out-of-the-box support for over 35 languages and pre-trained support for over 140 languages.
  • OpenAI plans to release their first open language model since GPT‑2 in the coming months and gather feedback from the community.
  • Alibaba Cloud published their Qwen3 family of models. Qwen3 models support 119 languages and dialects.

New AI standards are emerging as researchers share insights into the current and future state of artificial intelligence.



Intento — your compass in a forest of Machine Translation