From service desk to strategic localization partner in the AI era

Vlada Klimova

Head of Customer Support at Intento

AI didn’t just increase translation volume—it multiplied the number of people and systems producing language.

Multilingual content is now created across more teams, tools, and workflows. Without a shared governance layer to keep tone, brand protection, and cultural fit consistent, output drifts—and localization ends up fixing issues downstream.

This post is adapted from a webinar conversation between Hilary Atkisson Normanha (Sr. Product Manager, Localization & Internationalization at Spotify) and Ekaterina Syromyatnikova (VP of Customer Success at Intento). Their core message: localization can’t stay a delivery function. It needs to own governance—set requirements, define how quality is measured, and influence AI workflows before multilingual content ships.

Content is scaling faster than standards

A decade ago, localization teams controlled most multilingual output because they owned the pipeline. AI broke that pipeline into pieces.

Across the org, teams connect to different models, write their own prompts, and set their own quality thresholds, so users see inconsistent language across the product.

Hilary summed up what’s missing: “We don’t have a layer of governance that can ensure consistent tone, brand protection, cultural fit, glossaries, style guides, all of those things.” Without a shared layer that applies the same rules everywhere, quality turns into rework.

Governance has to move upstream

The fix isn’t more post-editing; it’s influencing how content gets produced in the first place.

Hilary described her work as “bridging the gap between the localization team and the more technical side of the business.” That bridge matters because language is now generated by systems, not handed off as a single, controlled stream.

In practice, upstream governance comes down to a few early decisions (a sketch of what a shared policy might encode follows the list):

  • which models and connectors teams use
  • how prompts and guardrails are defined
  • where terminology, tone, and safety guidance live
  • how quality is evaluated before content ships
  • which content types should not be AI-generated at all
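
To make these decisions concrete, here is a minimal sketch of what a shared governance policy might encode if it lived in one place. This is a hypothetical illustration in Python; every name, path, and threshold is an assumption, not any specific product’s configuration.

```python
# Hypothetical example only: what a shared governance policy might
# encode if the decisions above lived in one place. Names and values
# are illustrative assumptions, not a real product's configuration.

GOVERNANCE_POLICY = {
    # which models and connectors teams may use
    "approved_models": ["model-a", "model-b"],
    # how prompts and guardrails are defined
    "prompt_guardrails": {
        "system_prefix": "Follow the brand style guide and glossary.",
        "blocked_topics": ["medical advice", "legal claims"],
    },
    # where terminology, tone, and safety guidance live
    "language_assets": {
        "glossary_uri": "s3://loc-assets/glossary.tbx",
        "style_guide_uri": "s3://loc-assets/style-guide.md",
    },
    # how quality is evaluated before content ships
    "evaluation": {
        "sample_rate": 0.05,        # share of output reviewed by humans
        "min_quality_score": 0.85,  # threshold on the team's chosen metric
    },
    # which content types should not be AI-generated at all
    "never_ai_generated": ["legal", "safety", "crisis communications"],
}
```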

When localization owns those inputs, teams can move fast without drifting apart.

Speak in outcomes, not localization vocabulary

Most localization teams can explain what’s wrong, but they often frame it in internal terms.

Hilary put it simply: “The language that we’re speaking is not the language that they’re speaking. We’re saying things like glossaries and tone, and they’re saying, ‘We don’t know what you’re talking about.’”

Ekaterina made the practical point: the shared language with leadership is impact, meaning time saved, risk reduced, and growth enabled. That’s what gets attention in planning cycles.

So the business case starts where the stakeholder already lives:

  • Engineering: time lost to rework and fixes caused by language issues
  • Marketing: engagement and conversion across regions
  • Support: deflection and time-to-resolution by locale
  • Product: activation, retention, search success, time in app

Then you propose a governance change that moves their metric, not a quality initiative that only makes sense inside localization.

Use “good enough” metrics, then validate

This is where teams often stall: they try to prove a clean causal link between language quality and business results, but the data is partial, shared, or owned elsewhere.

Hilary’s point wasn’t to lower standards. It was to stop waiting for perfect proof before taking action. “Perfectionism is getting in our way as an industry.” Leaders plan with assumptions and ranges; your proposal can do the same if you’re transparent about what you’re assuming and how you’ll check it.

Engineering time is a practical starting point because it’s expensive and visible. Hilary’s approach is simple: “Get a few metrics, create a ballpark number, do a projection, and say we’re going to save approximately X amount of engineering time.”

A credible “good enough” method looks like this (a worked sketch of the arithmetic follows the list):

  • pull 2–3 recent examples of language-related rework (bugs, rollbacks, re-translation cycles)
  • estimate hours spent across roles (engineering, PM, QA, localization)
  • multiply by frequency per quarter
  • convert to cost using internal rates or a standard estimate
  • present a range (low / likely / high) and state assumptions clearly
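
Here is the arithmetic as a minimal Python sketch. Every number is a placeholder assumption; substitute your own incidents, frequencies, and rates.

```python
# Minimal ballpark sketch of the estimate described above.
# Every number below is a placeholder assumption, not real data.

incidents = [
    # hours spent per role on each recent language-related rework example
    {"engineering": 16, "pm": 4, "qa": 6, "localization": 8},
    {"engineering": 24, "pm": 2, "qa": 4, "localization": 12},
    {"engineering": 10, "pm": 3, "qa": 5, "localization": 6},
]

avg_hours = sum(sum(i.values()) for i in incidents) / len(incidents)

frequency_per_quarter = 6  # assumed recurrence of similar incidents
blended_rate = 95          # assumed internal cost per hour, in USD

likely = avg_hours * frequency_per_quarter * blended_rate
low, high = likely * 0.5, likely * 1.5  # a range, not a point estimate

print(f"Estimated quarterly cost of language rework: "
      f"${low:,.0f} (low) / ${likely:,.0f} (likely) / ${high:,.0f} (high)")
```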

That’s usually enough to secure a pilot—and the pilot is where you earn precision.

Your expertise is the missing input

Many language professionals assume the technical org understands multilingual constraints. In Hilary’s experience, it often doesn’t.

“When I start talking to them about why the LLM isn’t doing well in this language, they’re shocked.” That surprise is a signal: you have expertise that affects product performance, and it often isn’t represented where decisions are being made.

It shows up in the details that shape the user experience, and AI often gets them wrong:

  • plurals, gender, and inflection rules
  • placeholders, formatting, and UI constraints
  • locale sorting and collation
  • terminology consistency at scale
  • cultural fit and safety boundaries
  • evaluation frameworks for multilingual output

Owning that expertise—then translating it into product and engineering terms—is how localization becomes a strategic partner.

Stop behaving like a service desk

If localization behaves like an intake queue, it will be treated like one.

Hilary named the pattern: localization has often been positioned “as a service desk.” That posture works when there’s one pipeline. It fails when content generation is decentralized.

Her alternative is a useful reframing: treat localization as a “company inside a company.” “Our customers are the other people inside the company.” That changes the job from delivering outputs to driving adoption of better ways to produce multilingual content.

A practical pattern follows:

  • start with what teams already want (speed, coverage, automation)
  • get into the workflow
  • add evaluation and standards once you’re inside
  • expand from one team to standardization across teams

Go to the right level, then expand sideways

In a large org, good ideas often die because they’re pitched to the wrong person. You get agreement, then nothing moves.

Hilary’s advice is direct: “You need to go to the decision-maker.” You may still get shut down at first if you’re pitching at the wrong level or without a clear metric. Lean on peers to learn the language and pain points, ask your manager to open doors, and take the proposal to a decision-maker who can make the tradeoff.

That’s also how you avoid getting stuck in one-off fixes. If no decision-maker backs the change, localization ends up solving the same problems team by team. With a decision-maker’s support, you can standardize how multilingual content is produced, so quality doesn’t depend on which team built the workflow.

Cultural nuance needs checks and product signals

When meaning depends on cultural references—images, memes, local context—AI can drift. The fix isn’t debate after release. Set requirements up front and watch the results.

Hilary described two layers: sampling checks that verify adherence to the context and guidance you provide, and real-world metrics that show whether users got what they needed. “You can also use real-world metrics. Did they click on anything?” When the output misses, users abandon, refine searches, or disengage—and that behavior becomes evidence stakeholders respect.
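
As an illustration of the sampling layer, here is a minimal sketch. The data shape, the field names, and the glossary check are assumptions made for the example, not a description of any team’s actual tooling.

```python
# Illustrative sketch of a sampling check over AI-generated output.
# The data shape and the adherence check are assumptions for the example.
import random

def sample_for_review(items, rate=0.05, seed=42):
    """Draw a reproducible sample of generated items for human review."""
    rng = random.Random(seed)
    k = max(1, int(len(items) * rate))
    return rng.sample(items, k)

def glossary_issues(item, glossary):
    """Flag source terms whose required target form is missing."""
    return [(src, tgt) for src, tgt in glossary.items()
            if src in item["source"] and tgt not in item["target"]]

# Placeholder data; in practice this comes from your content pipeline.
items = [
    {"source": "Create a playlist", "target": "Crea una playlist"},
    {"source": "Create a playlist", "target": "Crea una lista de reproducción"},
]
glossary = {"playlist": "lista de reproducción"}  # assumed required term

for item in sample_for_review(items, rate=1.0):
    print(item["target"], glossary_issues(item, glossary))
```

Sampling shows whether output followed your guidance; engagement signals like clicks show whether users got what they needed. Together they turn cultural fit from a debate into something measurable.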

What to learn next without becoming an engineer

AI changes weekly, and that can make people feel permanently behind.

Hilary’s advice is to drop the fear response and focus on adjacent areas where localization expertise creates leverage. “Let go of that fear because you actually have a lot of knowledge already.” The highest-return areas are multilingual LLM behavior, evaluation frameworks, context injection (terminology and style constraints), taxonomy, sampling strategies, and quality thresholds by content type.

The shift that matters

AI will put more multilingual content into production, so the key is keeping it consistent.

Localization can either fix issues after release or lead governance upfront—setting language and business requirements, tying decisions to stakeholder metrics, and shaping the systems that generate language.

You can watch the full webinar recording here.
