In this newsletter, we tackle key questions about LLMs in machine translation from our recent webinar, share how Bayer connects 19,000 employees with one AI agent in nine languages, introduce new Gemini and Claude models, and highlight our Cognigy, Confluence DC, and Jira DC integrations for real-time translation across global teams.
LLMs in Machine Translation
In our recent webinar, “Debunking Myths about LLMs in Machine Translation,” Grigory Sapunov (CTO of Intento, Google Developer Expert in Machine Learning with a Ph.D. in AI) and Andrzej Zydroń (CIO of XTM International with over 45 years of localization expertise) discussed how LLMs are transforming translation workflows, addressing key concerns about costs, security, and implementation. They also answered the most pressing questions from the audience.
Multilingual AI Agents transform customer and employee experience
Why your Contact Center AI agents need to speak multiple languages
When Contact Center AI agents speak your customers’ language, CSAT scores jump, and costs drop – but managing AI agents for each language? That’s the old way. See how a single multilingual AI agent boosts operational efficiency.
Bayer empowers 19,000 employees with one AI Agent speaking 9 languages
Learn how Bayer, a global pharma leader, transformed their employee experience by integrating Language Hub in Cognigy. They now reach teams worldwide in their native languages.
Product updates
New models now available in Language Hub
- Gemini 2.0 Flash (popular for high-volume tasks, with multimodal reasoning and a 1M-token context window), Gemini 2.0 Flash-Lite (Google’s most cost-efficient model yet), and Gemini 2.0 Pro (excels at coding and complex prompts).
- Claude 3.7 Sonnet: Anthropic’s most intelligent model yet and the first hybrid reasoning model on the market, delivering both near-instant responses and extended, step-by-step thinking.
- Anthropic models are now also available through the Model Garden on Google Vertex AI.
Transform your Cognigy AI Agents into multilingual virtual assistants
Cognigy integrated with Language Hub enables your AI agents to speak your customers’ native languages – whether they’re seeking support or receiving personalized messages. Through real-time translation, you create authentic conversations customers love:
- Your customers speak their language while agents work in theirs
- Your virtual assistants scale 24/7 support globally from a single language version
- Your sales reach customers in local languages while preserving your style and tone of voice
All while managing conversations from one place. The result is higher satisfaction rates, faster sales cycles, and efficient global growth.
Improve your global teamwork with multilingual knowledge sharing in Confluence DC
Our Fortune 500 clients prove it daily: effective communication drives business success. That’s why we created Language Hub for Confluence – enabling global teams to collaborate in their native languages in shared spaces.
Transform your international teamwork through seamless knowledge sharing in Confluence, where everyone can create pages, leave comments, search, and read in any language. With real-time translation built into the platform, your workspace becomes a unified hub for multilingual collaboration while maintaining a single source of truth – simple and effective.
Accelerate your global teamwork with real-time issue translation in Jira DC
Language Hub for Jira instantly translates issues, comments, and custom fields to:
- Drive effective collaboration with everyone accessing the knowledge base in their native language
- Boost project velocity by eliminating manual translations in your Scrum and Kanban workflows
- Scale seamlessly as your team expands while maintaining standardized summaries, descriptions, comments, and custom fields across projects
Keeping you updated on the latest AI developments in the market
- Exploring DeepSeek with Grigory Sapunov, CTO and Co-founder of Intento:
- DeepSeek moment: A high-level overview of what made DeepSeek special and which models are available.
- DeepSeek-V3: Technical Details: A look inside the DeepSeek-V3 architecture: Multi-head Latent Attention (MLA), DeepSeekMoE, Multi-Token Prediction (MTP), FP8 training, suggestions on hardware design, and more.
- DeepSeek-V3: Training: The training procedure for DeepSeek-V3-Base and DeepSeek-V3: pre-training, post-training, Group Relative Policy Optimization (GRPO), and more.
- DeepSeek-R1: Open model with Reasoning: The DeepSeek-R1 reasoning model, similar to OpenAI o1/o3 or Google Gemini 2.0 Flash Thinking. DeepSeek-R1-Zero, DeepSeek-R1, and DeepSeek-R1-Distill models are covered here.
- The QwQ-32B reasoning model from Alibaba Cloud offers performance comparable to DeepSeek-R1 at a much smaller size: 32 billion parameters versus DeepSeek’s 671 billion.
- Anthropic added new Claude models and instances:
Anthropic:
- model claude-3-7-sonnet-20250219
- model claude-3-7-sonnet-latest
- instance ai.text.translate.anthropic.claude-3-7-sonnet-20250219
Google Vertex:
- model claude-3-7-sonnet@20250219
- instance ai.text.translate.google.vertex.anthropic.claude-3-7-sonnet-20250219
- OpenAI updated their models:
Added:
- gpt-4.5-preview
- gpt-4.5-preview-2025-02-27
- gpt-4o-2024-11-20
- o1-2024-12-17
- o1-preview
- o1-preview-2024-09-12
- o3-mini
- o3-mini-2025-01-31
Removed:
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-16k-0613
- gpt-4-32k
- gpt-4-32k-0613
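For API users, the instance IDs listed above are the handles you reference when pinning a request to a specific model. As a minimal sketch (the payload shape and field names here are assumptions for illustration; check the official API reference for the exact request format), selecting the new Claude 3.7 Sonnet instance might look like this:

```python
def build_translate_request(text: str, to_lang: str, instance_id: str) -> dict:
    """Assemble a JSON-serializable translation request that pins a
    specific provider instance. The "context"/"service" layout is an
    assumed example structure, not a confirmed API contract."""
    return {
        "context": {"text": text, "to": to_lang},
        "service": {"provider": instance_id},
    }

# Pin the request to the newly added Claude 3.7 Sonnet instance.
payload = build_translate_request(
    "Hello, world!",
    "de",
    "ai.text.translate.anthropic.claude-3-7-sonnet-20250219",
)
```

The payload would then be POSTed to the translation endpoint with your API key; swapping in the Vertex-hosted instance ID (`ai.text.translate.google.vertex.anthropic.claude-3-7-sonnet-20250219`) routes the same request through Google Vertex AI instead.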