In this newsletter, we cover: 110 new languages available on our platform via the Google API, 26 of them exclusive to us; the Intento LQA metric for identifying potential translation errors; upgrades to Translation Storage for maintaining high-quality translation memory; and the new Translator for Confluence. We also share how you can easily add AI to your content workflows with our Language Skills.
Boost your content workflows with Language Skills in Intento GenAI Portal
We developed the GenAI Portal to help you easily add AI to your content workflows to automate repeatable tasks. Learn how our Language Skills can help solve typical content issues for different teams.
Product updates
Google added 110 new languages for translation via API, 26 of them new to our platform, including 8 dialects
These additions are exclusive to us as they’re not available through other models on our platform. They include rare languages like Afar and Hunsrik and more common ones like Fulah and Zapotec.
To better serve global communities, we’ve also added support for Betawi, Chiga, Dinka, Tulu, and many more. Our dialect offerings now include Ndau (Zimbabwe), Portuguese (Portugal), and Persian (Afghanistan).
For improved accessibility, we’ve added alternative alphabets, including Berber (Latin), Bambara (N’Ko), Malay (Arabic), and Panjabi (Arabic).
Whether you’re connecting with Batak Toba speakers, need Kituba translation, or require localization in Kok Borok, we have you covered. Book a demo to learn more about our comprehensive language offerings.
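For developers, the new languages are reachable through the same API call as any other language. Below is a minimal sketch of building a request body for Google Cloud Translation's v3 `translateText` method; the field names follow Google's public REST reference, and the `aa` ISO 639-1 code for Afar is an assumption for illustration, not something specified in this newsletter.

```python
import json

def build_translate_request(texts, target_lang, source_lang=None):
    """Hypothetical helper: build a Cloud Translation v3 translateText body."""
    body = {
        "contents": list(texts),           # one or more strings to translate
        "targetLanguageCode": target_lang,
        "mimeType": "text/plain",
    }
    if source_lang:
        body["sourceLanguageCode"] = source_lang  # omit to auto-detect
    return body

if __name__ == "__main__":
    # "aa" is the ISO 639-1 code for Afar (assumed here for illustration)
    print(json.dumps(build_translate_request(["Hello, world"], "aa"), indent=2))
```

The returned dictionary would be POSTed to the `projects.locations:translateText` endpoint with your project credentials.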
Easily find translation errors with the Intento LQA metric
Our Translation Storage saves you money by reusing translations instead of starting from scratch every time. We’ve made it even better by helping you find translation errors. With our new Intento LQA metric, which combines the Multidimensional Quality Metrics (MQM) framework with GPT-4o, you can now evaluate translation accuracy, fluency, and grammar quickly and cost-effectively.
This means you can direct your linguists to focus on the content that really needs their attention, saving time and money. We’ve already tested this in our latest State of MT Report and in a study presented at the AMTA 2024 conference. The results show it’s much more accurate at catching critical and major translation errors, with fewer false positives. Book a demo now to learn more!
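To make the idea concrete, an MQM-style LQA check with a general-purpose LLM can be sketched as below. This is only an illustration, not Intento's actual metric: the category list, severity scale, prompt wording, and the `build_lqa_messages` helper are all assumptions.

```python
# Illustrative MQM-style categories and severities (assumed, not Intento's).
MQM_CATEGORIES = ["accuracy", "fluency", "grammar", "terminology"]
SEVERITIES = ["neutral", "minor", "major", "critical"]

def build_lqa_messages(source, translation, src_lang, tgt_lang):
    """Build a chat payload asking an LLM to list MQM-style errors."""
    system = (
        "You are a translation quality evaluator. For each error in the "
        f"translation, report its category ({', '.join(MQM_CATEGORIES)}), "
        f"severity ({', '.join(SEVERITIES)}), and the offending span. "
        "Respond with a JSON list; return [] if there are no errors."
    )
    user = (
        f"Source ({src_lang}): {source}\n"
        f"Translation ({tgt_lang}): {translation}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

The resulting list can be passed to an OpenAI-compatible chat endpoint, e.g. `client.chat.completions.create(model="gpt-4o", messages=...)`, and the parsed error list used to route only problematic segments to human linguists.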
Maintain high-quality TM with Translation Storage
We’ve upgraded our Translation Storage by adding the Intento LQA metric and support for TMX files. Now, you can import your approved translations into Storage, automatically clean them using Intento LQA, and leverage them in real-time translation scenarios like website translation or document translation through the Translation Portal. Plus, you can export the clean translation memory and use it in your translation management system.
This update tackles a common challenge in translation management: the gradual buildup of errors in TM that can affect translation accuracy over time. Keeping your TM accurate requires regular cleanup. This process used to take weeks or months, but now it can be done in minutes. Our update also cuts down on manual review, saving time and money. Book a demo to learn how to maintain high-quality TM.
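As an illustration of the TMX side of this update, the sketch below reads language pairs out of a TMX document using Python's standard library. `read_tmx_pairs` is a hypothetical helper, not part of Intento's product; the cleanup step itself (scoring each pair with an LQA metric) is only indicated by a comment.

```python
import xml.etree.ElementTree as ET

# TMX marks segment language with xml:lang; ElementTree expands the prefix.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx_pairs(tmx_text):
    """Extract one {lang: segment} dict per <tu> in a TMX document."""
    root = ET.fromstring(tmx_text)
    pairs = []
    for tu in root.iter("tu"):
        entry = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(XML_LANG) or tuv.get("lang")  # older TMX uses lang
            seg = tuv.find("seg")
            if lang and seg is not None:
                entry[lang] = "".join(seg.itertext())
        # a real cleanup pass would score each pair here (e.g. with LQA)
        pairs.append(entry)
    return pairs
```

Each extracted pair could then be kept or dropped based on its quality score before re-exporting a clean TMX for your translation management system.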
Make Confluence instantly multilingual with Intento’s Translator for Confluence
We’ve expanded our range of Atlassian solutions and made it easier for everyone to communicate and share knowledge in Confluence Data Center with our new Translator for Confluence.
Your team needs to access content every day, and when people speak multiple languages, it is hard to keep up with the demand for translation. Now, with the Intento Translator for Confluence, whenever someone writes, searches, or reads Confluence pages or leaves comments, everything is automatically translated in real time while preserving your terminology, style, and tone of voice. This means everyone can access and search for content in the language they are most comfortable with, making it easier to work together. Book a demo today to learn more.
We shared our new study, Comparative Evaluation of LLMs for Linguistic Quality Assessment in MT, at GALA
We’ve expanded our study on using AI to evaluate Machine Translation. This time, we’ve tested models from OpenAI, Google, Anthropic, and open-source options across three language pairs and eight domains.
Our goal is to find out how well various large language models perform translation quality checks. We’re measuring how closely they match human judgments on error types, severity, and overall quality scores. The results will help create a more efficient quality assessment process.
We hope to share more details soon about which AI models best evaluate machine translation quality across multiple languages, new tips for building a more efficient quality check system, and much more.
Keeping you updated on the latest in artificial intelligence
- New OpenAI models: o1-preview and o1-mini with better reasoning.
- New Google production-ready Gemini models: gemini-1.5-pro-002 and gemini-1.5-flash-002 with a reduced price, higher rate limits, and faster output. Everything is available on the Intento MT and GenAI platforms.
- Fine-tuning is now available for GPT-4o and GPT-4o mini on all paid usage tiers, helping you achieve higher performance at a lower cost for specific use cases. Fine-tuning for vision models is also now available.
- A new Model Distillation offering provides developers with an integrated workflow to manage the entire distillation pipeline directly within the OpenAI platform. It involves fine-tuning smaller, cost-efficient models using outputs from more capable models, allowing them to match the performance of advanced models on specific tasks at a much lower cost.
- OpenAI announced a public beta of the Realtime API. It currently supports text and audio as both input and output.
- OpenAI now supports Prompt Caching, which allows developers to reduce costs and latency by reusing recently seen input tokens. Anthropic Claude and Google Gemini also support a similar feature.
- The Gemini API supports queries in 100+ additional languages. Gemini 1.5 Flash tuning is now available to all developers.
- Google’s Pathways Language Models (PaLM) are deprecated. As of October 9, 2024, you can no longer access these models from new Google Cloud projects, and access for all projects will be removed on April 9, 2025. Google recommends migrating to Gemini.
- Anthropic proposed Contextual Retrieval, which improves RAG quality significantly.
- Anthropic announced the Claude Enterprise plan, which includes an expanded 500K context window, more usage capacity, and enterprise-grade security features.
- Cohere released a new (V2) version of their APIs with better developer experience.
- Salesforce announced Agentforce to help create autonomous AI agents.
- Meta released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) for edge and mobile devices.
- AI21 released the Jamba 1.5 family of open models with long effective context handling, good speed, and quality. The models are built on the hybrid SSM-Transformer architecture.
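As a small illustration of the fine-tuning item above: OpenAI's fine-tuning API takes chat-format training data as a JSONL file with one `{"messages": [...]}` object per line. The sketch below prepares such lines; `to_finetune_lines` and the example pairs are hypothetical, and the model snapshot you fine-tune against is configured when you create the job.

```python
import json

def to_finetune_lines(examples, system_prompt):
    """Turn (user, assistant) text pairs into chat-format JSONL lines."""
    lines = []
    for user_text, assistant_text in examples:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_text},
                {"role": "assistant", "content": assistant_text},
            ]
        }
        lines.append(json.dumps(record, ensure_ascii=False))
    return lines  # write "\n".join(lines) to e.g. train.jsonl

if __name__ == "__main__":
    # Invented placeholder pairs for illustration only
    demo = [("Hello", "Hallo"), ("Good morning", "Guten Morgen")]
    print("\n".join(to_finetune_lines(demo, "Translate English to German.")))
```

The resulting file would then be uploaded via the Files API and referenced when creating a fine-tuning job.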