Are AI Meeting Captions All Created Equal? Insights from Slator's Independent Research

Your global teams rely on AI-translated captions to follow meetings, act on decisions, and stay aligned across languages. But how accurate are those captions?

Many organisations assume the translation layer works well enough. Slator's independent research shows that the gap between platforms is bigger than most expect, and the consequences of poor translation quality can go well beyond awkward phrasing.

Join us for an analyst-led session in which Slator presents the findings from its Language Quality Assessment of AI-Translated Captions report, including how major platforms compare, where translation breaks down, and what "good enough" actually looks like for high-stakes business meetings.

  • Tuesday, May 19, 2026, 2:00 PM UTC
  • Online
  • English
Register Now

What you’ll learn

  • How AI-translated captions were evaluated and what the results revealed across platforms
  • Where translation errors are most common, and which language combinations are most at risk
  • A clear framework for evaluating caption quality in your own environment
  • What the findings mean if your teams rely on AI captions for cross-language communication

Meet the speakers

Alex Edwards

Head of Consulting, Slator

Hadi Inja

Product Marketing Lead, DeepL

Leonardo Doin

Head of Voice, DeepL

Secure your spot

Register now and join us in reimagining language for the AI era.