Explore DeepL's approach to improving document translation quality, especially for PDFs. Learn about the challenges of layout preservation, the development of a new quality metric, and the iterative process behind enhancing translation accuracy.
Discover how DeepL harnessed FP8 for training and inference in next-gen LLMs, boosting throughput and model quality. Learn about our journey with NVIDIA's technology, achieving faster training and superior translations while maintaining low latency.
Learn how a change in the ":scheme" pseudo-header during an HTTP/2 load balancer switch caused 502 errors in DeepL's web app. This post explores DeepL's troubleshooting journey, the HTTP/2 specification, and key insights for developers on load balancer configurations.
DeepL's Chief Scientist Stefan Mesken explains how AI research evolves to anticipate the emergent capabilities that come with larger models and bigger data sets.
Explore Model Context Protocol (MCP) and its impact on AI since its 2024 launch. Learn how MCP enables AI agents to access real-world tools, and get step-by-step guidance on how to build your own MCP server in just 10 lines of code.
Unlock the true potential of your organization with a robust design system. Discover how design tokens and reusable components streamline collaboration, enhance consistency and boost productivity across teams, enabling efficient product development.
DeepL's Chief Scientist Stefan Mesken explores the configuration and capabilities of DeepL's new NVIDIA DGX SuperPOD with DGX GB200 systems, and what these mean for model architecture and synthetic data.