An interview with Benjamin Kolyško, certified data protection expert at Fondia Lithuania.
As artificial intelligence (AI) becomes increasingly integrated into language services, understanding the legal landscape is crucial. How do we ensure AI-driven language solutions remain accurate, fair and free from bias? And how do we protect the data that powers them?
To shed light on the legal considerations around AI usage, we spoke with TOFT International's legal advisor and partner Benjamin Kolyško, who specialises in data protection. He shares practical insights on the legal risks of AI, what new regulations like the EU AI Act mean for language services and how companies and language professionals can use AI in a legally sound and responsible way.
AI tools are gaining popularity in professional settings, including language services and content creation. What should our primary legal considerations be regarding data safety?
Benjamin: It’s important to understand that no technology – including AI – is completely secure. Whether it’s a cloud-based machine translation tool or a generative AI model used in content creation, there will always be risks. These may come in the form of cyberattacks, internal data leaks or even accidental disclosure through poorly configured systems.
When integrating AI into business processes, companies must carefully assess these risks, evaluate potential negative legal consequences and implement appropriate technical and organisational safeguards.
What specific legal risks are associated with AI technologies?
Benjamin: There are several key legal risks to be aware of:
How does the GDPR currently regulate AI usage?
Benjamin: The General Data Protection Regulation (GDPR) provides a foundational legal framework for AI applications involving personal data. Under the GDPR, companies must process data lawfully, fairly and transparently. This includes informing individuals about how their data will be used, especially when processed by AI tools.
The key GDPR principles also include:
It is worth mentioning that when third-party AI tools are involved, formal data processing agreements (DPAs) are required to ensure those vendors meet the same data protection standards.
Furthermore, companies must ensure that individuals (end-users and clients) can exercise their rights of access, rectification, erasure and objection to automated decision-making.
While the GDPR lays a strong foundation for ethical data handling, the EU AI Act is now being implemented as a complementary legal measure that addresses the broader risks and responsibilities tied specifically to AI technologies.
Can you elaborate on the AI Act and its relevance to language services?
Benjamin: The EU AI Act introduces a risk-based regulatory framework that classifies AI systems into four categories – unacceptable, high, limited and minimal risk – based on their potential to cause harm.
For language services, this risk-based classification is particularly relevant. For example, an AI system intended to be used by public authorities to evaluate natural persons' eligibility for essential healthcare services, and which also has translation features, may be categorised as high-risk. This means stricter obligations apply, including transparency, human oversight and the use of high-quality datasets to avoid errors or bias.
Selecting partners who meet content provenance standards is key – not just for compliance but for safeguarding operations and reputation. The Act encourages companies to work with experienced language service providers who proactively monitor legal developments and maintain their AI systems responsibly.
Ultimately, the AI Act is not about limiting innovation – it’s about ensuring that AI is used responsibly, especially in high-stakes environments like multilingual communication and localisation.
Let’s summarise: what are the best practices for mitigating the risks associated with AI usage in language services?
Benjamin: Managing AI-related risks requires a combination of regulatory compliance, technical safeguards and operational discipline.
Here are the key practices language service providers – and anyone working with written and visual content – should adopt:
By embedding these practices into their operations, language service providers can not only reduce legal and reputational risks but also build trust with clients and position themselves as responsible users of AI.
As AI continues to evolve, so does the legal landscape governing its use. Companies dealing with content creation and language services must stay informed and proactive in addressing the legal implications of AI integration.
By implementing robust data protection measures, maintaining transparency with clients and end-users, and adhering to regulatory frameworks like the GDPR and the AI Act, organisations can integrate AI solutions into their processes responsibly and effectively.