Navigating AI in Language Services: Legal Insights for 2025

An interview with Benjamin Kolyško, certified data protection expert at Fondia Lithuania.


As artificial intelligence (AI) becomes increasingly integrated into language services, understanding the legal landscape is crucial. How do we ensure AI-driven language solutions remain accurate, fair and free from bias? And how do we protect the data that powers them?

To shed light on the legal considerations around AI usage, we spoke with Benjamin Kolyško, TOFT International's legal advisor and partner, who specialises in data protection. He shares practical insights on the legal risks of AI, what new regulations like the EU AI Act mean for language services and how companies and language professionals can use AI in a legally sound and responsible way.

Understanding the Legal Risks of AI in Professional Settings

AI tools are gaining popularity in professional settings, including language services and content creation. What should our primary legal considerations be regarding data safety?

Benjamin: It’s important to understand that no technology – including AI – is completely secure. Whether it’s a cloud-based machine translation tool or a generative AI model used in content creation, there will always be risks. These may come in the form of cyberattacks, internal data leaks or even accidental disclosure through poorly configured systems.

When integrating AI into business processes, companies must carefully assess these risks, evaluate potential negative legal consequences and implement appropriate technical and organizational safeguards.

What specific legal risks are associated with AI technologies?

Benjamin: There are several key legal risks to be aware of:

  1. Cybersecurity Threats: AI tools, especially those hosted in the cloud or integrated via third parties, can be vulnerable to hacking, data breaches or unauthorized access. These incidents can compromise confidential information and lead to legal consequences under data protection laws.
  2. Data Privacy Issues: When personal or sensitive data is entered into AI systems, particularly publicly available ones, there’s a risk of non-compliance with privacy regulations like the GDPR. This includes concerns over how data is stored, processed and potentially shared without proper safeguards or user consent.
  3. Misinterpretation and Misinformation: AI-generated content isn’t always accurate. In regulated fields like law, finance or healthcare, incorrect or misleading outputs can result in legal liabilities, contractual breaches or harm to individuals relying on the information.
  4. Intellectual Property Infringement: AI tools trained on copyrighted material may generate content that replicates or closely resembles protected works. This can lead to IP violations if the output is used commercially without proper rights or attribution.


Legal Frameworks Governing AI Usage

How does the GDPR currently regulate AI usage?

Benjamin: The General Data Protection Regulation (GDPR) provides a foundational legal framework for AI applications involving personal data. Under the GDPR, companies must process data lawfully, fairly and transparently. This includes informing individuals about how their data will be used, especially when processed by AI tools.

The key GDPR principles also include:

  • Purpose Limitation: Personal data should only be used for the specific purpose for which it was collected.
  • Data Minimization: Only the data necessary for a given task should be processed (a short redaction sketch follows this list).
  • Accuracy: Companies are responsible for ensuring that the data processed – and any AI-generated output – is accurate and up to date.
  • Accountability: Organisations must be able to demonstrate compliance with GDPR, often through clear internal policies and documentation.
  • Integrity and Confidentiality: Companies must ensure appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.
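
To make the data-minimisation principle concrete, here is a minimal illustrative sketch in Python: obvious personal identifiers are replaced with neutral placeholders before a text ever reaches an external AI or machine-translation service. The regex patterns and the `translate_with_external_ai` placeholder are assumptions for illustration, not any specific vendor's API; a production system would use a dedicated PII-detection tool (names, for instance, need more than regexes) and legal review.

```python
import re

# Hypothetical placeholder for a call to an external AI/MT service;
# not a real vendor API.
def translate_with_external_ai(text: str) -> str:
    raise NotImplementedError

# Deliberately simple patterns for two common identifier types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimise(text: str) -> str:
    """Replace personal identifiers with placeholders so that only
    the data necessary for translation leaves the organisation."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

source = "Please contact jonas@example.com or +370 600 00000."
safe_source = minimise(source)
print(safe_source)  # "Please contact [EMAIL] or [PHONE]."
# translated = translate_with_external_ai(safe_source)
```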


It is worth mentioning that when third-party AI tools are involved, formal data processing agreements (DPAs) are required to ensure those vendors meet equivalent data protection standards.

Furthermore, companies must ensure that individuals (end users and clients) can exercise their rights to data access, correction, erasure and objection to automated decision-making.

While GDPR lays a strong foundation for ethical data handling, the EU AI Act is now being implemented as a complementary legal measure that addresses the broader risks and responsibilities tied specifically to AI technologies.

Can you elaborate on the AI Act and its relevance to language services?

Benjamin: The EU AI Act introduces a risk-based regulatory framework that classifies AI systems into four categories – unacceptable, high, limited and minimal risk – based on their potential to cause harm.

For language services, this risk-based classification is particularly relevant. For example, AI systems intended to be used by public authorities to evaluate natural persons' eligibility for essential healthcare services – including systems with translation features – may be categorised as high-risk. This means stricter obligations apply, including transparency, human oversight and the use of high-quality datasets to avoid errors or bias.

Selecting partners who meet content provenance and transparency standards is key – not just for compliance, but for safeguarding operations and reputation. The Act encourages companies to work with experienced language service providers who proactively monitor legal developments and maintain their AI systems responsibly.

Ultimately, the AI Act is not about limiting innovation – it’s about ensuring that AI is used responsibly, especially in high-stakes environments like multilingual communication and localisation.


Best Practices for Mitigating AI-Related Risks

Let’s summarise: what are the best practices for mitigating the risks associated with AI usage in language services?

Benjamin: Managing AI-related risks requires a combination of regulatory compliance, technical safeguards and operational discipline.

Here are the key practices that language service providers – and anyone working with written and visual content – should adopt:

  • Employee Training: Educate staff on responsible AI usage, data protection principles and company policies. Employees should know how to identify sensitive content and avoid unintentionally exposing it through AI tools.
  • Comprehensive Documentation: Maintain clear records of AI usage – what tools are used, what data is processed and for what purpose. This supports internal oversight and prepares the organisation for audits or legal reviews; a minimal example of such a record follows this list.
  • Vendor Compliance and Contracts: Ensure third-party AI providers meet GDPR and AI Act standards. Use robust DPAs to clarify how data is handled, stored and protected.
  • Transparency and Human Oversight: For AI systems used in content creation and translation – especially in high-risk areas – ensure that human experts review and validate AI outputs. This not only improves quality but also reduces legal liability.
  • Bias Monitoring and Feedback Loops: Implement processes to monitor AI performance over time, gather user feedback and correct bias or inaccuracies. This helps align AI tools with both compliance requirements and real-world use.
  • Internal Policies for Regulatory Alignment: Establish internal procedures for assessing AI tools and tracking regulatory updates, especially those introduced under the EU AI Act. Regularly audit your tools and workflows to ensure ongoing compliance.
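
To illustrate what such documentation can look like in practice, below is a minimal sketch of an AI usage record kept as an append-only log. The field names and the JSON Lines format are illustrative assumptions, not a format prescribed by the GDPR or the AI Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    tool: str             # which AI system was used
    purpose: str          # why it was used (purpose limitation)
    data_category: str    # what kind of data was processed
    human_reviewed: bool  # whether a person validated the output
    timestamp: str        # when the tool was used (UTC)

def log_ai_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one record as a JSON line, ready for audits or legal review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry for a machine-translated draft later checked by a linguist.
log_ai_usage(AIUsageRecord(
    tool="cloud-mt-engine",
    purpose="draft translation of a product manual",
    data_category="anonymised technical text",
    human_reviewed=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

A log like this also captures the human-review step described above, so documentation and oversight reinforce each other.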


By embedding these practices into their operations, language service providers can not only reduce legal and reputational risks but also build trust with clients and position themselves as responsible users of AI.


Closing Thoughts

As AI continues to evolve, so does the legal landscape governing its use. Companies dealing with content creation and language services must stay informed and proactive in addressing the legal implications of AI integration.

By implementing robust data protection measures, maintaining transparency with clients and end users, and adhering to regulatory frameworks like the GDPR and the AI Act, organisations can integrate AI solutions into their processes responsibly and effectively.

