Explore how EU AI legislation affects L&D. Understand and address concerns over AI usage and data privacy. This post explains the new regulations and offers practical steps for L&D teams to ensure compliance.
A few years ago, artificial intelligence (AI) seemed like something from a science fiction movie. Today, it’s a reality that many people use for work and personal tasks, transforming many aspects of our lives, including learning and development (L&D).
When used correctly, AI can make L&D more effective and efficient by offering interactive formats, reducing content creation costs, and personalising learning experiences.
Yet, despite these benefits, many L&D professionals are hesitant to adopt AI. Concerns about costs and data privacy are significant factors, and quite rightly so.
Given that fines for data privacy violations under the GDPR have reached nearly £4 billion since 2018, it’s understandable why companies are cautious about fully embracing AI technology.
Cue the recent EU AI Act.
In this blog post, we’ll explain the new EU AI Act and its potential implications for L&D. We’ll also guide you through practical steps to meet these new requirements, ensuring your education and training initiatives are compliant and effective.
EU AI legislation: A simple overview
The EU AI Act is a new regulation that governs how AI can be used throughout the European Union. It entered into force on August 1, 2024, and will be rolled out gradually over the next few years. The Act applies both to companies that develop AI (providers) and to those that use it in their professional activities (deployers).
The Act states that organisations must disclose information about their AI systems, including their purpose, data usage, and decision-making processes. This includes ensuring that AI-driven decisions can be explained to users, maintaining transparency and accountability.
By classifying AI systems into four risk levels (minimal, limited, high, and unacceptable), the regulation aims to ensure appropriate oversight and control, subjecting higher-risk applications to stricter requirements and scrutiny.
Unacceptable risk: Banned entirely
- Manipulative AI: AI systems designed to manipulate or deceive people, such as those used for extensive disinformation campaigns or fake news generation.
- Social Scoring Systems: AI systems that evaluate and score individuals based on their behaviour or social activities. For example, China’s social credit system assesses people’s trustworthiness and can affect their access to services.
High risk: Heavily regulated
- Critical Infrastructure AI: AI used in managing essential infrastructure like telecommunications, power grids, or water supply systems, where failures or malfunctions could severely affect public safety.
- Medical Diagnostic AI: AI systems used in healthcare to diagnose diseases or guide treatment decisions require rigorous validation and oversight to ensure accuracy and reliability.
Limited risk: Requires transparency
- Chatbots: AI systems that users interact with for specific tasks, for example, an assistant supporting employee development in L&D.
- Deepfakes: AI-generated media depicting realistic but fake images or videos, which must be clearly labelled as artificial to avoid spreading misinformation.
Minimal risk: No specific regulation
- Video Games: AI systems in video games that manage non-player characters and create engaging gameplay.
- Spam Filters: AI algorithms used in email services to filter out spam and categorise messages, which have minimal impact on safety and privacy.
The Act aims to support innovation while ensuring safety. It requires AI systems and those who use them to follow strict data privacy standards, including GDPR, and mandates regular checks to prevent bias and discrimination.
What does this mean for L&D?
Love it or hate it, AI is here to stay. In fact, about 80% of people are keen to learn more about using AI in their jobs because they understand its importance and potential to simplify tasks, keep them competitive, and bring fresh ideas to their roles.
However, adopting AI technology isn’t all plain sailing.
PwC’s annual global workforce survey reveals that 33% of people are concerned about their roles being replaced by technology within three years. This means that while many are excited about AI, they also have concerns about job security.
To address these concerns, organisations need to help manage risk by offering training that teaches AI skills and reassures employees about their future roles. For L&D functions, this covers two key points:
1. The growing demand for training programmes focused on AI skills and integration.
2. The ongoing need to assess how AI is used across L&D initiatives to maintain compliance with legislative requirements.
Whether your organisation has already integrated AI features into its L&D strategy or is just starting to explore its potential, understanding the impact of the EU AI Act is essential.
Let’s delve into what this means for L&D.
Firstly, everyone involved in AI, whether implementing, using, or managing it, must adhere to this legislation to avoid significant fines. L&D teams must evaluate how the Act affects their strategies and address any challenges limiting their use of AI for training and development. In practice, this means keeping humans, rather than machines alone, in oversight of AI to prevent harm.
Secondly, L&D must adhere to data privacy standards, including GDPR. AI tools that track learner progress or personalise content must handle data securely and comply with privacy regulations.
Next, if you use AI-driven chatbots or recommendation engines in your learning platform, you must inform users that AI is being used. This transparency builds trust in the platform and ensures learners understand how their employer uses their data.
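As a minimal sketch of what that disclosure can look like in practice (the function name and session structure here are hypothetical, not any specific platform’s API), a learning platform might attach a notice to the first chatbot reply in each session:

```python
# Hypothetical disclosure logic: show an AI-use notice on the first
# assistant reply of each chat session.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. Responses are generated "
    "automatically and may be imperfect."
)

def send_chatbot_reply(session: dict, reply_text: str) -> dict:
    message = {"role": "assistant", "text": reply_text}
    if not session.get("disclosure_shown"):
        message["notice"] = AI_DISCLOSURE
        session["disclosure_shown"] = True
    return message
```

The key design point is that the disclosure is applied by the platform, not left to individual content authors, so no AI-driven interaction slips through unlabelled.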
Finally, successful organisations regularly review their L&D strategy and track their return on investment. Proactively auditing AI tools ensures that your workforce development initiatives are effective and free from bias or discrimination, providing everyone with fair and unbiased training experiences.
Comply with the EU AI Act
Regardless of where you are with AI technology in L&D, there are steps you can take now to prepare your team for both today and the future. If you haven’t yet adopted AI-supported technologies for workforce development, these actions will help guide your decision-making and make it easier to embrace the benefits.
1. Understand the regulations
It sounds straightforward, but keeping up to date with the EU AI Act is crucial. Leverage your existing networks to stay current: attend relevant workshops and webinars, read the latest L&D articles, and consult your organisation’s legal team or external experts to make sure you understand your compliance obligations.
2. Assess the level of risk
Different AI features come with varying risk levels, so it’s essential to assess these risks to ensure compliance with the EU AI Act. For example, AI used for basic tasks like data entry requires little regulation. However, AI tools that interact with users, such as chatbots or training modules, are considered limited risk and must still be disclosed to learners.
On the other hand, high-risk AI applications, such as those used in sensitive fields like healthcare or HR, are subject to stricter regulations and require regular monitoring. Blossom’s modular design offers extensive flexibility, integrating seamlessly with business tools like MS Teams, Google Workspace, and HR and payroll systems while complying with EU AI legislation.
As Blossom is the LXP of choice, trusted by many highly regulated companies in healthcare, energy, and telecommunications, you can feel confident that any high-risk AI applications are well-managed and meet EU regulations.
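One practical starting point for this assessment is a simple inventory mapping each AI feature in your L&D stack to a risk tier. The sketch below uses hypothetical feature names; classifying any real feature is a legal judgement, not a coding exercise:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned entirely"
    HIGH = "heavily regulated"
    LIMITED = "requires transparency"
    MINIMAL = "no specific regulation"

# Hypothetical inventory of AI features in an L&D stack.
AI_FEATURE_INVENTORY = {
    "learner_chatbot": RiskTier.LIMITED,        # must disclose AI use
    "content_recommendations": RiskTier.LIMITED,
    "forum_spam_filter": RiskTier.MINIMAL,
    "hr_candidate_screening": RiskTier.HIGH,    # employment context
}

def features_needing_action(inventory: dict) -> list:
    """Return features that carry compliance obligations."""
    return [
        name for name, tier in inventory.items()
        if tier is not RiskTier.MINIMAL
    ]
```

Revisiting an inventory like this at each release keeps the risk assessment current as features change.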
3. Leverage diverse data and analytics
AI can support L&D by monitoring the effectiveness of learning programmes, connecting data to business goals, and generating clear reports. This helps ensure that your L&D efforts are effective in meeting business objectives, as well as fair and transparent.
Because L&D gathers large amounts of data, which can raise privacy concerns, organisations need safeguards that reduce the risk of data security breaches, unfair treatment, and biased outcomes.
Why not provide users with self-service portals where they can access, review, and manage their data? This gives individuals control over their information and how it’s used. Using AI in this way helps you follow EU rules by enhancing transparency, user control, and data accuracy while also supporting data security and accountability measures.
Let’s put that into perspective. Say an employee notices outdated information in their profile that could affect their learning path or career development opportunities. A self-service portal lets them easily update their information or request corrections. This direct control over their data aligns with GDPR’s right to rectification and ensures that the AI models work with accurate and current data.
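To illustrate the mechanics (with an in-memory store standing in for a real LMS or HR database behind proper authentication), a self-service portal needs little more than a read path, a write path, and an audit trail:

```python
from datetime import datetime, timezone

# Hypothetical in-memory store; a real portal would sit on top of
# your LMS/HR database behind authentication and access control.
profiles = {
    "emp-001": {"name": "A. Learner", "role": "Analyst", "skills": ["Excel"]},
}
audit_log = []

def view_profile(employee_id: str) -> dict:
    """Right of access: let employees see the data held about them."""
    return dict(profiles[employee_id])

def update_profile(employee_id: str, field: str, new_value) -> None:
    """Right to rectification: apply the correction and log it."""
    old_value = profiles[employee_id].get(field)
    profiles[employee_id][field] = new_value
    audit_log.append({
        "employee_id": employee_id,
        "field": field,
        "old": old_value,
        "new": new_value,
        "at": datetime.now(timezone.utc).isoformat(),
    })
```

The audit trail matters as much as the update itself: it supports the accountability measures mentioned above by showing when and how data was corrected.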
Next, periodically review the AI algorithms in use to identify and address biases. This might involve checking whether certain groups of learners receive different recommendations or resources based on factors like gender, age, or background.
This commitment to fairness ensures compliance with the EU AI Act’s non-discrimination requirements. It also shows the C-Suite and stakeholders that L&D is dedicated to ethical AI use. As a result, you reduce the risk of legal challenges or penalties and create an inclusive and equitable learning environment.
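A lightweight way to run such a check is to compare, per group, the share of learners who received a given recommendation, and flag large gaps for human review. This sketch uses the “four-fifths” rule of thumb as a trigger; a flag means “investigate”, not proof of discrimination:

```python
from collections import defaultdict

def recommendation_rate_by_group(events: list) -> dict:
    """Share of learners in each group who received the recommendation.
    Each event is a dict like {"group": "A", "recommended": True}."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for e in events:
        total[e["group"]] += 1
        shown[e["group"]] += int(e["recommended"])
    return {g: shown[g] / total[g] for g in total}

def flag_disparity(rates: dict, threshold: float = 0.8) -> list:
    """Flag groups whose rate falls below 80% of the best-served group."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * top]
```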
4. Implement feedback loops
Create mechanisms for learners to provide feedback on their experiences with the AI tools. For example, use feedback forms within the AI platform, allowing learners to quickly rate the quality of responses from a chatbot or AI tutor. This immediate feedback helps identify areas where L&D processes can be improved.
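A sketch of such a loop (the storage and thresholds here are hypothetical, not a specific product’s feature) can be as simple as recording a rating per answer and surfacing low-scoring topics for review:

```python
from collections import defaultdict
from statistics import mean

feedback = []  # hypothetical store of per-answer ratings

def record_feedback(topic: str, rating: int, comment: str = "") -> None:
    """Capture a 1-5 rating right after a chatbot or AI-tutor answer."""
    feedback.append({"topic": topic, "rating": rating, "comment": comment})

def low_scoring_topics(min_avg: float = 3.5) -> dict:
    """Surface topics where the AI tool underperforms."""
    by_topic = defaultdict(list)
    for f in feedback:
        by_topic[f["topic"]].append(f["rating"])
    return {t: mean(r) for t, r in by_topic.items() if mean(r) < min_avg}
```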
Blossom allows you to easily create, distribute, and report on quizzes, tests, and surveys across the entire learning system. This feature helps gather valuable feedback and assists in meeting the EU AI Act’s transparency and accountability requirements, ensuring that AI tools in your L&D strategy are continually refined and compliant with regulatory standards.
5. Educate teams
Knowledge is power, right? When everyone is educated about the EU AI legislation and how it affects their work, they can use AI tools more effectively and responsibly.
For L&D, this means developing targeted training programmes to ensure all employees understand the regulations. If we consider a company using AI in recruitment processes, targeted training can educate HR teams on the ethical use of AI for screening candidates. This training can teach how to avoid biases and ensure AI tools meet fairness and non-discrimination requirements under the EU AI Act. This helps HR professionals make fairer hiring decisions and reduces the risk of discrimination-related disputes and fines.
6. Choose the right software for your needs
Using a compliant LXP fosters trust with employees and learners by showing that your organisation values their privacy and is committed to ethical practices. This trust can lead to increased engagement and satisfaction.
When selecting software for workforce development, choose one that meets your needs for compliance, transparency, data protection, and bias prevention. Consider compliance not only with the EU AI Act but also with European labour law directives, such as the Directive on Temporary Agency Work and the Working Time Directive, since professional development can affect employee rights and working conditions.
Blossom leverages AI smart recommendations to suggest learning content and courses based on the skills needed for a user’s current role. It helps learners manage their time by focusing on relevant content, improving their skills more effectively, and discovering new resources and courses they might not have found otherwise.
- “ZIM was in desperate need of a streamlined compliance process. Since Blossom, employee participation levels have increased and regulatory and compliance training requirements understood and met” – Ofit Gortler, Training Developer at ZIM Integrated Shipping Services Ltd
Furthermore, Blossom offers complete transparency about how AI is used, including how learning recommendations are generated and how data is processed.
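To give a feel for the kind of logic behind skill-based recommendations (a generic illustration with a made-up catalogue, not Blossom’s actual algorithm), a recommender can rank courses by how many of the skills missing for a role they cover:

```python
# Hypothetical catalogue: course -> skills it teaches.
COURSES = {
    "Data Analysis 101": {"sql", "excel"},
    "Presenting with Impact": {"presenting"},
    "Intro to Python": {"python"},
}

def recommend(role_skills: set, user_skills: set, max_items: int = 3) -> list:
    """Rank courses by how many missing role skills they cover."""
    gap = role_skills - user_skills
    scored = [
        (len(skills & gap), name)
        for name, skills in COURSES.items()
        if skills & gap
    ]
    return [name for _, name in sorted(scored, reverse=True)[:max_items]]

# Example: an analyst who already knows Excel.
print(recommend({"sql", "excel", "presenting"}, {"excel"}))
```

Transparency about how recommendations like these are generated is exactly what the Act’s limited-risk rules ask platforms to provide.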
Meet EU AI regulations with Blossom
The EU AI Act mandates diligence in using AI tools, ensuring they meet legal standards and foster a safe, fair, and effective learning environment.
Since the AI Act requires L&D to be more transparent about how education and training programmes are developed and how AI-generated data is used, it increases accountability for issues arising from AI technologies. By preparing now, you can save time and money on potential legal and compliance matters.
Find out how Blossom uses AI technology to streamline L&D processes while ensuring compliance with the latest regulations. Schedule a demo with one of our friendly experts.