
It’s no secret that data privacy is a hot-button issue of the 21st century. With so much personal information being submitted and stored online, the risk of a data breach is more serious than ever. Human Resources departments have always handled a wealth of personal, medical, and financial information, which explains why confidentiality is so central to HR practices.

In 2018, the European Union’s much-discussed General Data Protection Regulation (GDPR) came into force, a far-reaching law addressing the modern issue of data privacy. The GDPR continues to lead the way internationally by reinforcing the principle that individuals have a legal right to their own data.

The EU recently signaled an even stricter grip on digital citizenship and the dissemination of private data by proposing regulations on artificial intelligence (AI). Let’s talk about the proposed legal requirements for software that uses private data and AI, and how these regulations may impact HR departments moving forward.

Proposal for the regulation of AI

The European Commission’s proposal to regulate the use of artificial intelligence includes fines of up to 6% of a company’s global annual turnover for serious noncompliance. The proposal details many rules and prohibitions for what it terms “high risk” AI: software that uses AI while handling personal data, a category that expressly includes systems used in recruitment and employee management.

The regulations place responsibility on both the users and providers of AI-driven software, meaning liability exists for both in the event of a data breach or hacking attempt. The proposed body of law places emphasis on mitigating bias in AI, providing full transparency regarding the type of algorithms used, and maintaining minimum levels of human oversight. 

This will be achieved by applying techniques to datasets that help identify potential biases and inaccuracies and test the relevance of the personal data being used. Software providers and HR departments alike must prove compliance with these newly proposed regulations, which could soon become law in Europe.
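
As a concrete illustration of what such a dataset technique might look like, here is a minimal sketch of a disparate-impact screen based on the “four-fifths rule” from US hiring practice. The proposal does not prescribe any particular test; the data, group labels, and threshold below are illustrative assumptions, not part of the regulation.

```python
from collections import Counter

# Hypothetical screening outcomes from an AI-driven resume filter.
# Each record is (demographic_group, passed_screening); data is illustrative.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of candidates in each group that passed the screening."""
    totals, passed = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        passed[group] += ok
    return {group: passed[group] / totals[group] for group in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the best group's
    rate -- a rough screen for disparate impact, not a legal determination."""
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

rates = selection_rates(outcomes)
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(four_fifths_flags(rates))  # {'group_a': False, 'group_b': True}
```

A real audit would of course need statistically meaningful sample sizes and domain-appropriate fairness metrics; the point is that checks like this can be automated and run routinely over the datasets an HR system relies on.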

This is an especially sensitive topic in 2021, a year that has seen an unprecedented boost in workplace digitalization. With the pandemic forcing most companies to switch to remote work, many HR departments have come to rely on high-quality, cloud-based collaboration software that provides important features like real-time project status updates and access controls.

Recruitment and hiring processes were totally re-envisioned to allow for Zoom job interviews, digital resumes, and virtual onboarding processes. While many are celebrating this leap, the convenience comes at a price. Although software can make communication, collaboration, and organization much easier, the potential to introduce greater bias in hiring, as well as the risk of losing control of sensitive data, can put organizations in a tricky spot.

Companies will need comprehensive policies and risk-management controls in order to be deemed compliant under this new body of law. This means that HR software must be designed with legal compliance in mind, while also applying self-auditing practices to ensure that personal data remains safe throughout its lifetime in the system.
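
To make “self-auditing throughout the data lifecycle” concrete, here is a minimal sketch of an automated retention audit. The record layout, category names, and retention periods are hypothetical assumptions for illustration; the proposal does not prescribe specific periods.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy per data category (illustrative, not mandated).
RETENTION = {
    "resume": timedelta(days=365),
    "payroll": timedelta(days=730),
}

# In a real system these records would come from the HR database.
records = [
    {"id": 1, "kind": "resume",  "stored_at": datetime(2020, 1, 15, tzinfo=timezone.utc)},
    {"id": 2, "kind": "payroll", "stored_at": datetime(2024, 6, 1,  tzinfo=timezone.utc)},
]

def audit_retention(records, now):
    """Return ids of records held past their retention period -- candidates
    for deletion or anonymization in a routine self-audit pass."""
    return [r["id"] for r in records
            if now - r["stored_at"] > RETENTION[r["kind"]]]

overdue = audit_retention(records, now=datetime(2025, 1, 1, tzinfo=timezone.utc))
print(overdue)  # [1] -- the 2020 resume has outlived its retention period
```

Scheduling a pass like this (nightly, say) and logging its results gives an HR department an audit trail it can point to when asked to demonstrate compliance.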

Issues with compliance 

Software providers are expected to build software that is cyber secure and that takes all necessary precautions to protect user data. However, the responsibility doesn’t end there: companies that use such software are also legally liable if they fail to monitor and adhere to the operational guidelines their software providers supply to keep data safe. There are several challenges companies may face in complying with these regulations.

Increase in remote hiring

Compliance is made even more complex by the recent influx of remote workers and what organizations must do to ensure that their work-from-home colleagues stay cyber secure. With so many AI-driven HR software products available, and new technology being developed every day, it will be a challenge for organizations to fully understand what they need to do to stay compliant.

The steps required to prove compliance, for both designers and users of AI-driven software, are determined on a case-by-case basis, making it even harder to understand what exactly needs to be done to follow the new guidelines. What’s more, with technology being developed at a breakneck pace, it’s difficult to see how traditional legal institutions will keep up with the changing times.

High risk vs. lower risk AI

In an effort to regulate a wide-ranging area of tech like AI, the European Commission’s proposed regulations distinguish between “high risk” and “lower risk” AI, the latter being AI that does not directly handle sensitive data.

For “lower risk” AI software, regulatory standards are much more lax. For example, companies are required to disclose to users when they are interacting with AI-driven software (like a chatbot) but don’t need to provide full transparency about which algorithms are used.
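
In code, that disclosure duty can be as simple as surfacing a notice before the first automated response. A minimal sketch, where the wording and the generate_answer backend are our own illustrative stand-ins rather than anything the proposal specifies:

```python
AI_DISCLOSURE = ("You are chatting with an automated assistant, not a human. "
                 "You can ask for a human agent at any time.")

def generate_answer(message: str) -> str:
    # Stand-in for a real chatbot backend; illustrative only.
    return f"Thanks for asking about: {message!r}. A recruiter will follow up."

def reply(message: str, first_turn: bool) -> str:
    """Prefix the first response with the AI disclosure so users always
    know they are interacting with AI-driven software."""
    answer = generate_answer(message)
    return f"{AI_DISCLOSURE}\n{answer}" if first_turn else answer

print(reply("When will I hear back about my application?", first_turn=True))
```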

However, the regulations meant for high risk AI are more stringent, and they will likely affect organizations all around the world. The proposed law would cover any AI-driven software used in Europe, even if the company designing and selling the software is located abroad.

Impacts beyond the EU

We know the ripple effects of laws like this because we have seen how the GDPR has affected corporate views on data privacy around the world. American companies often take note of new regulations coming from Europe because they know that these may be turned into frameworks for future regulations in the US. Even if these laws never make it across the ocean, they still affect any international company that does business with European clients. 

One of the biggest GDPR requirements, and one that has inspired similar legislation throughout the world, is that organizations that experience data breaches must report them to regulators within 72 hours of becoming aware of the breach, and notify affected individuals without undue delay. This is part of the European Union’s commitment to empowering individuals to take control of their own data.

The EU is not the only body weighing regulation of advanced technology. In the US, the IoT Cybersecurity Improvement Act of 2020 seeks to encourage more uniform and secure ways of developing and managing connected devices.

The trend of creating appliances that can easily be hooked up and controlled via the internet opens the door for hackers, and it certainly requires oversight. But as with the EU’s proposed AI legislation, there is little clarity at this point as to how companies and individuals will be held accountable.

Moving forward with privacy 

While these newly proposed AI regulations would create a European Artificial Intelligence Board, the nascent law respects the sovereignty of individual European countries when it comes to enforcement: each country will be responsible for enforcing the new law within its own borders, adding confusion to the already vague proposed regulations.

It will be vital that the EU clarifies exactly which mechanisms are required to prove compliance with its new AI regulations, allowing international authorities to understand and fully cooperate with the new governance. After all, AI already suffers from misunderstanding and rumor regarding its adoption and potential, and any body of law on the topic should make an effort to clarify what AI is and isn’t.

The new proposal will be debated and expounded upon by the European Council and Parliament, which will hopefully shed light on the unclear aspects of the new law and how organizations can best stay compliant. In the meantime, many firms are left wrestling with how to become compliant and what the regulations could mean for their companies moving forward.

Conclusion

The newly proposed regulations have not yet been put into law, and it’s clear that many uncertainties will need to be cleared up prior to adoption. These things take time: after all, the GDPR was adopted in 2016 but not enforced until 2018.

The European Union has been a leader in tech regulation and a champion of data privacy, and it will likely continue to inspire others with its legislation in the years to come. However, it’s important for organizations and their HR departments to stay well-informed so they can rise to the occasion and maintain data privacy compliance alongside the rapid digital transformation that comes with AI adoption.
