Employers increasingly resort to algorithms and automated decision-making when recruiting new employees. While this can be a time-saving and efficient way to identify the most suitable applicants, companies must comply with data protection rules and keep other potential pitfalls in mind. This article explains the ICO’s suggestions to companies for ensuring an effective, non-discriminatory and compliant use of algorithms for hiring purposes.
Principle of lawfulness, fairness and transparency
Decisions made in a hiring process are often subjective rather than objective. While a company’s HR department is trained to pick the best candidate based on their qualifications, be aware that algorithms are human-made and therefore not neutral. As a result, they often have embedded biases that can lead to discrimination.
If you wish to use automated decision-making in your hiring processes, you should assess it in your data protection impact assessment (DPIA) to make sure AI is a necessary and proportionate solution before you start processing personal data. Don’t forget that the processing of personal data, especially the sensitive personal data often included in applications, must comply with the principle of lawfulness, fairness and transparency. Thus, any processing of personal data must have a legal basis and be fair and transparent towards data subjects.
Accordingly, all algorithms must be fair and transparent, which means that any applicant data you process must not have adverse, unjustified effects on the individual. In particular, sensitive personal data, such as religious affiliations or health data, must not lead to discrimination.
DPIA in the context of HR
Based on the European Data Protection Board (EDPB) Guidelines, the ICO provides a list of processing operations for which you are required to complete a DPIA, as these are “likely to result in high risk”. The ICO’s list is non-exhaustive, which means that other processing activities not listed there may also require a DPIA.
Regardless, the ICO considers it “best practice” to conduct a DPIA, whether or not the processing is likely to result in a high risk, in order to ensure all processing activities are in line with the respective data protection principles. The list below does not include all processing operations that require a DPIA; however, it gives you an overview and demonstrates the requirement to do a DPIA when using algorithms for employment decisions:
| Type of processing | Description | Examples |
|---|---|---|
| Innovative technology | Processing involving the use of new technologies, or the novel application of existing technologies (including AI). A DPIA is required for any intended processing operation(s) involving innovative use of technologies. | |
| Denial of service | Decisions about an individual’s access to a product, service, opportunity or benefit which are based to any extent on automated decision-making or involve the processing of special-category data. | |
| Large-scale profiling | Any profiling of individuals on a large scale. | |
| Biometric data | Any processing of biometric data for the purpose of uniquely identifying an individual. | |
| Tracking | Processing which involves tracking an individual’s geolocation or behaviour, including but not limited to the online environment. | |
Obligations of UK law
Although UK data protection law is straightforward in that regard, the UK Equality Act 2010 states that indirect discrimination can be justified if it is proportionate. Therefore, your DPIA must assess whether any possible discriminatory effects could be justified as proportionate, and you must in any case provide appropriate safeguards and technical measures for the phase in which the algorithm is designed and implemented.
In addition, the EU General Data Protection Regulation (GDPR) prohibits “solely automated decision-making that has a legal or similarly significant effect”, unless one of the three exceptions applies:
- explicit consent,
- necessary to enter into a contract, or
- authorised by Union or Member State law.
None of these exceptions seems applicable to private-sector hiring processes. For that reason, a human element must be brought into automated decision-making processes. While the EU GDPR only applies to the UK until the end of April 2021, or June 2021 at the very latest, the rule supports adherence to the principle of lawfulness, fairness and transparency, and the UK GDPR currently follows the same approach.
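The human element described above can be illustrated with a minimal sketch: every algorithmic recommendation, particularly a rejection, is routed to a human reviewer who must confirm or overturn it before any decision takes effect. All names, scores and thresholds below are hypothetical assumptions for illustration, not part of any real screening system or legal compliance test.

```python
# Hypothetical sketch: routing algorithmic screening output through human
# review, so that no decision about an applicant is "solely automated".
# Names, scores and the threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ScreeningResult:
    applicant_id: str
    algorithm_score: float  # 0.0 (poor fit) .. 1.0 (strong fit)


def route_for_review(results, reject_threshold=0.3):
    """Queue every algorithmic recommendation for a human reviewer.

    The algorithm alone never produces the final decision: the reviewer
    sees the recommendation and must confirm or overturn it before the
    applicant is notified.
    """
    review_queue = []
    for r in results:
        recommendation = (
            "reject" if r.algorithm_score < reject_threshold else "advance"
        )
        review_queue.append((r.applicant_id, recommendation, "pending human review"))
    return review_queue


queue = route_for_review(
    [ScreeningResult("A-101", 0.25), ScreeningResult("A-102", 0.80)]
)
```

In this sketch the status "pending human review" makes explicit that the algorithm's output is only an input to a human decision, which is the meaningful human involvement the prohibition on solely automated decisions calls for.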
As mentioned before, the principle of fairness requires that any adverse effects on the data subject are reasonable and justified. In addition, individuals have a right to privacy and non-discrimination, as protected by binding human-rights laws. Accordingly, companies must use appropriate technical and organisational measures to prevent discrimination when processing personal data for profiling and automated decision-making. Any potential adverse impact on these rights must be addressed and assessed in the DPIA, and “data protection by design” should be implemented to avoid violations in the first place.
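One commonly used technical measure of the kind mentioned above is a simple fairness metric: comparing selection rates between applicant groups to flag potentially discriminatory outcomes for investigation. The group sizes and the 0.8 threshold (the “four-fifths” rule of thumb, which originates in US guidance rather than UK law) are assumptions for illustration, not a legal compliance test.

```python
# Illustrative sketch of a technical measure against indirect discrimination:
# comparing selection rates between two applicant groups. The numbers and
# the 0.8 rule-of-thumb threshold are assumptions, not a legal test.

def selection_rate(selected, total):
    """Fraction of applicants in a group advanced by the algorithm."""
    return selected / total if total else 0.0


def disparate_impact_ratio(rate_group_a, rate_group_b):
    """Ratio of the lower selection rate to the higher one.

    Values well below 1.0 (e.g. under 0.8) suggest the algorithm's
    outcomes should be investigated for indirect discrimination.
    """
    if max(rate_group_a, rate_group_b) == 0:
        return 1.0
    return min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)


rate_a = selection_rate(selected=30, total=100)  # 0.30
rate_b = selection_rate(selected=15, total=100)  # 0.15
ratio = disparate_impact_ratio(rate_a, rate_b)   # 0.5 -> flag for review
```

A check like this does not by itself establish or rule out discrimination; it is a monitoring measure whose findings would feed back into the DPIA and into the design phase of the algorithm.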
Conclusion: Keep the algorithms on a short leash
Algorithms and automated decision-making can be an effective tool for your company to pick the best-fitting applicant in a hiring process. To use them effectively, however, you should be aware of possible biases and discrimination embedded in the algorithms. For that reason, assess necessity and proportionality in your DPIA to ensure compliance with the principle of lawfulness, fairness and transparency, and implement data protection by design through effective technical and organisational measures. Lastly, don’t let the algorithm decide on its own, and train your HR department to use algorithms responsibly in the hiring process.
Make data protection your competitive advantage
Our UK data protection support will help you!