Bias in artificial intelligence: risks and solutions

Artificial intelligence (AI) is increasingly being used in various areas of business – from recruitment to strategic decision-making processes. While these technologies have the potential to improve efficiency, objectivity and accuracy, their application also harbours the risk of bias, which can lead to discriminatory decisions.

We shed light on the dangers, legal framework conditions and methods for preventing bias in AI systems.

The problem of bias in AI systems

Bias in AI is a phenomenon that occurs when AI systems systematically produce biased results that unfairly favour or disadvantage certain groups or individuals. These biases can manifest themselves in a variety of ways, from disadvantaging certain population groups in job searches to unfair treatment in legal or medical applications.

The causes and risks of bias in AI systems are complex and deeply rooted in the technical aspects of AI development.

Causes of bias in AI systems

Prejudices in the training data

Many AI systems learn from historical data that reflects human decisions, behaviour, and assessments. If this data contains prejudices against certain groups or individuals, the AI learns these prejudices and replicates them in its decisions.

Selection of modelling approaches

The decisions that developers make when selecting algorithms and modelling approaches can also introduce bias into AI. Certain models can overemphasise or underemphasise patterns in the data, leading to biased predictions. The complexity of the model and the way it deals with ambiguities or deviating data play a crucial role here.

Subjective decisions in the design of the algorithms

The subjective decisions that go into the development process, such as the definition of success in a particular context or the selection of characteristics to be included in a model, can significantly influence the results and lead to biased outcomes.

Garbage in – garbage out principle

The GIGO principle (garbage in – garbage out) is well known in computer science and data science and describes the phenomenon that the quality of the output data is directly dependent on the quality of the input data. This means that if the input data is incorrect, incomplete, or distorted, the results generated by the AI will also be incorrect or distorted.

In the context of AI development, the GIGO effect reflects the fact that machine learning algorithms and models can inherently only be as good as the data used to train them. If this training data shows systematic biases towards certain groups or scenarios, the algorithms will not only learn these biases, but will also consolidate and reinforce them in their outputs.
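The following minimal sketch in Python with scikit-learn, using purely synthetic data, makes the GIGO effect concrete: a model fitted on biased historical decisions reproduces the penalty against one group, even though the underlying qualification is distributed identically.

```python
# Minimal GIGO sketch: all data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

skill = rng.normal(size=n)               # legitimate qualification signal
group = rng.integers(0, 2, size=n)       # 0 / 1: a protected attribute

# Historical decisions penalised group 1 regardless of skill ...
y_hist = (skill - 0.7 * group + rng.normal(scale=0.3, size=n) > 0).astype(int)

# ... so a model fitted on those decisions learns the same penalty.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, y_hist)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
# Typical output: group 0 is selected far more often, although both groups
# have the same skill distribution - the bias in the labels is reproduced.
```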

In addition to general quality issues, the GIGO principle emphasises the ethical and technical challenges associated with the development and implementation of AI systems. It raises questions regarding responsibility for the quality of the data, the fairness of the algorithms, and the transparency of the decision-making processes.

Companies must therefore not only develop technical solutions to improve data quality and reduce bias, but also create ethical frameworks that address responsibility for the impact of AI systems.

Examples of AI bias and how it has been addressed

Amazon provides a prominent example of how bias in artificial intelligence can lead to undesirable results. The company developed an AI with the aim of optimising the application process by pre-sorting applications and filtering out the best candidates. The system was trained on historical application data, which, however, reflected a predominantly male applicant base. As a result, the AI systematically favoured applications from men. Even indirect references to gender, such as membership of women’s clubs, led to an application being downgraded.

This phenomenon was not the result of intentional discrimination by the AI or its developers but reflected the inequalities in the training data. Amazon recognised this problem and attempted to correct the bias, but ultimately decided against using AI for hiring decisions. This illustrates the challenge of effectively combating bias and designing AI systems in such a way that they make fair decisions and comply with legal requirements.

However, the problem of bias can also arise when using generative AI. Most generative AI models are trained on extensive data sets collected from the internet or other sources. These data sets often contain inherent biases shaped by historical inequalities, social norms and cultural stereotypes. For example, a model trained on images of executives who are predominantly male may tend to favour men when generating new images of executives.

Risks of bias in AI systems

The risks posed by bias in AI systems are far-reaching and can have serious consequences for companies. AI systems trained on biased data can reinforce existing social and economic inequalities by perpetuating discriminatory practices in areas such as recruitment or employee evaluation. AI can also give rise to new and unrecognised forms of discrimination by exploiting patterns in data that appear neutral at first glance but in fact correlate with socially relevant characteristics. For companies, this can have the following consequences:

Loss of reputation

Companies that use AI systems that turn out to be biased can suffer a significant loss of reputation. If consumers, employees, or the general public learn that a company is using technology that makes discriminatory decisions or treats certain groups unfairly, this can lead to a loss of trust. This can cause customers to turn away from the brand and take their business elsewhere.

Legal and economic consequences

Laws and regulations to protect against discrimination and ensure equal treatment are becoming increasingly strict. Companies whose AI systems exhibit bias can violate these laws and expose themselves to considerable penalties, fines or legal disputes. The following national and European legal acts contain regulatory requirements in this regard:

Bias in AI systems is often unintentional but can still lead to decisions that discriminate against certain groups of people on the basis of race or ethnic origin, gender, religion or belief, disability, age, or sexual identity. Such discrimination is in direct conflict with the General Equal Treatment Act (AGG) in Germany, which explicitly prohibits discrimination on these grounds. Other EU Member States have similar laws in place.

The AGG aims to ensure equal treatment and prevent discrimination in various areas of life, including the world of work and access to goods and services. If an AI system used by a company makes decisions that lead to such discrimination, this may be considered a violation of the AGG, even if the discrimination was not intentional.

In the event of a proven violation of the AGG, companies may be obliged to pay damages. Depending on the severity of the violation, the damages can be considerable.

When using AI for automated decision-making, the requirements of Art. 22 GDPR must be taken into account in addition to anti-discrimination laws such as the AGG. It stipulates that decisions based solely on automated processing that produce legal effects or similarly significantly affect the data subject are only permitted with the data subject’s explicit consent, where necessary for the conclusion or performance of a contract, or where authorised by law. In addition, appropriate measures must be taken to safeguard the rights and freedoms as well as the legitimate interests of the data subject. These include, in particular, the right to obtain human intervention, to express his or her point of view and to contest the decision.

Through these provisions, Art. 22 GDPR also counteracts the emergence of bias by introducing a human corrective into the automated decision-making process. This is intended to ensure that automated or AI-based decisions are checked again for fairness and legal compliance.
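How such a human corrective can be wired into a decision pipeline is sketched below; the confidence threshold, the review queue and the routing rule are illustrative design choices, not requirements taken from the GDPR itself.

```python
# Sketch of a human-review gate in an automated decision pipeline.
# Threshold and routing logic are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DecisionPipeline:
    review_queue: list = field(default_factory=list)

    def decide(self, applicant_id: str, score: float) -> str:
        # Only clear-cut approvals pass automatically; everything else,
        # including all adverse outcomes, is routed to a human reviewer.
        if score >= 0.85:
            return "approved (automated)"
        self.review_queue.append((applicant_id, score))
        return "pending human review"

pipeline = DecisionPipeline()
print(pipeline.decide("A-101", 0.91))  # approved (automated)
print(pipeline.decide("A-102", 0.40))  # pending human review
print(f"{len(pipeline.review_queue)} case(s) awaiting a human reviewer")
```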

Violations of Art. 22 GDPR can lead to high fines and claims for damages for companies.

The European AI Act represents a decisive step towards legally addressing the risks of bias in AI systems. Although it does not explicitly prohibit bias, it sets strict guidelines for AI systems, especially for so-called high-risk AI systems, in order to avoid discrimination and promote fairness and transparency. This includes risk assessments, detailed documentation, and high data quality standards.

The AI Act also provides specific guidelines on bias minimisation in Art. 10 (5). The Act permits the processing of special categories of personal data under certain conditions, provided this is strictly necessary for the detection, monitoring, and correction of bias in high-risk AI systems. This provision reflects a deep understanding of the need to use sensitive data as a tool to ensure the non-discriminatory functioning of AI systems.

The AI Act thus creates a new legal basis that makes it possible to include such special categories of personal data in the bias correction process while maintaining the highest security standards.

The interpretation of the term “necessary” depends on the specific circumstances of the individual case and is highly context-dependent. Various factors such as the specific scope of application of the AI, the type of data processed, and the potential risks of bias play a role in the assessment of necessity. Given the complexity and potential legal consequences, a case-by-case assessment is essential.

In view of these framework conditions, the AI Act provides significant incentives for companies to take the compliance of their AI systems seriously and to implement it proactively: for the most serious violations, it threatens fines of up to EUR 35 million or 7% of global annual turnover.

Practical approaches to minimising bias in AI

The introduction of bias minimisation strategies in the company is therefore essential and requires a multidisciplinary approach that encompasses technical, ethical, and legal perspectives. It is important to recognise that the challenge is to minimise bias without altering or distorting historical and factual realities. The incident with Google’s AI Gemini in February 2024 illustrates the complexity of this endeavour, as attempts to eliminate bias can lead to inaccurate and out-of-context representations.

To effectively minimise bias, a balanced approach that promotes diversity while maintaining accuracy and authenticity is important:

Qualitative and diversified data sets

To effectively minimise bias, the data sets used to train AI models should be diverse and representative of the entire target population. This includes a careful review and, if necessary, enrichment of the data sets to adequately include minorities and underrepresented groups.
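A simple starting point is to measure group shares and outcome rates before training. The following minimal sketch in Python with pandas, using purely illustrative data and a hypothetical “gender” column, shows such a representation check and a naive oversampling step:

```python
# Representation and label-rate check before training; data is illustrative.
import pandas as pd

df = pd.DataFrame({
    "gender": ["f", "m", "m", "m", "f", "m", "m", "f", "m", "m"],
    "hired":  [1, 1, 0, 1, 0, 1, 1, 1, 0, 1],
})

# 1. Group shares in the training data: is any group badly underrepresented?
print(df["gender"].value_counts(normalize=True))

# 2. Positive-label rate per group: do the historical outcomes already skew?
print(df.groupby("gender")["hired"].mean())

# 3. Naive enrichment: oversample the smaller group to balance group shares.
counts = df["gender"].value_counts()
minority = counts.idxmin()
extra = df[df["gender"] == minority].sample(
    counts.max() - counts.min(), replace=True, random_state=0
)
balanced = pd.concat([df, extra], ignore_index=True)
print(balanced["gender"].value_counts(normalize=True))
```

In practice, oversampling is only one of several options; collecting additional real data for underrepresented groups is generally preferable to duplicating existing records.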

It is therefore advisable for organisations to use data sources that have been carefully checked and validated. There are numerous databases funded by the European Union (EU) that provide access to extensive and high-quality datasets in all European languages. These collections contain thousands of terabytes of information and therefore represent a valuable resource for training AI models.

Transparent algorithms

Disclosing how AI algorithms work can help to identify and address potential sources of bias. Provide information about the algorithms’ data, methods and decision-making logic and use techniques such as LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (SHapley Additive exPlanations) to understand which factors influence the AI’s decisions.

  • LIME makes it possible to explain which features in the input data contribute significantly to the decisions of a complex AI model by creating simplified models that work locally around the prediction.
  • SHAP, on the other hand, uses game theory to quantify the influence of each feature, attributing the model’s prediction to the individual features in a fair and consistent way.
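The following minimal sketch shows how SHAP values can reveal that a model relies on a protected attribute. It assumes the open-source shap package and scikit-learn; the data, the feature names and the synthetic “gender_flag” column are purely illustrative.

```python
# SHAP-based attribution check on a hypothetical screening model.
# All data and feature names are synthetic and illustrative.
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicant features; the last column stands in for a protected attribute.
X = np.column_stack([
    rng.normal(size=n),           # years_experience (standardised)
    rng.normal(size=n),           # num_certifications (standardised)
    rng.integers(0, 2, size=n),   # gender_flag
])
# Biased historical labels: the protected attribute leaks into the outcome.
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute contribution per feature: a high rank for the protected
# attribute signals that the model relies on it and must be corrected.
feature_names = ["years_experience", "num_certifications", "gender_flag"]
for name, imp in sorted(
    zip(feature_names, np.abs(shap_values).mean(axis=0)), key=lambda t: -t[1]
):
    print(f"{name}: {imp:.3f}")
```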

Full disclosure of how AI systems work is often not possible due to their black box nature. Nevertheless, it is important to convey a basic understanding of how they work. Partial transparency that explains the basic principles and decision-making processes of AI can already be helpful.

Ethical guidelines and standards

The development and implementation of AI requires careful consideration of ethical principles to ensure that these technologies serve the good of society and respect individual rights. In this context, ethical guidelines and standards play a central role. They serve as a normative basis for integrating fairness, transparency, and responsibility into the life cycle of AI systems.

International and national standards provide a framework within which developers, operators, and regulatory authorities can design and evaluate AI systems. They reflect a consensus on best practices and ethical norms to be considered in the design and use of AI. These include, for example, the OECD guidelines on AI, the European Commission’s ethical guidelines for trustworthy AI or specific national guidelines and standards that reflect the legal and cultural context of a country.

Continuous monitoring and adjustment

AI systems should be regularly checked for bias and discriminatory effects. This requires continuous adjustments and updates to the algorithms to ensure that they remain fair over time.
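The following minimal sketch shows a recurring demographic-parity check of the kind such monitoring could run; the logged batch and the alerting threshold are illustrative assumptions, not legal standards.

```python
# Recurring fairness check over a batch of logged predictions; data is illustrative.
import numpy as np

def demographic_parity_gap(pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Example batch: model outputs and group membership from the monitoring log.
pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(pred, group)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # hypothetical internal alerting threshold
    print("Gap exceeds threshold: trigger a review and possible retraining.")
```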

External audits

In addition to continuous internal monitoring, the implementation of external audits by independent third parties is also important to ensure an objective assessment of fairness and bias in AI systems.

Interdisciplinary teams

The integration of diversity into the teams involved in the development and implementation of AI systems is of great importance in order to maximise the diversity of perspectives and experiences that flow into the development process. A diverse composition of these teams contributes significantly to creating a broader understanding of the diverse social, cultural, and ethical contexts in which AI systems are used.

Diversity in teams can be actively supported by recruitment guidelines that encourage a more diverse composition of research and development teams.

Conclusion

Combating bias in AI is a key challenge that is crucial to creating AI systems that are not only efficient, but also fair, transparent, and trustworthy.

An effective anti-bias strategy requires a synergistic combination of technical innovation, ethical principles, and sound legal guidelines. This integrated approach enables companies to advance AI technologies that not only promote economic success, but also respect social values and are fair for all recipients of AI decisions.
