
Deepfakes in companies: successful defence and employee liability

The spread of deepfakes is increasing rapidly, and the technology fascinates precisely because it increasingly blurs the line between reality and fiction. What may seem entertaining in the case of deceptively realistic replicas of famous personalities in photos and videos can also jeopardise personal data and the integrity of individuals.

But how can companies effectively protect themselves against deepfakes? What about the liability of employees and employers in the event of a successful deepfake attack? An overview!

What are deepfakes?

The term deepfake is made up of “deep learning” – a key concept in machine learning based on the use of large neural networks – and the word “fake”. Deepfakes therefore refer to an advanced form of digital media manipulation in which artificial intelligence (AI) and machine learning are used to create or modify highly realistic but fake audiovisual content.

At its core, deepfake technology uses extensive data sets of real audiovisual material to train a neural network that is capable of imitating certain aspects of this material – such as facial features, voice patterns or specific movement sequences – or of generating completely new content modelled on specific real people. Using methods such as face swapping, lip syncing and voice cloning, deepfakes can portray people in situations that never took place or attribute statements to them that they never made.
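To illustrate the training principle described above, here is a minimal sketch, assuming PyTorch, of the shared-encoder / dual-decoder autoencoder design popularised by early face-swapping tools. The layer sizes, image resolution and training details are simplified assumptions, not a production pipeline.

```python
# Minimal sketch of the shared-encoder / dual-decoder face-swap idea (PyTorch assumed).
# One encoder learns a common latent representation of faces; one decoder per
# identity learns to reconstruct that identity from it.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One reconstruction step on batches of 64x64 RGB face crops."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# After training, a "swap" renders person B's face with person A's expression:
# fake_b = decoder_b(encoder(faces_a))
```

The key design point is that the single encoder is forced to learn identity-independent facial structure, so that decoding person A's latent code with person B's decoder reproduces B's face with A's expression and pose.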

This technology raises significant legal and security issues, particularly with regard to data protection and the protection of the integrity of individuals. The potential uses of deepfakes range from harmless entertainment to falsifying evidence in legal disputes and carrying out targeted phishing attacks.

Given the rapid development and sophistication of deepfake technologies, it has become increasingly difficult to distinguish fake content from authentic content, emphasising the need for robust detection methods, legal frameworks and widespread education about the risks and implications of this technology.

Increased security risks due to deepfake fraud

Deepfakes represent a significant escalation in the digital threat landscape because they make it possible to create convincing audiovisual forgeries that can undermine traditional security measures.

This advanced form of tampering increases security risks on multiple levels:

Overcoming biometric security systems

Deepfakes can fool biometric security systems based on facial or voice recognition. Because attackers can mimic the appearance or voice of an authorised person with high accuracy, they could gain access to secure systems. This poses a direct threat to the security of confidential information.

Refinement of social engineering and phishing

Deepfakes open up new dimensions of threats in the area of phishing by making it possible to create highly authentic-looking fake communications. These range from emails with manipulated video or audio attachments to artificially generated voice messages and completely staged video calls. The particular danger of these attacks lies in their ability to mimic familiar visual and acoustic signals in such a way that recipients are instinctively inclined to believe them.

A striking example of the increased security risk posed by deepfakes is the so-called CEO fraud, in which perpetrators artificially imitate the voice of a managing director or other high-ranking executive. A scenario could look like this:

An employee receives a call purporting to be from the CEO with an urgent request to carry out a financial transaction or disclose confidential information immediately. The astonishing authenticity of the voice synthetically reproduced by deepfake technology can lead the employee not to question the legitimacy of the request. This carries the risk of serious financial losses or data breaches.

The threat of deepfakes extends far beyond voice impersonation and includes the creation of visually convincing videos or avatars. Consider the example of a spoofed video conference:

An employee is invited to a seemingly urgent meeting initiated by the company’s IT security officer. The visual representation of the IT security officer presented during the conference appears deceptively real through the use of deepfake technology, including the synchronisation of lip movements and the imitation of characteristic facial expressions. The artificially generated IT security officer says that there is an acute security problem that requires the immediate disclosure of access data or the installation of specific software. The assumption that they are talking to a real colleague could lead employees to follow the instructions, which could result in unauthorised access to the company network or the introduction of malicious software.

Such a scenario may seem unlikely at first glance, but the incident that occurred in Hong Kong in early February 2024 is real evidence of the advancing threat of deepfake technology:

A finance employee of a Hong Kong-based multinational initially harboured suspicions when he received an urgent email from his UK-based Chief Financial Officer (CFO) requesting a meeting about a supposedly secret business venture. The employee initially suspected a phishing attack and acted with caution.

However, his scepticism subsided when he took part in a video conference call with the CFO and other senior company representatives. He recognised the people by their faces, voices and the backgrounds of their offices. They informed him that the company was about to initiate a highly sensitive business project that required an immediate capital investment and that he would be tasked with executing it. Following instructions from the CFO and other senior executives, the finance employee then initiated fifteen transfers to five different accounts in Hong Kong, totalling HK$200 million (over US$25.6 million). He was urged to exercise the utmost discretion and not to disclose any information to his colleagues.

About a week later, when enquiring at the company’s headquarters about the progress of the secret deal, it was revealed that there had never been such a scheme and no one in the company knew about the matter. At that moment, the finance employee realised that he had become the victim of a deepfake scam.

Employee liability for deepfake fraud

If an employee falls victim to such fraud, complex questions arise regarding the potential legal consequences. In this case, employee liability requires detailed and case-by-case consideration. The degree of negligence, the circumstances of the fraud case and the preventive measures established in the company play a key role.

Degree of negligence

The central question is whether and to what extent the employee has breached the standard of care required in the circumstances. Negligence exists if the employee disregards the level of care that could be expected of a reasonable person in the same situation. A distinction is made between three degrees: slight, medium and gross negligence.

Slight negligence covers acts that amount to a minor oversight, where the required care was breached only to a small degree. Typically, slightly negligent behaviour does not lead to liability on the part of the employee.

Example: An employee receives an e-mail that appears to come from the CEO of the company. The email asks for a confidential document to be forwarded. The employee checks the email address, which is similar to the CEO’s real email account, and follows the instruction without additional verification. It turns out that the email is part of a deepfake scam.

In this case, it could be argued that the employee was only slightly negligent: they checked the email address but did not take further steps to verify the request, such as calling the CEO directly. Overlooking the slight difference between the fake address and the real one could be regarded as a mere oversight, especially if no specific training on this type of fraud has taken place.

Medium negligence applies if the duty of care has been breached to an extent that goes beyond a minor oversight but does not yet amount to gross negligence. In this case, partial liability of the employee may be considered, depending on the circumstances of the individual case.

Example: An employee receives a call that is generated by deepfake technology and in which the voice of the CEO is imitated. The artificial CEO asks the employee to make an urgent transfer to a new supplier account. Although the employee knows that such requests normally require a process with several levels of authorisation, he decides to make the transfer without further verification because the call sounded very convincing.

This could be considered medium negligence, as the employee ignored the company’s existing process despite being aware of it. The decision to follow the instructions without obtaining the necessary authorisations represents a considerable neglect of the required standard of care.

Gross negligence refers to a serious disregard of the required standard of care, for example when basic security checks are omitted. In such cases, the employee is usually fully liable for the damage caused.

Example: An employee is shown a video message created with deepfake technology in which the CEO demands the release of a large sum of money to an unknown party. The employee has already taken part in training courses that explicitly warn against such fraud attempts and emphasise the need to verify such requests in a personal conversation with the person issuing the instruction. Nevertheless, the employee carries out the instruction without any form of confirmation or verification.

In this case, gross negligence could be involved, especially if the employee deliberately disregarded the company’s clear instructions and security protocols. The fact that he was trained and nevertheless ignored explicit procedures for verifying such requests demonstrates a serious disregard of due diligence.

Circumstances of the fraud

For a full assessment of negligence, the specific circumstances of the particular fraud case are also crucial. This includes the question of whether the fraud was obvious, what information the employee had and whether a reasonably diligent employee would have acted similarly in a similar situation.

The quality of the deepfake also plays a decisive role in the assessment of an employee’s negligence. A highly sophisticated deepfake that is almost indistinguishable from reality can often mislead even careful and trained employees, which influences the assessment of negligence. In contrast, obvious flaws and errors in a poorly made deepfake would indicate that an employee with reasonable care and attention should have recognised the deception and carried out appropriate security checks.

Therefore, negligence is not determined solely by the employee’s actions, but also by the circumstances of the individual case and the persuasiveness and quality of the deepfake.

Preventive measures by the employer

A key aspect in the assessment of employee liability is always the extent to which the employer has implemented measures to prevent fraud attempts, particularly those based on deepfake technology.

Preventing fraud through deepfakes in the company requires a multi-layered preventive approach that includes technological solutions as well as organisational measures and raising employee awareness.

Firstly, employee education and training are crucial to raise awareness of the existence and potential dangers of deepfakes. Employees should be trained to critically scrutinise and verify requests received via digital communication channels before responding to them, especially where these involve disclosing sensitive information or carrying out financial transactions. In particular, employees should be encouraged to respond to such a request – for example one received via a (deepfake) call or video call – by calling back on a channel they know to be genuine in order to confirm the task; a simple sketch of such a callback rule follows below.
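To make the callback rule concrete, here is a minimal, hypothetical Python sketch of how such a policy might be encoded in an internal workflow tool. The channel names, approval threshold and the fields of Request are illustrative assumptions, not a real system.

```python
# Hypothetical sketch of a callback-verification policy for payment or
# data-disclosure requests. All names and thresholds are illustrative.
from dataclasses import dataclass

# Channels on which a request may arrive but never counts as verified:
UNVERIFIED_CHANNELS = {"email", "phone_call", "video_call", "chat"}

@dataclass
class Request:
    requester: str            # claimed identity, e.g. "CFO"
    channel: str              # channel the request arrived on
    amount_eur: float         # 0 for pure information requests
    callback_confirmed: bool  # re-confirmed via a directory-listed number?

def may_execute(req: Request, approval_threshold_eur: float = 10_000) -> bool:
    """A request is executable only if it was re-confirmed out of band.

    Rule encoded here: anything arriving on a deepfake-prone channel must be
    confirmed by calling back on a number from the internal directory, and
    larger amounts additionally go to multi-level authorisation (not modelled).
    """
    if req.channel in UNVERIFIED_CHANNELS and not req.callback_confirmed:
        return False  # no out-of-band confirmation yet -> block
    if req.amount_eur >= approval_threshold_eur:
        return False  # escalate to the multi-level authorisation process
    return True

# Example: a convincing "CEO" video call asking for an urgent transfer
urgent = Request("CEO", "video_call", 250_000.0, callback_confirmed=False)
assert may_execute(urgent) is False  # must be blocked until called back
```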

On a technical level, companies could also introduce measures to counteract deepfakes. Deepfake detection software specialised in identifying manipulation in audiovisual materials could be used to proactively detect and block fake content.
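As an illustration of how such a detector could be wired into the intake of recorded calls or submitted videos, the following is a minimal sketch assuming OpenCV for frame extraction; score_frame is a hypothetical placeholder for whatever detection model a vendor or in-house team actually provides, and the sampling rate and threshold are assumptions.

```python
# Sketch of screening an incoming video with a frame-level deepfake detector.
# cv2 (OpenCV) handles frame extraction; score_frame() is a placeholder that
# must be replaced with a real detection model.
import cv2
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Placeholder: return a manipulation probability in [0, 1] for one frame."""
    raise NotImplementedError("plug in a deepfake-detection model here")

def screen_video(path: str, sample_every: int = 30, threshold: float = 0.7) -> bool:
    """Return True if the video should be flagged as likely manipulated.

    Samples one frame per `sample_every` frames and flags the video if the
    mean manipulation score exceeds `threshold`.
    """
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(cv2.resize(frame, (224, 224))))
        index += 1
    cap.release()
    return bool(scores) and float(np.mean(scores)) > threshold

# if screen_video("incoming_call_recording.mp4"):
#     quarantine_and_alert_security_team()  # hypothetical downstream action
```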

Effective prevention of deepfake fraud requires a dynamic strategy that includes ongoing training, communication of clear guidelines and continuous adaptation to new threat scenarios. These measures, initiated by the employer, form the foundation for protection against such attacks.

If an employee, despite having access to extensive training and clear guidelines provided by the employer to combat deepfakes, ignores or violates them, this could be considered gross negligence.

It is therefore primarily the responsibility of the employer to educate its employees about the risks of deepfake attacks and the relevant prevention strategies. In some cases, however, the proactive involvement of employees can also play an important role – especially when it comes to employees in highly qualified positions such as IT specialists or compliance officers. Specialists in these areas are expected to show a higher level of expertise and initiative. Should such specialised employees nevertheless fall victim to deepfake fraud, a degree of fault may be attributed to them on account of the expertise expected of them and their personal responsibility. The professional position and the expertise associated with it therefore play a key role in the assessment of liability.

Conclusion

Deepfake technology represents an immense challenge for companies and their security strategies. With the ability to create deceptively real audiovisual content, deepfakes open the door to new avenues of fraud that challenge traditional security measures. The increasing prevalence and advanced nature of this technology requires a rethink of how to prevent and respond to security risks.

Overall, dealing with deepfake-related security risks requires a combined effort from employers and employees. While employers are responsible for implementing robust security systems, regularly training their employees and creating a culture of security awareness, employees must play an active role in the security chain and effectively utilise the resources and knowledge made available to them. Faced with these challenges, it is clear that companies need in-depth legal expertise to minimise the risk of fraud and ensure an appropriate response to the threats posed by deepfakes.
