Artificial intelligence is changing healthcare. What does the law say about it?

Artificial intelligence is entering healthcare faster than many are willing to admit. It can accelerate diagnosis, improve doctors' decision-making, and make top-quality care available even where specialists are scarce. At the same time, it raises questions that the legal framework must answer: who bears responsibility for damage caused by an algorithm, how should oversight of its decision-making be set up, and how can sensitive patient data be handled safely? European AI regulation already lays down concrete rules, and healthcare facilities must adapt to them. This article therefore focuses on the key legal aspects of using artificial intelligence in medicine, from regulation and personal data protection to liability for damages.

Artificial intelligence (AI) is already being used in healthcare facilities, and in the near future it will be deployed ever more frequently across a whole range of areas. The most common examples are assistance with diagnosis through the analysis of medical images or laboratory results, assessment of the risk of complications in individual patients, automated patient triage in emergency departments or pre-hospital care, and faster processing and evaluation of medical records. AI can save doctors time, increase the accuracy and consistency of diagnoses, and make specialized care available to patients even in regions where specialists would otherwise be lacking. In the future, AI is also expected to drive the development of personalized medicine, in which treatment recommendations are tailored to a patient's genetic and other individual characteristics.

Regulation of artificial intelligence

AI in healthcare has enormous potential, but also a fundamental impact on human health and safety, and it is therefore subject to strict regulation in the European Union. The EU Artificial Intelligence Act introduces a categorization of systems by risk level. AI systems used in healthcare that also meet the definition of a medical device or an in vitro diagnostic medical device under the relevant EU regulations fall into the high-risk category. This classification imposes a series of obligations on providers (manufacturers) of AI systems, from mandatory conformity assessment, through requirements on the quality of training data and on system robustness and safety, to transparency requirements and detailed technical documentation. To these are added obligations arising from medical device regulation, above all proving clinical safety and efficacy through studies.
Healthcare facilities that implement these solutions typically act as so-called deployers (professional users). They are obliged to keep operational records, ensure appropriate staff training, monitor the system regularly, report defects and adverse events to the system provider, and ensure that human oversight of the system is guaranteed at all times. Some obligations, such as ensuring oversight, can be contractually transferred back to the provider of the AI system. The professional community is also discussing whether informed patient consent should be obtained for the use of AI systems, at least where the system is used experimentally, i.e., in the development and testing phase.
Questions also arise over the categorization of certain solutions that are not medical devices. Patient triage systems, for example, will be high-risk when used in crisis situations, yet the precise definition of a crisis situation is not entirely clear. It may cover extraordinary events with a high number of casualties, but it is uncertain whether regular hospital emergency departments could also qualify. Conversely, automated processing of medical records usually does not fall into the high-risk category. If, however, another service were operated on top of the records, for example a chatbot providing advice to patients, it would be a limited-risk system, with its own set of obligations for providers and healthcare facilities.

Artificial intelligence and personal data protection

A separate and very significant area is the protection of personal data and of information subject to medical confidentiality. Health data belong to the special categories of personal data under the GDPR. Their processing is permitted when providing healthcare, but this exception cannot automatically be relied on by AI providers who want to use the data for further training or improvement of their systems. In such cases, healthcare facilities should address the conditions very carefully in their contracts with these providers, ideally prohibiting data transfers or allowing data to be shared exclusively in anonymized form. When deciding, however, the nature of the specific system and the purpose for which it is to be used must be considered: from a functionality perspective, the more quality data the system has available, the more accurate and reliable its results.

Liability for damages

Liability for damages is absolutely crucial when artificial intelligence is used in healthcare. AI systems can significantly influence diagnosis, treatment decisions, and the organization of care. If it were not clearly established who is responsible for errors or damage caused by an AI system, patients would face uncertainty and risk being unable to enforce their right to compensation. AI systems learn on their own and make autonomous decisions; even the provider is often unable to explain why the system generated a certain output, i.e., why it decided as it did.
We believe that, when healthcare services are provided, liability under the Civil Code will continue to apply, specifically under § 2936. This provision states that whoever is obliged to perform something for someone and uses a defective thing in doing so shall compensate for the damage caused by the defect of that thing; it expressly applies to the provision of healthcare, social, veterinary, and other biological services. Liability is strict, i.e., it arises regardless of fault. A healthcare facility is therefore liable to a patient for damage caused by a defective AI system even if it did not cause the defect or error itself. The difficulty with this provision lies in determining whether the thing, i.e., the AI system, is defective. An AI system or algorithm may not be defective at all, yet its output may still harm a patient: the system may "decide" on the basis of erroneous data, or on the basis of its poor interpretation of error-free data. The final word will therefore probably rest with the doctor, at least for the foreseeable future, who must establish the diagnosis and treatment procedure lege artis.
If liability for damages were not assessed in this way, a patient could easily find themselves with no way to obtain compensation. Because healthcare facilities do not have the systems and their outputs entirely under their control, however, it is advisable for them to address compensation contractually with the providers of AI systems. The contract should clearly determine who is responsible for what, and to what extent, for example for which errors of the AI system the provider will be liable to the healthcare facility. These contractual provisions are crucial for AI systems, because in the event of a damages claim the provider could argue that, owing to the system's learning function, it has no control over it.
Healthcare facilities should actively verify whether their existing insurance coverage extends to damage caused by AI systems. Traditional insurance products may not expressly cover the risks associated with new technologies, which could lead to disputes with insurers. Dedicated insurance contracts or extensions of existing policies can ensure that AI-related risks are covered in the same way as other forms of professional liability.

Conclusion

Artificial intelligence in healthcare offers enormous opportunities to increase the efficiency and quality of patient care, but it also brings new legal challenges. Healthcare facilities should not be afraid to use AI, yet they must not underestimate the preparation it requires. It is essential to pay attention in particular to the contractual arrangements with system providers, to define liability for damages consistently, and to verify that insurance coverage is adequate. Only careful preparation can ensure that the interests of patients and of healthcare facilities themselves are protected, and that artificial intelligence does not sow distrust between patients and doctors.

This article was prepared by Attorney Eva Fialová together with Partner Michal Matějka from the law firm PRK Partners, who specialize in information and communication technology law, personal data protection, and legal aspects of new technologies.