Responsible use of AI-based tools in medical diagnosis and treatment – the AI sandbox

31.01.2023

In 2021, the Norwegian Data Protection Authority (DPA) established a regulatory test environment (“sandbox”) for Artificial Intelligence (AI). The purpose of the sandbox is to enable companies and government agencies to collaborate with the DPA when developing and testing AI-based technology that is likely to have privacy implications. The DPA provides guidance during the development and testing phases to ensure that the resulting AI tools are responsible and in compliance with the rules on processing of personal data.

The Bergen hospital project

In a recently completed sandbox project, the DPA collaborated with the public hospitals in Bergen in connection with the development of an AI-based diagnostic tool. The Bergen hospitals had found that 10% of their patients account for more than half (53%) of the total number of bed days, and that these patients are frequently readmitted to hospital within 30 days of their previous discharge. The hospitals wanted to develop technology which, based on an algorithm and machine learning, could predict which of their patients were likely to be readmitted, based on an analysis of data from the individual patient’s medical records.

Earlier medical studies had shown that it was possible, based on routine data, to predict which patients had the highest risk of readmission. Such earlier models had proven accurate at group level, but not at the individual level. The purpose of the Bergen project was to develop technology that would apply such predictions to the individual patient, thereby enabling medical professionals to provide additional treatment to the patients who needed it in order to avoid readmission. This would both improve the individual health care received by these patients and save hospital resources.

In step 1, Helse Bergen developed an AI-based system that would analyze data obtained from the patient’s digital medical records and provide a warning if there was an increased risk of readmission. This warning would be presented to the medical staff upon admission of the patient. The development of the algorithm was carried out using actual patient data.
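To make step 1 concrete, below is a minimal sketch of what such a system could look like: a binary classifier trained on routine record data that flags elevated 30-day readmission risk when a patient is admitted. The article does not disclose Helse Bergen’s actual model, features or threshold; everything below, including the synthetic data, is hypothetical.

```python
# Minimal sketch of a step 1 readmission-risk warning system.
# Hypothetical features and synthetic data; not Helse Bergen's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical routine features: age, number of previous diagnoses,
# admissions in the last year, and length of the last stay (days).
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.poisson(3, n),         # number of previous diagnoses
    rng.poisson(1, n),         # admissions in the last year
    rng.exponential(4.0, n),   # length of last stay, in days
])
# Synthetic label: readmitted within 30 days of the previous discharge.
logits = 0.03 * X[:, 0] + 0.4 * X[:, 1] + 0.6 * X[:, 2] + 0.05 * X[:, 3] - 6
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def readmission_warning(features, threshold=0.5):
    """Risk score and warning flag shown to medical staff upon admission."""
    risk = model.predict_proba([features])[0, 1]
    return risk, risk >= threshold

print(readmission_warning([72, 8, 2, 11.0]))  # e.g. a high score and a flag
```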

In step 2, the medical personnel involved in the treatment of the patient would use the prediction and their medical know-how to issue a patient score and implement preventive measures for those patients where this was deemed necessary.


The DPIA prior to the project

Prior to the project, the participants carried out a Data Protection Impact Assessment (DPIA). The DPIA revealed two specific risks to the fairness of the processing: a risk of false negatives and false positives, and a risk of demographic bias. Neither was found to entail a high risk.

A false positive would simply mean that the patient received more extensive treatment than they otherwise would, which poses no threat to the patient. A false negative would mean that the patient received the same treatment as they would have received without the tool, which likewise cannot be categorized as a high risk.
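The reasoning can be illustrated with a standard confusion matrix: whichever way the tool errs, the patient never receives less care than under current practice. The numbers below are invented for illustration only.

```python
# Illustration of the DPIA's error analysis; the labels below are invented.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])  # actual 30-day readmission
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 1])  # tool's warning flag

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False positives: {fp} -> extra preventive measures on top of standard care")
print(f"False negatives: {fn} -> standard care, exactly as without the tool")
```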

Regarding the risk of demographic bias, the project considered whether this risk could be reduced by using more extensive data sets. In machine learning, it is often beneficial to provide as much data as possible; however, this was likely to conflict with the principle of data minimization in the GDPR. Using low-quality data sets in the training of the algorithm was identified as a source of demographic bias, which could carry over into the finished product. The project found, however, that the accuracy of the algorithm did not depend on an extensive amount of data pertaining to each patient; using fewer but carefully selected parameters produced results that were just as accurate. For example, it turned out to be sufficient to consider the number of previous diagnoses given to a patient; it was not necessary to include data regarding each specific diagnosis.
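The sketch below illustrates that finding on synthetic data: a model trained only on the count of previous diagnoses can match one trained on every individual diagnosis code, provided the outcome is driven by overall disease burden. This is not the project’s actual evaluation; all names and numbers are hypothetical.

```python
# Data minimization sketch: diagnosis *count* vs. individual diagnosis codes.
# Synthetic data in which readmission risk depends on overall disease burden.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_codes = 4_000, 50

# Full representation: one binary column per specific diagnosis code.
diag_codes = rng.random((n_patients, n_codes)) < 0.06
# Minimized representation: only the number of previous diagnoses.
diag_count = diag_codes.sum(axis=1, keepdims=True)

# Synthetic outcome driven by disease burden, not by any specific code.
p = 1 / (1 + np.exp(-(0.8 * diag_count.ravel() - 2.5)))
y = rng.random(n_patients) < p

for name, X in [("all diagnosis codes", diag_codes),
                ("diagnosis count only", diag_count)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:>21}: mean accuracy {acc:.3f}")
```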


Lawfulness of the processing

The development of the tool was based on actual patient records, i.e., medical data relating to identified patients. Such data constitutes personal data and falls within the “special categories” of personal data under Article 9 GDPR.

In considering the legal basis for the processing, the project distinguished between the processing of personal data for the purpose of developing the technology through machine learning, and the later use of the tool in medical treatment. For the development phase, the DPA found that the processing was lawful under Article 6(1)(e), as it was necessary for the performance of a task carried out in the public interest. Further, the DPA found that the processing was necessary for the purpose of preventive medicine under Article 9(2)(h), and that it was necessary for reasons of public interest in the area of public health under Article 9(2)(i). The DPA found a basis in Norwegian law in the legislation on health personnel, which, following an amendment in 2021, provides that patient record data may, subject to approval from the Ministry of Health, be used for the development of clinical tools and for the purpose of promoting health or improving medical services.

For the later use of the tool, the DPA found that the processing was lawful under Article 6(1)(c), as the processing is necessary for compliance with a legal obligation: the hospitals are obliged to provide adequate medical services to their patients. The DPA further found that the processing was necessary for the purpose of preventive medicine and for the provision of health care or treatment under Article 9(2)(h). The DPA found a basis in the Norwegian legislation on specialist medical services, which obliges the specialist medical service to provide adequate medical services, and in the health personnel legislation, which obliges medical personnel to keep records of their patients’ medical data.


Does use of the tool entail automated individual decision-making?

The project considered whether use of the tool would violate the prohibition in Article 22 against automated individual decision-making. The conclusion, however, was that it would not, as decisions regarding the patient would not be based solely on automated processing, but would instead be made by doctors and other medical personnel using input from the system.
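The safeguard can be pictured as a gate in which the model’s score is merely one input and the recorded decision is always taken by a clinician. This is a hypothetical sketch of the principle, not the hospital’s actual system.

```python
# Hypothetical sketch of the Article 22 safeguard: the system only informs,
# the clinician decides. Names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskWarning:
    patient_id: str
    risk_score: float  # model output in [0, 1], shown on admission
    flagged: bool      # whether the warning threshold was exceeded

def decide_preventive_measures(warning: RiskWarning,
                               clinician_decision: bool) -> dict:
    """Record a decision made by medical personnel, informed by the tool."""
    return {
        "patient_id": warning.patient_id,
        "model_input": warning.risk_score,  # input only, never decisive alone
        "decision_by": "clinician",
        "preventive_measures": clinician_decision,
    }

# Example: the model flags the patient, the doctor confirms after assessment.
warning = RiskWarning(patient_id="p-001", risk_score=0.82, flagged=True)
print(decide_preventive_measures(warning, clinician_decision=True))
```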


Article provided by INPLP member: Øystein Flagstad (Gjessing Reimers, Norway)


Discover more about the INPLP and the INPLP Members

Dr. Tobias Höllwarth (Managing Director INPLP)

What is the INPLP?

INPLP is a not-for-profit international network of qualified professionals providing expert counsel on legal and compliance issues relating to data privacy and associated matters. INPLP provides targeted and concise guidance, multi-jurisdictional views and practical information to address the ever-increasing and intensifying field of data protection challenges. INPLP fulfils its mission by sharing know-how, conducting joint research into data processing practices and engaging proactively in international cooperation in both the private and public sectors.