Royal Free NHS Trust – Google DeepMind trial failed to comply with data protection law
Wizuda’s CTO, Shane O’Keeffe, explains how pseudonymisation can speed up medical research programmes whilst avoiding embarrassing data protection issues.
The recent high-profile case brought by the ICO in the UK found that the Royal Free NHS Trust unlawfully handed over 1.6 million patient records to Google DeepMind. The Trust provided the personal data as part of a trial to test an alert, diagnosis and detection system for acute kidney injury. An ICO investigation found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.
One of the key findings from the ICO’s report on the case is:
‘In this case, we haven’t been persuaded that it was necessary and proportionate to disclose 1.6 million patient records to test the application.’
The simple fact is that it wasn’t necessary to use actual patient records in the test phase. With intelligent use of pseudonymisation it’s a straightforward task to produce meaningful data to allow for full testing of new software and processes. It’s something that we’ve been doing at Wizuda for our clients across Europe and the Middle East for several years.
One example is a large research institute in the Middle East which conceived a project to identify and map overlapping social groups using mobile call detail records. Call detail records can contain a large amount of sensitive personal information, including phone numbers and call location data. To overcome this, the originator of the data used Wizuda’s Anonymisation module to pseudonymise the records, ensuring that the data received by the research institute remained useful while removing the ability for anyone to trace it back to the individuals concerned.
Our software allows for consistent replacement of values with the same tokens both within a dataset and across multiple sets of data using customisable lookup lists. By consistently replacing sensitive values in each batch of data with the same tokens, the researchers could build their profiles without having the overheads of securing and managing large datasets of sensitive and personally identifiable information.
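To illustrate the idea of consistent replacement, here is a minimal sketch in Python. This is not Wizuda’s implementation; it simply shows one common approach, in which a keyed hash (held by the data originator) maps each sensitive value to the same token in every batch, so records can still be linked by caller without exposing the underlying phone number. The key name and record fields are hypothetical.

```python
import hmac
import hashlib

# Hypothetical secret held by the data originator. Without this key,
# tokens cannot be reversed or recomputed from the pseudonymised data.
SECRET_KEY = b"originator-held-secret"

def pseudonymise(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a sensitive value with a consistent, non-reversible token.

    The same input always yields the same token, both within a dataset
    and across multiple batches, which preserves linkability for research.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Two call detail records from different batches sharing the same caller:
cdr_batch_1 = {"caller": "+44 20 7946 0000", "duration_s": 120}
cdr_batch_2 = {"caller": "+44 20 7946 0000", "duration_s": 45}

token_1 = pseudonymise(cdr_batch_1["caller"])
token_2 = pseudonymise(cdr_batch_2["caller"])
assert token_1 == token_2  # consistent replacement across batches
```

A lookup-list approach, as described above, achieves the same consistency by storing the value-to-token mapping in a table instead of deriving it from a key; the keyed-hash variant avoids having to secure that mapping table.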
The creation of anonymised or pseudonymised data usually requires significant analysis to ensure the output data is both secure and useful but with Wizuda’s Anonymisation module, the process of creating those datasets is quick and intuitive. Processing of large datasets can be scheduled and distributed to ensure optimal performance even if data is being processed in a real-time environment.
It’s now clear that even highly laudable medical research programmes have to follow data protection law. Building pseudonymisation into Privacy Impact Assessments at the start of any programme will certainly help speed up the testing stage, and gives programme owners time to assess what level of use of actual personal data, if any, should be rolled out over time as and when patients’ consent is obtained.
Click here to view the full story.
If you’d like to find out more about Wizuda’s solutions in this area please contact us for more information.