Attending events was a good way to get the discussion around data privacy and ethics started. This one was particularly interesting when you consider Artificial Intelligence and the possibility of it being a black box.
Will GDPR affect advancements in Artificial Intelligence?
The short answer is yes! But that doesn’t make it a bad thing.
Innovation in Artificial Intelligence (AI) is one of the greatest achievements of the modern digital age. AI is the ability of computers and machines to perform human-like activities such as learning, problem solving and decision making.
The potential of AI is a source of great excitement to futurists and transhumanists, who predict that AI could become billions of times smarter than humans, to the point where individuals might need to merge with computers to survive.
Popular technology integrated with AI includes wearable devices, for example an Apple Watch, which can monitor your physical activity and certain health attributes. Such devices have a private benefit to the individual using them, but they also create a wide array of data sets that contribute to the Internet of Things.
Organisations can benefit from access to this data, continuing to innovate technologies and solutions through AI, or using the data to produce sophisticated insights for both the consumer and the organisation. When put to good use, the potential these technologies offer society is enormous. However, when organisations take advantage and negatively exploit the wide array of personal data created by such technologies, we could see the current global trust crisis deepen.
A colleague attended an event that discussed the potential for AI applications to be further integrated within the NHS. There, concerns were raised by several AI developers that GDPR could stifle innovation within the field. This was immediately challenged by the “GDPR experts” in the room, whose view was that GDPR is necessary to protect the collection, use and sharing of personal data. And those who worked within the NHS believed GDPR would actually reduce the number of opt-outs, stating that if citizens had greater control over how their personal data was used, for example in the development of applications to improve their healthcare management, they would be more likely to give permission for that purpose.
Another point raised was that fully anonymised data is not covered by GDPR, so AI developers can continue to freely use anonymised data to develop AI applications. Those creating AI solutions that will benefit the citizen or the wider public could consider claiming either legitimate interest or public interest as the legal basis for collecting, using or sharing personal data. However, to use this compliantly would require transparency, and an organisation would need to demonstrate that it has considered the citizen’s rights. Organisations that do this, and truly deliver value from using citizens’ personal data, can genuinely build stronger trust between the organisation and citizens.
GDPR will ensure that organisations, whether using personal data to develop AI or capturing personal data through applications run by AI algorithms, are not crossing the moral line and exploiting citizens’ personal data. GDPR should not stifle innovation within this sector if data is used ethically and the organisation is fully transparent about the purposes for which it uses individuals’ personal data.