I recently attended the 7th DIA India Pharmacovigilance (PV) Conference in Mumbai. One of the sessions was dedicated to the Impact of Artificial Intelligence (AI) and Predictive Sciences in the world of PV. The session was moderated by Moin Don and Anju Agarwal; the panel comprised Mangesh Kulkarni, Saikat Biswas, Retesh Kumar and Saurabh Khurana.

The interaction between the panel of guest speakers and the open floor reminded me of the Mumbai traffic: it was frenetic, loud and made up of many different thought vehicles, and yet, despite all this, it travelled in the same direction without any head-on collisions!

To say this was a lively discussion would be an understatement, which was a testament to the ‘safe environment’ established early on between the honoured speakers and the eager attendees. This was definitely the place to share ideas and seek help from those with experience perhaps different to your own.

[Of note, because of the rapidity of the conversation and the number of contributors, I apologise in advance if I have attributed any of the information to the wrong person…. I am just a human with a pen and paper!]

The major themes discussed, among others, included:

  • The intelligence of machines
  • The value of social media
  • The potential loss of human jobs

There was no disputing the fact that, year on year, the amount of information available and the number of safety-related events to report are growing, while the budgets to pay for the necessary resources to complete the tasks are getting slimmer. The most common approach to remedy this imbalance is task automation. But what happens when tasks include decision-making steps? If these steps are to be completed by a machine, then artificial intelligence is required.

The intelligence of machines

“Chaos – a dynamical system that is extremely sensitive to its initial conditions”

From what I heard, the audience and panel were in agreement that machines would only be as intelligent as the humans who taught them to be. In order to grow the intellect, the machines must be fed appropriate, clean data – lots of it, and regularly! This data-diet needs to include a full spectrum of positive and negative “nutrients”, to enable all aspects of the intellect to grow. In addition, the data need to be appropriate for the use case. And, since the machine is blind to the plate you are offering, the data need to be structured so it knows where to find what it needs.

According to chaos theory, “A very small change may make the system behave completely differently. Very small changes in the starting position of a chaotic system make a big difference after a while.”

This is why we need to get machine learning right: we need to feed it correctly for what we want it, eventually, to do.
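As a small aside of my own (not something shown at the session), here is a minimal sketch of that sensitivity using the classic logistic map: two starting positions that differ by only one part in a million soon end up in completely different places.

    # A tiny illustration of sensitivity to initial conditions (chaos),
    # using the logistic map x_next = r * x * (1 - x) with r = 4.
    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map from x0 and return the full trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.200000)  # one starting position
    b = logistic_trajectory(0.200001)  # shifted by one part in a million

    for n in (0, 10, 20, 30, 40):
        print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f} (difference {abs(a[n] - b[n]):.6f})")

By around step 30 the two trajectories bear no resemblance to each other – which is exactly why the quality and consistency of the data we start with matters so much.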

I think it was Dr Kulkarni who described the perfect AI diet in three words: “Repetitive, structured, rule-based.” He said that when this diet is available, the AI works. But, he believed, in situations where judgement is required, or where there is a unique reaction, AI could not take the place of humans.

The value of social media

The use of social media to communicate experiences has increased. One of the guest speakers explained that use of social media relating to patient experiences had increased 10-fold since last year. Patients talk on social media, and they do their research before they take up a particular treatment.

The value and quality of social media as learning sources for AI were discussed; personally, I liken social media to fast food – it is loved by teens, it offers immediate gratification and it contains some core nutrients, but you need to discard the wrappings (rubbish) to get to it.

Retesh Kumar commented that approximately 60% of important information is lost when social media is not monitored. He agreed that it was full of noise and unstructured, but said that it could be cleaned and that, yes, this would be a burden, “but as a data set it could not be ignored.” He suggested that information acquired through social media should be used for risk management and signal detection rather than for ICSRs, where follow-up (e.g. with patients/carers) is necessary.

Another member of the panel agreed and gave the example of “weight gain” being reported by patients on social media after the launch of a drug. The company investigated this separately and discovered that increased appetite had emerged with use of the drug in the wider post-marketing environment. In this case, social media allowed an as-yet-undetected signal to be revealed.

The potential loss of human jobs

There was some debate as to the wisdom of relying solely on AI to complete PV activities beyond ICSRs. The panel collectively explained that aggregate reports contain approximately 10 modular sections that are repetitive and could perhaps be completed by AI. This represents about 40% of the report, but humans would still be needed to complete the non-repetitive portions.

Comments were made that this would be fine if all the data collected came from structured sources, but this was not the case, and that, as physicians, they would be afraid of false negatives. The panel agreed that current technologies were not capable of allowing AI to perform risk-benefit decisions.

The audience was told that AI learns via algorithms, but if these are self-learning algorithms, this leads to noise (ironically, one of the very things AI is used to decrease). However, if the algorithms are “locked”, this allows the AI to make judgements with less noise. Then, of course, there is the constant evolution of authority regulations to contend with, and the fact that these are not necessarily harmonised across all authorities. The overall impact is that humans would still be required to tweak the algorithms to allow AI to make the right decisions.

With respect to the fear of job loss – the main message was that AI would allow humans to upskill and use their intelligence for more predictive activities. Anju Agarwal, quite rightly, reminded us all of how nervous we were not so long ago when computers were first introduced into the workforce. “Instead of losing our jobs,” she said, “they liberated us – taking away the repetitive laborious jobs and allowing upskilling and unrestrictive thinking.”

Krishna Bahadursingh renamed AI “Augmented Intelligence” and told us that we should be using it to allow humans to carry out more effective safety activities. He stated that “PSUR, DSUR and labelling is reactive and after the fact”, and that it was more like “performing pharmacompliance and pharma-due-diligence because it was not predictive or vigilant.”

Considering efficacy versus patient safety, the potential for algorithms to further increase noise and miss signals, and the current opinion that machines cannot make judgement calls, it was suggested that the PV industry use AI to monitor structured information only, and to perform tasks only up to the point at which a physician’s decision is required.

This view was summed up by Dr Bahadursingh in his statement that “we need to use machines for their expertise to allow humans to perform theirs.”

From my humble perspective, I have to agree that if AI is used as an expert in repetitive tasks, then humans will indeed be freed up to perform the more predictive activities that are required to keep patients safe.

The session ended with a comment on the three streams that need to develop in parallel to allow effective, harmonised, AI-driven PV activities.

The three streams are technology, which is developing most rapidly; regulatory, which is struggling to keep up; and legal, which is the slowest to develop and anchors the whole process. Until these can be aligned and developed at a similar rate, AI will remain a buzzword in a world of PV where increased safety reporting, the very thing that regulators have wished for, threatens to overwhelm the system.


Image credit: Atharva Tulsi on Unsplash
