It has been a distracting year so far, and it is only the month of May. We have had a state of emergency, a change of prime minister, Carnival, Eid, Easter, elections, a change in government, and a change of prime minister again.

And now, as the new Government settles in, further changes are expected in State boards, policy, and strategic direction. Internationally, Ukraine and Russia continue to be at war. Israel continues to bomb Gaza. India and Pakistan, although now in a truce, were only recently attacking each other. The United States and Donald Trump continue to take centre stage in most of the news, as every day brings another sensational “break or ignore the rules” session. In the background of all of that is the growing influence of AI in our lives, creeping up seemingly slowly but in reality moving with increasing speed and catching us unawares.

A few days ago, the University’s Emergency Medicine team held a virtual conference on AI, presented by two AI specialists. The topic was the use of AI in postgraduate medicine. They first went through the basics of AI, then demonstrated how AI could be used to ingest a manual and produce a summary of it. They also showed how AI could augment students’ studying by summarising manuals, large documents and even podcasts, and how it could produce PowerPoint presentations for classes and even conferences.

When the Emergency Medicine lecturers explained that our postgraduate students write case reports on patients and topics, with analysis and comparison of the evidence from published research, the AI specialists told us that AI could easily do this for the student, in the student’s own writing voice and style. Once the AI has been taught the style of the human, it could produce a case report with complexity and analysis, where it would be difficult to discern the writer or to prove that this was not the student’s work. In fact, they said, there is no reliable programme that can screen a document to determine whether it was produced by an AI rather than a human. Thus, depending on how skilfully a student trains their AI, the lecturer or teacher would not be able to detect, screen for, or prove whether the work was human.

Going further, they detailed how AI could be used to write MCQs and short answer questions (SAQs), replacing or assisting lecturers in preparing questions for exams. Again, as with the students, the AI could be given manuals, textbooks and the curriculum, and prompted to produce MCQs in the style of a professor or consultant in Emergency Medicine in the Caribbean. The caveat for lecturers, though, is that students could do the same and could be studying with those very questions, after allowing their AI to ingest the same material as the lecturers did. If the AI is open source (using generic public data) or a more finely tuned tool like Copilot or ChatGPT, the underlying databases could be similar, and there is a risk that the questions produced would be very alike and the student would have an unfair advantage in the written exams.

It means that presentations, projects, case reports and even research may be heavily AI-assisted or AI-dominated. Revision MCQs and SAQs could resemble the exam questions if the examiners used a similar AI prompt. Thus, assessment of that student through semester projects and written exams becomes outdated, inaccurate, irrelevant, and unclear. This means that a poor student can still pass the semester project work and the written exams. It now puts more pressure on lecturers to find other forms of direct assessment that test the real human without AI.

The facilitators of the session even described the use of AI in the Emergency Department, where doctors could present a case they had just seen to their AI, and the AI could advise on the possible diagnosis and further investigations. Again, this makes it possible for a below-average doctor to rely too heavily on AI to assist with medical decision-making. It has to be noted that AI systems are not always correct and are prone to hallucinations (presenting made-up information and conclusions as if they were real).

At the other extreme, though, China has opened the world’s first AI hospital, where the AI doctors could potentially see 10,000 patients in a few days. In testing, these AI doctors achieved an average mark of 93% on a Chinese university medical written exam, higher than the humans.

The facilitators of the AI session advised that AI is neither good nor bad. Like a dangerous dog, it depends on its input and training; it depends on what biases from us humans are put into it. AI, they said, could be a marvellous tool to make us more efficient, as long as we allow it only to assist and not to take over in terms of decision-making and influence. The only way to do this, they said, is to use AI and become familiar with it, to pivot and modify the rules and boundaries, and to make sure we stay ahead of the game, or the race.

Man, it is only May. So much is happening, the race is on, and I am not sure if we have even started.

Dr Joanne F Paul is an Emergency Medicine Lecturer with The UWI