De-mystifying generative AI
- fran2779
- Nov 25
- 3 min read
It's been a little over a year since the Kohlrabi co-directors attended the NCRM Methods Con (strongly recommended) in Manchester and were completely dazzled by a talk, '24 hours of AI'. The very charismatic presenter guided us through his day, giving us a taste of the times he'd use ChatGPT, Claude or other useful programmes I can't remember. All day, presenters talked about how much more efficient AI could make their research activities. On our journey home we said to each other, "We've probably only got a year until everyone is using these things and we'll be behind... or... (looking wide-eyed) research consultancies won't even be needed anymore...?"
It's been a year and I'm not saying I don't use it myself at times, but I'm starting to see generative AI like buying fast fashion, driving short distances or not using a 30 degree wash cycle. It's another thing that I sense is causing some harm, but... sometimes I do it anyway, as it's easy to pretend that my contribution doesn't matter or that it's just going to be one time.
It's less easy to pretend when wise people around you are speaking up. I'm really grateful to Catherine Brys, PhD MBA and Wim Vanderbauwhede for raising my awareness in their recent seminar - Demystifying AI. Firstly, they reminded us to call it generative AI to distinguish it from Machine Learning, which can be useful for tasks like prediction or detection in fields like medicine and agriculture. Generative AI on the other hand doesn't think - it makes a best guess by combining snippets of information it has been trained on.
Wim reminded us that the "Intelligence" in AI is really good marketing. These models are not intelligent: they cannot assess the accuracy or reliability of their sources, and they may invent, overlook key perspectives or inadvertently add bias. When we are using it to summarise confidential or sensitive data we should also be wary of the serious risk we run of data leakage. Our data can show up in other people’s chat responses and can be shared with companies providing the GenAI service.
As a society it feels like we've swallowed the idea that generative AI will improve productivity. However, Wim presented a small RCT which reported no difference in productivity for government workers using generative AI. Another RCT found that developers were actually slowed down by using it. He drew a comparison to fast food. It may feel nice but it's not good for us: there are worries about 'brain rot' with habitual usage. On a human level we might be becoming addicted to another fast, easy route rather than practising the ability to create, to focus, to build our knowledge.
The ethical issues were what really shocked me. I did know that these tools can only do their work because they are trained on stolen work. Many of the audience, including me, were surprised to realise that they'd never thought about the low-paid workers, often living far from the UK, who maintain the system. The content that generative AI scrapes from the internet at times contains child pornography and violence. The workers filtering content have to remove harmful material manually - a traumatising new form of colonialism which is being actively taken up as an issue by the Data Workers Enquiry.
Then we turned to the environmental impact. Already GenAI data centres cause more emissions than the entire aviation industry. We heard about crop failures as a result of water being diverted to AI data centres and discussed the delay to phasing out of fossil fuels.
I don't really know how to finish this write-up of the talk. It feels like we're just beginning this discussion. I do see "secure platforms" advertised and I hear of researchers achieving breakthroughs using some AI contributions. I'm not totally against it. I also drive a lot and often wash towels at 90 degrees... I'm just grateful to the seminar, and to all those around me raising queries, for getting me to start thinking about my contribution - as a public health researcher and a member of society.