New 'AI' technologies for evidence synthesis: how do they work, and can we trust them?
The past year has seen an explosion of interest in generative 'large language models' (LLMs) and their possible use in all areas of work and research. Already, over 12,000 papers have been published with 'ChatGPT' in the title or abstract. The use of LLMs in systematic reviews is frequently discussed, and it seems that we are on the verge of a revolution in automation. However, the quality of evaluations is often low, and many questions about bias, generalisability, reliability and transparency remain to be addressed. Understanding a little about how LLMs work will enable us to critique some of the claims made about their applicability, so this seminar will start with a gentle introduction to the technology. It will then explore some of the ways that LLMs might be useful, and how we can assess for ourselves some of the (at times, outlandish) claims being made about their utility.
Speakers
Professor James Thomas, EPPI Centre
Professor James Thomas' research centres on improving policy and decision-making through more creative use and appreciation of existing knowledge. It covers substantive disciplinary fields – such as health promotion, public health and education – as well as the development of tools and methods that support this work, both within UCL and in the wider community. He has written extensively on research synthesis, including meta-analysis and methods for combining qualitative and quantitative research in 'mixed method' reviews, and he designed EPPI-Reviewer, software which manages data through all stages of a systematic review.
Event notices
- Please note that you can join this event in person or remotely.
- Please note that a recording link will be listed on this page when available.