A glimpse into the cooking pot of digital technology assessment methods
Pauline Riousset | 28 March 2023
Digitisation, and generative AI in particular, is transforming the researcher's profession in many ways. The release of ChatGPT has led to an increase in plagiarism in science, but also to entirely new scientific collaborations (with ChatGPT itself!). In a study recently commissioned by the German Bundestag, my colleague Steffen Albrecht is currently investigating how ChatGPT is affecting the scientific community and how it could do so in the future. How innovative digital methods can be used for scientific policy advice, on the other hand, is one of my personal favourite topics.
Digital tools are of great importance for our work at TAB – and not only since the release of ChatGPT. They help us to identify, in the mass of new publications that appear every day, the most relevant scientific findings and innovations that could have a far-reaching impact on the economy, society and the environment. I have been fascinated by this type of analysis since my studies at SciencesPo Paris, where I worked on mapping controversies using online data. After my undergraduate studies, I spent a research period at MIT, where I helped develop software to map and visualise the impacts of shale gas extraction. Now I am always on the lookout for the latest and most innovative digital knowledge tools that can help track relevant socio-technical developments, identify research and knowledge gaps, and uncover needs for action for the Bundestag.
Currently, I am particularly motivated by the planned International Handbook of Technology Assessment, edited by Prof. Dr Armin Grunwald, for which I am writing a contribution on "Digital Methods for Technology Assessment". My three co-authors are Lionel Villard, Anders Koed Madsen and Nicolas Baya-Laffite, who have all worked extensively on the usefulness of digital knowledge tools in their academic work. I briefly introduce them and their flagship projects below.
Lionel Villard is a lecturer at ESIEE, the School of Innovation and Entrepreneurship in Paris, and a researcher at LISIS, an interdisciplinary research laboratory dedicated to the analysis of science and innovation in societies. He manages the CorText platform, a web application that aims to strengthen research in the social sciences and humanities on trends in science, technology and innovation. It gives researchers without programming skills access to analytical tools to answer questions relevant to them and to visualise results based on selected datasets (e.g. patents, publications, media articles). It can be used to map innovation ecosystems, study the social dynamics of knowledge production, and track the evolution and emotional tone (sentiment analysis) of online social and professional discourse. The strength of the platform lies in its ability to combine different analytical dimensions, such as the automatic analysis of texts (text mining) with a temporal, social or geographical component. Specifically, it can be used for the following tasks (a small illustrative sketch follows the list):
- Identify and locate innovation hotspots
- Analyse the perception of technologies (positive, negative, neutral) among experts and the general public, assess polarising developments and gather evidence of pressures for action
- Identify research trends and gaps
- Evaluate the effectiveness of innovation and research policy instruments
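To give a concrete, if simplified, impression of what such a combined analysis involves, here is a minimal Python sketch. It is not the CorText interface; the corpus, sentiment labels and stopword list are invented for illustration, and the point is only to show how a text-mining step can be crossed with a temporal and a sentiment dimension.

```python
# Minimal sketch (not the CorText interface): combining simple text mining
# with a temporal and a sentiment dimension on a tiny illustrative corpus.
from collections import Counter, defaultdict

# Hypothetical records: publication year, free text, and a crude sentiment label
documents = [
    (2021, "shale gas extraction raises groundwater contamination concerns", "negative"),
    (2022, "new monitoring sensors improve shale gas well safety", "positive"),
    (2022, "public debate on shale gas licensing intensifies", "neutral"),
    (2023, "groundwater monitoring data published for shale gas sites", "positive"),
]

STOPWORDS = {"on", "for", "new", "and", "the", "of", "to"}

term_counts_by_year = defaultdict(Counter)   # temporal dimension
sentiment_by_year = defaultdict(Counter)     # sentiment dimension

for year, text, sentiment in documents:
    terms = [t for t in text.lower().split() if t not in STOPWORDS]
    term_counts_by_year[year].update(terms)
    sentiment_by_year[year][sentiment] += 1

# For each year, show the most frequent terms and the sentiment distribution
for year in sorted(term_counts_by_year):
    top_terms = [t for t, _ in term_counts_by_year[year].most_common(3)]
    print(year, top_terms, dict(sentiment_by_year[year]))
```

A platform like CorText does far more than this, of course, but the basic move is the same: structure a corpus along several dimensions at once and then visualise the result.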
My second contact, Anders Koed Madsen, is a professor at TANTLab at Aalborg University in Denmark. Part of his work is devoted to testing digital methods for researching socio-technical developments. In collaboration with the Danish Board of Technology (TAB's partner in the EPTA network), he recently explored how digital methods, and in particular the analysis of Twitter data, can be used as an element of participatory processes in technology assessment. The aim of the so-called data sprint was to better understand citizens' concerns about epidemics, pandemics and vaccinations. In this project, he and his colleagues searched Twitter for narratives that illustrate dilemmas in dealing with these issues and that motivate large numbers of people in different communities to actively participate in social media. One of the selected narratives, for example, concerned whether the health threat posed by the Zika virus should lead to the postponement or relocation of the 2016 Olympic Games in Brazil. The narratives resulting from the data analysis were then discussed with citizens. Anders and his team use this example to show how the participatory evaluation of new technologies can be enriched by online data and visual access to information.
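How might such mobilising narratives be surfaced from a pile of posts? The following Python sketch is a rough illustration under assumed data, not the TANTLab team's actual pipeline: it treats a candidate narrative as interesting if it attracts engagement across more than one community.

```python
# Hedged sketch (not the actual data-sprint pipeline): surfacing posts that
# both attract high engagement and span several communities.
from collections import defaultdict

# Hypothetical post records: text, community label, engagement count
posts = [
    {"text": "Should the 2016 Olympics be moved because of Zika?", "community": "public_health", "engagement": 540},
    {"text": "Should the 2016 Olympics be moved because of Zika?", "community": "sports_fans", "engagement": 320},
    {"text": "Vaccine appointment booking tips", "community": "parents", "engagement": 45},
]

by_narrative = defaultdict(lambda: {"communities": set(), "engagement": 0})
for p in posts:
    entry = by_narrative[p["text"]]
    entry["communities"].add(p["community"])
    entry["engagement"] += p["engagement"]

# Rank narratives that appear in more than one community by total engagement
candidates = sorted(
    (n for n, v in by_narrative.items() if len(v["communities"]) > 1),
    key=lambda n: by_narrative[n]["engagement"],
    reverse=True,
)
for n in candidates:
    info = by_narrative[n]
    print(n, "| communities:", len(info["communities"]), "| engagement:", info["engagement"])
```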
Finally, Nicolas Baya-Laffite, a professor at the University of Geneva (and previously a lecturer and researcher at SciencesPo's médialab in Paris), explores the dynamics of conflict using digital methods, while also addressing the question of how digital methods are changing the analysis of science, technology and innovation.
I had a long discussion with Nicolas, Anders and Lionel about the opportunities that digital methods will bring to TA in the future. In addition to the above-mentioned benefits of a platform like CorText, we came to the conclusion that digital methods can help researchers reflect on their own assumptions about the impact of a technology and gather new information more easily and continuously. For example, topic modelling can be used to automatically identify and categorise topics from a corpus of text and use them as input for expert interviews, focus groups or a survey. An analysis of ChatGPT posts on social media, for instance, can help to detect emerging application areas in large volumes of data in near-real time and to identify key contacts or opinion leaders who can share their expertise in an application area.
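As a rough idea of what such a topic-modelling step could look like in practice, here is a short sketch using LDA from scikit-learn. The mini-corpus of ChatGPT posts and the choice of three topics are assumptions for illustration only; in a real analysis the corpus would be far larger and the topic count tuned.

```python
# Hedged sketch: topic modelling with LDA (scikit-learn) on a small invented
# corpus of social media posts about ChatGPT, to derive candidate themes that
# could feed into expert interviews, focus groups or a survey.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ChatGPT helps me draft emails and summarise reports at work",
    "Students use ChatGPT to write essays, teachers worry about plagiarism",
    "ChatGPT gives wrong medical advice, accuracy is a real concern",
    "Our startup built a customer support bot on top of ChatGPT",
    "Universities debate policies on ChatGPT and academic integrity",
    "Developers use ChatGPT to explain and refactor legacy code",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term_matrix)

# Print the top terms per topic as a starting point for a discussion guide
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```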
Together we also discussed the particularities of digital methods and how best to deal with them. One of the main advantages of digital datasets is their immediacy: they provide a quick overview of current developments. However, knowing what the tools can and cannot do is essential to avoid drawing the wrong conclusions from the data. For example, when dealing with unstructured datasets such as social media posts, it is important to bear in mind that, unlike empirical surveys, they are not representative of the population, as only a small proportion of the population participates in online discussions. Knowledge about the type of users of a medium is therefore essential for interpreting the data. In addition, the results of an analysis are often not reproducible, as online data is usually ephemeral. The way in which findings can be derived is therefore subject to different rules than in traditional empirical methods in TA or sociology.
I do not want to reveal more of our ongoing work for now. In our book chapter, we look at which digital methods are already standard TA tools, which best practice examples can inspire future research and scientific advice, and where we can learn even more from interdisciplinary and international collaboration. You can read more about this in our contribution to the Handbook, which will be published at the end of the year.