
Pleiade

Management & Consultancy

Blog

What will be the effects of Generative AI on Academic Libraries?

24 August 2024 by Maurits van der Graaf

This blog post is an excerpt from our White Paper.

What should a librarian know about Generative AI?

The essence is that AI models do not understand a topic as humans do, but weave words together based on complex statistical calculations. To do this, AI models use vast datasets collected from a range of sources, including some with bad data. As a result, their answers are not transparent and are sometimes wrong (‘hallucinations’). However, academic libraries can make the answers more transparent and reliable by forcing commercial AI chatbots to base their answers on validated data, using Retrieval Augmented Generation (RAG) applications. Such an application works as follows: the user’s question is used to retrieve a list of relevant documents from the library database containing validated and reliable information. These documents, along with the user’s question, are then sent to the commercial AI chatbot with the instruction to find the answer within the documents supplied by the library. The application returns the answer to the specific question, as generated by the chatbot, along with the relevant documents.
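
As a purely illustrative sketch (not taken from the White Paper), the flow of such a RAG application could look roughly as follows in Python. The names library_index and chat_with_llm, the prompt wording and the number of retrieved documents are hypothetical placeholders; a real service would plug in the library’s own search index and a commercial chat API.

```python
# Minimal sketch of a Retrieval Augmented Generation (RAG) flow.
# 'library_index' and 'chat_with_llm' are hypothetical placeholders for the
# library's search index and a commercial chatbot API.

def retrieve_documents(question: str, library_index, top_k: int = 5) -> list[str]:
    """Look up the top_k most relevant validated documents for the question."""
    return library_index.search(question, limit=top_k)

def build_prompt(question: str, documents: list[str]) -> str:
    """Combine the user's question with the retrieved library documents."""
    context = "\n\n".join(documents)
    return (
        "Answer the question using ONLY the documents below. "
        "If the answer is not in the documents, say so.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}"
    )

def answer_with_rag(question: str, library_index, chat_with_llm) -> dict:
    """Return the chatbot's answer together with the documents it was given."""
    documents = retrieve_documents(question, library_index)
    answer = chat_with_llm(build_prompt(question, documents))
    return {"answer": answer, "sources": documents}
```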

AI effects on the environment of the academic library

Next, we look at the impact of AI on science and on the behaviour of library users:

  • The behaviour of library users will, in our view, change dramatically as interaction with the internet becomes a dialogue (‘conversational discovery’). In the longer run, AI-powered personal assistants are predicted to interact with software applications as proxies for the end-user.
  • AI will transform science with the usage of AI models for complex systems, bringing together multiple disciplines. Also, the pace of scientific research is expected to quicken. As a result, we anticipate more scientific publications and a further atomization of the scientific record.

Paradigm shift in discovery

This changed interaction with the digital world is causing a paradigm shift in discovery. The transition to conversational discovery, with AI chatbots answering user queries, will lead to (much) less usage of library collections and may create the need for a new type of usage figure: usage by AI chatbots on behalf of the end-user.

Access to library collections, AI literacy, Open Science and the library organisation

Finally, we touch upon four other issues relevant to academic libraries that have so far received little attention in the literature but will surely play a role in the coming years:

  • The contentious question of allowing access by commercial AI chatbots to library collections
  • The need to redevelop library courses on information literacy
  • The potential threats to Open Science practices in AI-driven science due to the dominance of a few Big Tech companies
  • The effects of AI tools for businesses on library organisations.

In all, we predict 24 different effects of generative AI on academic libraries. You will find a more elaborate description in our White Paper.

Nominations for an Open Science award

17 May 2024 by Maurits van der Graaf

The first Leo Waaijers Award

The first Leo Waaijers Award will be presented at the Open Science Festival on October 22, 2024. The Award is intended for a person or group that has taken a daring, innovative and/or impactful initiative in the field of Open Science in recent years. With this initiative, the UKB wants to highlight Open Science initiatives and thereby stimulate and inspire others. The new Award will be presented periodically. If you would like to nominate an Open Science initiative by someone else or by yourself, please complete this nomination form before September 1 and send it to ukb@uu.nl. We (the jury, consisting of Saskia Woutersen-Windhouwer, Hubert Krekels and myself) will announce an initial selection in mid-September, and the winner will be announced at the festival itself.

In the footsteps of Leo

Leo passed away unexpectedly last year, and his funeral showed how many people he had been a source of inspiration for. Tireless, always on the cutting edge, with bold, innovative proposals and actions. Combative, but always with humor and fun. With this Award we hope that others will follow in his footsteps and thus let his spirit and guts live on.

Open Access and the general public

18 January 2024 by Maurits van der Graaf

Access to scientific publications 

I conducted a study on forms of access to scientific information as an assignment for the UKB – the Dutch consortium of university libraries and the National Library. There were two lines of research: (A) how can non-UKB research institutions access paywalled scholarly articles, and (B) what are meaningful forms of access for the general public? The latter mainly involves Open Access scholarly articles. In this blog post, I want to talk a bit more about the latter.

Target groups and key threshold

Who among the general public wants to read scholarly articles? Broadly speaking, three categories can be distinguished:

  • Extramural researchers, for example citizen scientists, independent researchers or researchers in industry, who want to consult articles for the purpose of their own research.
  • Evidence-informed professionals, such as education professionals, policymakers or NGO staff, who want to base their actions as much as possible on scientific grounds.
  • Individual citizens who want to gather scholarly information out of interest, study or for an important decision they have to make.

What hinders these target groups from accessing scientific articles? First, of course, the paywalls. However, in recent years roughly half of all articles internationally have been published Open Access, so those articles are accessible to them. Research among Dutch users shows that scientific language, as well as the use of English, are the main remaining barriers: a so-called lay abstract, preferably in Dutch, would remove a major obstacle for these users.

Artificial intelligence comes to the rescue

At the APE2024 conference, several publishers showed that they are working on this, and a few start-ups presented services in this area. All used AI, mostly (still) under human supervision. Besides summaries in layman’s terms, examples of visual abstracts (a kind of infographic) and audio summaries (‘conversational audio’) were also presented. An interesting fact for the authors of these articles: a summary in lay language also yielded many more citations!

Conclusion: open access alone is not enough

Much of the Open Access discussion is about costs, business models and other issues concerning the scientific community itself. For me, it was therefore the first time I looked at what target groups outside the scientific world need. It became clear that OA by itself is not enough: a service that supports these users with summaries in accessible language can ensure a much wider application of scientific output. And AI now makes that a real and viable option!

The full report (in Dutch) can be found here.

Effects of AI on digital inclusion and digital citizenship

5 January 2024 by Maurits van der Graaf

Alliance for Digital Society

I carried out a study (23 interviews plus desk research) for the Alliance for Digital Society – an organisation that unites multiple parties in order to realise digital inclusion. As Artificial Intelligence rather suddenly came into the spotlight in 2023, the Alliance wanted insight into its effects on digital inclusion and digital citizenship. The objective of this study was to provide an informative overview of the situation and developments around AI from the perspective of digital inclusion and digital citizenship, enabling the Alliance and its partners to develop potential actions.

AI = system technology

After the launch of ChatGPT in November 2022, AI may look like a big hype. However, ChatGPT (and similar programs based on foundation models) is not the only type of AI: speech recognition and computer vision were introduced earlier. The combination of these AI types will have a big impact on all walks of life: AI is a system technology, comparable to the combustion engine or electricity.

AI will help digital inclusion for many, but not all

In the Netherlands, a diverse group of an estimated three to four million people (out of a total of 18 million) cannot sufficiently keep up with the digital world.

AI can indeed empower some of these groups. AI makes it possible to interact with the internet in ways other than typing and reading. It is, and will increasingly become, possible to interact with the internet through spoken dialogue, regardless of language. In the words of one respondent whom I interviewed for this study: ‘away from the keyboard and away from English’. This will be of enormous help to people with reading or writing difficulties and to people who are visually impaired.

However, AI will not help everyone in this group: there will be groups that will never participate in the digital world themselves, because they cannot or do not want to. Digital services should therefore always offer human contact as an alternative.

Digital citizenship

What exactly is digital citizenship? In the report, I used a definition from the Rathenau Institute, which distinguishes between the individual level (personal benefits), the community level (effects on community cohesion) and the political arena (participation and representation).

At the individual level, AI’s opportunities for digital citizenship lie in applications that assist people (e.g., as personal assistants) or enable new ways to generate computer code or images. For organizations, sectors, or communities, the opportunities primarily lie in applications that enhance efficiency, effectiveness, or customization. Finally, AI offers opportunities for social debate participation through deliberative democracy applications. While this sounds positive, there are significant downsides, as discussed next.

Downsides of AI: the four B’s

The downsides of AI can be summarized with four B’s:

  • Bias: The datasets used by AI applications can have an unwanted bias, are often not transparent, are sometimes collected without respect for copyrights and sometimes contain privacy-sensitive data.
  • Black box: Generative AI applications like ChatGPT, in particular, use billions of parameters, leading to outcomes that cannot be traced or explained and that are sometimes outright nonsense. This is rather dangerous, as people tend to believe what the computer says – the so-called automation bias.
  • Big Tech: Important parts of AI technology are in the hands of a few Big Tech companies. This near-monopoly makes public institutions increasingly dependent on Big Tech and threatens their digital autonomy.
  • Bad actors: Bad actors can use AI applications to produce large amounts of disinformation and deepfakes. Alas, AI is also used in cybercrime.

Responsible AI is the answer

The consequences of these downsides are negative effects on human rights and public values. Therefore, a lot of work is being done to include the so-called ELSA aspects [the Ethical, Legal & Societal aspects of AI] in the development, implementation, and use of AI applications. AI that meets the ELSA preconditions is called responsible AI.

A personal note

AI will certainly change our lives in the coming years. Our interaction with computers will transform profoundly. In this respect, I found the image of a centaur particularly striking: the human should be the head, the computer the legs. However, with AI applications combined with automation bias, it could become the other way around. With Uber, this is already the case: the Uber app manages the drivers and, to some extent, also the passengers.

In memoriam Leo Waaijers

12 August 2023 by Maurits van der Graaf

Last week the news came that Leo Waaijers had passed away at the age of 85. A great and unexpected shock, because only a few weeks earlier I had been out to dinner with him.

In 1988 Leo entered the world of academic libraries, first as librarian of TU Delft (where he realised the iconic TU Delft library building – within budget, as he liked to point out), then of Wageningen University, and subsequently as manager of the SURF platform ‘ICT and research’. After these salaried positions he became an Open Access consultant, founded QOAM and became a guest researcher at the CWTS in Leiden. Leo never retired!

Leo was a tireless fighter for Open Access and against the market power of the big scientific publishers. He wrote and spoke at countless conferences, where he – often slightly provocatively – put forward innovative ideas and called for action. And he was a master at devising concrete actions to set things in motion.

His legacy is accordingly impressive. He gained national and international recognition by setting up the successful ‘Keur der wetenschap’, or ‘Cream of Science’: an initiative to give the universities’ Open Access repositories a quality boost by asking 200 leading scientists whether they would make all their scientific publications available for them. For this he received the SPARC Europe Award for Outstanding Achievements in Scholarly Communications in 2008. Leo then turned his attention to the scientific publishing process itself and founded the Quality Open Access Market. QOAM is intended to let the quality of a journal be judged by the authors themselves, and thereby to break the dominance of the ‘journal impact factor’ of a commercial party. Tireless as ever, he published, together with colleagues from the CWTS, a plea in May of this year to build a publication infrastructure that bypasses the publishers.

I met Leo in 1993, when I became director of the then Nederlands Bureau voor Onderzoeksinformatie (NBOI) and he, as librarian of TU Delft, sat on its board. During his period at SURF I carried out several assignments, including a large inventory of repositories in Europe for the European DRIVER project. When Leo became an Open Access consultant we did a number of assignments together and wrote reports with titles such as ‘Surfboard for Riding the Wave’, ‘Authority files breaking out of the library silo’ and ‘Quality of Research Data, an operational approach’. During an assignment on the construction of the new library in Birmingham we finished the morning session unexpectedly early and still had many hours before our flight departed. Over lunch we became so absorbed in our conversation that we missed the plane!

The Open Access movement will sorely miss Leo’s ideas and initiatives. In his thinking he was sometimes 20 years ahead of his time (among other things with an idea to split publishing and peer review – very strange then, an accepted idea now) and he brought Open Access enormously further. He leaves an enormous gap, also in my life – I will miss our discussions tremendously!

APCs in the (French) wild

7 January 2023 by Maurits van der Graaf

A shopping spree for data

My French colleagues Antoine Blanchard and Diane Thierry (from Datactivist) and I had the opportunity to do a study on APC costs for the French Ministry of Higher Education and Research (MESR).

Our proposal – to build a dataset of articles by French authors from the ground up and use it to calculate the APC costs – was accepted. We had a plan to do this, but – as with many plans – we had to adjust it many times to circumvent unexpected obstacles.

It became a shopping spree for data. We started with data from the French Open Science Barometer (BSO), which delivered data on all journal articles with a French author for the years 2013-2020. This we put in our shopping cart (my colleague Diane did all the shopping cart work with R) and started shopping around for more relevant data.

Who are the corresponding authors? We shopped these data from the Web of Science. Which articles were published Open Access in Gold or Hybrid journals? We shopped these data from OpenAlex. Couperin gave us a dataset of journal titles they had contracts with, and with DOAJ and QOAM data we identified Hybrid, Gold and Diamond journal titles. BSO had already made the connection with OpenAPC data for the prices. I have to admit that some of our shopping efforts failed: we had originally misunderstood the exact set-up of some datasets, so that incorporation was not possible.

However, in the end we were able to reach our goal: a dataset with trustworthy data on articles by French authors from 2012 to 2020. This made a retrospective analysis possible, and based on that Antoine and Diane built a model in R for a prospective analysis.

The whole data shopping spree and the analyses made it a super interesting study, and – I think – the results have interesting implications for the future. The main results are shown below.

The figure shows that – if all trends observed in 2012 to 2020 continue unchanged – French institutions will pay around €50 million on APCs in the wild in 2030 (the red line). We also made – with the help of Couperin – an estimate of the subscription expenditures in that year (the grey line). Finally, we calculated the cost in a fully Open Access world, which we defined as 90% APC-paid OA and 10% Diamond OA.

Personally, I found it very interesting that the hybrid situation (subscriptions plus APCs in the wild) would cost around €150 million in 2030, while the full OA situation would require around €170 million in APC payments. Not much of a difference if you take into account all the margins of error of these calculations.

Let me finish with a warning: predicting the future is always a tricky business. Because we used a lot of data and some modelling, we noticed at presentations that some people think these predictions are certain to happen. In reality, the predictions are based on the 2013-2020 data and are nothing more (or less) than an extension of the trend lines of that period, albeit with a fairly complicated statistical model.
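
To make clear what such a trend-line extrapolation boils down to, here is a minimal Python sketch. The study’s own model was built in R and was considerably more elaborate; the yearly figures below are invented purely for illustration and are not the study’s data.

```python
# Illustrative only: fit a linear trend to yearly APC expenditure and
# extrapolate it to 2030. The real model (built in R) was far richer;
# the figures below are made up for demonstration.
import numpy as np

years = np.array([2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020])
apc_millions = np.array([10, 13, 16, 20, 24, 29, 34, 40])  # hypothetical values

slope, intercept = np.polyfit(years, apc_millions, deg=1)  # least-squares line
forecast_2030 = slope * 2030 + intercept

print(f"Extrapolated APC expenditure in 2030: about {forecast_2030:.0f} million euro")
```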

For more information, see the poster or the full report.
