This essay has now been published in Critical AI [https://doi.org/10.1215/2834703X-11556038]; the abstract is pasted in below. If your institution lacks access to Critical AI, please encourage it to subscribe. If you are an independent scholar, please write to criticalai@sas.rutgers.edu.
ABSTRACT:
This article explores the rise of generative AI and large language model (LLM) tools for internet search and their potential impact on student information-seeking behavior. Reviewing what is known about best search practices, the essay identifies a schism between AI companies’ vision of search and information science’s understanding of humane search environments. It argues that generative search tools fail to create ethical spaces for search, leading knowledge production into dangerous territory. Specifically, the article discusses “friction” as a critical concept on which the field of information science and AI development philosophies diverge. Whereas information science views friction as a valuable and often necessary component of search, AI companies view friction as a problem to eliminate. This mismatch between corporate AI philosophies and known best practices for search, the essay argues, renders current LLM and generative AI search tools fundamentally incompatible with ethical processes of knowledge production.
