N°81 - 2024/2

Philosophies of AI: thinking and writing with LLMs

Gefen Alexandre, Huneman Philippe

Call for contribution to special issue #81 "Philosophies of AI: thinking and writing with LLMs"



Coordination
Alexandre Gefen, literary theory, CNRS / Université Paris Sorbonne
Philippe Huneman, philosophy, CNRS / Université Paris Sorbonne

Call for contribution
This issue will be prepared on the basis of invitations and a restricted selection of articles following a call for contributions. It will include articles in French and English.

Calendar
- January 15, 2024: deadline for article submission (50,000 characters)
- June 15, 2024: deadline for final articles (revised version)

Format/language
French speakers are encouraged to write in French. Non-French-speaking authors may write in English or French (proofreading by a native speaker is required if this is not the author's mother tongue).

Editorial policy and instructions for authors
Intellectica is a French-language journal founded by ARCo in 1985. It is dedicated to the publication of theoretical, epistemological and historical work in the field of cognitive science, shedding light on current issues in the form of long reflective, critical and/or prospective articles. The journal's editorial choice is to exclude contributions of a strictly empirical nature (any form of experimentation, observation, etc.), since such work benefits from the many specialized journals dedicated to this type of research. The editorial board does, however, ensure that the reflexive work presented is in line with current empirical developments in the field and, where necessary, that these developments are presented synthetically within the articles. The full instructions for authors are available here: https://intellectica.org/en/authors

The articles published by Intellectica, whether in English or French, must be accessible to a wide audience of readers who already have some familiarity with the field. You can consult Intellectica's online archives to see the type of articles the journal has published over the years (http://intellectica.org/fr/numeros).

Embargo
Intellectica has opted for a two-year embargo on original articles. After two years, articles (publisher's post-print version) can be distributed free of charge from the Intellectica website or any other site. The complete archive (volumes outside the embargo) of the journal is available on the Persée open archive platform.

Submission
Please send your manuscript (or your questions) to: soumission@intellectica.org
Please contact the coordinators of the special issue and describe your project in a few lines.
Instructions for authors: https://intellectica.org/en/authors
Deadline: 15 January 2024

Presentation of the issue
From writing real estate ads to answering exam questions, the extremely varied potential uses of GPT-3 and its consumer version, ChatGPT, have struck a chord with the general public, writers and artists alike, opening up a wide range of philosophical questions that need to be examined beyond the initial fascination. How can we characterize and describe an LLM's mode of intelligence? What are the limits of its "cognitive" capacities, and what technological prospects are emerging? What answers do such tools provide about the relationship between language and thought? What humanity and consciousness are (still) missing from GPT-3, and, upstream, is there a notion of consciousness we can all agree on before asking whether this or that system possesses it? Regardless of its possible mental states, can we attribute agency to LLMs?
In addition to these epistemological and ontological questions, others are raised by the very use of LLMs, in view of their astonishing performance. A growing field, AI ethics, is already asking about the moral and legal responsibility attached to such tools: who would answer for malicious uses of an LLM (the panic surrounding fake news immediately springs to mind)? But quite apart from legitimate moral questions about our use of LLMs and AI in general, we need to ask what these machines themselves do, and what consequences their existence and operation have on the concepts by which we think about activity in general, and so-called creative activity in particular. If it is commonly accepted that a human equipped with a tool still engages in human action, does this intuition hold when the tool is an LLM that produces a sonnet or short story of its own accord? Does the action then boil down to the enunciation of the "prompt" from which the LLM generates its text?
Artificial intelligence systems devoted to language, and to language's relationship with other media (image or music), such as these LLMs, thus raise specific questions. Of course, the history of writing is a history of tools: nibs, pens, typewriters, word processors. The very possibility of writing, and the status of the written word, changes with scriptural innovations. What happens to the writing subject when an LLM is involved? If it is clear that a writer equipped with a spelling, grammar or even style checker (Grammarly) does more or less the same thing as an old-fashioned writer who regularly opens a dictionary or a Bled, is GPT-3, some of whose uses resemble a supercharged Wikipedia or a Google that knows how to write, more than an incremental improvement on these tools? Or does the use of LLMs, with their specific way of tying words together based on probabilistic proximities rather than semantics, take us elsewhere? And how can we define this elsewhere? This special issue will address these new questions.
In other words, rather than asking whether the writer who uses LLMs, like the "assisted" visual artist using DALL-E or Midjourney (and what does "assisted", a term omnipresent in this lexical field, actually mean?), is an "author", this issue will attempt to reflect on what such tools tell us about notions such as originality and authorship, and on how the law can regulate such uses. Rather than asking whether an LLM can prove anything, we will reflect on what attempts at proof using AI tell us about what a proof is. And rather than being alarmed by machines thinking something, we will ask ourselves, assuming they mean something, what this meaning can tell us about meaning and language in general.

This special issue will bring together philosophers, cognitive scientists, jurists, linguists and literary theorists to discuss these issues.