About a Pseudo-Knowledge: Large Language Models and the Replika Hypothesis.
DOI: n/a
This paper deals with the kind of knowledge and discourse produced by Large Language Models (LLMs). I set aside questions concerning the consciousness and intelligence of LLMs in favor of positing a conceptual hyperspace for the notion of intelligence, within which various AIs should be placed alongside animal intelligences. I then question whether LLMs can know things, assuming the classical epistemological distinction between true belief and knowledge (as justified true belief). I argue that the first condition of justification, namely the possibility of referring to something, is not given to LLMs in the same way it is to human speakers. I defend the Replika hypothesis, according to which LLMs refer, in their statements, to sets of possible worlds close to our own, without being able to distinguish between the merely possible and the actual. Finally, I draw the consequences of this view for the question of the epistemic opacity of LLMs.
To cite this article:
Huneman, Philippe (2024/2). About a Pseudo-Knowledge: Large Language Models and the Replika Hypothesis. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: Thinking and Writing with LLMs, Intellectica, 81 (pp. 55-83), DOI: n/a.