On the Semantics of Large Language Models
Schüle Martin
Language of the article: English
DOI: n/a
Abstract:
Large Language Models (LLMs) such as ChatGPT have demonstrated the potential to replicate human language abilities through technology, from generating text to engaging in conversation. However, the extent to which these systems truly understand language remains controversial. We examine this issue by narrowing the question down to the semantics of LLMs at the word and sentence level. By examining the inner workings of LLMs and the representations of language they generate, and by drawing on the classical semantic theories of Frege and Russell, we arrive at a more nuanced picture of the potential semantic capabilities of LLMs.
To cite this article:
Schüle Martin (2024/2). On the Semantics of Large Language Models. In Gefen Alexandre & Huneman Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81 (pp. 15-36), DOI: n/a.