Generative Grammars, Minimal Operations and LLMs: True and False Problems
DOI: n/a
The scientific community of theoretical linguistics has been concerned with the implications of Large Language Models (LLMs) for grammatical theory for several years. The aim of this article is to show how this interest rests on an analogy between LLMs and certain controversial hypotheses regarding the faculty of language, especially language acquisition. The mechanisms behind "stochastic parrots" show clear similarities with the structure of natural language as depicted in so-called construction grammars. They are also highly consistent with the behaviorist model of language acquisition associated with the legacy of B. F. Skinner. By contrast, the functioning of neural AIs runs counter to the models developed for human language in the field of generative grammar, i.e., the school of thought founded by Noam Chomsky. The present article lays out the terms of the debate in theoretical linguistics and highlights this proximity between neural AIs, construction grammars, and behaviorist psychology before examining the opposing point of view. Generativists are led to assert that what conversational AIs produce cannot be language. Although the debate focuses on questions of syntax, the question of the semantics/pragmatics interface, especially that of reference, proves more effective in distinguishing the discursive activity of AIs from that of humans. A modular approach, comparable to what has long been practiced in the comparison between human and animal psycholinguistics, would prove more fruitful than an a priori debate between two theoretical families that diverge on the very terms of the controversy.
To cite this article:
Modicom, Pierre-Yves (2024/2). Generative Grammars, Minimal Operations and LLMs: True and False Problems. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 119-133. DOI: n/a.