
Abstract: Traditional AI research, dominated by recursive logic, has been constrained by the combinatorial
explosion inherent in rule-based symbolic systems and by the problem of infinite recursion, which together hinder
an adequate formal representation of natural language. In contrast, large language models (LLMs) employ parallel,
distributed computation over high-dimensional vector spaces and chain-of-thought processing; by replacing
formal inference with probabilistic association, they model natural language dynamically. Each of these
construction principles has a counterpart in linguistics: generative grammar, committed to recursion-centrism,
has been challenged, and indeed rejected, by ecolinguistic approaches, while distributed language theory further
contends that language is an evolutionary phenomenon distributed across individuals and their social interactions.
These shared foundations of linguistic theory and technical practice indicate that a topology-oriented
paradigm centered on dynamic relations and a recursion-oriented paradigm grounded in unbounded symbolic
operations constitute two fundamental frameworks for explaining human cognitive structure. They also point to an
ongoing shift in cognitive science from formal axiomatization toward practice-oriented model construction.
Keywords: Recursion; Distributed computation; Distributed language; Topology; Cognitive paradigm
