A Categorical Systems-Theoretic Survey of Neural Networks
This survey classifies major neural architectures—MLPs, CNNs, Transformers, bi-encoders, cross-encoders, GNNs, and autoencoders—within the framework of categorical systems theory and open dynamical systems. Using the compositional tools developed by Baez, Fong, Myers, Spivak, Fritz, and Coecke, architectures are modeled as structured morphisms with typed inputs and outputs, where learning corresponds to parameterized functorial dynamics and composition is formalized via colimits and operadic wiring. Bi-encoders and cross-encoders are interpreted as distinct factorizations of learned scoring functionals, differing in interaction order and compositional structure, while message-passing GNNs and attention mechanisms are framed as structured colimit constructions over graphs. This categorical perspective unifies disparate architectural patterns under a common compositional semantics, clarifies what qualifies as a “neural architecture,” and separates architectural form from training dynamics. The result is a principled vocabulary for comparing, composing, and extending neural systems across domains.
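The contrast between bi-encoders and cross-encoders as two factorizations of a scoring functional can be made concrete with a toy sketch. The following is not from the survey: the linear "encoders" and weight names (`W_q`, `W_d`, `W_x`) are hypothetical stand-ins for learned networks, chosen only to expose where the interaction between the two inputs happens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoders: random linear maps standing in for
# learned neural networks (illustration only, not the paper's model).
D_IN, D_EMB = 8, 4
W_q = rng.normal(size=(D_EMB, D_IN))   # "query" encoder parameters
W_d = rng.normal(size=(D_EMB, D_IN))   # "document" encoder parameters
W_x = rng.normal(size=(1, 2 * D_IN))   # cross-encoder parameters

def bi_encoder_score(q: np.ndarray, d: np.ndarray) -> float:
    # Late interaction: the scoring functional factors through two
    # independent embeddings, paired by a fixed form (dot product).
    return float((W_q @ q) @ (W_d @ d))

def cross_encoder_score(q: np.ndarray, d: np.ndarray) -> float:
    # Early interaction: the inputs are combined before encoding,
    # so the score need not factor through separate embeddings.
    return float(W_x @ np.concatenate([q, d]))
```

The practical consequence of the bi-encoder factorization is that document embeddings `W_d @ d` can be precomputed and indexed, whereas a cross-encoder must re-run the full map for every query-document pair.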