Accuracy-Efficiency Trade-Offs in Spiking Neural Networks: A Lempel-Ziv Complexity Perspective on Learning Rules
arXiv:2506.06750v2 Announce Type: replace
Abstract: Training spiking neural networks (SNNs) remains challenging due to temporal dynamics, non-differentiability of spike events, and sparse event-driven activations. This paper studies how the choice of learning paradigm (unsupervised, supervised, and hybrid) affects classification performance and computational cost in temporal pattern recognition. Building on our earlier study [Rudnicka et al., 2026], we use Lempel-Ziv complexity (LZC) as a compact, decision-relevant descriptor of spike-train temporal organization to quantify how different learning rules reshape class-conditional temporal structure. The pipeline combines a leaky integrate-and-fire (LIF) SNN with an LZC-based decision rule. We evaluate learning rules on synthetic sources with controlled temporal statistics (Bernoulli, two-state Markov, and Poisson spike processes) and on two-class subsets of MNIST and N-MNIST. Across datasets, gradient-based learning achieves the highest accuracy but at high computational cost, whereas bio-inspired rules (e.g., Tempotron and SpikeProp) offer favorable accuracy--efficiency trade-offs. These results highlight that selecting a learning rule should be guided by application constraints and the desired balance between separability and computational overhead.
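The LZC descriptor above can be illustrated with a minimal sketch. This assumes an LZ78-style phrase-counting variant of Lempel-Ziv complexity applied to binarized spike trains (the paper's exact LZC definition and binning are not specified in the abstract, so the function names and parameters here are illustrative):

```python
import random


def lz_complexity(seq: str) -> int:
    """Count distinct phrases in an LZ78-style parse of a binary string.

    A constant (silent) spike train yields a small phrase count; an
    irregular train yields a larger one, so the count serves as a
    compact descriptor of temporal organization.
    """
    phrases, w, count = set(), "", 0
    for ch in seq:
        w += ch
        if w not in phrases:  # new phrase: record it and restart
            phrases.add(w)
            count += 1
            w = ""
    return count


def bernoulli_spike_train(p: float, n: int, rng: random.Random) -> str:
    """Binary spike train with independent firing probability p per time bin."""
    return "".join("1" if rng.random() < p else "0" for _ in range(n))


if __name__ == "__main__":
    rng = random.Random(0)
    silent = "0" * 200
    sparse = bernoulli_spike_train(0.05, 200, rng)
    dense = bernoulli_spike_train(0.5, 200, rng)
    print(lz_complexity(silent), lz_complexity(sparse), lz_complexity(dense))
```

A class-conditional decision rule in the spirit of the paper would compare a test train's LZC against per-class reference statistics; the Bernoulli generator here stands in for the controlled synthetic sources (two-state Markov and Poisson processes would be built analogously).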