Overview
- The simulation reproduced results from 69 classic experiments spanning humans, monkeys, and rats.
- Across datasets, it matched observed behavior and outperformed the Bayesian Causal Inference model while using the same number of adjustable parameters.
- The lattice predicted where viewers looked in audiovisual scenes, functioning as a lightweight saliency map.
- The approach extends the Multisensory Correlation Detector, a model derived from insect motion‑detection circuitry, and is tuned to transient cross‑modal correlations.
- The authors present it as an efficient, training‑free candidate for multimodal AI and real‑world audiovisual processing, with broader deployment proposed as a next step.
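To make the mechanism concrete, the following is a minimal sketch of a Reichardt-style cross-modal correlation unit of the kind the Multisensory Correlation Detector builds on. The time constants, filter form, and sub-unit combination here are illustrative assumptions, not the authors' exact model: each modality is low-pass filtered at a fast and a slow time scale, and two mirror-symmetric sub-units multiply a fast copy of one modality with a slow copy of the other, yielding a correlation output (evidence for audiovisual synchrony) and a signed lag output (temporal order).

```python
import numpy as np

def lowpass(x, tau, dt=0.001):
    """First-order low-pass filter (exponential smoothing)."""
    y = np.zeros_like(x, dtype=float)
    a = dt / (tau + dt)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

def mcd_unit(vis, aud, tau_fast=0.05, tau_slow=0.2, dt=0.001):
    """Illustrative multisensory correlation detector unit.

    Two mirror-symmetric sub-units multiply a fast-filtered copy of
    one modality with a slow-filtered copy of the other; their product
    signals correlation, their difference signals temporal order.
    Time constants are hypothetical placeholders.
    """
    v_fast, v_slow = lowpass(vis, tau_fast, dt), lowpass(vis, tau_slow, dt)
    a_fast, a_slow = lowpass(aud, tau_fast, dt), lowpass(aud, tau_slow, dt)
    u1 = v_fast * a_slow      # sub-unit 1
    u2 = v_slow * a_fast      # sub-unit 2 (mirror arrangement)
    corr = u1 * u2            # high when the streams covary in time
    lag = u2 - u1             # sign indicates which modality leads
    return corr, lag

# Synchronized audiovisual pulses drive the correlation output harder
# than temporally offset ones.
vis = np.zeros(1000); vis[300:350] = 1.0
aud_sync = vis.copy()
aud_off = np.zeros(1000); aud_off[700:750] = 1.0
c_sync, _ = mcd_unit(vis, aud_sync)
c_off, _ = mcd_unit(vis, aud_off)
```

A lattice, as described above, would tile many such units across space, so that local correlation peaks can serve as a saliency signal for audiovisual scenes.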