Brain-to-AI and Brain-to-Brain Functional Alignments

Presentation at a meeting of JSPS Transformative Research Area (A) "Unified theory of prediction and action" at Kyoto University (2024.11.7)

Yuki Kamitani

November 07, 2024

Transcript

  1. Brain-to-AI and Brain-to-Brain Functional Alignments Yuki Kamitani Kyoto University &

    ATR http://kamitani-lab.ist.i.kyoto-u.ac.jp @ykamit Pierre Huyghe ‘Uumwelt’ (2018)
  2. "mailbox" resented Brain activity Reconstructed Generator Translator Latent representation Aligning

    brains via AI’s latent representation while preserving the content Faceless, nameless, uninterpretable features
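
As a purely illustrative picture of this pipeline, the sketch below wires a linear translator to a stand-in generator; all names, sizes, and weights are hypothetical, not the trained models used in the actual work.

```python
# Illustrative sketch only: a linear "translator" maps brain activity into an
# AI model's latent space, and a "generator" maps the latent back to pixels.
# In the real pipeline both components are trained; here they are random.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_latent = 5000, 4096
W = rng.normal(size=(n_voxels, n_latent)) / np.sqrt(n_voxels)  # translator weights

def translator(activity):
    """Brain activity (voxel vector) -> latent representation."""
    return activity @ W

def generator(latent):
    """Latent -> image; stand-in for a trained image generator."""
    return latent.reshape(64, 64)  # pretend the latent decodes to 64x64 pixels

activity = rng.normal(size=n_voxels)             # one fMRI pattern ("mailbox")
reconstruction = generator(translator(activity))
print(reconstruction.shape)                      # (64, 64)
```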
  3. Brain-inspired (Fukushima, 1980; Hubel & Wiesel, 1962): simple cells and complex

     cells. Hubel & Wiesel's account of the emergence of orientation selectivity and of the simple/complex classification is in debate (possibly false…).
  4. AlexNet (Krizhevsky et al., 2012) [Figure 2 of the paper: the CNN architecture

     split across two GPUs, which communicate only at certain layers; the input is 150,528-dimensional, and the remaining layers have 253,440–186,624–64,896–64,896–43,264–4096–4096–1000 neurons; layers DNN1–DNN5 are convolutional, DNN6–DNN8 fully connected] • Won the object recognition challenge in 2012 • 60 million parameters and 650,000 neurons (units) • Trained with 1.2 million annotated images to classify 1,000 object categories
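
The headline numbers on this slide are easy to check against torchvision's single-GPU reimplementation of AlexNet (assuming torch and torchvision are installed; the parameter count of this port is about 61 million, close to the quoted 60 million):

```python
# Sanity check of the AlexNet figures quoted above, using torchvision's
# reimplementation (the original 2012 two-GPU network differs slightly).
import torch
from torchvision.models import alexnet

model = alexnet(weights=None)  # architecture only, no pretrained weights
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")  # ~61M

x = torch.randn(1, 3, 224, 224)  # 3*224*224 = 150,528 input dimensions
print(model(x).shape)            # torch.Size([1, 1000]) object categories
```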
  5. Brain–DNN alignment (1): Encoding (Yamins et al., PNAS 2014)

     [Figure: single-site neural predictivity (explained variance) in monkey V4 and monkey IT for control models (pixels, V1-like, SIFT, PLOS, HMAX, V2-like), HCNN layers, and ideal observers (category, all variables)]
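
A minimal sketch of this kind of encoding analysis, with synthetic data standing in for DNN features and neural recordings (the actual study's regression and scoring details differ):

```python
# Encoding sketch: ridge regression from DNN layer features to each neural
# site's response, scored by held-out squared correlation. Data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_sites = 1000, 512, 100
features = rng.normal(size=(n_images, n_features))      # one DNN layer
weights = 0.1 * rng.normal(size=(n_features, n_sites))  # hidden true mapping
neural = features @ weights + rng.normal(size=(n_images, n_sites))

X_tr, X_te, Y_tr, Y_te = train_test_split(features, neural, random_state=0)
pred = Ridge(alpha=100.0).fit(X_tr, Y_tr).predict(X_te)

r = [np.corrcoef(pred[:, i], Y_te[:, i])[0, 1] for i in range(n_sites)]
print("median single-site explained variance (r^2):", np.median(np.square(r)))
```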
  6. Brain–DNN alignment (2): Representational similarity (Khaligh-Razavi &

     Kriegeskorte, PLoS Comput. Biol. 2014) [Figure: RDM voxel correlation (Kendall's τA) scores against human V1–V3 and human IT across convolutional and fully connected layers, plus SVM- and geometry-supervised variants]
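
Representational similarity analysis can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for a DNN layer and a brain region, then rank-correlate their entries. The data and dissimilarity choices below are assumptions for illustration:

```python
# RSA sketch: 1 - correlation RDMs for a DNN layer and a brain region,
# compared by Kendall's tau over the condensed (upper-triangle) entries.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_stimuli = 92
layer_acts = rng.normal(size=(n_stimuli, 4096))  # DNN layer activations
voxel_acts = rng.normal(size=(n_stimuli, 300))   # brain-region voxel patterns

rdm_layer = pdist(layer_acts, metric="correlation")
rdm_brain = pdist(voxel_acts, metric="correlation")

tau, _ = kendalltau(rdm_layer, rdm_brain)
print(f"RDM similarity (Kendall's tau): {tau:.3f}")
```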
  7. Brain–DNN alignment (3): Decoding = Translator. Brain-to-DNN translation

     (Horikawa & Kamitani, 2015; 2017, Nature Comms; Nonaka et al., 2021, iScience) [Figure: a decoder maps brain areas V1, V2, V3, V4, and HVC to DNN layers (Layer 1, 3, 5, 7, category layer); proportion of best decoded units]
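
Feature decoding runs the mapping in the other direction: regularized regression from fMRI voxels to DNN unit activations. A hedged sketch with synthetic data (the published pipeline uses voxel selection and per-unit models):

```python
# Decoding sketch: a linear "translator" from fMRI voxels to one DNN layer's
# unit activations, trained on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_units = 1200, 50, 4000, 1000
fmri_train = rng.normal(size=(n_train, n_voxels))
feat_train = rng.normal(size=(n_train, n_units))  # target DNN activations
fmri_test = rng.normal(size=(n_test, n_voxels))

decoder = Ridge(alpha=1000.0).fit(fmri_train, feat_train)
feat_pred = decoder.predict(fmri_test)            # decoded DNN features
print(feat_pred.shape)                            # (50, 1000)
```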
  8. Brain Hierarchy (BH) Score (Nonaka, Majima, Aoki, Kamitani, iScience 2021)

     [Figure: BH score (0.0–0.5) plotted against image recognition accuracy (50–90%), with per-model maps of the proportion of best decoded units across brain areas V1–HVC and DNN layers. AlexNet: BH score = 0.42; Inception-v1: BH score = 0.26; ResNet-152-v2: BH score = 0.14] High-performance AIs are not brain-like.
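
The sketch below is a simplified proxy for the BH score (the published metric in Nonaka et al., 2021 is more elaborate): rank-correlate each DNN unit's layer index with the index of the brain area that decodes it best.

```python
# Simplified BH-score proxy: Spearman correlation between DNN layer index and
# the ordered brain area (V1 < V2 < V3 < V4 < HVC) that best decodes each unit.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_areas, n_layers, units_per_layer = 5, 8, 200  # areas ordered V1 .. HVC

layer_idx = np.repeat(np.arange(n_layers), units_per_layer)
# Synthetic decoding accuracies (units x areas); boost the "matching" area so
# the hierarchy roughly corresponds, as it would for a brain-like network.
matching_area = layer_idx * n_areas // n_layers
acc = rng.random(size=(layer_idx.size, n_areas)) + 0.3 * np.eye(n_areas)[matching_area]

best_area = acc.argmax(axis=1)                  # top decoding area per unit
bh_proxy, _ = spearmanr(layer_idx, best_area)
print(f"BH-score proxy: {bh_proxy:.2f}")        # near 1 = hierarchies align
```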
  9. What is NeuroAI? Unlike connectionist and brain-inspired approaches:

     • DNNs are optimized for real-world data or tasks to perform relevant functions
     • Allows precise alignment between DNN activations and neural signals
     DNN as a model of the brain?
     • Acts as a mechanistic model of neural processes (e.g., Doerig et al., 2023)
     • Serves as a feature generator, functioning as an interface between the brain and the mind (Kamitani et al., 2025)
  10. (Self) criticisms

     Limited correspondence of DNN and brain hierarchies:
     • Only a subset of DNNs exhibit hierarchical similarity; higher-performing DNNs often show less resemblance (Nonaka et al., 2021).
     • Prediction accuracy for individual DNN units is modest (correlations up to ~0.5), and normalization by the noise ceiling inflates reported accuracy (see the sketch after this list).
     Encoding analysis and unit contribution:
     • Only a subset of DNN units contribute to neural prediction, so encoding is ill-suited to characterizing layer-wise representations (Nonaka et al., 2021).
     Dependency on analysis methods:
     • Both low-level and high-level brain areas can be explained by specific DNN layers, depending on the analytic approach (Sexton & Love, 2022).
     Dependence on training data over DNN architecture:
     • Prediction relies heavily on the diversity of the training data rather than on the network architecture itself (Conwell et al., 2023).
     • Linear prediction may still be too flexible.
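
To make the noise-ceiling point concrete: a modest raw correlation looks much larger after ceiling normalization. A hedged sketch, using a common split-half estimate (one of several conventions):

```python
# Noise-ceiling sketch: a raw prediction of r = 0.3 against a ceiling of
# ~0.4 is reported as ~0.75 after normalization, inflating apparent accuracy.
import numpy as np

def split_half_ceiling(trials):
    """Ceiling from split-half reliability, Spearman-Brown corrected."""
    r = np.corrcoef(trials[::2].mean(axis=0), trials[1::2].mean(axis=0))[0, 1]
    r_full = 2 * r / (1 + r)       # extrapolate to the full trial count
    return np.sqrt(max(r_full, 0.0))

rng = np.random.default_rng(0)
signal = rng.normal(size=200)                       # true per-image response
trials = signal + 7.0 * rng.normal(size=(10, 200))  # 10 very noisy repetitions

ceiling = split_half_ceiling(trials)
raw_r = 0.3
print(f"ceiling = {ceiling:.2f}, normalized r = {raw_r / ceiling:.2f}")
```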
  11. Latent-to-image recovery. Contrary to common belief, hierarchical processing does

     not discard much pixel-level information. • Large receptive fields do not necessarily reduce neural coding capacity if the unit density remains sufficient (Zhang & Sejnowski, 1999; Majima et al., 2017). • Near-perfect recovery is possible even from high-level layers with a very weak image prior (deep image prior; Ulyanov et al., 2020).
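
The recovery itself can be sketched as feature inversion: optimize an image so its DNN features match a target. Deep image prior would reparameterize the image with an untrained CNN; for brevity this sketch optimizes pixels directly (all settings are illustrative):

```python
# Latent-to-image recovery sketch: gradient descent on pixels to match a
# target activation in AlexNet's convolutional stack (untrained weights here;
# the deep image prior would instead parameterize the image by a CNN).
import torch
from torchvision.models import alexnet

model = alexnet(weights=None).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

target_feat = model(torch.randn(1, 3, 224, 224))  # features to invert

image = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(image), target_feat)
    loss.backward()
    opt.step()
print(f"final feature-matching loss: {loss.item():.4f}")
```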
  12. Reconstruction of subjective experiences: imagery, attention, illusion (Shen

     et al., 2017; PLoS Comput. Biol. 2019; Horikawa & Kamitani, Communications Biology, 2022; Cheng et al., Science Advances, 2023)
  13. Neural code converter: train an inter-individual prediction model that maps

     Subject 1's brain activity to Subject 2's (Yamada, Miyawaki, Kamitani, 2011, 2015; Ho, Horikawa, Majima, Kamitani, 2023; cf. Haxby et al., 2011)
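
A hedged sketch of the paired-data version: when both subjects saw the same stimuli, ridge regression maps one subject's voxel patterns onto the other's (the published converters involve more careful voxel handling):

```python
# Neural code converter sketch: map Subject 1's voxel patterns to Subject 2's,
# trained on responses to shared stimuli. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_shared, n_vox1, n_vox2 = 1200, 3000, 3500
subj1 = rng.normal(size=(n_shared, n_vox1))  # Subject 1, shared stimuli
subj2 = rng.normal(size=(n_shared, n_vox2))  # Subject 2, same stimuli

converter = Ridge(alpha=100.0).fit(subj1, subj2)

new_activity = rng.normal(size=(1, n_vox1))  # unseen Subject-1 pattern
converted = converter.predict(new_activity)  # predicted Subject-2 pattern
print(converted.shape)                       # (1, 3500)
```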
  14. Content-loss-based neural code converter: conversion without shared stimuli or

     paired data, aligned through a content representation (Wang, Ho, Cheng, Aoki, Muraki, Tanaka, and Kamitani, arXiv 2024)
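
A heavily simplified sketch of the content-loss idea (the decoders, losses, and optimization in Wang et al., arXiv 2024 are richer): instead of matching paired voxel patterns, train the converter so that the target subject's feature decoder, applied to converted activity, reproduces the content decoded from the source subject.

```python
# Content-loss converter sketch: choose W to minimize ||X1 W D2 - X1 D1||^2,
# i.e., converted activity must decode to the same DNN-feature "content" as
# the source subject's own decoder gives. All matrices are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_vox1, n_vox2, n_feat = 300, 350, 100
D1 = rng.normal(size=(n_vox1, n_feat))   # source subject's feature decoder
D2 = rng.normal(size=(n_vox2, n_feat))   # target subject's feature decoder
X1 = rng.normal(size=(2000, n_vox1))     # unpaired source-subject activity

content = X1 @ D1                        # content decoded from the source
# Ridge solution for Z = W @ D2, then recover W via the pseudoinverse of D2.
Z = np.linalg.solve(X1.T @ X1 + 1e-2 * np.eye(n_vox1), X1.T @ content)
W = Z @ np.linalg.pinv(D2)

err = np.linalg.norm(X1 @ W @ D2 - content) / np.linalg.norm(content)
print(f"relative content error: {err:.3e}")  # ~0: content is preserved
```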
  15. [Image-only slide; no transcribable text]

  16. [Diagram: Brain ↔ AI (theory), with cognitive/systems neuroscience supplying

     "Inspiration" and "Constructs" (e.g., orientation selectivity, reward prediction error, working memory, …)] The Bitter Lesson (Sutton, 2019): 1) AI researchers have often tried to build knowledge into their agents; 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. Sutton, R. (2019). The Bitter Lesson. http://www.incompleteideas.net/IncIdeas/BitterLesson.html
  17. NeuroAI: Brain ↔ AI through faceless, nameless, uninterpretable latent features

     • Population-level characterization in latent space • Predictive validity with real-world variables (behavior, image, text, etc.) • Radical behaviorism (Skinner) • Prediction and control over explanation. Kamitani, Y. (2023). The "Midlife Crisis" of the Science of Brain and Mind. Kaneko Shobo. https://www.note.kanekoshobo.co.jp/n/nd90894f959b1
  18. Illusions in AI-driven scientific research (Messeri & Crockett, 2024). Alchemist:

     "It's shining golden… I've found how to make gold!" Illusion of explanatory depth; illusion of explanatory breadth; illusion of objectivity. Shirakawa et al. (2024), "spurious reconstruction"; Kamitani et al. (2025). Kamitani (2022). Reintroduction to experimental data analysis: to keep your paper from becoming "fake news". Speaker Deck. https://speakerdeck.com/ykamit/shi-yan-detajie-xi-zai-ru-men-lun-wen-wo-hueikuniyusu-nisinaitameni