I spent time working with Andrej and the rest of the FSD team back in 2020/2021, and we had plenty of conversations about how human visual processing maps onto our neural network architectures. Our approach of transformer-based attention blocks, multi-scale feature extraction, and temporal fusion mirrors elements of the biological visual pathway (retina → LGN → V1 → V2 → V4 → IT), which breaks raw inputs down and integrates them over time. It’s amazing how closely this synthetic perceptual pipeline parallels the way our own brains interpret the world.
The key insight we came to was that explicitly enforcing brain-like topographic organization (as some academic work, including this paper, attempts) isn't necessary; what matters is having the right functional components that parallel biological visual processing. In our experience, the key elements, like hierarchical feature extraction and temporal integration, emerge naturally when you build architectures that have to solve real visual tasks.
The brain's organization serves its function, not the other way around. This was validated by the real-world performance of our synthetic visual cortex in the Tesla FSD stack.
Link to the 2021 Tesla AI day talk: https://www.youtube.com/live/j0z4FweCy4M?t=3010s
"It’s amazing how closely this synthetic perceptual pipeline parallels the way our own brains interpret the world."
It is amazing that the synthetic pipeline, which was built to mimic the brain, seems to mimic the brain?
That sounds a bit tautological, and beyond that I doubt we really understand exactly how our brain interprets the world.
In general this is definitely interesting research, but worded like this, it smells a bit hyped to me.
I interpreted it the other way around.
We can think of a solution space with potentially many good solutions to the vision problem, and we can speculate, in science-fiction fashion, that the other solutions would be very different and surprise us.
Then this experiment shows its solution is the same one we already knew, and that's it.
Then there aren't many good potential solutions; there is only one, and the ocean of possibilities shrinks to the pond of that single solution.
Did you read the part where he explicitly mentioned that they discovered that enforcing that architecture was not necessary, since it would emerge on its own?
I did, but it was not clear to me how it was meant. I assume the basic design was done beforehand (with the brain in mind).
The convolutional kernels in the first layers do converge to Gabor filters like the ones in V1 (and there was mathematical work in neuroscience in the '90s on the optimality of such kernels), so it wouldn't be surprising if higher layers converged to something similar to the higher levels of the visual cortex: hierarchical feature aggregation, which is nicely illustrated by deep dreaming, also feels like it could be optimal under reasonable conditions and thus would be expected to emerge.
Unlike artificial neural networks, the brain contains massive numbers of lateral connections. Combined with topographic organization, this allows it to make within-layer temporal predictions as activations travel across the visual field, to create active competition between similarly tuned neurons in a layer (forming natural sub-networks), and quite a bit more. So, yeah, the brain's organisation serves its function, and it does so very, very well.
I've found the way CNNs map to the visual cortex to be very clear. But I've always been a bit confused about how LLMs map to the brain. Is that even the case?
> how LLMs map to the brain
For the lower level, word embeddings (word2vec, "King – Man + Woman = Queen"), one can see a similarity:
https://www.nature.com/articles/d41586-019-00069-1 and https://gallantlab.org/viewer-huth-2016/
"The map reveals how language is spread throughout the cortex and across both hemispheres, showing groups of words clustered together by meaning."
The main reason topography emerges in physical brains is because spatially distant connections are physically difficult and expensive in biological systems. Artificial neural nets have no such trade-off. So what's the motivation here? I can understand this might be a very good regularizer, so it could help with generalization error on small-data tasks. But hard to see why this should be on the critical path to AGI. As compute and data grows, you want less inductive bias. For example, CNN will beat ViT on small data tasks, but that flips with enough scale because ViT imposes less inductive bias. Or at least any inductive bias should be chosen because it models the structure of the data well, such as with causal transformers and language.
Locality of data and computation is very important in neural nets. It's the number one reason why training and inference are as slow as they are. It's why GPUs need super expensive HBM memory, why NVLink is a thing, why Infiniband is a thing.
If training and inference on neural networks can be organized so that a topology keeps closely related data together, we will see huge advances in training and inference speed, and probably in model size as a result.
And speed isn't just speed. Speed makes impossible (not enough time in our lifetime) things possible.
A huge factor in DeepSeek being able to train on H800s (half the HBM bandwidth of the H100) is that they used GPU cores to compress/decompress the data moved between GPU memory and the compute units. This reduces the latency of accessing data and made up for the slower memory bandwidth (which translates into higher latency when fetching data). Anything that reduces the latency of memory accesses is a huge accelerator for neural nets. The number one way to achieve this is to keep related data next to each other, so that it fits in the closest caches possible.
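Not DeepSeek's actual kernel-level scheme, but a minimal PyTorch sketch of the general idea: shrink a tensor before it crosses a slow link, expand it again where the compute happens.

    import torch

    x = torch.randn(4096, 4096)                                    # fp32 activations: 64 MiB
    scale = x.abs().max() / 127.0
    x_int8 = (x / scale).round().clamp(-127, 127).to(torch.int8)   # 16 MiB on the wire

    # ... move x_int8 (plus the tiny scale) across the bandwidth-limited link ...

    x_restored = x_int8.to(torch.float32) * scale                  # decompress next to the compute units
    print(x.numel() * 4, "->", x_int8.numel(), "bytes moved")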
It's true, but isn't the OP also correct? I.e. it's about speed, which implies locality, which implies approaches like MoE, which do exactly that and are unlike physical brain topology?
Having said that, it would be fun to see things like rearranging data based on the temperature of silicon parts after a training cycle.
Unless GPUs work markedly differently somehow or there’s been some fundamental shift in computer architecture I’m not aware of, spatial locality is still a factor in computers.
Aside from HW acceleration today, designs like Cerebras would benefit heavily from reducing the amount of random access involved in reading the weights (and thus freeing up cross-chip memory bandwidth for other things).
This reminds me of game developers back when games could still be played directly from the physical disc. They would often duplicate data across different parts of the disc, knowing that certain data would often be streamed from disc together, so that seek times were minimized.
But those game devs knew where everything was spatially on the disc, and how the data would generally be used during gameplay. It was consistent.
Do engineers have a lot of insight into how models get loaded spatially onto a given GPU at run time? Is this constant? Is it variable on a per GPU basis? I would think it would have to be.
Hard to optimize for this.
This brings to mind The Story of Mel from programming folklore.
http://beza1e1.tuxen.de/lore/story_of_mel.html
Such a good read - some people really are on another level in their chosen field.
That could explain compute efficiency, but has nothing to do with the parameter efficiency pointed at in the paper.
Maybe this would be relevant for datacenters with significant distance between machines, or multidatacenter systems.
> So what's the motivation here?
Better interpretability, I suppose. Could give insights into how cognition works.
The motivation was to induce structure in the weights of neural nets and see if the functional organization that emerges aligns with that of the brain or not. Turns out, it does -- both for vision and language.
The gains in parameter efficiency were a surprise even to us when we first tried it out.
That's true, and interpretability is helpful for AI safety.
Indeed. What's cool is that we were able to localize literal "regions" in the GPTs which encoded toxic concepts related to racism, politics, etc. A similar video can be found here: https://toponets.github.io
More work is being done on this as we speak.
My understanding, coming from mechanistic interpretability, is that models are typically (or always) in superposition, meaning that most or all neurons are forced to encode semantically unrelated concepts because there are more concepts than neurons in a typical LM. We train SAEs (applying an L1 sparsity penalty to "encourage" the encoder's output latents to be sparse representations of the original raw activations) to hopefully disentangle these features, or make them more monosemantic. This lets us use the SAE as a sort of microscope to see what's going on in the LM, and apply techniques like activation patching to localize features of interest, which sounds similar to what you've described.

I'm curious what this work means for mech interp. Is this a novel alternative for mitigating polysemanticity? Or perhaps neurons are still encoding multiple features, but the features tend to have greater semantic overlap? Fascinating stuff!
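For readers outside mech interp, roughly the kind of thing meant by an SAE; a minimal sketch with made-up sizes, not any particular codebase:

    import torch
    import torch.nn as nn

    class SAE(nn.Module):
        def __init__(self, d_model=768, d_latent=768 * 8):
            super().__init__()
            self.enc = nn.Linear(d_model, d_latent)   # overcomplete dictionary of features
            self.dec = nn.Linear(d_latent, d_model)

        def forward(self, acts):
            latents = torch.relu(self.enc(acts))      # hopefully sparse, more monosemantic features
            return self.dec(latents), latents

    sae = SAE()
    acts = torch.randn(32, 768)                       # stand-in for captured LM activations
    recon, latents = sae(acts)
    loss = (recon - acts).pow(2).mean() + 1e-3 * latents.abs().mean()  # reconstruction + L1 sparsity
    loss.backward()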
Was it toxicity though as understood by the model, or just a cluster of concepts that you've chosen to label as toxic?
I.e., is this something that could (and therefore will) be turned towards identifying toxic concepts as understood by the Chinese or US government, or to identify (say) pro-union concepts so they can be down-weighted in a released model, etc.?
We localized "toxic" neurons by contrasting the activations of each neuron for toxic vs. normal texts. It's a method inspired by old-school neuroscience.
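In code, that kind of contrast could look something like the sketch below; get_activations is a hypothetical hook standing in for however the per-neuron activations are actually captured:

    import torch

    def localize_neurons(get_activations, toxic_texts, normal_texts, top_k=50):
        # get_activations(texts) -> tensor of shape (num_texts, num_neurons); hypothetical helper
        mean_toxic = get_activations(toxic_texts).mean(dim=0)
        mean_normal = get_activations(normal_texts).mean(dim=0)
        contrast = mean_toxic - mean_normal           # neurons that fire more on toxic text
        return torch.topk(contrast, top_k).indices    # candidate "toxic" neurons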
Defining all politics as toxic is concerning, if it's not just a proof of concept. That's something dictatorships do so that people won't speak up.
I had this idea the other day. Not sure if it relates but maybe?
https://twitter.com/justinvincent/status/1884357300703400274
I imagine it could be easier to make sense of the 'biological' patterns that way? Like, bottlenecks or spatially related challenges might have to be simulated too, to make sense of the ingested 'biological' information.
Perhaps they are more easily compressible? Once a bunch of nearby weights have similar roles one may not need all of them.
If you have things organized neatly together, you can also use pre-existing compression algorithms, like JPEG, to compress your data. That's what we're doing in Self-Organizing Gaussians [0]. There we take an unorganised (noisy) set of primitives that have 59 attributes and sort them into 59 2D grids which are locally smooth. Then we use off-the-shelf image formats to store the attributes. It's an incredibly effective compression scheme, and quite simple.
[0]: https://fraunhoferhhi.github.io/Self-Organizing-Gaussians/
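A toy version of that effect, using just a 1D sort and reshape rather than the actual Self-Organizing Gaussians sorting, assuming numpy and Pillow:

    import io
    import numpy as np
    from PIL import Image

    def png_size(arr_2d):
        buf = io.BytesIO()
        Image.fromarray((arr_2d * 255).astype(np.uint8)).save(buf, format="PNG")
        return len(buf.getvalue())

    vals = np.random.rand(256 * 256).astype(np.float32)
    shuffled = vals.reshape(256, 256)             # arbitrary layout
    smooth = np.sort(vals).reshape(256, 256)      # locally smooth layout of the same values
    print("unsorted:", png_size(shuffled), "bytes  sorted:", png_size(smooth), "bytes")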
Yep. That is exactly the idea here. Our compression method is super duper naive. We literally keep every n-th weight column and discard the rest. Turns out that even after getting rid of 80% of the weight columns in this way, we were able to retain the same performance in a 125M GPT.
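As a literal sketch of that pruning step (the shapes are illustrative, not the paper's exact configuration):

    import torch

    W = torch.randn(768, 3072)      # a weight matrix from a topographic layer
    n = 5
    W_pruned = W[:, ::n]            # keep every n-th column, i.e. drop ~80% of them
    print(tuple(W.shape), "->", tuple(W_pruned.shape))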
> The main reason topography emerges in physical brains is because spatially distant connections are physically difficult and expensive in biological systems.
The brain itself seems to have bottlenecks that aren't distance-related, like the two hemispheres and the corpus callosum, which are preserved across all placental mammals (other mammalian groups have something similar, and still have hemispheres). Maybe it's just an artifact of bilateral symmetry stuck there through path dependence, or a forced redundancy that makes damage more recoverable, but maybe it has a big regularizing or, alternatively, specializing effect (regularization like dropout tends to force more distributed representations, which seems almost the opposite of this work and of related work like "Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability," https://arxiv.org/abs/2305.08746).
This paper imports one arbitrarily chosen aspect of cortical architecture, topographic maps of function, and ignores every other aspect of biological neural tissue. The resulting models show lower performance for the same number of parameters; not surprising, since they are more constrained than the baseline. They may be slightly more robust against pruning; also not surprising, since they are more regularised.
The figures presumably show individual seeds, with no statistical analysis of the performance or pruning comparisons, so the null hypothesis, that there is no difference between TopoNets and the baseline, cannot be rejected. I would never let this paper be submitted by my team.
We haven't learned anything about the brain, or about ANNs.
The title here doesn't seem to match. The paper is called "TopoNets: High Performing Vision and Language Models with Brain-Like Topography"
Even with their new method, models with topography seem to perform worse than models without.
Submitted title was "Inducing brain-like structure in GPT's weights makes them parameter efficient". We've reverted it now in keeping with the site guidelines (https://news.ycombinator.com/newsguidelines.html).
Since the submitter appears to be one of the authors, maybe they can explain the connection between the two titles? (Or maybe they already have! I haven't read the entire thread)
This is excellent. Since reading https://books.google.de/books/about/Models_of_the_Mind.html?... I've been expecting someone to start looking back into biology to try to move forward. I guess the poster is one of the authors. Kudos!
I hate to dog on research papers. They’re work to write. That said, I think this paper is not likely to be of interest to AI researchers — instead it may be of interest to Neuroscience folks or other brain research types.
The lede, that adding topography worsens networks at similar parameter counts, is not only buried, it's obscured by statements claiming that topo networks show less upheaval when scaled down, i.e. that they are more efficient than similarly sized networks.
It's hard for me to see how both of these things can be true: the graphs show that the more topography is added, the worse the networks perform at the trained model sizes.
For the second statement, "they compress better and are therefore more efficient," to also be true, I think you'd need to show a pretty remarkable claim: that while a model trained at the same scale as a Llama architecture is worse, when you scale both down, this model becomes not only better than the scaled-down Llama but also better than a natively trained model at the new smaller scale.
There is no proof of this in the paper, and good reason to be skeptical of this idea based on the data presented.
That said, like a lot of ideas in AI, this... works! You can train a model successfully while imposing these outside structures on it, and that model doesn't even suck very much. Which is a cool statement about complexity theory and the resilience of these architectures, in my opinion. But I don't think it says much else about either the brain or underlying AI 'truths'.
Shouldn't there be a comparison in performance on common benchmarks to other models?
Like a 7B toponet model vs a 7B Llama model?
As a layperson I don't understand why topology is a thing to optimize for.
The only potential benefit shown in the paper is the topologically local models seem to be more resilient after pruning.
So you may be able to prune a 7B model down to 6B while maintaining most of the capability.
> The only potential benefit
Other benefits:
1. Significantly lower dimensionality of internal representations
2. More interpretable (see: https://toponets.github.io)
> 7B model down to 6B
We remove ~80% of the parameters in the topographic layers and retain the same performance in the model. The drop in the overall parameter count is not bigger because we did not experiment with applying TopoLoss to all of the layers of the model (that did not align with the goal of the paper).
We are currently performing those strong sparsity experiments internally, and the results look very promising!
The blurring in the sheets and the topo loss reminded me of https://arxiv.org/abs/2408.05446
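For anyone who hasn't opened the paper, here is my rough guess at what a blur-on-a-2D-sheet topographic regularizer could look like; a sketch only, not necessarily the paper's exact TopoLoss:

    import torch
    import torch.nn.functional as F

    def topo_reg(weight, sheet_hw, factor=4):
        # Lay the output units out on a 2D "sheet", blur the sheet by down/up-sampling,
        # and reward weights that already resemble their blurred neighbourhood average.
        h, w = sheet_hw
        sheet = weight.view(h, w, -1).permute(2, 0, 1).unsqueeze(0)         # (1, in, h, w)
        small = F.interpolate(sheet, scale_factor=1 / factor, mode="bilinear")
        blurred = F.interpolate(small, size=(h, w), mode="bilinear")
        return -F.cosine_similarity(sheet.flatten(1), blurred.flatten(1), dim=1).mean()

    W = torch.randn(64 * 64, 128)        # 4096 output units on a 64x64 sheet, 128 inputs each
    loss = topo_reg(W, (64, 64))         # add this (scaled) to the task loss during training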
They bury the part where inducing brain-like structure hurts performance!
This is a method that just hurts your network in exchange for nothing useful at all, aside from some sketchy story that this is "brain-like".
Our goal was never to optimize for performance. There's a long-standing hypothesis that topographic structure in the human brain leads to metabolic efficiency. Thanks to topography in ANNs, we were able to test this hypothesis in a computational setting.
> sketchy story this is "brain like".
We reproduce the hallmarks of functional organization seen in the visual and language cortex of the brain. I encourage you to read the paper before making such comments.
Is this "brain-like" in any functional way, or "brain-like" in the same way that a tall rectangle is "door-like" even if it doesn't share any functions with a door?
I know quite a bit about machine learning, but very little to nothing about neuroscience and human cognition, so I am curious how an expert (that didn't work on the paper) would describe it.
(Forgive me for the pre-emptive negativity but I am so utterly exhausted by dishonest comparisons to sapient thought in the field of artificial intelligence that it has nearly drained me of the incredible amount of enthusiasm I used to carry for it.)