I've been reading Matthew Cobb's *The Idea of the Brain: The Past and Future of Neuroscience* lately, and one of the points it has reinforced is that no overarching paradigm has yet emerged to guide neuroscience research, and that neuroscience, despite some fairly substantial achievements, has made surprisingly little progress in explaining the brain. I'm using this page to track some thoughts I have on this point.
Unorganized bullet points / Notes
- The brain is clearly not a digital, electronic computer. Folks who see the brain as such a computer are confusing the direction of causality (brains inspired computers, not vice versa) and mistaking the map for the territory. Aside from the fact that John von Neumann himself cautioned against taking the brain/computer analogy too far (as mind uploading advocates and Singularitarians often do), there are also obvious architectural differences. For instance, microprocessors, unlike brains, deliberately gate and discretize their signals using [latches/flip-flops](https://en.wikipedia.org/wiki/Flip-flop_(electronics)) in order to avoid glitches. Electronic computers deliberately prevent analog signals from propagating as-is and quantize them into discrete logic levels, whereas there is no comparably deliberate or systematic quantization of analog signals into digital ones in the brain (cell membrane pumps and channels are not the same thing, and the action potential is not strictly binary in the way a logic gate's output is). A toy contrast between a clocked latch and a graded, leaky membrane is sketched after these notes.
- That said, there's no reason not to use machine learning / deep learning models as toy models that could provide some insight into biology. This is, in fact, what many theoretical neuroscientists strive to do (see the Oja's rule sketch after these notes for a classic example).
- A fixation on comparing conventional, electronic digital computers to the brain may ultimately not be as fruitful as anticipated. It may make sense to look to unconventional computing, such as gooware computers, for fresh insights. Chemical computing is particularly interesting as an analogy for brain function given the brain's heavy reliance on chemical messengers (a minimal reaction-diffusion sketch appears after these notes).
- The emphasis on network structure in connectomics is necessary, but not sufficient, for an understanding of the brain. I think we've been regarding the brain the way we regard a Rubin vase, with humanity collectively seeing only the wine glass (the connectome) and not the faces (the chemical currents).
- To use another analogy, if a connectome / wiring diagram is like a map, the chemical concentrations throughout the brain at a given point in time are like weather patterns, and regions that are generally more steeped in one neurotransmitter than another (e.g., the nucleus accumbens with dopamine) could be said to have a climate more suited to that neurotransmitter (similar to how California is wine country, while upstate New York is generally snowy).
- As per the link here, most scientific computing tools fall into one of two groups: 1) 3D simulations that often require multi-node computing / MPI, and 2) big data applications that are better suited to multithreaded environments on nodes that don't require an interconnect or inter-node communication. Most contemporary neuroinformatics tools seem to fall into group 2 (big data) and are mostly focused on image processing and statistical tests. However, a full treatment of the brain will require software in group 1 (3D simulations) to model the flow/currents of neurotransmitters in the chemical soup that flows around a connectome (a cartoon of such a simulation is sketched after these notes).
- I would be surprised if I were the only person in the world who has had all of the above thoughts.
- For another geosciences analogy, I recall that Dan Margulies of Max Planck (now at CNRS, apparently) at one point pitched the idea of thinking of neural networks in terms of tectonic plates. I think this is another good way of viewing how a connectome can dynamically change (although on a much faster time scale); a toy version of slowly drifting, occasionally rearranging connections is sketched at the end of these notes.
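
A few toy sketches to make the notes above more concrete, all in plain NumPy. First, the flip-flop bullet: a clocked D flip-flop only passes along a thresholded, binary value at discrete clock ticks, while a leaky integrator (loosely analogous to a passive membrane) accumulates a graded signal continuously. This is a minimal sketch, not a model of any real circuit or neuron; the threshold, clock period, leak time constant, and input signal are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200                                  # number of time steps (arbitrary units)
analog_in = 0.5 + 0.3 * np.sin(np.arange(T) * 0.1) + 0.05 * rng.standard_normal(T)

# Digital path: a clocked D flip-flop. The input is thresholded to 0/1, and the
# stored bit can only change on a clock edge; between edges, input wiggles are ignored.
clock_period, threshold = 10, 0.5
q = 0
flipflop_out = np.zeros(T)
for t in range(T):
    if t % clock_period == 0:            # "rising clock edge"
        q = 1 if analog_in[t] > threshold else 0
    flipflop_out[t] = q

# "Analog" path: a leaky integrator, loosely analogous to a passive membrane.
# No clock, no threshold; the state is a graded, continuously varying value.
tau, dt = 20.0, 1.0
v = 0.0
leaky_out = np.zeros(T)
for t in range(T):
    v += dt / tau * (analog_in[t] - v)
    leaky_out[t] = v

print("flip-flop output takes", len(np.unique(flipflop_out)), "distinct values")
print("leaky output takes    ", len(np.unique(np.round(leaky_out, 6))), "distinct values")
```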
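
For the bullet on machine learning models as toy models, here is a textbook theoretical-neuroscience example: Oja's rule, a stabilized Hebbian learning rule, drives a single linear "neuron" to align its weights with the first principal component of its inputs. The input statistics and learning rate below are arbitrary; the point is only that very small learning models can yield interpretable claims about what a plasticity rule computes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D "presynaptic activity": most variance lies along the (1, 1) direction.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

# Oja's rule: dw = eta * y * (x - y * w). The subtractive term keeps plain Hebbian
# growth from blowing up and normalizes the weight vector.
w = rng.standard_normal(2)
eta = 0.01
for xi in x:
    y = w @ xi
    w += eta * y * (xi - y * w)

# Compare against the leading eigenvector of the input covariance (the first PC).
eigvals, eigvecs = np.linalg.eigh(cov)   # columns are eigenvectors, ascending eigenvalues
pc1 = eigvecs[:, -1]
alignment = abs(w @ pc1) / np.linalg.norm(w)
print("learned weights:", np.round(w, 3), "| alignment with first PC:", round(alignment, 3))
```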
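
For the chemical computing bullet: reaction-diffusion chemistry, the kind of substrate explored in some unconventional computing work, can be sketched with the Gray-Scott model, two diffusing species that react and spontaneously form spatial patterns. The grid size, step count, and feed/kill/diffusion parameters are standard illustrative values with no claimed correspondence to real neurochemistry.

```python
import numpy as np

def laplacian2d(z):
    """Discrete 5-point Laplacian with periodic boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

n = 128
U = np.ones((n, n))                       # "substrate" chemical
V = np.zeros((n, n))                      # "activator" chemical
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50    # seed a small perturbation in the middle
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, feed, kill = 0.16, 0.08, 0.035, 0.060   # classic Gray-Scott parameter set

for _ in range(5000):
    uvv = U * V * V                       # the autocatalytic reaction term
    U += Du * laplacian2d(U) - uvv + feed * (1.0 - U)
    V += Dv * laplacian2d(V) + uvv - (feed + kill) * V

print("V after 5000 steps: min %.3f, max %.3f, mean %.3f" % (V.min(), V.max(), V.mean()))
```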
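
For the scientific computing bullet, a cartoon of what a group-1 (3D simulation) tool would have to do: evolve a neurotransmitter concentration field through a volume, with release sites sitting at the nodes of a toy connectome, plus diffusion and reuptake/decay. The node positions, rate constants, and grid size are invented for illustration, and the sketch is serial; a real tool at brain scale would decompose this grid across MPI ranks and exchange halo layers every step.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 32                                    # 32^3 voxel volume (arbitrary resolution)
conc = np.zeros((n, n, n))                # neurotransmitter concentration field

# A toy "connectome": 20 nodes at random voxel positions that release transmitter.
nodes = rng.integers(2, n - 2, size=(20, 3))
release_rate = 1.0
D, dt, decay = 0.1, 1.0, 0.01             # diffusion constant, time step, reuptake/decay

def laplacian3d(c):
    """Discrete 3-D Laplacian (7-point stencil) with periodic boundaries."""
    out = -6.0 * c
    for axis in range(3):
        out += np.roll(c, 1, axis) + np.roll(c, -1, axis)
    return out

for step in range(500):
    for i, j, k in nodes:                 # release transmitter at each node
        conc[i, j, k] += release_rate * dt
    conc += dt * (D * laplacian3d(conc) - decay * conc)   # diffuse and decay

print("total transmitter in the volume: %.1f" % conc.sum())
print("peak voxel concentration:        %.2f" % conc.max())
```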
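
Finally, for the tectonic-plates bullet: a toy connectome whose edge weights creep slowly, with occasional pruning of weak connections and sprouting of new ones, loosely mirroring how plate boundaries drift and occasionally rearrange. Every rate here is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

n_nodes = 50
# A sparse, weighted, directed toy connectome (~10% of possible edges present).
W = rng.random((n_nodes, n_nodes)) * (rng.random((n_nodes, n_nodes)) < 0.1)
np.fill_diagonal(W, 0.0)
print("edges at start:", int((W > 0).sum()))

drift, prune_below, sprout_prob = 0.01, 0.02, 0.0005

for step in range(1000):
    existing = W > 0
    # Slow random drift of existing weights (the "creep" of the plates).
    W[existing] = np.clip(W[existing] + drift * rng.standard_normal(existing.sum()), 0.0, 1.0)
    # Occasional pruning of weak edges and sprouting of new ones (the "rearrangements").
    W[W < prune_below] = 0.0
    new_edges = (rng.random(W.shape) < sprout_prob) & ~existing
    W[new_edges] = prune_below
    np.fill_diagonal(W, 0.0)

print("edges at end:  ", int((W > 0).sum()), "| mean surviving weight: %.3f" % W[W > 0].mean())
```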