Summaries / Reviews from the Neuroimaging Literature
- Connectome Networks: From Cells to Systems - offers a taxonomy for the scales of imaging. At a glance (I haven't really read it yet as of 1/7/19), it looks like factors at different levels of analysis could be combined into a useful nested hierarchy (e.g., microscale neurovascular coupling and microscale action potentials are components of macroscale fMRI and EEG readings, respectively).
- Functional Brain Imaging: A Comprehensive Survey
- A Comparative Analysis of Registration Tools: Traditional vs Deep Learning Approach on High Resolution Tissue Cleared Data
- Performance of Image Registration Tools on High-Resolution 3D Brain Images
- On the Definition of Signal-To-Noise Ratio and Contrast-To-Noise Ratio for fMRI Data
- Size and shape matter: The impact of voxel geometry on the identification of small nuclei
- Summary Table from Poldrack and Gorgolewski 2014
- Harvard Aging Brain Study
- UK Biobank
Stats on MR Usage
- Just an errant thought: the file format used to store most MRI images (NIfTI) is similar to DV in the sense that each frame is its own distinct bitmap(?) image. Unlike DV, no compression is applied to individual frames in NIfTI (as far as I know); compression is usually accomplished by gzipping the entire raw NIfTI file. I doubt this yields an optimal compression ratio: in MPEG compression schemes, the previous frame informs what the next frame will look like (motion compensation), and something similar seems applicable to 3D volumetric scans, where adjacent slices are highly correlated. Why not encode each slice with a lossless video compression algorithm?
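As a rough sanity check of that intuition, a minimal sketch (synthetic data rather than a real NIfTI file; slice-wise delta encoding stands in for motion compensation, and `zlib` stands in for gzip, since both use DEFLATE):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3D "volume": each slice differs from the previous one by small
# values, mimicking the correlation between adjacent MRI slices.
base = rng.integers(0, 256, size=(64, 64), dtype=np.int16)
slices = [base]
for _ in range(31):
    slices.append(slices[-1] + rng.integers(-2, 3, size=(64, 64), dtype=np.int16))
vol = np.stack(slices)  # shape (32, 64, 64)

# Baseline: compress the whole raw volume, analogous to gzipping a .nii file.
whole = zlib.compress(vol.tobytes(), 9)

# Inter-slice prediction: keep the first slice verbatim, store only the
# differences between consecutive slices, then compress the residuals.
delta = np.concatenate([vol[:1], np.diff(vol, axis=0)])
delta_c = zlib.compress(delta.tobytes(), 9)

print(f"whole-volume gzip: {len(whole)} bytes")
print(f"delta-encoded:     {len(delta_c)} bytes")
```

On data like this, the delta-encoded stream compresses far better because the residuals occupy a tiny value range, even though the underlying compressor is the same. Real scanner data would behave differently (noise reduces inter-slice redundancy), so this is only an illustration of the principle, not a measurement.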
- In Vivo Optical Imaging of Brain Function, 2nd edition