This dataset is a compilation of data and results for the PyMVPA Tutorial.
At the moment the dataset is based on data for a single subject from a study published by Haxby et al. (2001). The full (raw) dataset of this study is also available. However, in contrast to the full data, this single-subject dataset has been preprocessed to a degree that should allow people without prior fMRI experience to perform meaningful analyses. Moreover, it should not require further preprocessing with external tools.
All preprocessing has been performed using tools from FSL. Specifically, the 4D fMRI timeseries has been motion-corrected by applying MCFLIRT to a skull-stripped and thresholded timeseries (non-brain voxels were zeroed out using a brain outline estimate significantly larger than the brain, to prevent removal of edge voxels that actually cover brain tissue). The estimated motion parameters have subsequently been applied to the original (unthresholded, unstripped) timeseries. For simplicity, the T1-weighted anatomical image has also been projected and resampled into the subject's functional space.
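A minimal sketch of these steps, assuming FSL is available on the command line; the exact options, thresholds, and output file names used to produce the tutorial data are not recorded here:

import subprocess

# skull-strip the 4D timeseries (BET's -F switch operates on 4D fMRI data);
# the tutorial data used a deliberately generous brain outline before
# zeroing out non-brain voxels
subprocess.check_call(['bet', 'bold.nii.gz', 'bold_brain.nii.gz', '-F'])

# estimate motion parameters on the stripped series, saving per-volume
# transformation matrices and parameter plots
subprocess.check_call(['mcflirt', '-in', 'bold_brain.nii.gz',
                       '-out', 'bold_brain_mc', '-mats', '-plots'])

# the saved transformations are then applied to the original (unstripped,
# unthresholded) timeseries, and the T1-weighted image is resampled into
# functional space with FLIRT; those steps are omitted from this sketch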
For surface-based mapping two additional archives are distributed: ‘tutorial_data_surf_minimal-0.1.tar.gz’ and ‘tutorial_data_surf_complete-0.1.tar.gz’. Both contain surfaces that were reconstructed using FreeSurfer and preprocessed using AFNI and SUMA. The ‘minimal’ archive contains just the surfaces needed to run doc/examples/searchlight_surf.py. The ‘complete’ archive contains the full output of FreeSurfer’s recon-all and the full output of the anatomical preprocessing by the alignment script in bin/pymvpa2-prep-afni-surf. This output includes left, right, and merged (left combined with right) hemispheres at various resolutions. Surfaces produced by the alignment script are stored in ASCII format and can be read using the module mvpa2/misc/nibabel/surf_fs_asc. The surfaces can be visualized using AFNI’s SUMA (SUrface MApper).
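For illustration, such an ASCII surface could be loaded along these lines (a rough sketch only: the read() call is assumed to be provided by that module, and the file name is hypothetical rather than taken from the archive):

>>> from mvpa2.misc.nibabel import surf_fs_asc
>>> # hypothetical file name of one left-hemisphere pial surface
>>> pial = surf_fs_asc.read('ico64_lh.pial_al.asc')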
The original authors of Haxby et al. (2001) hold the copyright of this dataset and made it available under the terms of the Creative Commons Attribution-Share Alike 3.0 license. The PyMVPA authors have preprocessed the data and released this derivative work under the same licensing terms.
Contains data files:
Data used for and generated by FreeSurfer’s recon-all. Only included in the tutorial_data_surf_complete archive.
Contains the output from FreeSurfer’s recon-all. The command used to generate the output was:
recon-all -subject subj1 -i anat_nii.nii -all -cw256
Note that the environment variable SUBJECTS_DIR was set to point to the current working directory (freesurfer). The version of FreeSurfer used for the reconstruction was freesurfer-Linux-centos4_x86_64-stable-pub-v5.0.0.
Surfaces generated by the AFNI / SUMA wrapper script in bin/pymvpa2-prep-afni-surf. Most files are available only in the tutorial_data_surf_complete archive. The minimal set for running doc/examples/searchlight_surf.py is provided in the tutorial_data_surf_minimal archive. These surfaces are aligned to bold_mean.nii.gz as indicated by the infix _al in the file name. The contents of this directory can be generated with:
PyMVPAROOT/bin/pymvpa-prep-afni-surf.py \
--refdir suma_surfaces \
--surfdir data/freesurfer/subj1/surf \
--epivol data/bold_mean.nii.gz
where PyMVPAROOT is the directory where PyMVPA is installed. Using this script requires that FreeSurfer, AFNI and SUMA are installed. The prefix icoXX_Yh indicates that the surface was generated using AFNI’s MapIcosahedron with XX linear divisions (the ld parameter) and represents the Y hemisphere (l=left, r=right, m=merged). Such a surface has 10*XX**2+2 nodes and 20*XX**2 triangles for a single hemisphere, and twice those numbers for merged hemispheres. Merged hemispheres contain the nodes of the left hemisphere first, followed by the nodes of the right hemisphere. SUMA .spec files that define several views are also provided for these surfaces. Files were generated using FreeSurfer version stable5 and AFNI version AFNI_2011_12_21_1014, running on a Mac with Mac OS X 10.7.5.
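As a quick check of these node and triangle counts, taking XX=64 purely as an example value:

>>> ld = 64                  # XX, MapIcosahedron's number of linear divisions
>>> print 10 * ld ** 2 + 2   # nodes in a single hemisphere
40962
>>> print 20 * ld ** 2       # triangles in a single hemisphere
81920

The volumetric data itself can be loaded into a PyMVPA dataset as follows: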
>>> from mvpa2.suite import *
>>> datapath = os.path.join(pymvpa_datadbroot, 'tutorial_data',
... 'tutorial_data', 'data')
>>> attrs = SampleAttributes(os.path.join(datapath, 'attributes.txt'))
>>> ds = fmri_dataset(samples=os.path.join(datapath, 'bold.nii.gz'),
... targets=attrs.targets, chunks=attrs.chunks,
... mask=os.path.join(datapath, 'mask_brain.nii.gz'))
>>> print ds.shape
(1452, 39912)
>>> print ds.a.voxel_dim
(40, 64, 64)
>>> print ds.a.voxel_eldim
(3.5, 3.75, 3.75)
>>> print ds.a.mapper
<Chain: <Flatten>-<StaticFeatureSelection>>
>>> print ds.uniquetargets
['bottle' 'cat' 'chair' 'face' 'house' 'rest' 'scissors' 'scrambledpix'
'shoe']
Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J., and Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430.