FIRST   v1.1

FMRIB's Integrated Registration and Segmentation Tool

subcortical brain segmentation using Bayesian shape & appearance models


Introduction

FIRST is a model-based segmentation/registration tool. The shape/appearance models used in FIRST are constructed from manually segmented images provided by the Center for Morphometric Analysis (CMA), MGH, Boston. The manual labels are parameterized as surface meshes and modelled as a point distribution model. Deformable surfaces are used to automatically parameterize the volumetric labels in terms of meshes; the deformable surfaces are constrained to preserve vertex correspondence across the training data. Furthermore, normalized intensities along the surface normals are sampled and modelled. The shape and appearance model is based on multivariate Gaussian assumptions. Shape is then expressed as a mean with modes of variation (principal components). Based on our learned models, FIRST searches through linear combinations of shape modes of variation for the most probable shape instance given the observed intensities in your T1 image.
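
As a rough sketch of the underlying shape model (the notation here is ours and is not taken verbatim from the thesis): a candidate surface x is generated from the mean mesh and its principal modes as

x = \bar{x} + \Phi b

where \bar{x} is the mean vector of vertex coordinates, the columns of \Phi are the modes of variation learned from the training meshes, and b is the vector of mode parameters (with variances given by the corresponding eigenvalues). Fitting a structure then amounts to searching over b, and hence over plausible shapes, for the instance that best explains the intensities sampled along the surface normals of your T1 image.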

For more information on FIRST, see the D.Phil. thesis or the FMRIB technical report. The thesis provides a more thorough and complete description.


Referencing

Currently, the thesis is the best reference for FIRST. We've also included the HBM 2007 and 2008 abstract references:

1. Brian Patenaude. Bayesian Statistical Models of Shape and Appearance for Subcortical Brain Segmentation. D.Phil. Thesis. University of Oxford. 2007.

2. Brian Patenaude, Stephen Smith, David Kennedy, and Mark Jenkinson. Improved Surface Models for FIRST. In Human Brain Mapping Conference, 2008.

3. Brian Patenaude, Stephen Smith, David Kennedy, and Mark Jenkinson. FIRST - FMRIB's integrated registration and segmentation tool. In Human Brain Mapping Conference, 2007.


FIRST Training Data Contributors

We are very grateful for the training data for FIRST, particularly to David Kennedy at the CMA, and also to: Christian Haselgrove, Centre for Morphometric Analysis, Harvard; Bruce Fischl, Martinos Center for Biomedical Imaging, MGH; Janis Breeze and Jean Frazier, Child and Adolescent Neuropsychiatric Research Program, Cambridge Health Alliance; Larry Seidman and Jill Goldstein, Department of Psychiatry of Harvard Medical School; Barry Kosofsky, Weill Cornell Medical Center.


Running FIRST

FIRST segmentation requires firstly that you run first_flirt to find the affine transformation to standard space, and secondly that you run run_first to segment a single structure (re-running it for each further structure that you require). Alternatively, you can use run_first_all, which does all of the above for you, running run_first on every subcortical structure in the models and producing a summary segmentation image covering all structures. Please note: if the registration stage fails, the model fitting will not work; run_first_all continues to run regardless of gross errors in registration.
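
For example, to run the full pipeline on a T1 image called im1 and segment every modelled structure in one go (the option names here are recalled from the run_first_all usage message, so confirm them against the help text of your FSL version):

run_first_all -i im1 -o subject_1

Alternatively, the two stages can be run by hand for a single structure; the sketch below uses the left thalamus, with <model_file> standing for one of the models shipped under ${FSLDIR}/data/first/models_336_bin/, and again the flags are from our reading of the usage messages:

first_flirt im1 im1_to_std_sub
run_first -i im1 -t im1_to_std_sub.mat -n 40 -o subject_1_L_Thal -m <model_file>

Because run_first_all will not stop on a bad registration, it is worth checking the affine output of the first stage before relying on the segmentations.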


Output


Models


Boundary Correction with first_utils

The program first_utils classifies the boundary voxels of a FIRST segmentation: it takes the segmentation image output by FIRST and decides, for each boundary voxel, whether or not it belongs to the structure. The output volume contains a single label, and first_utils operates on only one structure at a time.

Usage:

first_utils --singleBoundaryCorr -i output_name -r im1 -p 3 -o output_name_corr

For the models contained in ${FSLDIR}/data/first/models_336_bin/05mm/, all boundary voxels are considered to belong to the structure, and fslmaths can be used to combine the boundary and interior voxels (e.g. fslmaths subject_1_L_Thal -bin -mul 10 subject_1_L_Thal_bcorr).
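
Putting these commands together for a single left-thalamus segmentation named subject_1_L_Thal (the file names are just the illustrative ones used above): either run the intensity-based boundary classification,

first_utils --singleBoundaryCorr -i subject_1_L_Thal -r im1 -p 3 -o subject_1_L_Thal_corr

or, for the 05mm models, keep every boundary voxel and relabel the binarized result (here with the value 10):

fslmaths subject_1_L_Thal -bin -mul 10 subject_1_L_Thal_bcorr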


first_utils

first_utils may also be used to fill meshes, calculate Dice overlap, and perform vertex-wise statistics.
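
As one sketch of the vertex-wise statistics mode (the options below are taken from our reading of the first_utils help and may differ in your FSL version; the file names all_subjects.bvars and design.mat are purely illustrative), a group analysis on concatenated mode parameters might look like:

first_utils --vertexAnalysis --usebvars -i all_subjects.bvars -d design.mat -o L_Thal_vertex_stats --useReconMNI

where all_subjects.bvars would be produced by concat_bvars (see below) and design.mat is an ordinary FSL design matrix describing the group comparison.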


concat_bvars


first_roi_slicesdir