ROIs as tools

In our previous chapters on images and NIfTIs we discussed the elementary action of “masking”. In essence, what we were doing there was selecting a subset of the object’s elements based on some quantitative criterion. In the digital 2-D image we used distance from a target color, while in the brain NIfTI case we used whether or not a particular voxel carried a given label in an associated parcellation. Conceptually, we’re not doing anything radically different in the case of tractograms and streamlines: we want to sub-select some finite number of the streamlines in a tractogram in accordance with an explicitly specified criterion. So how do we do that? We specify a Region of Interest (ROI), along with a streamline’s required relation to it. Let’s consider ROIs a bit more closely.

What is an ROI?

In practice, a Region of Interest can be stored in a number of ways, but the specifics of this are beyond the scope of our goals here. What we want to know is “what is the fundamental characteristic of an ROI that makes it useful to us?”. At its heart, an ROI, in the context of streamline/tractogram segmentation, is a point cloud: a set of X, Y, and Z coordinates corresponding to points in the same coordinate space as your tractogram. Whereas the streamlines of a tractogram are ordered sequences of coordinates, the coordinates of an ROI are not in any particular order. Typically, they define a coherent volume of space and are sampled in a regular fashion, forming a 3-D lattice. When performing segmentations these often take the shape of a sphere or plane. In a moment we will create an example planar ROI, but first, the sketch below illustrates the point-cloud idea directly.
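
The following is a minimal, illustrative sketch (it is not part of the WiMSE code, and the file name roi.nii.gz is a placeholder): given a binary NIfTI mask, the point cloud is obtained by applying the image’s affine to the indices of its nonzero voxels.

import numpy as np
import nibabel as nib

#load a binary ROI mask (placeholder path, purely for illustration)
roiImg=nib.load('roi.nii.gz')
roiData=roiImg.get_fdata()

#find the voxel indices at which the mask is "on"
voxelIndices=np.argwhere(roiData > 0)

#apply the image's affine to convert voxel indices to world-space (mm) coordinates;
#the result is the ROI's point cloud: an unordered N x 3 array of X, Y, Z coordinates
roiPointCloud=nib.affines.apply_affine(roiImg.affine, voxelIndices)

print(roiPointCloud.shape)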

ROIs in practice

It is worth noting that the ROI description provided above is quite general, and imposes only minimal constraints on what an ROI could actually be. For example, there is nothing precluding an investigator from creating a single-point ROI, or at the other extreme, an ROI spanning the entire reference volume (i.e. the volume of space that the ROI is sub-selecting from). In practice though, certain general categories of ROIs tend to be used, because of the kinds of queries and applications they support. Let’s consider a few of these now:

A planar ROI

A full planar ROI is a special case of a region of interest. It is a “flat” set of coordinates (only the minimum possible thickness in one dimension) which, in the case of a full plane, extends across the entire span of the other dimensions of the reference space. For example, a planar ROI located at X = -20 would be a plane extending across the full Y and Z dimensions, but occupying only a single “slice” in the X dimension (at X = -20). The essential characteristics for specifying a planar ROI are (a minimal construction sketch follows the list):

    1. The coordinate of interest

    2. The dimension of interest
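
Given just these two pieces of information, a full-plane mask can be built directly from a reference image. The sketch below is purely illustrative: the helper name makePlaneMask is hypothetical, it assumes a non-oblique affine, and it is not the implementation of WMA_pyFuncs.makePlanarROI, which we will use in the interactive example that follows.

import numpy as np
import nibabel as nib

def makePlaneMask(refImg, coordMM, dim):
    #illustrative sketch: mask the single slice of refImg closest to coordMM along dim ('x', 'y', or 'z')
    dimIndex={'x': 0, 'y': 1, 'z': 2}[dim]
    #convert the requested mm coordinate to a voxel index (assumes a non-oblique affine)
    pointMM=[0.0, 0.0, 0.0]
    pointMM[dimIndex]=coordMM
    voxelCoord=nib.affines.apply_affine(np.linalg.inv(refImg.affine), pointMM)
    sliceIndex=int(np.round(voxelCoord[dimIndex]))
    #fill that single slice with ones, leaving the rest of the volume at zero
    maskData=np.zeros(refImg.shape, dtype=np.uint8)
    indexer=[slice(None)]*3
    indexer[dimIndex]=sliceIndex
    maskData[tuple(indexer)]=1
    return nib.Nifti1Image(maskData, refImg.affine)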

Let’s interact with an example plane at X = -20. Note the use of WMA_pyFuncs.makePlanarROI with these inputs to generate the desired ROI. As you move the X slider (labeled xCoord) through the X dimension, notice that the entire sagittal image changes color, indicating the presence of the ROI/mask.

#this code ensures that we can navigate the WiMSE repo across multiple systems
import subprocess
import os
#get top directory path of the current git repository, under the presumption that 
#the notebook was launched from within the repo directory
gitRepoPath=subprocess.check_output(['git', 'rev-parse', '--show-toplevel']).decode('ascii').strip()

#establish path to the wma_pyTools directory within the repo
wma_toolsDirPath=os.path.join(gitRepoPath,'wma_pyTools')   

#change to the wma_tools path, load the function set, then change back to the top directory
os.chdir(wma_toolsDirPath)
import WMA_pyFuncs
os.chdir(gitRepoPath)

import nibabel as nib
from nilearn.plotting import plot_roi
import matplotlib

#establish path to t1
t1Path=os.path.join(gitRepoPath,'exampleData','t1.nii.gz')   

#import the data
t1img = nib.load(t1Path)

#use a wma_tools function to make a planar roi.
roi_img=WMA_pyFuncs.makePlanarROI(t1img, -20 , 'x')

import numpy as np
#done to establish bounds of image in acpc space
fullMask = nib.nifti1.Nifti1Image(np.ones(t1img.get_fdata().shape), t1img.affine, t1img.header)
#pass full mask to boundary function
t1DimBounds=WMA_pyFuncs.returnMaskBoundingBoxVoxelIndexes(fullMask)
#convert the coords to subject space in order to set max/min values for the interactive visualization
convertedBoundCoords=nib.affines.apply_affine(t1img.affine,t1DimBounds)

from ipywidgets import interact, interactive, fixed, interact_manual
from ipywidgets import IntSlider

#wrapper for interactive visualization
def rotateAndPlotWrapper(xCoord,yCoord,zCoord):
    from nilearn import plotting
    import nibabel as nib
    import numpy as np

    %matplotlib inline
    plotting.plot_roi(roi_img=roi_img, bg_img=t1img, cut_coords=[xCoord,yCoord,zCoord])
    
interact(rotateAndPlotWrapper, \
    xCoord=IntSlider(min=np.min(convertedBoundCoords[:,0].astype(int)), max=np.max(convertedBoundCoords[:,0].astype(int)), step=1,continuous_update=False),  \
    yCoord=IntSlider(min=np.min(convertedBoundCoords[:,1].astype(int)), max=np.max(convertedBoundCoords[:,1].astype(int)), step=1,continuous_update=False), \
    zCoord=IntSlider(min=np.min(convertedBoundCoords[:,2].astype(int)), max=np.max(convertedBoundCoords[:,2].astype(int)), step=1,continuous_update=False))

A partial planar ROI

A full plane ROI isn’t the only trick we have up our sleeve. It’s also possible to “cut” the full plane ROI to obtain a partial plane ROI. In order to do this, we need to additionally provide (a sketch of the underlying coordinate logic follows the list):

    1. The coordinate to cut the original ROI

    2. The dimension along which that singleton cut coordinate should be interpreted

    3. The portion of the original ROI we want to keep, provided in relative anatomical terms.
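
As a rough sketch of what such a “cut” amounts to, the code below converts the ROI’s voxels to world coordinates, compares them to the cut coordinate, and keeps only the requested side. This is illustrative only: the helper name keepPosteriorOf is hypothetical, it assumes a RAS-oriented affine (in which “posterior” means smaller Y), and it is not the implementation of WMA_pyFuncs.sliceROIwithPlane.

import numpy as np
import nibabel as nib

def keepPosteriorOf(roiImg, yCutMM):
    #illustrative sketch: zero out ROI voxels anterior to yCutMM (posterior = smaller Y in RAS)
    roiData=roiImg.get_fdata()
    voxelIndices=np.argwhere(roiData > 0)
    #convert the ROI's voxel indices to world-space (mm) coordinates
    worldCoords=nib.affines.apply_affine(roiImg.affine, voxelIndices)
    keepBool=worldCoords[:, 1] <= yCutMM
    #rebuild the mask using only the voxels on the posterior side of the cut
    newData=np.zeros(roiImg.shape, dtype=np.uint8)
    kept=voxelIndices[keepBool]
    newData[kept[:, 0], kept[:, 1], kept[:, 2]]=1
    return nib.Nifti1Image(newData, roiImg.affine)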

In the code below, note how WMA_pyFuncs.makePlanarROI is used to make knife_roi, and then how ‘posterior’ is passed to WMA_pyFuncs.sliceROIwithPlane to keep only that portion of the original plane.

#use a wma_tools function to make a planar roi.
#roi_img=WMA_pyFuncs.makePlanarROI(t1img, -20 , 'x')
#we already have this from the previous run, so let's use it here

#first though we need to generate a planar ROI to "cut" with
knife_roi=WMA_pyFuncs.makePlanarROI(t1img, 0 , 'y')

#now cut the roi_img ROI
cut_roi=WMA_pyFuncs.sliceROIwithPlane(roi_img,knife_roi,'posterior')

from ipywidgets import interact, interactive, fixed, interact_manual
from ipywidgets import IntSlider

#wrapper for interactive visualization
def rotateAndPlotWrapper(xCoord,yCoord,zCoord):
    from nilearn import plotting
    import nibabel as nib
    import numpy as np

    %matplotlib inline
    plotting.plot_roi(roi_img=cut_roi, bg_img=t1img, cut_coords=[xCoord,yCoord,zCoord])
    
interact(rotateAndPlotWrapper, \
    xCoord=IntSlider(min=np.min(convertedBoundCoords[:,0].astype(int)), max=np.max(convertedBoundCoords[:,0].astype(int)), step=1,continuous_update=False),  \
    yCoord=IntSlider(min=np.min(convertedBoundCoords[:,1].astype(int)), max=np.max(convertedBoundCoords[:,1].astype(int)), step=1,continuous_update=False), \
    zCoord=IntSlider(min=np.min(convertedBoundCoords[:,2].astype(int)), max=np.max(convertedBoundCoords[:,2].astype(int)), step=1,continuous_update=False))

A spherical ROI

The construction of a spherical ROI proceeds in much the same way as that of a planar ROI, except that the point cloud lattice now extends in 3 dimensions. Two characteristics suffice to specify (and generate) a spherical ROI (a rough sketch of the generation logic follows the list):

    1. The center coordinate

    2. The radius
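
The generation logic can be sketched as: mark every voxel whose center lies within the requested radius of the center coordinate. The code below is only an illustration (the helper name makeSphereMask is hypothetical, and it is not the implementation of WMA_pyFuncs.createSphere, which the next cell actually uses).

import numpy as np
import nibabel as nib

def makeSphereMask(refImg, centerMM, radiusMM):
    #illustrative sketch: mask all voxels whose centers fall within radiusMM of centerMM
    i, j, k = np.indices(refImg.shape)
    voxelIndices=np.stack([i.ravel(), j.ravel(), k.ravel()], axis=1)
    #world-space (mm) coordinates of every voxel center in the reference image
    worldCoords=nib.affines.apply_affine(refImg.affine, voxelIndices)
    #Euclidean distance of each voxel center from the requested sphere center
    distances=np.linalg.norm(worldCoords - np.asarray(centerMM), axis=1)
    maskData=(distances <= radiusMM).reshape(refImg.shape).astype(np.uint8)
    return nib.Nifti1Image(maskData, refImg.affine)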

Let’s generate a spherical ROI now, centered at (0, 0, 0). We will be using WMA_pyFuncs.createSphere, whose underlying algorithm is taken from an existing function in the nitools toolkit, to generate this example sphere.

#sphere parameters: a radius of 10 (in the image's spatial units, typically mm), centered at the origin
radius=10
point=[0,0,0]
#use a wma_pyTools function to generate the spherical ROI as a NIfTI mask in the space of the T1 image
outputSphereNifti=WMA_pyFuncs.createSphere(radius, point, t1img)

from nilearn import plotting
import nibabel as nib
import numpy as np
def rotateAndPlotWrapper(xCoord,yCoord,zCoord):
    %matplotlib inline
    plotting.plot_roi(roi_img=outputSphereNifti, bg_img=t1img, cut_coords=[xCoord,yCoord,zCoord])
    
interact(rotateAndPlotWrapper, \
xCoord=IntSlider(min=np.min(convertedBoundCoords[:,0].astype(int)), max=np.max(convertedBoundCoords[:,0].astype(int)), step=1,continuous_update=False),  \
yCoord=IntSlider(min=np.min(convertedBoundCoords[:,1].astype(int)), max=np.max(convertedBoundCoords[:,1].astype(int)), step=1,continuous_update=False), \
zCoord=IntSlider(min=np.min(convertedBoundCoords[:,2].astype(int)), max=np.max(convertedBoundCoords[:,2].astype(int)), step=1,continuous_update=False))

A modified spherical ROI

It is also possible to modify this spherical ROI in the same way that we modified the planar ROI. As before, we have to provide:

    1. The coordinate to cut the original ROI

    2. The dimension along which that singleton cut coordinate should be interpreted

    3. The portion of the original ROI we want to keep, provided in relative anatomical terms.

Let’s cut the sphere from the previous example in half at the origin, keeping the anterior half.

#first though we need to generate a planar ROI to "cut" with
knife_roi=WMA_pyFuncs.makePlanarROI(t1img, 0 , 'y')

#now cut the sphere ROI (outputSphereNifti)
cutSphere_roi=WMA_pyFuncs.sliceROIwithPlane(outputSphereNifti,knife_roi,'anterior')

from ipywidgets import interact, interactive, fixed, interact_manual
from ipywidgets import IntSlider

#wrapper for interactive visualization
def rotateAndPlotWrapper(xCoord,yCoord,zCoord):
    from nilearn import plotting
    import nibabel as nib
    import numpy as np

    %matplotlib inline
    plotting.plot_roi(roi_img=cutSphere_roi, bg_img=t1img, cut_coords=[xCoord,yCoord,zCoord])
    
interact(rotateAndPlotWrapper, \
    xCoord=IntSlider(min=np.min(convertedBoundCoords[:,0].astype(int)), max=np.max(convertedBoundCoords[:,0].astype(int)), step=1,continuous_update=False),  \
    yCoord=IntSlider(min=np.min(convertedBoundCoords[:,1].astype(int)), max=np.max(convertedBoundCoords[:,1].astype(int)), step=1,continuous_update=False), \
    zCoord=IntSlider(min=np.min(convertedBoundCoords[:,2].astype(int)), max=np.max(convertedBoundCoords[:,2].astype(int)), step=1,continuous_update=False))

An anatomically defined ROI

Finally, and certainly not least, we can generate ROIs using pre-existing anatomical atlases. We need only provide the following:

  • A reference anatomical atlas/parcellation (volumetric)

  • An indicator of which label we would like extracted (typically a name or an integer)

This turns out to be an extremely useful capability, which we will explore in subsequent chapters. Below, you’ll be able to select an anatomically defined ROI and view its location in the brain.

Think back to the multi-object map example from the chapter on 2-D satellite images (“Multi object maps in images”). This is, in essence, the same sort of masking operation, except that (1) we’re now working in 3 dimensions and (2) we no longer need color distance as a proxy, because the labeled entries are provided to us directly as integer labels (see the chapter “How to interpret a volumetric brain segmentation” for a review).
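
To give a sense of how simple this operation is at its core, here is a minimal sketch (the helper name labelsToMask is hypothetical; the code further below uses the actual WMA_pyFuncs.multiROIrequestToMask function): the atlas data is simply compared against the requested integer label(s) to produce a binary mask.

import numpy as np
import nibabel as nib

def labelsToMask(atlasImg, labelValues):
    #illustrative sketch: binary mask that is "on" wherever the atlas holds any of labelValues
    atlasData=atlasImg.get_fdata()
    maskData=np.isin(atlasData, labelValues).astype(np.uint8)
    return nib.Nifti1Image(maskData, atlasImg.affine)

#for example, labelsToMask(atlasImg, [17]) would extract FreeSurfer label 17 (Left-Hippocampus),
#assuming that label is present in the parcellation loaded below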

Note: It’s possible to modify anatomically defined ROIs in the same fashion as we modified planar and spherical ROIs previously. We’ll save that for a more advanced lesson though.

After observing this, we’ll move on to consider how to use these tools.

#establish path to the atlas/parcellation
atlasPath=os.path.join(gitRepoPath,'exampleData','parc.nii.gz')
#load it as an object
atlasImg = nib.load(atlasPath)
atlasData = atlasImg.get_fdata()
#set the print option so it isn't printing in scientific notation
np.set_printoptions(suppress=True)
#condense to unique values
uniqueAtlasEntries=np.unique(atlasData).astype(int)

import pandas as pd
FSTablePath=os.path.join(gitRepoPath,'exampleData','FreesurferLookup.csv')
#read table using pandas
FSTable=pd.read_csv(FSTablePath)
#create a boolean vector for the indexes which are in uniqueAtlasEntries
currentIndexesBool=FSTable['#No.'].isin(uniqueAtlasEntries)
#create new data frame with the relevant entries
currentParcellationEntries=FSTable.loc[currentIndexesBool]

dropDownList=list(zip(currentParcellationEntries['LabelName:'].to_list(), currentParcellationEntries['#No.'].to_list()))



def rotateAndPlotWrapper(roiNum,xCoord,yCoord,zCoord):
    from nilearn import plotting
    import nibabel as nib
    import numpy as np
    
    anatomicalROI=WMA_pyFuncs.multiROIrequestToMask(atlasImg,roiNum)
    
    %matplotlib inline
    plotting.plot_roi(roi_img=WMA_pyFuncs.alignROItoReference(anatomicalROI,t1img), bg_img=t1img, cut_coords=[xCoord,yCoord,zCoord])
    
from ipywidgets import Dropdown

interact(rotateAndPlotWrapper, \
    roiNum=Dropdown(options=dropDownList, value=2, description="anatomicalLabel"), \
    xCoord=IntSlider(min=np.min(convertedBoundCoords[:,0].astype(int)), max=np.max(convertedBoundCoords[:,0].astype(int)), step=1,continuous_update=False),  \
    yCoord=IntSlider(min=np.min(convertedBoundCoords[:,1].astype(int)), max=np.max(convertedBoundCoords[:,1].astype(int)), step=1,continuous_update=False), \
    zCoord=IntSlider(min=np.min(convertedBoundCoords[:,2].astype(int)), max=np.max(convertedBoundCoords[:,2].astype(int)), step=1,continuous_update=False))