nsf

annotate grant.txt @ 114:4f325b4bfcb4

add postdoc
author bshanks@bshanks-salk.dyndns.org
date Fri Jul 03 16:50:47 2009 -0700
parents 90b0ccb6c7f1
children 94284c1ca133

\documentclass[11pt,letterpaper]{article}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagestyle{plain} %%
%%%%%%%%%% EXACT 1in MARGINS %%%%%%% %%
\setlength{\textwidth}{6.5in} %% %%
\setlength{\oddsidemargin}{0in} %% (It is recommended that you %%
\setlength{\evensidemargin}{0in} %% not change these parameters, %%
\setlength{\textheight}{8.5in} %% at the risk of having your %%
\setlength{\topmargin}{0in} %% proposal dismissed on the basis %%
\setlength{\headheight}{0in} %% of incorrect formatting!!!) %%
\setlength{\headsep}{0in} %% %%
\setlength{\footskip}{.5in} %% %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%
\newcommand{\required}[1]{\section*{\hfil #1\hfil}} %%
\renewcommand{\refname}{\hfil References Cited\hfil} %%
\bibliographystyle{plain} %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\usepackage[small,compact]{titlesec}

\usepackage{wrapfig}

%% does this change the font?
\usepackage{helvet}
\renewcommand{\familydefault}{\sfdefault}

%%\renewcommand{\rmdefault}{phv} %% Arial
%%\renewcommand{\sfdefault}{phv} %% Arial

%%\usepackage[T1]{fontenc}
%%\usepackage[scaled]{uarial}

%% \fontencoding{T1}
%% \fontfamily{garamond}

%% \fontseries{m}
%% \fontshape{it}

%% \fontfamily{arial}
%% \fontsize{11}{15}
%% \selectfont

\begin{document}


== Introduction ==

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We will validate these methods by applying them to 46 anatomical areas within the cerebral cortex, using the Allen Mouse Brain Atlas coronal dataset (ABA). %%This gene expression dataset was generated using ISH, and contains over 4,000 genes. For each gene, a digitized 3-D raster of the expression pattern is available: for each gene, the level of expression at each of 51,533 voxels is recorded.

This project has three primary goals:\\

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions.\\

(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression.\\

(3) adapt our tools for the analysis of multi/hyperspectral imaging data from the Geographic Information Systems (GIS) community.\\

We will create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. We will use this dataset to validate the methods developed in (1) and (2). In addition to its use in neuroscience, this dataset will be useful as a sample dataset for the machine learning community.

Although our particular application involves the 3-D spatial distribution of gene expression, the methods we will develop will generalize to any high-dimensional data over points located in a low-dimensional space. In particular, our methods could be applied to the analysis of multi/hyperspectral imaging data, or alternatively to genome-wide sequencing data derived from sets of tissues and disease states.

All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit and the datasets will be published and made freely available for others to use.


%%=== Contents ===
%%First we will discuss background, then related work, then our data sharing plan and broader impacts, next the preliminary results that we have already achieved, and finally our plan to complete the project.

\vspace{0.3cm}\hrule

== Background and related work ==
\vspace{0.3cm}**Cortical anatomy**

The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}.

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.

\vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**

The Allen Mouse Brain Atlas (ABA) data\cite{lein_genome-wide_2007} were produced by performing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular spatial resolution is achieved. Because a single physical slice can only be used to measure a single gene, many different mouse brains were needed in order to measure the expression of many genes.

Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}. An automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are $67 \times 41 \times 58 = 159,326$ voxels, of which 51,533 are in the brain\cite{ng_anatomic_2009}. For each voxel and each gene, the expression energy\cite{lein_genome-wide_2007} within that voxel is made available.



%%The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/}, GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.

%%The ABA is not the only large public spatial gene expression dataset\cite{gong_gene_2003}\cite{visel_genepaint.org:atlas_2004}\cite{carson_digital_2005}\cite{magdaleno_bgem:in_2006}\cite{venkataraman_emage_2008}\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}\footnote{http://mamep.molgen.mpg.de/index.php}\footnote{http://xenbase.org/}\cite{sprague_zebrafish_2006}\footnote{http://aniseed-ibdm.univ-mrs.fr/}\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources}\cite{bell_geishawhole-mount_2004}\cite{tomancak_systematic_2002}\footnote{http://compare.ibdml.univ-mrs.fr/}\cite{smith_mouse_2007}\cite{barrett_ncbi_2007}.

The ABA is not the only large public spatial gene expression dataset\cite{gong_gene_2003}\cite{visel_genepaint.org:atlas_2004}\cite{carson_digital_2005}\cite{magdaleno_bgem:in_2006}\cite{venkataraman_emage_2008}\cite{bell_geishawhole-mount_2004}\cite{tomancak_systematic_2002}\cite{smith_mouse_2007}\cite{barrett_ncbi_2007}. However, most of the other resources, with the exceptions of GenePaint\cite{visel_genepaint.org:atlas_2004} and EMAGE\cite{venkataraman_emage_2008}, have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.


%%, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.

%% \footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)}

The remainder of the background section is divided into three parts, one for each major goal.


\vspace{0.3cm}
=== Goal 1, From Areas to Genes: Given a map of regions, find genes that mark those regions ===

\vspace{0.3cm}**Machine learning terminology: classifiers** The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

%% then instead of saying that we are using gene expression to find the locations of the regions,

%%If we define the regions so that they cover the entire anatomical structure to be divided, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region).

%%Therefore, an understanding of the relationship between the combination of their expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).

If we define the regions so that they cover the entire anatomical structure to be subdivided, and restrict ourselves to looking at one voxel at a time, we may say that we are using the gene expression in each voxel to assign that voxel to the proper area. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).

%% The construction of the classifier is called __training__ (also __learning__), and

Our goal is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called __training data__. In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consist of a set of instances (voxels) for which the labels (regions) are known.
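As a toy sketch of this supervised setup (hypothetical numbers, and a deliberately simple nearest-centroid rule standing in for whatever learning method is ultimately used, not a description of our proposed algorithm):

```python
# Toy illustration of the supervised-learning setup described above
# (hypothetical data, not the ABA pipeline): each instance is a voxel's
# gene expression vector, each label is a region, and the trained
# classifier maps new voxels to regions.

def train_nearest_centroid(instances, labels):
    """Learn one mean expression profile (centroid) per region."""
    sums, counts = {}, {}
    for x, y in zip(instances, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(centroids, x):
    """Assign a voxel to the region with the closest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

# Hypothetical training voxels: 2 genes, 2 regions.
train_x = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
train_y = ["A", "A", "B", "B"]
model = train_nearest_centroid(train_x, train_y)
print(classify(model, [0.95, 0.15]))  # prints "A"
```

Here each instance carries only two features; for the ABA data an instance would carry thousands of gene expression levels, which is precisely why feature selection, discussed next, is needed.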

Each gene expression level is called a __feature__, and the selection of which genes\footnote{Strictly speaking, the features are gene expression levels, but we'll call them genes.} to look at is called __feature selection__. Feature selection is one component of the task of learning a classifier. %%Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
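Such a greedy procedure can be sketched as follows; the set-level score function here is a hypothetical stand-in for whichever scoring measure is actually used:

```python
# Sketch of greedy ("stepwise") forward feature selection: repeatedly add
# the gene whose inclusion most raises a set-level score, stopping when no
# remaining gene improves it. The score below is hypothetical; in practice
# it would measure how well the selected genes mark the target region.

def greedy_select(genes, score, k):
    selected = []
    while len(selected) < k:
        best = max((g for g in genes if g not in selected),
                   key=lambda g: score(selected + [g]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining gene improves the score
        selected.append(best)
    return selected

# Hypothetical scores: individual utilities plus a mild size penalty.
utility = {"g1": 3.0, "g2": 1.0, "g3": 2.0}
score = lambda s: sum(utility[g] for g in s) - 0.5 * max(0, len(s) - 2)
print(greedy_select(["g1", "g2", "g3"], score, 2))  # -> ['g1', 'g3']
```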

%%Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.

Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score. If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used, then we say it is a __pointwise scoring method__.
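The pointwise/local distinction can be made concrete on a 1-D row of voxels (hypothetical numbers; the sub-score and aggregation rules below are illustrative choices, not our proposed ones):

```python
# A pointwise sub-score uses only the voxel itself; a local sub-score also
# consults neighbors (here, by averaging the pointwise sub-score over the
# voxel and its immediate neighbors). Sub-scores are then aggregated (here,
# a mean) into one score per gene.

def pointwise_subscore(expr, inside, i):
    # reward high expression inside the region and low expression outside
    return expr[i] if inside[i] else 1.0 - expr[i]

def local_subscore(expr, inside, i):
    # average the pointwise sub-score over the voxel and its neighbors
    idx = [j for j in (i - 1, i, i + 1) if 0 <= j < len(expr)]
    return sum(pointwise_subscore(expr, inside, j) for j in idx) / len(idx)

def gene_score(expr, inside, subscore):
    # aggregate the per-voxel sub-scores into a final score for the gene
    return sum(subscore(expr, inside, i) for i in range(len(expr))) / len(expr)

expr = [0.9, 0.8, 0.1, 0.2]          # one gene's expression along a row
inside = [True, True, False, False]  # whether each voxel is in the region
print(gene_score(expr, inside, pointwise_subscore))
print(gene_score(expr, inside, local_subscore))
```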

%%Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are "wrong" in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.


=== Our Strategy for Goal 1 ===

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.


\vspace{0.3cm}**Principle 1: Combinatorial gene expression**

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the ABA coronal dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results, Figure \ref{MOcombo}). Therefore, each instance should contain multiple features (genes).


\vspace{0.3cm}**Principle 2: Only look at combinations of small numbers of genes**

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. Why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Therefore, we must select only a few genes as features.

%%When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better that it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.


\vspace{0.3cm}**Principle 3: Use geometry in feature selection**

When doing feature selection with score-based methods, the simplest approach would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results, Figure \ref{AUDgeometry} for evidence of the complementary nature of pointwise and local scoring methods.



\vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**


There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. %%Therefore, when possible, the instances should represent pixels, not voxels.




\vspace{0.3cm}
=== Goal 2, From Genes to Areas: Given gene expression data, discover a map of regions ===


\begin{wrapfigure}{L}{0.4\textwidth}\centering
%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps}
%%\\
%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps}
%%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}

\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}
\\
\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}

\caption{Top row: genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
\label{SScorrLr}\end{wrapfigure}


\vspace{0.3cm}**Machine learning terminology: clustering**

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that can be done with such a dataset is to group similar instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called __clustering__ or __cluster analysis__.

The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
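A minimal sketch of this clustering view (hypothetical two-gene data and plain k-means; a real analysis would use thousands of genes and would likely also enforce spatial contiguity):

```python
# Toy k-means illustration of Goal 2: voxels with similar expression
# profiles are grouped, and each resulting cluster is a candidate region.
# Data and cluster count are hypothetical.

import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups

# Two hypothetical "regions" with distinct two-gene profiles.
voxels = [[1.0, 0.0], [0.9, 0.1], [1.1, 0.0],
          [0.0, 1.0], [0.1, 0.9], [0.0, 1.1]]
clusters = kmeans(voxels, k=2)
print(sorted(len(g) for g in clusters))  # -> [3, 3]
```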

%%It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.

It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
bshanks@112 191
bshanks@112 192
bshanks@112 193 \vspace{0.3cm}**Similarity scores**
bshanks@112 194 A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Goal 1) and scoring methods for similarity.
bshanks@112 195
bshanks@112 196
bshanks@112 197
bshanks@112 198 %%\vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
bshanks@112 199 %%We have shown that Goal 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters; voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
bshanks@112 200
bshanks@112 201 %%%%Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Goal 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
bshanks@112 202
bshanks@112 203 %%Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Goal 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
bshanks@112 204
bshanks@112 205
bshanks@112 206
bshanks@112 207 \vspace{0.3cm}**Dimensionality reduction**
bshanks@112 208 In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.
bshanks@112 209
bshanks@112 210 %% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels.
bshanks@112 211
bshanks@112 212 Unlike Goal 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
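As an illustration, one common feature-extraction technique is principal components analysis (one of several methods the project will compare). A minimal sketch, using random numbers as a hypothetical stand-in for the gene expression data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-in for the data: 500 surface pixels x 2000 genes.
expression = rng.normal(size=(500, 2000))

# Extract 20 summary features; each reduced feature is a linear
# combination of all of the original gene expression levels.
pca = PCA(n_components=20)
reduced = pca.fit_transform(expression)
print(reduced.shape)  # (500, 20)
```

Note that, as discussed above, none of the 20 reduced features corresponds to a single gene; each is a function of the whole expression vector.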
bshanks@112 213
bshanks@112 214 %%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering.
bshanks@112 215
bshanks@112 216 %%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
bshanks@112 217
bshanks@112 218
bshanks@112 219 \vspace{0.3cm}**Clustering genes rather than voxels**
bshanks@112 220 Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
bshanks@112 221
bshanks@112 222 Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
bshanks@112 223
bshanks@112 224 Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the regions shared by the most genes as the final clusters. In Preliminary Results, Figure \ref{geneClusters}, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.
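A minimal sketch of this procedure, with randomly generated stand-in data and k-means as one possible gene-clustering method (the project will evaluate alternatives):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_genes, n_pixels, n_clusters = 200, 400, 8
# Hypothetical data: each row is one gene's flattened expression image.
gene_images = rng.random((n_genes, n_pixels))

# Step 1: cluster together genes whose expression images are similar.
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(gene_images)

# Step 2: average the genes in each cluster into a prototype image, then
# threshold the prototype to obtain a candidate region for that cluster.
prototypes = np.array([gene_images[km.labels_ == k].mean(axis=0)
                       for k in range(n_clusters)])
regions = prototypes > prototypes.mean(axis=1, keepdims=True)
print(regions.shape)  # (8, 400): one candidate region per gene cluster
```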
bshanks@112 225
bshanks@112 226 %% Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in Goal 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.}.
bshanks@112 227
bshanks@112 228 %%The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
bshanks@112 229
bshanks@112 230
bshanks@112 231
bshanks@112 232
bshanks@112 233
bshanks@112 234
bshanks@112 235
bshanks@112 236
bshanks@112 237
bshanks@112 238 === Goal 3: interoperability with multi/hyperspectral imaging analysis software ===
bshanks@112 239 %%Whereas a typical color image associated each pixel with a vector of three values, multispectral and hyperspectral images associate each pixel with a vector containing many values. The different positions in the vector correspond to different bands of electromagnetic wavelengths.
bshanks@112 240 A typical color image associates each pixel with a vector of three values. Multispectral and hyperspectral images, however, associate each pixel with a vector containing many values. The different positions in the vector correspond to different bands of electromagnetic wavelengths\footnote{In hyperspectral imaging, the bands are adjacent, and the number of different bands is larger. For conciseness, we discuss only hyperspectral imaging, but our methods are also well suited to multispectral imaging with many bands.}.
bshanks@112 241
bshanks@112 242 %%Typically multispectral imaging captures a few broad bands of wavelengths, whereas hyperspectral imaging captures a large number of adjacent narrow bands. Some analysis techniques for hyperspectral imaging, especially preprocessing and calibration techniques, make use of the information that the different values captured at each pixel represent adjacent bands of wavelengths of light, which can be combined to make a spectrum. Other analysis techniques ignore the interpretation of the values measured, and their relationship to each other within the electromagnetic spectrum, instead treating them blindly as completely separate features.
bshanks@112 243
bshanks@112 244 Some analysis techniques for hyperspectral imaging, especially preprocessing and calibration techniques, make use of the information that the different values captured at each pixel represent adjacent wavelengths of light, which can be combined to make a spectrum. Other analysis techniques ignore the interpretation of the values measured, and their relationship to each other within the electromagnetic spectrum, instead treating them blindly as completely separate features.
bshanks@112 245
bshanks@112 246 With both hyperspectral imaging and spatial gene expression data, each location in space is associated with more than three numerical feature values. The analysis of hyperspectral images can involve supervised classification and unsupervised learning. Often hyperspectral images come from satellites looking at the Earth, and it is desirable to classify what sort of objects occupy a given area of land. Sometimes detailed training data is not available, in which case it is desirable at least to cluster together those regions of land which contain similar objects.
bshanks@112 247
bshanks@112 248 %% The analogy is perhaps closer with hyperspectral imagining, in which the number of feature values tends to be large, which is the case with spatial gene expression data.
bshanks@112 249
bshanks@112 250
bshanks@112 251
bshanks@112 252 %%These tasks are similar to our goals for the analysis of spatial gene expression data. Starting with a satellite image and a list of known terrain types, and classifying pixels of the image according to which terrain it represents, is similar to starting with a spatial gene expression dataset and a set of known anatomical regions, and classifying locations according to which region they are within. Starting with a satellite image and clustering pixels together into groupings that represent regions of similar terrain is much like starting with a spatial gene expression dataset and clustering locations together into hypothetical anatomical regions.
bshanks@112 253
bshanks@112 254 We believe that it may be possible for these two different fields to share some common computational tools. To this end, we intend to make use of existing hyperspectral imaging software when possible, and to develop new software in such a way as to make it easy to use for the purpose of hyperspectral image analysis, as well as for our primary purpose of spatial gene expression data analysis.
bshanks@112 255
bshanks@112 256
bshanks@112 257 %% We now turn to efforts to find marker genes using spatial gene expression data using automated methods.
bshanks@112 258
bshanks@112 259
bshanks@112 260 == Related work ==
bshanks@112 261
bshanks@112 262 \begin{wrapfigure}{L}{0.25\textwidth}\centering
bshanks@112 263 \includegraphics[scale=.27]{holeExample_2682_SS_jet.eps}
bshanks@112 264 \caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
bshanks@112 265 \label{hole}\end{wrapfigure}
bshanks@112 266
bshanks@112 267 %%As noted above, there has been much work in the machine learning literature on both supervised and unsupervised learning and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Results) may be necessary in order to achieve the best results in this application. So, the project involves more than the blind application of existing machine learning analysis programs to a new dataset.
bshanks@112 268
bshanks@112 269 As noted above, the GIS community has developed tools for supervised classification and unsupervised clustering in the context of the analysis of hyperspectral imaging data. One tool is Spectral Python\footnote{http://spectralpython.sourceforge.net/}. Spectral Python implements various supervised and unsupervised classification methods, as well as utility functions for loading, viewing, and saving spatial data. Although Spectral Python has feature extraction methods (such as principal components analysis) which create a small set of new features computed based on the original features, it does not have feature selection methods, that is, methods to select a small subset out of the original features (although feature selection in hyperspectral imaging has been investigated by others\cite{serpico_new_2001}). %%We intend to extend Spectral Python's repertoire of supervised and unsupervised machine learning methods, as well as to add feature selection methods.
bshanks@112 270
bshanks@112 271 There is a substantial body of work on the analysis of gene expression data. Most of this concerns gene expression data which are not fundamentally spatial\footnote{By "__fundamentally__ spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.}. Here we review only that work which concerns the automated analysis of spatial gene expression data with respect to anatomy.
bshanks@112 272
bshanks@103 273
bshanks@103 274 %%GeneAtlas\cite{carson_digital_2005} allows the user to construct a search query by freely demarcating one or two 2-D regions on sagittal slices, and then to specify either the strength of expression or the name of another gene whose expression pattern is to be matched.
bshanks@103 275
bshanks@103 276 %% \footnote{For the similiarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel (actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity) whose expression is within four discretization levels. EMAGE uses Jaccard similarity (the number of true pixels in the intersection of the two images, divided by the number of pixels in their union).}
bshanks@103 277 %% \cite{lee_high-resolution_2007} mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene's spatial region.
bshanks@103 278
bshanks@112 279 Relating to Goal 1, GeneAtlas\cite{carson_digital_2005} and EMAGE\cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert.
bshanks@112 280
bshanks@112 281 Relating to Goal 2, EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering. %% with un-centered correlation as the similarity score.
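For illustration, hierarchical complete-linkage clustering of genes can be sketched with SciPy. The data here are random stand-ins, and correlation distance is one plausible choice of dissimilarity:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
genes = rng.random((50, 100))  # hypothetical: 50 genes x 100 pixels

# Complete linkage: the distance between two clusters is the largest
# pairwise distance between their members.
dists = pdist(genes, metric='correlation')
tree = linkage(dists, method='complete')

# Cut the hierarchy into at most 5 flat clusters of genes.
labels = fcluster(tree, t=5, criterion='maxclust')
print(len(labels))  # 50: one cluster label per gene
```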
bshanks@103 282
bshanks@103 283 \cite{ng_anatomic_2009} describes AGEA, "Anatomic Gene Expression
bshanks@103 284 Atlas". AGEA has three
bshanks@103 285 components. **Gene Finder**: The user selects a seed voxel and the system (1) chooses a
bshanks@103 286 cluster which includes the seed voxel, (2) yields a list of genes
bshanks@103 287 which are overexpressed in that cluster. **Correlation**: The user selects a seed voxel and the system
bshanks@103 288 then shows the user how much correlation there is between the gene
bshanks@112 289 expression profile of the seed voxel and every other voxel. **Clusters**: AGEA includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. AGEA has been applied to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, that analysis neither looks for genes marking cortical areas, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}.
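The Correlation component described above can be sketched as follows; the voxel-by-gene matrix here is a hypothetical random stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
expr = rng.random((300, 150))  # hypothetical: 300 voxels x 150 genes
seed = 42

# Pearson correlation between the seed voxel's gene expression
# profile and the profile of every voxel (including the seed itself).
corr = np.array([np.corrcoef(expr[seed], expr[v])[0, 1]
                 for v in range(expr.shape[0])])
print(corr.shape)  # (300,); corr[seed] is 1 by construction
```

A map of these correlation values over voxels is what the user is shown.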
bshanks@103 290
bshanks@103 291 %% (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot the find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
bshanks@103 292
bshanks@103 293 %% Most of the projects which have been discussed have been done by the same groups that develop the public datasets. Although these projects make their algorithms available for use on their own website, none of them have released an open-source software toolkit; instead, users are restricted to using the provided algorithms only on their own dataset.
bshanks@103 294
bshanks@112 295
bshanks@112 296 \begin{wrapfigure}{L}{0.4\textwidth}\centering
bshanks@85 297 %%\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_3_420_jet.eps}
bshanks@69 298 %%
bshanks@85 299 %%\includegraphics[scale=.27]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_2_420_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_3_2072_jet.eps}
bshanks@69 300 %%\caption{The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Aph1a$, $Ptk7$, $Aph1a$ again, and $Lepr$}
bshanks@85 301 \includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}
bshanks@60 302 \\
bshanks@85 303 \includegraphics[scale=.27]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_2_420_jet.eps}
bshanks@69 304 \caption{The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Ptk7$, and $Aph1a$.}
bshanks@69 305 \label{AUDgeometry}\end{wrapfigure}
bshanks@38 306
bshanks@103 307
bshanks@112 308 \cite{chin_genome-scale_2007} looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test to determine whether the mean expression level of a gene is significantly higher in the target region. This relates to our Goal 1. \cite{chin_genome-scale_2007} also clusters genes, relating to our Goal 2. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
bshanks@112 309
bshanks@112 310 These related works differ from our strategy for Goal 1 in at least three ways. First, they find only single genes, whereas we will also look for combinations of genes. Second, they usually can only use overexpression as a marker, whereas we will also search for underexpression. Third, they use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Results). Figures \ref{MOcombo}, \ref{hole}, and \ref{AUDgeometry} in the Preliminary Results section contain evidence that each of our three choices is the right one.
bshanks@112 311
bshanks@112 312 \cite{hemert_matching_2008} describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.
bshanks@112 313 %%Their match score is Jaccard similarity.
bshanks@112 314 They apply their technique for finding combinations of marker genes to the task of clustering genes around a "seed gene". %%They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if this voxel is expressed in by any gene, then that voxel is probably also expressed in by the same gene". This could be useful as part of a procedure for clustering voxels.
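A minimal sketch of combining thresholded (boolean) expression images with a logical operator and scoring the result against a target region. The 0.5 thresholds, the AND operator, and the Jaccard overlap score are illustrative assumptions; an evolutionary search would explore many such combinations:

```python
import numpy as np

rng = np.random.default_rng(4)
gene_a = rng.random((20, 20))        # hypothetical expression images
gene_b = rng.random((20, 20))
target = rng.random((20, 20)) > 0.5  # hypothetical target region

# Threshold each image into a boolean mask, then combine the masks
# with a logical operator.
combo = (gene_a > 0.5) & (gene_b > 0.5)

# Score the combination against the target with Jaccard similarity:
# |intersection| / |union|.
score = (combo & target).sum() / (combo | target).sum()
print(0.0 <= score <= 1.0)  # True
```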
bshanks@112 315
bshanks@112 316
bshanks@112 317 Relating to our Goal 2, some researchers have attempted to parcellate the cortex on the basis of non-gene expression data. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then cluster surface pixels according to the similarity of these features.
bshanks@112 318
bshanks@112 319
bshanks@112 320
bshanks@112 321 %%Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
bshanks@112 322
bshanks@112 323 \cite{thompson_genomic_2008} describes an analysis of the anatomy of
bshanks@112 324 the hippocampus using the ABA dataset. In addition to manual analysis,
bshanks@112 325 two clustering methods were employed, a modified Non-negative Matrix
bshanks@112 326 Factorization (NNMF), and a hierarchical bifurcation clustering scheme using correlation as similarity. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset, and while the results are promising, other methods may perform as well or better (see Preliminary Results, Figure \ref{dimReduc}).
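A sketch of "vanilla" NNMF using scikit-learn, with random nonnegative data standing in for the voxel-by-gene matrix. Assigning each voxel to its highest-loading component is one simple way to read a clustering out of the factorization:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
X = rng.random((400, 100))  # hypothetical nonnegative voxel x gene matrix

# Factor X ~= W @ H with W, H >= 0; each column of W is a spatial
# component, and each voxel can be assigned to the component on
# which it loads most heavily.
model = NMF(n_components=6, init='nndsvd', max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_
labels = W.argmax(axis=1)
print(W.shape, H.shape, len(labels))  # (400, 6) (6, 100) 400
```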
bshanks@112 327
bshanks@112 328 %% \footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.} and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Results, Figure \ref{dimReduc}).
bshanks@112 329
bshanks@112 330 %% In addition, this paper described a visual screening of the data, specifically, a visual analysis of 6000 genes with the primary purpose of observing how the spatial pattern of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations.
bshanks@112 331
bshanks@112 332 %% \cite{thompson_genomic_2008} reports that both mNNMF and hierarchical mNNMF clustering were useful, and that hierarchical recursive bifurcation gave similar results.
bshanks@112 333
bshanks@112 334
bshanks@112 335
bshanks@112 336
bshanks@112 337 %%\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
bshanks@112 338
bshanks@112 339 Comparing previous work with our Goal 1, there has been fruitful work on finding marker genes, but only one of the projects explored combinations of marker genes, and none of them compared the results obtained by using different algorithms or scoring methods. Comparing previous work with Goal 2, although some projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, or tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, or used co-clustering algorithms.
bshanks@112 340
bshanks@112 341 %%The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression.
bshanks@112 342
bshanks@112 343 In summary, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
bshanks@112 344
bshanks@112 345 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for, or reproduce the layout of, cortical areas), which will provide a solid basis for comparing different methods.
bshanks@112 346
bshanks@112 347
bshanks@112 348
bshanks@112 349
bshanks@112 350
bshanks@112 351
bshanks@112 352
bshanks@112 353
bshanks@112 354
bshanks@112 355
bshanks@112 356
bshanks@112 357
bshanks@112 358
bshanks@112 359
bshanks@112 360 \vspace{0.3cm}\hrule
bshanks@112 361
bshanks@112 362 == Data sharing plan ==
bshanks@112 363
bshanks@112 364 \begin{wrapfigure}{L}{0.4\textwidth}\centering
bshanks@85 365 \includegraphics[scale=.27]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.27]{MO_vs_Mtif2_jet.eps}
bshanks@85 366
bshanks@85 367 \includegraphics[scale=.27]{MO_vs_Wwc1_plus_Mtif2_jet.eps}
bshanks@69 368 \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).}
bshanks@69 369 \label{MOcombo}\end{wrapfigure}
bshanks@69 370
bshanks@112 371 We are enthusiastic about the sharing of methods and data, and at the conclusion of the project, we will make all of our data and computer source code publicly available, either in supplemental attachments to publications, or on a website. The source code will be released under the GNU General Public License. We intend to include a software program which, when run, will take as input the Allen Brain Atlas raw data, and produce as output all numbers and charts found in publications resulting from the project. Source code to be released will include extensions to Caret\cite{van_essen_integrated_2001}, an existing open-source scientific imaging program, and to Spectral Python. Data to be released will include the 2-D "flat map" dataset. This dataset will be submitted to a machine learning dataset repository.
bshanks@112 372
bshanks@112 373 %% Our goal is that replicating our results, or applying the methods we develop to other targets, will be quick and easy for other investigators.
bshanks@112 374
bshanks@112 375
bshanks@112 376
bshanks@112 377
bshanks@112 378 == Broader impacts ==
bshanks@112 379
bshanks@112 380 In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. %%The results of the project will support the development of new ways to selectively target cortical areas, and it will support the development of a method for identifying the cortical areal boundaries present in small tissue samples.
bshanks@112 381
bshanks@112 382 The method developed in Goal 1 will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
bshanks@112 383
bshanks@112 384 The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can reveal many of the areal boundaries at once.
bshanks@112 385
bshanks@112 386 %%This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
bshanks@112 387
bshanks@112 388 %% Since the number of classes of stains is small compared to the number of genes,
bshanks@112 389 The method developed in Goal 2 will provide a genoarchitectonic viewpoint that will contribute to the creation of a better cortical map. %%The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
bshanks@112 390
bshanks@112 391
bshanks@112 392 The methods we will develop will be applicable to other datasets beyond
bshanks@112 393 the brain, and even to datasets outside of biology. The software we develop will be useful for the analysis of hyperspectral images. Our project will draw attention to this area of overlap between neuroscience and GIS, and may lead to future collaborations between these two fields. The cortical dataset that we produce will be useful in the machine learning community as a sample dataset that new algorithms can be tested against. The availability of this sample dataset to the machine learning community may lead to more interest in the design of machine learning algorithms to analyze spatial gene expression.
bshanks@112 394
bshanks@112 395 %%, which would benefit neuroscience down the road as the efforts of more machine learning researchers are focused on a problem of biological interest.
bshanks@112 396
bshanks@112 397
bshanks@112 398
bshanks@112 399
bshanks@112 400 \vspace{0.3cm}\hrule
bshanks@112 401
bshanks@112 402 == Preliminary Results ==
bshanks@112 403
bshanks@112 404
bshanks@112 405
bshanks@112 406 === Format conversion between SEV, MATLAB, NIFTI ===
bshanks@112 407 We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
bshanks@112 408
bshanks@112 409
bshanks@112 410 === Flatmap of cortex ===
bshanks@112 411
bshanks@112 412
bshanks@112 413 We downloaded the ABA data and selected only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides, and converted this region data into MATLAB format.
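The volume-to-surface averaging step can be sketched as follows; the voxel-to-node assignment here is a random stand-in for the assignment Caret computes from the mesh geometry:

```python
import numpy as np

rng = np.random.default_rng(6)
n_voxels, n_nodes = 1000, 200
expr = rng.random(n_voxels)  # one gene's expression level per voxel
# Hypothetical assignment of each cortical voxel to the mesh node
# "above" it (in reality derived from the mesh geometry).
node_of_voxel = rng.integers(0, n_nodes, size=n_voxels)

# Average the expression of the voxels underneath each mesh node.
sums = np.bincount(node_of_voxel, weights=expr, minlength=n_nodes)
counts = np.bincount(node_of_voxel, minlength=n_nodes)
node_expr = sums / np.maximum(counts, 1)
print(node_expr.shape)  # (200,): one averaged value per mesh node
```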
bshanks@112 414
bshanks@112 415 %%We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface and then into region data in MATLAB format.
bshanks@112 416
bshanks@112 417 %% We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values.
bshanks@112 418 %% We projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the region data into MATLAB format.
bshanks@112 419
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel, and, for each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternatively, they can be thought of as images which can be displayed on the flatmapped surface.
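As a concrete sketch of this layout (the dimensions and values below are made up for illustration; the real grid and gene count are far larger), the registered matrices can be held as arrays:

```python
import numpy as np

# Hypothetical dimensions: a 10x12 grid of surface pixels and 3 genes
# (the real data have roughly 4000 genes).
n_rows, n_cols, n_genes = 10, 12, 3

# One 2-D matrix of regional labels (an integer code per surface pixel).
region_labels = np.zeros((n_rows, n_cols), dtype=int)
region_labels[2:6, 3:9] = 1  # a toy "area 1"

# One 2-D expression matrix per gene, stacked along a third axis;
# all matrices are in registration (same pixel grid).
expression = np.random.default_rng(0).random((n_genes, n_rows, n_cols))

# A target area as a boolean mask, and a gene image as a scalar field:
target_mask = (region_labels == 1)
gene_image = expression[0]
print(target_mask.shape, gene_image.shape)
```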
bshanks@112 421
bshanks@112 422 %% We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation.
bshanks@112 423
bshanks@112 424 %%To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
bshanks@112 425
bshanks@112 426
bshanks@112 427
bshanks@112 428
bshanks@112 429
bshanks@112 430
bshanks@112 431
bshanks@112 432 === Feature selection and scoring methods ===
bshanks@112 433
bshanks@112 434
bshanks@112 435
bshanks@112 436 \vspace{0.3cm}**Underexpression of a gene can serve as a marker**
bshanks@112 437 Underexpression of a gene can sometimes serve as a marker. For example, see Figure \ref{hole}.
bshanks@112 438
bshanks@112 439
bshanks@112 440
bshanks@112 441
bshanks@112 442 \vspace{0.3cm}**Correlation**
bshanks@112 443 Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
bshanks@112 444
bshanks@112 445 We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
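A minimal sketch of this correlation score, using a toy gene image and area mask (the data and dimensions here are illustrative, not from the ABA):

```python
import numpy as np

def correlation_score(gene_image, target_mask):
    """Pearson correlation between a gene's expression image and a
    boolean area mask, both flattened over surface pixels."""
    x = gene_image.ravel().astype(float)
    y = target_mask.ravel().astype(float)
    return np.corrcoef(x, y)[0, 1]

# Toy example: a gene expressed exactly inside the target area
# correlates perfectly with the mask.
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
gene = mask.astype(float)
print(round(correlation_score(gene, mask), 3))  # → 1.0
```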
bshanks@112 446
bshanks@112 447 %%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.
bshanks@112 448
bshanks@112 449 %%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
bshanks@112 450
bshanks@112 451
bshanks@112 452
bshanks@112 453 \vspace{0.3cm}**Conditional entropy**
bshanks@112 454 %%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
bshanks@112 455
bshanks@112 456 %%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations.
bshanks@112 457
bshanks@112 458 %%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
bshanks@112 459
bshanks@112 460 For each region, we created and ran a forward stepwise procedure which attempted to find pairs of genes such that the conditional entropy of the target area's boolean mask, conditioned upon the gene pair's thresholded expression levels, is minimized.
bshanks@112 461
bshanks@112 462 This finds pairs of genes which are most informative (at least at these threshold levels) relative to the question, "Is this surface pixel a member of the target area?". The advantage over linear methods such as logistic regression is that this takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
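The following sketch, on synthetic boolean masks, illustrates the conditional-entropy computation and the XOR case just described (a toy illustration; our actual procedure thresholds real expression data at several levels):

```python
import numpy as np
from collections import Counter

def conditional_entropy(target, feats):
    """H(target | feats) in bits, for a boolean target mask conditioned
    on a list of boolean feature masks, all flattened over pixels."""
    target = target.ravel()
    feats = [f.ravel() for f in feats]
    n = target.size
    joint = Counter(zip(target, *feats))   # counts of (target, f1, f2, ...)
    cond = Counter(zip(*feats))            # counts of (f1, f2, ...)
    h = 0.0
    for key, c in joint.items():
        p_joint = c / n
        p_cond = c / cond[key[1:]]         # P(target | features)
        h -= p_joint * np.log2(p_cond)
    return h

# XOR example: the target is the XOR of two gene masks.  Either gene
# alone leaves the target fully uncertain (about 1 bit); the pair
# together determines it exactly (0 bits).
rng = np.random.default_rng(0)
g1 = rng.random((16, 16)) > 0.5
g2 = rng.random((16, 16)) > 0.5
target = g1 ^ g2
print(conditional_entropy(target, [g1]))       # close to 1 bit
print(conditional_entropy(target, [g1, g2]))   # 0 bits
```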
bshanks@112 463
bshanks@112 464
bshanks@112 465
bshanks@112 466
bshanks@112 467 \vspace{0.3cm}**Gradient similarity**
We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method which detects when a gene's expression pattern has a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is:
bshanks@112 469
bshanks@112 470 %%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction.
bshanks@112 471
bshanks@112 472
bshanks@112 473
bshanks@112 474 \begin{align*}
\sum_{pixel \in pixels} \cos(\angle \nabla_1 - \angle \nabla_2) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2}
bshanks@112 476 \end{align*}
bshanks@112 477
bshanks@112 478 where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$.
bshanks@112 479
bshanks@112 480 The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
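The formula above can be sketched directly with numpy's finite-difference gradients (a toy illustration on synthetic images; the actual implementation details, such as how gradients are estimated on the flatmapped surface, may differ):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two images (scalar fields on the
    flatmapped surface), following the formula in the text."""
    gy1, gx1 = np.gradient(img1)          # gradients along rows, columns
    gy2, gx2 = np.gradient(img2)
    ang1 = np.arctan2(gy1, gx1)           # gradient angle per pixel
    ang2 = np.arctan2(gy2, gx2)
    mag1 = np.hypot(gx1, gy1)             # gradient magnitude per pixel
    mag2 = np.hypot(gx2, gy2)
    return np.sum(np.cos(ang1 - ang2)
                  * (mag1 + mag2) / 2
                  * (img1 + img2) / 2)

# A pattern scores higher against itself (borders coincide, gradients
# aligned) than against a shifted copy with a displaced border.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
shifted = np.roll(img, 6, axis=1)
print(gradient_similarity(img, img) > gradient_similarity(img, shifted))
```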
bshanks@112 481
bshanks@112 482
bshanks@112 483 \begin{wrapfigure}{L}{0.4\textwidth}\centering
bshanks@85 484 \includegraphics[scale=.27]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.27]{singlegene_example_371_Aldh1a2_SSs_jet.eps}
bshanks@85 485 \includegraphics[scale=.27]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.27]{singlegene_example_3310_Slco1a5_FRP_jet.eps}
bshanks@85 486 \includegraphics[scale=.27]{singlegene_example_3709_Tshz2_RSP_jet.eps}\includegraphics[scale=.27]{singlegene_example_3674_Trhr_COApm_jet.eps}
bshanks@85 487 \includegraphics[scale=.27]{singlegene_example_925_Col12a1_ACA+PL+ILA+DP+ORB+MO_jet.eps}\includegraphics[scale=.27]{singlegene_example_1334_Ets1_post_lat_vis_jet.eps}
bshanks@69 488
\caption{From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual group is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are $Pitx2$, $Aldh1a2$, $Ppfibp1$, $Slco1a5$, $Tshz2$, $Trhr$, $Col12a1$, and $Ets1$.}
bshanks@69 490 \label{singleSoFar}\end{wrapfigure}
bshanks@61 491
bshanks@112 492 \vspace{0.3cm}**Gradient similarity provides information complementary to correlation**
bshanks@112 493
bshanks@112 494
bshanks@112 495
bshanks@112 496
bshanks@112 497
bshanks@112 498 To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area.
bshanks@112 499
bshanks@112 500 %%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
bshanks@112 501
bshanks@112 502 %% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.}
bshanks@112 503
bshanks@112 504 %% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers.
bshanks@112 505
bshanks@112 506 \vspace{0.3cm}**Areas which can be identified by single genes**
bshanks@112 507 Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases.
bshanks@112 508
bshanks@112 509 In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory).
bshanks@112 510
These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevance of our new scoring method, gradient similarity.
bshanks@112 512
bshanks@112 513
bshanks@112 514
bshanks@112 515 \vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas**
bshanks@112 516
In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best-fitting single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex, whereas MO is found only on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two images, we get the lower-left image. This combination captures area MO much better than any single gene.
bshanks@112 518
This demonstrates that our proposal to develop a method for finding combinations of marker genes is both feasible and necessary.
bshanks@112 520
bshanks@112 521 %% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652}
bshanks@112 522 %% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784}
bshanks@112 523
bshanks@112 524 %%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface.
bshanks@112 525
bshanks@112 526 %%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in figure the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene.
bshanks@112 527
bshanks@112 528
bshanks@112 529
bshanks@112 530
bshanks@112 531 %%\vspace{0.3cm}**Feature selection integrated with prediction**
bshanks@112 532 %%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning".
bshanks@112 533
bshanks@112 534
bshanks@112 535 === Multivariate supervised learning ===
bshanks@112 536
bshanks@112 537
bshanks@112 538 \vspace{0.3cm}**Forward stepwise logistic regression**
Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
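A minimal sketch of such a stepwise wrapper, on synthetic data with made-up dimensions (the selection criterion here is training accuracy; the criteria and stopping rules in our pilot run may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def forward_stepwise(X, y, n_select):
    """Greedy forward selection: at each step, add the gene whose
    inclusion gives the best training accuracy for logistic regression."""
    chosen = []
    remaining = list(range(X.shape[1]))
    for _ in range(n_select):
        scores = []
        for g in remaining:
            cols = chosen + [g]
            clf = LogisticRegression().fit(X[:, cols], y)
            scores.append((accuracy_score(y, clf.predict(X[:, cols])), g))
        best = max(scores)[1]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Synthetic data: 200 "pixels" by 10 "genes"; areal membership depends
# on genes 3 and 7, so stepwise selection should recover that pair.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(forward_stepwise(X, y, 2))
```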
bshanks@112 540
bshanks@112 541 %%We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
bshanks@112 542
bshanks@112 543
bshanks@112 544 \vspace{0.3cm}**SVM on all genes at once**
bshanks@112 545
In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81\%\footnote{5-fold cross-validation.}. However, as noted above, a classifier that looks at all the genes at once is not as practically useful as a classifier that uses only a few genes.
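A sketch of this all-genes setup on synthetic data (the 81\% figure comes from the real dataset; this toy makes no attempt to reproduce it):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 300 "surface pixels", each with a 50-gene
# expression profile; class membership depends on a few of the genes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# 5-fold cross-validated accuracy, as in the text.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(scores.mean())
```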
bshanks@112 547
bshanks@112 548 %% This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy.
bshanks@112 549
bshanks@112 550
bshanks@112 551
bshanks@112 552 === Data-driven redrawing of the cortical map ===
bshanks@98 553
bshanks@103 554
bshanks@103 555
bshanks@104 556
bshanks@104 557
bshanks@104 558
We have applied the following dimensionality reduction algorithms to the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, and Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}.
bshanks@104 560
bshanks@104 561
bshanks@104 562
After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the bottom row of Figure \ref{dimReduc}. For comparison, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
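The reduce-then-cluster pipeline can be sketched as follows (toy random data; on the real data each row would be a surface pixel's roughly 4000-gene profile, and the dimension and cluster counts would be tuned):

```python
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

# Toy stand-in for the pixel-by-gene matrix: rows are surface pixels,
# columns are genes (values non-negative, as NNMF requires).
rng = np.random.default_rng(0)
X = rng.random((400, 30))

# Reduce each pixel's gene expression profile, then cluster the pixels
# on the reduced coordinates.
for reducer in (PCA(n_components=7), NMF(n_components=7, max_iter=500)):
    Z = reducer.fit_transform(X)
    labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(Z)
    print(type(reducer).__name__, np.unique(labels).size)
```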
bshanks@104 564
bshanks@104 565
bshanks@104 566
bshanks@104 567
bshanks@104 568 \vspace{0.3cm}**Many areas are captured by clusters of genes**
bshanks@104 569 We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
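A sketch of gene clustering followed by prototype averaging, using plain image correlation as a stand-in for gradient similarity (all data synthetic; the real procedure uses the gradient similarity score defined above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy gene images: two families of spatial patterns plus noise.
rng = np.random.default_rng(0)
base_a = np.zeros((10, 10)); base_a[:5, :] = 1.0   # "dorsal" pattern
base_b = np.zeros((10, 10)); base_b[:, :5] = 1.0   # "anterior" pattern
genes = np.array([b + 0.1 * rng.normal(size=(10, 10))
                  for b in [base_a] * 4 + [base_b] * 4])

# Gene-by-gene similarity, converted to a distance for clustering.
flat = genes.reshape(len(genes), -1)
sim = np.corrcoef(flat)
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering into 2 gene clusters, then
# average each cluster's images to get prototype expression patterns.
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust") - 1
prototypes = [genes[labels == k].mean(axis=0) for k in range(2)]
print(labels)
```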
bshanks@104 570
bshanks@104 571
bshanks@112 572 == Our plan: what remains to be done ==
bshanks@104 573
bshanks@103 574 %%\vspace{0.3cm}**Flatmap cortex and segment cortical layers**
bshanks@103 575
bshanks@103 576 === Flatmap cortex and segment cortical layers ===
bshanks@103 577
bshanks@103 578 %%In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
bshanks@103 579
bshanks@103 580 %%In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps).
bshanks@103 581
bshanks@103 582
bshanks@103 583 %%Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Although there is much 2-D organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
bshanks@103 584
bshanks@112 585 There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). We will also develop a segmentation algorithm to automatically identify the layer boundaries.
bshanks@112 586
bshanks@112 587 %%Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
bshanks@112 588
bshanks@112 589 %%We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
bshanks@103 590
bshanks@103 591 %%\vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions**
bshanks@103 592 %%\vspace{0.3cm}****
bshanks@103 593
bshanks@103 594
bshanks@103 595 === Develop algorithms that find genetic markers for anatomical regions ===
bshanks@112 596 \begin{wrapfigure}{L}{0.7\textwidth}\centering
bshanks@66 597 \includegraphics[scale=1]{merge3_norm_hv_PCA_ndims50_prototypes_collage_sm_border.eps}
bshanks@74 598 \includegraphics[scale=.98]{nnmf_ndims7_collage_border.eps}
bshanks@66 599 \includegraphics[scale=1]{merge3_norm_hv_k150_LandmarkIsomap_ndims7_prototypes_collage_sm_border.eps}
bshanks@66 600 \\
bshanks@69 601 \includegraphics[scale=.24]{paint_merge3_major.eps}\includegraphics[scale=.22]{merge3_norm_hv_PCA_ndims50_kmeans_7clust.eps}\includegraphics[scale=.24]{norm_hv_NNMF_6_norm_kmeans_7clust.eps}\includegraphics[scale=.22]{merge3_norm_hv_k150_LandmarkIsomap_ndims7_kmeans_7clust.eps}
\caption{First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first 6 reduced dimensions, using landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: Landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found, but only 6 are displayed. In the last row, for PCA, 50 dimensions were used; for NNMF, 6 dimensions; for landmark Isomap, 7 dimensions.}
bshanks@69 603 \label{dimReduc}\end{wrapfigure}
bshanks@69 604
bshanks@104 605 \vspace{0.3cm}**Scoring measures and feature selection**
bshanks@106 606 %%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), and we plan to develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test).
We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
bshanks@104 608
bshanks@112 609 Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by methods which are mentioned in Related Work.
bshanks@104 610
bshanks@104 611
Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
bshanks@104 613
bshanks@104 614 We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize number of features used, such as sparse support vector machines (SVMs).
bshanks@104 615
Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be robust to error, but many are not. %%We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
bshanks@112 617
bshanks@112 618 %% We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries.
bshanks@112 619
bshanks@112 620 An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Goal 2 might be useful in achieving Goal 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
bshanks@112 621
bshanks@112 622 A future publication on the method that we develop in Goal 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
bshanks@104 623
bshanks@108 624 %% (including spatial models\cite{paciorek_computational_2007})
bshanks@108 625
bshanks@112 626 %%\vspace{0.3cm}**Classifiers**
bshanks@112 627 %%We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks.
bshanks@104 628
bshanks@104 629
bshanks@104 630
bshanks@104 631
bshanks@104 632 === Develop algorithms to suggest a division of a structure into anatomical parts ===
bshanks@103 633
bshanks@112 634 \begin{wrapfigure}{L}{0.6\textwidth}\centering
bshanks@71 635 \includegraphics[scale=.2]{cosine_similarity1_rearrange_colorize.eps}
bshanks@99 636 \caption{Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.}
bshanks@71 637 \label{geneClusters}\end{wrapfigure}
bshanks@71 638
bshanks@104 639 \vspace{0.3cm}**Dimensionality reduction on gene expression profiles**
bshanks@104 640 We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries.
bshanks@104 641
bshanks@104 642 \vspace{0.3cm}**Dimensionality reduction on pixels**
bshanks@104 643 Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied instead to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.
bshanks@104 644
bshanks@104 645 %% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.}
bshanks@104 646
bshanks@104 647 \vspace{0.3cm}**Clustering and segmentation on pixels**
bshanks@112 648 We will explore clustering and image segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction.
bshanks@104 649
bshanks@104 650 \vspace{0.3cm}**Clustering on genes**
bshanks@112 651 We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas (Figure \ref{geneClusters}). We will further explore the clustering of genes.
bshanks@104 652
bshanks@104 653 In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions.
bshanks@104 654
bshanks@104 655 \vspace{0.3cm}**Co-clustering**
bshanks@112 656 We will explore some algorithms which simultaneously incorporate clustering on instances and on features (in our case, pixels and genes), for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms.
bshanks@112 657
bshanks@112 658 %%\vspace{0.3cm}**Radial profiles**
bshanks@112 659 %%We wil explore the use of the radial profile of gene expression under each pixel.
bshanks@104 660
bshanks@98 661 \vspace{0.3cm}**Compare different methods**
bshanks@98 662 In order to tell which method is best for genomic anatomy, for each experimental method we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
bshanks@96 663
bshanks@96 664
bshanks@96 665 \vspace{0.3cm}**Discriminative dimensionality reduction**
bshanks@96 666 In addition to using a purely data-driven approach to identify spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number of features, even if those features are chosen by using knowledge of the regions. For example, linear discriminant analysis could be used as a dimensionality reduction technique in order to identify a few features which are the best linear summary of gene expression profiles for the purpose of discriminating between regions. This reduced feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the reference atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
bshanks@96 667
bshanks@96 668
bshanks@96 669 === Apply the new methods to the cortex ===
bshanks@112 670 Using the methods developed in Goal 1, we will present, for each cortical area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
bshanks@112 671
bshanks@112 672 Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible for an unrelated combination of genes to seem to identify an area when in fact it is only coincidence. There are three ways we will validate our marker genes to guard against this. First, we will confirm that putative combinations of marker genes express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT\cite{gong_gene_2003}. Third, we may conduct ISH experiments jointly with collaborators to get further data on genes of particular interest.
bshanks@112 673
bshanks@112 674 Using the methods developed in Goal 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
bshanks@96 675
bshanks@96 676
bshanks@96 677
bshanks@86 678
bshanks@86 679
bshanks@86 680 %%# note: slice artifact
bshanks@86 681
bshanks@86 682 %%\vspace{0.3cm}**Extension to probabalistic maps**
bshanks@112 683 %%Presently, we do not have a probabalistic atlas which is registered to the ABA space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Goal 1 techniques which can handle probabalistic maps.
bshanks@112 684
bshanks@112 685
bshanks@112 686
bshanks@112 687 === Apply the new methods to hyperspectral datasets ===
bshanks@112 688 Our software will be able to read and write file formats common in the hyperspectral imaging community such as Erdas LAN and ENVI, and it will be able to convert between the SEV and NIFTI formats from neuroscience and the ENVI format from GIS. The methods developed in Goals 1 and 2 will be implemented either as part of Spectral Python or as a separate tool that interoperates with Spectral Python. The methods will be run on hyperspectral satellite image datasets, and their performance will be compared to existing hyperspectral analysis techniques.
bshanks@112 689
bshanks@112 690
bshanks@112 691
bshanks@112 692
bshanks@87 693
bshanks@33 694 \newpage
bshanks@33 695
bshanks@33 696 \bibliographystyle{plain}
bshanks@33 697 \bibliography{grant}
bshanks@33 698
bshanks@85 699 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
bshanks@17 700
bshanks@112 701
bshanks@112 702
bshanks@112 703 %% todo: postdoc mentoring plan
bshanks@85 704
bshanks@85 705
bshanks@85 706
bshanks@96 707 \end{document}
bshanks@112 708