
changeset 103:6ea7e2e5e6c3

.
author bshanks@bshanks.dyndns.org
date Wed Apr 22 07:26:09 2009 -0700 (16 years ago)
parents 4cca7c7d91d1
children d6ecbc494f0b
files grant.doc grant.txt
line diff
Binary file grant.doc has changed
--- a/grant.txt Wed Apr 22 07:09:37 2009 -0700
+++ b/grant.txt Wed Apr 22 07:26:09 2009 -0700

=== Aim 1: Given a map of regions, find genes that mark the regions ===

\vspace{0.3cm}**Machine learning terminology: classifiers** The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the regions can be inferred.

%% then instead of saying that we are using gene expression to find the locations of the regions,

=== Our strategy for Aim 1 ===

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.

\vspace{0.3cm}**Principle 1: Combinatorial gene expression**

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure \ref{MOcombo}). Therefore, each instance should contain multiple features (genes).

\vspace{0.3cm}**Principle 2: Only look at combinations of small numbers of genes**

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
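To make the combined feature-selection-and-classification task concrete, the following sketch (an illustration only, not our actual pipeline; it assumes a pixels-by-genes expression matrix X and a boolean region mask y, here synthetic) shows one simple wrapper approach: greedily grow a small gene panel by adding, at each step, the gene that most improves a cross-validated logistic-regression score.

\begin{verbatim}
# Hedged sketch: greedy forward selection of a small gene panel.
# X (pixels x genes) and y (boolean region mask) are synthetic placeholders, not ABA data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pixels, n_genes = 500, 200
X = rng.normal(size=(n_pixels, n_genes))
y = (X[:, 3] + X[:, 17] > 1.0)            # toy "region" defined by two genes

def forward_select(X, y, max_genes=3):
    """Greedily add the gene that most improves cross-validated accuracy."""
    selected, best_scores = [], []
    remaining = list(range(X.shape[1]))
    for _ in range(max_genes):
        scores = []
        for g in remaining:
            cols = selected + [g]
            clf = LogisticRegression(max_iter=1000)
            scores.append((cross_val_score(clf, X[:, cols], y, cv=3).mean(), g))
        s_best, g_best = max(scores)
        selected.append(g_best)
        remaining.remove(g_best)
        best_scores.append(s_best)
    return selected, best_scores

panel, scores = forward_select(X, y, max_genes=2)
print("selected gene indices:", panel, "cv accuracy:", [round(s, 3) for s in scores])
\end{verbatim}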
\vspace{0.3cm}**Principle 3: Use geometry in feature selection**

When doing feature selection with score-based methods, the simplest approach is to score each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure \ref{AUDgeometry} for evidence of the complementary nature of pointwise and local scoring methods.

\vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.

=== Related work ===

There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which are not fundamentally spatial\footnote{By "__fundamentally__ spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.}.

As noted above, there has been much work on both supervised learning and feature selection, and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.

We now turn to automated efforts to find marker genes in spatial gene expression data.

%%GeneAtlas\cite{carson_digital_2005} allows the user to construct a search query by freely demarcating one or two 2-D regions on sagittal slices, and then to specify either the strength of expression or the name of another gene whose expression pattern is to be matched.

%% \footnote{For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel (actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity) whose expression is within four discretization levels. EMAGE uses Jaccard similarity (the number of true pixels in the intersection of the two images, divided by the number of pixels in their union).}
%% \cite{lee_high-resolution_2007} mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene's spatial region.

GeneAtlas\cite{carson_digital_2005} and EMAGE\cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched.
Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.

\cite{ng_anatomic_2009} describes AGEA, the "Anatomic Gene Expression Atlas". AGEA has three components. **Gene Finder**: The user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. **Correlation**: The user selects a seed voxel and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. **Clusters**: will be described later. \cite{chin_genome-scale_2007} looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} differ from our Aim 1 in at least three ways. First, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} find only single genes, whereas we will also look for combinations of genes. Second, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} can only use overexpression as a marker, whereas we will also search for underexpression. Third, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures \ref{MOcombo}, \ref{hole}, and \ref{AUDgeometry} in the Preliminary Studies section contain evidence that each of our three choices is the right one.
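For concreteness, the pointwise test used by \cite{chin_genome-scale_2007} can be sketched as follows (our illustration on synthetic data, not their code): a per-gene one-sided t-test comparing expression inside versus outside the target region, with a Bonferroni correction across genes.

\begin{verbatim}
# Hedged sketch: per-gene one-sided t-test (region vs. outside) with Bonferroni correction.
# X is a toy pixels-by-genes matrix and mask a toy boolean region; placeholders only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_pixels, n_genes = 400, 50
X = rng.normal(size=(n_pixels, n_genes))
mask = np.zeros(n_pixels, dtype=bool)
mask[:80] = True
X[mask, 7] += 1.5                      # make gene 7 overexpressed inside the region

t, p = ttest_ind(X[mask], X[~mask], axis=0, alternative='greater')
alpha = 0.05
significant = np.flatnonzero(p < alpha / n_genes)   # Bonferroni: divide alpha by number of tests
print("genes significantly overexpressed in the region:", significant)
\end{verbatim}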
\cite{hemert_matching_2008} describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. %%Their match score is Jaccard similarity.

In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.

=== Aim 2: From gene expression data, discover a map of regions ===

\vspace{0.3cm}**Machine learning terminology: clustering**

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called __clustering__ or __cluster analysis__.

The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

%%It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.

It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
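As an illustration of hierarchical clustering of voxels by the similarity of their gene expression profiles (a generic sketch on synthetic data, not the specific algorithm we will ultimately adopt), SciPy's linkage/fcluster builds the tree and cuts it at a chosen number of clusters:

\begin{verbatim}
# Hedged sketch: hierarchical clustering of voxels by gene expression profile.
# Each row of X is one voxel's expression profile; data are synthetic placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 30)),     # voxels from "region A"
               rng.normal(3.0, 1.0, size=(100, 30))])    # voxels from "region B"

# Average linkage with correlation distance; the tree can also be cut at other levels.
tree = linkage(X, method='average', metric='correlation')
labels = fcluster(tree, t=2, criterion='maxclust')
print("cluster sizes:", np.bincount(labels)[1:])
\end{verbatim}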
\vspace{0.3cm}**Similarity scores**
A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.

\vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

%%Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

\vspace{0.3cm}**Dimensionality reduction**
In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.

%% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels.

Unlike aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

%%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering.

%%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
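As a concrete example of dimensionality reduction of the pixels-by-genes matrix (an illustrative sketch on synthetic data; non-negative matrix factorization is one candidate among several, cf. the NNMF discussion under Related work below), scikit-learn's NMF yields a small set of reduced features per pixel:

\begin{verbatim}
# Hedged sketch: reduce thousands of per-pixel gene features to a few components.
# X is a synthetic non-negative pixels-by-genes matrix standing in for real expression data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
X = rng.random((1000, 300))                 # 1000 pixels x 300 genes, non-negative

nmf = NMF(n_components=10, init='nndsvda', max_iter=500, random_state=0)
W = nmf.fit_transform(X)                    # pixels x 10 reduced features
H = nmf.components_                         # 10 components x genes (gene loadings)
print("reduced instance matrix:", W.shape, "component loadings:", H.shape)
\end{verbatim}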
\vspace{0.3cm}**Clustering genes rather than voxels**
Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common of these regions as the final clusters. In Preliminary Studies, Figure \ref{geneClusters}, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.

%% Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.}.

%%The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
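A minimal sketch of this gene-clustering strategy (our illustration on synthetic images, not a committed design): treat each gene's flattened expression image as a vector, cluster genes by correlation, and average each cluster into a prototype image that can then be thresholded into a candidate region.

\begin{verbatim}
# Hedged sketch: cluster genes by the similarity of their expression images,
# then average each gene cluster into a prototype "region" image. Synthetic data only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
h, w, n_genes = 20, 20, 60
images = rng.random((n_genes, h, w)) * 0.2
images[:30, :10, :] += 1.0               # first 30 genes roughly mark the top half
images[30:, 10:, :] += 1.0               # last 30 genes roughly mark the bottom half

flat = images.reshape(n_genes, -1)                      # one row per gene image
gene_labels = fcluster(linkage(flat, method='average', metric='correlation'),
                       t=2, criterion='maxclust')

prototypes = np.stack([flat[gene_labels == k].mean(axis=0).reshape(h, w)
                       for k in np.unique(gene_labels)])
candidate_regions = prototypes > prototypes.mean()      # threshold prototypes into masks
print("prototype images:", prototypes.shape, "candidate regions:", candidate_regions.shape[0])
\end{verbatim}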
=== Related work ===

Some researchers have attempted to parcellate cortex on the basis of non-gene-expression data. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.

%%Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.

\cite{thompson_genomic_2008} describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset, and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure \ref{dimReduc}).

%% \footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.}

%% In addition, this paper described a visual screening of the data, specifically, a visual analysis of 6000 genes with the primary purpose of observing how the spatial pattern of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations.

%% \cite{thompson_genomic_2008} reports that both mNNMF and hierarchical mNNMF clustering were useful, and that hierarchical recursive bifurcation gave similar results.

AGEA\cite{ng_anatomic_2009} includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering. %% with un-centered correlation as the similarity score.

%%\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

\cite{chin_genome-scale_2007} clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

\cite{hemert_matching_2008} applies their technique for finding combinations of marker genes to the problem of clustering genes around a "seed gene". %%They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if any gene is expressed in this voxel, then it is probably also expressed in that voxel". This could be useful as part of a procedure for clustering voxels.
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.

=== Aim 3: apply the methods developed to the cerebral cortex ===

\vspace{0.3cm}**Background**

The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}.

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.

\vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**

The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one gene; many different mouse brains were needed in order to measure the expression of many genes.

An automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain\cite{ng_anatomic_2009}.

Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}.
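To make the data layout concrete, the following sketch shows how such a registered volume might be held in memory (array names and the random mask are illustrative placeholders of ours, not the ABA distribution format): one 3-D array per gene on the 67x41x58 grid, plus a boolean mask used to select the voxels of interest.

\begin{verbatim}
# Hedged sketch: a per-gene expression volume on the ABA 200-micron grid,
# masked down to the voxels of interest. Arrays here are synthetic placeholders.
import numpy as np

GRID = (67, 41, 58)                         # 67x41x58 = 159,326 voxels at 200 um
rng = np.random.default_rng(5)

expression = rng.random(GRID)               # one such volume per gene
brain_mask = rng.random(GRID) < 0.33        # stand-in for the ~51,533 in-brain voxels

voxel_values = expression[brain_mask]       # 1-D vector of in-brain expression values
print("in-brain voxels:", brain_mask.sum(), "expression vector:", voxel_values.shape)
\end{verbatim}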
%%The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/}, GXD\cite{smith_mouse_2007}, and GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.

The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.

%%, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.

%% \footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)}

=== Related work ===
\cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes nor suggests a cortical map based on gene expression data. Neither of the other two components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}.

%% (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find the marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.

%% Most of the projects which have been discussed have been done by the same groups that develop the public datasets. Although these projects make their algorithms available for use on their own website, none of them have released an open-source software toolkit; instead, users are restricted to using the provided algorithms only on their own dataset.

In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.

Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.

== Significance ==

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will enable the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.

%% Since the number of classes of stains is small compared to the number of genes,
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.

While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain.

\vspace{0.3cm}\hrule

== The approach: Preliminary Studies ==

\begin{wrapfigure}{L}{0.35\textwidth}\centering
%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps}
%%\\
\caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
\label{SScorrLr}\end{wrapfigure}
=== Format conversion between SEV, MATLAB, NIFTI ===
We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data are made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.

=== Flatmap of cortex ===

We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
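The step of sampling the irregular flattened mesh onto a regular pixel grid can be sketched as follows (an illustration of the idea using SciPy's griddata on made-up node coordinates, not our actual Caret-based pipeline):

\begin{verbatim}
# Hedged sketch: resample per-node values from an irregular flattened mesh
# onto a regular pixel grid. Node positions/values are synthetic placeholders.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(6)
node_xy = rng.uniform(0, 10, size=(2000, 2))        # flattened 2-D mesh node coordinates
node_expr = np.sin(node_xy[:, 0]) + node_xy[:, 1]   # per-node average expression of one gene

# Build a regular grid covering the flatmap and interpolate node values onto it.
gx, gy = np.meshgrid(np.linspace(0, 10, 64), np.linspace(0, 10, 64))
pixel_image = griddata(node_xy, node_expr, (gx, gy), method='linear')
print("pixel grid:", pixel_image.shape)
\end{verbatim}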
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel, and for each gene there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternatively, they can be thought of as images which can be displayed on the flatmapped surface.

To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.

=== Feature selection and scoring methods ===

\begin{wrapfigure}{L}{0.2\textwidth}\centering
\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps}
\caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
\label{hole}\end{wrapfigure}

\vspace{0.3cm}**Underexpression of a gene can serve as a marker**
Underexpression of a gene can sometimes serve as a marker. See, for example, Figure \ref{hole}.

\vspace{0.3cm}**Correlation**
Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.

We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the two genes most correlated with area SS.

%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.

%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
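The correlation score itself is straightforward; as a sketch (on synthetic stand-in data, not the ABA flatmaps), treat each gene's flatmap image and the area's boolean mask as vectors over surface pixels and compute a per-gene Pearson correlation:

\begin{verbatim}
# Hedged sketch: rank genes by the correlation between their flatmap image
# and a target area's boolean mask. All arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_pixels, n_genes = 2000, 100
expr = rng.normal(size=(n_pixels, n_genes))          # normalized expression, pixels x genes
area_mask = np.zeros(n_pixels); area_mask[:400] = 1  # boolean mask of the target area
expr[:400, 12] += 2.0                                # gene 12 roughly marks the area

def pearson_per_gene(expr, mask):
    x = (expr - expr.mean(axis=0)) / expr.std(axis=0)
    m = (mask - mask.mean()) / mask.std()
    return x.T @ m / len(m)                          # correlation of each gene with the mask

r = pearson_per_gene(expr, area_mask)
print("top genes by correlation:", np.argsort(r)[::-1][:3])
\end{verbatim}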
\vspace{0.3cm}**Conditional entropy**
%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.

%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, and the mean plus two standard deviations.

%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.

For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.

This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
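A minimal sketch of this score (our illustration on synthetic masks): compute H(area | gene1, gene2) from the empirical joint distribution over surface pixels. Note that an XOR-coded target, which is invisible to linear methods, drives the conditional entropy to zero.

\begin{verbatim}
# Hedged sketch: conditional entropy of a target-area mask given two boolean gene masks,
# estimated from the empirical distribution over surface pixels. Synthetic data only.
import numpy as np

rng = np.random.default_rng(8)
n_pixels = 5000
gene1 = rng.random(n_pixels) > 0.5
gene2 = rng.random(n_pixels) > 0.5
area = np.logical_xor(gene1, gene2)          # an XOR-coded "area": invisible to linear scores

def conditional_entropy(target, g1, g2):
    """H(target | g1, g2) in bits, from empirical joint counts."""
    h = 0.0
    for a in (False, True):
        for b in (False, True):
            cell = (g1 == a) & (g2 == b)
            p_cell = cell.mean()
            if p_cell == 0:
                continue
            p_t = target[cell].mean()        # P(target = 1 | this cell)
            for p in (p_t, 1 - p_t):
                if p > 0:
                    h -= p_cell * p * np.log2(p)
    return h

print("H(area | gene1, gene2) =", round(conditional_entropy(area, gene1, gene2), 4))
\end{verbatim}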
\vspace{0.3cm}**Gradient similarity**
We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene has a pattern of expression with a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is:

%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction.

\begin{align*}
\sum_{pixel \in pixels} \cos(\vert \angle \nabla_1 - \angle \nabla_2 \vert) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2}
\end{align*}

where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$.

The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
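A direct implementation sketch of the formula above, using NumPy's per-pixel image gradients on two synthetic images that share a similar soft boundary (an illustration only):

\begin{verbatim}
# Hedged sketch: gradient similarity between two images, following the formula above.
# The two synthetic images share a vertical soft boundary, so the score should be high.
import numpy as np

def gradient_similarity(img1, img2):
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    return np.sum(np.cos(np.abs(angle1 - angle2)) * (mag1 + mag2) / 2 * (img1 + img2) / 2)

x = np.linspace(-3, 3, 64)
sig_a = 1.0 / (1.0 + np.exp(-4 * x))          # soft left-to-right step
sig_b = 1.0 / (1.0 + np.exp(-4 * (x - 0.3)))  # similar step, slightly shifted
img_a = np.tile(sig_a, (64, 1))
img_b = np.tile(sig_b, (64, 1))
print("gradient similarity:", round(float(gradient_similarity(img_a, img_b)), 3))
\end{verbatim}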
\vspace{0.3cm}**Gradient similarity provides information complementary to correlation**
\begin{wrapfigure}{L}{0.35\textwidth}\centering
%%\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_3_420_jet.eps}
%%
\caption{The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Ptk7$, and $Aph1a$.}
\label{AUDgeometry}\end{wrapfigure}
To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many genes which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area.

%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.

%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.}

%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers.

\vspace{0.3cm}**Areas which can be identified by single genes**
Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas.
For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases. 2.412 + 2.413 +In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory). 2.414 + 2.415 +These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity. 2.416 + 2.417 + 2.418 + 2.419 +\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas** 2.420 + 2.421 +In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene. 2.422 + 2.423 +This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary. 2.424 + 2.425 +%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} 2.426 +%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} 2.427 + 2.428 +%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface. 2.429 + 2.430 +%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in figure the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene. 
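For concreteness, the pixelwise combination shown in Figure \ref{MOcombo} can be sketched as follows. This is an illustrative Python fragment rather than part of our pipeline; it assumes that the flattened expression images and the area mask are available as 2-D numpy arrays, and the names wwc1_map, mtif2_map, and mo_mask are placeholders.

\begin{verbatim}
import numpy as np

def mask_correlation(expression, area_mask):
    """Pearson correlation between an expression image and a boolean area
    mask, computed over valid surface pixels (NaN marks pixels outside the
    flatmap and is ignored)."""
    valid = ~np.isnan(expression)
    return np.corrcoef(expression[valid], area_mask[valid].astype(float))[0, 1]

def score_combination(wwc1_map, mtif2_map, mo_mask):
    """Compare each gene alone against their pixelwise sum as a predictor of
    the area mask (summarized here by correlation; any of the scoring
    measures discussed in this proposal could be substituted)."""
    combined = wwc1_map + mtif2_map   # pixelwise sum, as in the lower-left panel
    return {"wwc1 alone":   mask_correlation(wwc1_map, mo_mask),
            "mtif2 alone":  mask_correlation(mtif2_map, mo_mask),
            "wwc1 + mtif2": mask_correlation(combined, mo_mask)}
\end{verbatim}

In the preliminary analysis the genes themselves were selected by logistic regression; correlation is used in the sketch only to keep the illustration short.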
2.431 + 2.432 + 2.433 + 2.434 + 2.435 +%%\vspace{0.3cm}**Feature selection integrated with prediction** 2.436 +%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning". 2.437 + 2.438 + 2.439 +=== Multivariate supervised learning === 2.440 \begin{wrapfigure}{L}{0.35\textwidth}\centering 2.441 \includegraphics[scale=.27]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.27]{MO_vs_Mtif2_jet.eps} 2.442 2.443 @@ -193,13 +441,42 @@ 2.444 \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).} 2.445 \label{MOcombo}\end{wrapfigure} 2.446 2.447 -\vspace{0.3cm}**Spatially contiguous clusters; image segmentation** 2.448 -We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters; voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters. 2.449 - 2.450 -%%Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images. 2.451 - 2.452 -Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images. 2.453 - 2.454 + 2.455 + 2.456 +\vspace{0.3cm}**Forward stepwise logistic regression** 2.457 +Logistic regression is a popular method for predictive modeling of categorical data. 
As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
2.458 +
2.459 +%%We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
2.460 +
2.461 +
2.462 +\vspace{0.3cm}**SVM on all genes at once**
2.463 +
2.464 +In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%\footnote{5-fold cross-validation.}. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
2.465 +
2.466 +
2.467 +
2.468 +
2.469 +
2.470 +=== Data-driven redrawing of the cortical map ===
2.471 +
2.472 +
2.473 +
2.474 +
2.475 +
2.476 +We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}.
2.477 +
2.478 +
2.479 +
2.480 +After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. For comparison, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
2.481 +
2.482 +
2.483 +
2.484 +
2.485 +\vspace{0.3cm}**Many areas are captured by clusters of genes**
2.486 +We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
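To make the pipeline behind Figure \ref{dimReduc} concrete, the following sketch (illustrative Python using scikit-learn; the numbers of components and clusters are placeholders, not the exact settings used above) reduces the per-pixel expression profiles and then clusters the pixels:

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

def reduce_and_cluster(expression, method="pca", n_components=7, n_clusters=7):
    """expression: array of shape (n_pixels, n_genes), one row per surface pixel.
    Returns (reduced, labels), where labels assigns each pixel to a cluster."""
    if method == "pca":
        reducer = PCA(n_components=n_components)
    elif method == "nnmf":
        # NNMF requires non-negative input, so it is run on the raw
        # (not z-scored) expression values.
        reducer = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    else:
        raise ValueError("unknown reduction method: %s" % method)
    reduced = reducer.fit_transform(expression)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return reduced, labels
\end{verbatim}

Swapping in a different reducer (for example landmark Isomap) or a different clusterer (for example spectral clustering) changes only the two constructor calls, which is what makes the systematic comparison proposed here straightforward.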
2.487 + 2.488 + 2.489 +== The approach: what we plan to do == 2.490 \begin{wrapfigure}{L}{0.35\textwidth}\centering 2.491 \includegraphics[scale=.27]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.27]{singlegene_example_371_Aldh1a2_SSs_jet.eps} 2.492 \includegraphics[scale=.27]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.27]{singlegene_example_3310_Slco1a5_FRP_jet.eps} 2.493 @@ -210,63 +487,52 @@ 2.494 \label{singleSoFar}\end{wrapfigure} 2.495 2.496 2.497 -\vspace{0.3cm}**Dimensionality reduction** 2.498 -In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data. 2.499 - 2.500 -%% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. 2.501 - 2.502 -Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels. 2.503 - 2.504 -%%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering. 2.505 - 2.506 -%%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering. 2.507 - 2.508 - 2.509 -\vspace{0.3cm}**Clustering genes rather than voxels** 2.510 -Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). 
There are two ways that clusters of genes could be used. 2.511 - 2.512 -Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster. 2.513 - 2.514 -Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then to use the more popular common regions as the final clusters. In Preliminary Studies, Figure \ref{geneClusters}, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion. 2.515 - 2.516 -%% Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.}. 2.517 - 2.518 -%%The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms. 2.519 - 2.520 -=== Related work === 2.521 - 2.522 - 2.523 -Some researchers have attempted to parcellate cortex on the basis of non-gene expression data. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster. 2.524 - 2.525 - 2.526 - 2.527 -%%Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders. 2.528 - 2.529 -\cite{thompson_genomic_2008} describes an analysis of the anatomy of 2.530 -the hippocampus using the ABA dataset. In addition to manual analysis, 2.531 -two clustering methods were employed, a modified Non-negative Matrix 2.532 -Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, proving the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset 2.533 - 2.534 -%% \footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. 
However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.} and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure \ref{dimReduc}). 2.535 - 2.536 -%% In addition, this paper described a visual screening of the data, specifically, a visual analysis of 6000 genes with the primary purpose of observing how the spatial pattern of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations. 2.537 - 2.538 -%% \cite{thompson_genomic_2008} reports that both mNNMF and hierarchical mNNMF clustering were useful, and that hierarchical recursive bifurcation gave similar results. 2.539 - 2.540 - 2.541 -AGEA\cite{ng_anatomic_2009} includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering. %% with un-centered correlation as the similarity score. 2.542 - 2.543 -%%\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels. 2.544 - 2.545 -\cite{chin_genome-scale_2007} clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels. 2.546 - 2.547 -\cite{hemert_matching_2008} applies their technique for finding combinations of marker genes for the purpose of clustering genes around a "seed gene". %%They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if this voxel is expressed in by any gene, then that voxel is probably also expressed in by the same gene". This could be useful as part of a procedure for clustering voxels. 2.548 - 2.549 -In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. 
The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms. 2.550 - 2.551 - 2.552 - 2.553 -=== Aim 3: apply the methods developed to the cerebral cortex === 2.554 + 2.555 + 2.556 +%%\vspace{0.3cm}**Flatmap cortex and segment cortical layers** 2.557 + 2.558 +=== Flatmap cortex and segment cortical layers === 2.559 + 2.560 +%%In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane. 2.561 + 2.562 +%%In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). 2.563 + 2.564 + 2.565 +%%Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Although there is much 2-D organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong. 2.566 + 2.567 +There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong. 2.568 + 2.569 +We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries. 2.570 + 2.571 +%%\vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions** 2.572 +%%\vspace{0.3cm}**** 2.573 + 2.574 + 2.575 +=== Develop algorithms that find genetic markers for anatomical regions === 2.576 + 2.577 +\vspace{0.3cm}**Scoring measures and feature selection** 2.578 +%%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. 
Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test).
2.579 +We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
2.580 +
2.581 +Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate each area. We will quantitatively compare the list of single genes generated by our method to the lists generated by previous methods which are mentioned in Aim 1 Related Work.
2.582 +
2.583 +
2.584 +Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
2.585 +
2.586 +We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs).
2.587 +
2.588 +Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant to error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
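As an illustration of the kind of wrapper we have in mind, the following sketch (illustrative Python; score_fn stands for any of the scoring measures above, and the shift range is a placeholder) re-scores a gene under small displacements of its expression image and reports the average and worst-case scores; distortions other than translation are omitted for brevity.

\begin{verbatim}
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_robust_score(score_fn, gene_img, area_mask, max_shift=2):
    """Evaluate score_fn(gene_img, area_mask) under every integer displacement
    of the gene image within +/- max_shift pixels, as a crude probe of
    robustness to registration error.  Returns (mean score, worst score),
    assuming that larger scores are better."""
    scores = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # order=0 moves pixel values without interpolation, so NaN pixels
            # outside the flatmap stay localized
            displaced = nd_shift(gene_img, (dy, dx), order=0, mode="nearest")
            scores.append(score_fn(displaced, area_mask))
    return float(np.mean(scores)), float(np.min(scores))
\end{verbatim}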
2.589 + 2.590 +An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit. 2.591 + 2.592 +A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset. 2.593 + 2.594 +\vspace{0.3cm}**Classifiers** 2.595 +We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models\cite{paciorek_computational_2007}), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks. 2.596 + 2.597 + 2.598 + 2.599 +=== Develop algorithms to suggest a division of a structure into anatomical parts === 2.600 \begin{wrapfigure}{L}{0.6\textwidth}\centering 2.601 \includegraphics[scale=1]{merge3_norm_hv_PCA_ndims50_prototypes_collage_sm_border.eps} 2.602 \includegraphics[scale=.98]{nnmf_ndims7_collage_border.eps} 2.603 @@ -276,299 +542,33 @@ 2.604 \caption{First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. Bottom row: examples of kmeans clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: Landmark Isomap. Additional details: In the third and fourth rows, 7 dimensions were found, but only 6 displayed. In the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions were used; for landmark Isomap, 7 dimensions were used.} 2.605 \label{dimReduc}\end{wrapfigure} 2.606 2.607 - 2.608 -\vspace{0.3cm}**Background** 2.609 - 2.610 -The cortex is divided into areas and layers. 
Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}. 2.611 - 2.612 -It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface. 2.613 - 2.614 -Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain. 2.615 - 2.616 -\vspace{0.3cm}**The Allen Mouse Brain Atlas dataset** 2.617 - 2.618 -The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes. 2.619 - 2.620 -An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes with 200 microns on a side. There are 67x41x58 \= 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain\cite{ng_anatomic_2009}. 2.621 - 2.622 -Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on, "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}. 2.623 - 2.624 -%%The ABA is not the only large public spatial gene expression dataset. 
Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/} GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression. 2.625 - 2.626 -The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space. 2.627 - 2.628 -%%, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression. 2.629 - 2.630 -%% \footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)} 2.631 - 2.632 - 2.633 - 2.634 -=== Related work === 2.635 - 2.636 -\cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. 
Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}. 2.637 - 2.638 -%% (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot the find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed. 2.639 - 2.640 -%% Most of the projects which have been discussed have been done by the same groups that develop the public datasets. Although these projects make their algorithms available for use on their own website, none of them have released an open-source software toolkit; instead, users are restricted to using the provided algorithms only on their own dataset. 2.641 - 2.642 -In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data. 2.643 - 2.644 -Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for \begin{latex}/\end{latex} reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods. 2.645 - 2.646 - 2.647 -== Significance == 2.648 +\vspace{0.3cm}**Dimensionality reduction on gene expression profiles** 2.649 +We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries. 2.650 + 2.651 +\vspace{0.3cm}**Dimensionality reduction on pixels** 2.652 +Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied instead to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions. 2.653 + 2.654 +%% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. 
One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.}
2.655 +
2.656 +\vspace{0.3cm}**Clustering and segmentation on pixels**
2.657 +We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction.
2.658 +
2.659 +\vspace{0.3cm}**Clustering on genes**
2.660 +We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes.
2.661 +
2.662 +In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions.
2.663 +
2.664 +\vspace{0.3cm}**Co-clustering**
2.665 +There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, pixels and genes), for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms.
2.666 +
2.667 +\vspace{0.3cm}**Radial profiles**
2.668 +We will explore the use of the radial profile of gene expression under each pixel.
2.669 +
2.670 \begin{wrapfigure}{L}{0.5\textwidth}\centering
2.671 \includegraphics[scale=.2]{cosine_similarity1_rearrange_colorize.eps}
2.672 \caption{Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.}
2.673 \label{geneClusters}\end{wrapfigure}
2.674
2.675 -
2.676 -
2.677 -The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
2.678 -
2.679 -The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical areas, we will find a small panel of genes that can find many of the areal boundaries at once.
This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex. 2.680 - 2.681 - 2.682 -%% Since the number of classes of stains is small compared to the number of genes, 2.683 - 2.684 -The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression. 2.685 - 2.686 -While we do not here propose to analyze human gene expression data, it 2.687 -is conceivable that the methods we propose to develop could be used to 2.688 -suggest modifications to the human cortical map as well. In fact, the 2.689 -methods we will develop will be applicable to other datasets beyond 2.690 -the brain. 2.691 - 2.692 - 2.693 - 2.694 - 2.695 - 2.696 -\vspace{0.3cm}\hrule 2.697 - 2.698 -== The approach: Preliminary Studies == 2.699 - 2.700 - 2.701 - 2.702 -=== Format conversion between SEV, MATLAB, NIFTI === 2.703 -We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats. 2.704 - 2.705 - 2.706 -=== Flatmap of cortex === 2.707 - 2.708 - 2.709 -We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the region data into MATLAB format. 2.710 - 2.711 -At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel. And for each gene, there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. 
The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface. 2.712 - 2.713 -To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex. 2.714 - 2.715 - 2.716 - 2.717 - 2.718 - 2.719 - 2.720 - 2.721 -=== Feature selection and scoring methods === 2.722 - 2.723 - 2.724 - 2.725 -\vspace{0.3cm}**Underexpression of a gene can serve as a marker** 2.726 -Underexpression of a gene can sometimes serve as a marker. See, for example, Figure \ref{hole}. 2.727 - 2.728 - 2.729 - 2.730 - 2.731 -\vspace{0.3cm}**Correlation** 2.732 -Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels. 2.733 - 2.734 -We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 2.735 - 2.736 -%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features. 2.737 - 2.738 -%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 2.739 - 2.740 - 2.741 - 2.742 -\vspace{0.3cm}**Conditional entropy** 2.743 -%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels. 2.744 - 2.745 -%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations. 2.746 - 2.747 -%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 
2.748 - 2.749 -For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 2.750 - 2.751 -This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not. 2.752 - 2.753 - 2.754 - 2.755 - 2.756 -\vspace{0.3cm}**Gradient similarity** 2.757 -We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is: 2.758 - 2.759 -%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction. 2.760 - 2.761 - 2.762 - 2.763 -\begin{align*} 2.764 -\sum_{pixel \in pixels} cos(abs(\angle \nabla_1 - \angle \nabla_2)) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2} 2.765 -\end{align*} 2.766 - 2.767 -where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$. 2.768 - 2.769 -The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar). 2.770 - 2.771 -\vspace{0.3cm}**Gradient similarity provides information complementary to correlation** 2.772 - 2.773 - 2.774 - 2.775 -To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. 
The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. 2.776 - 2.777 -%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods. 2.778 - 2.779 -%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.} 2.780 - 2.781 -%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. 2.782 - 2.783 -\vspace{0.3cm}**Areas which can be identified by single genes** 2.784 -Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases. 2.785 - 2.786 -In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory). 2.787 - 2.788 -These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity. 2.789 - 2.790 - 2.791 - 2.792 -\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas** 2.793 - 2.794 -In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene. 2.795 - 2.796 -This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary. 
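For concreteness, the gradient similarity formula given above can be computed as in the following sketch (illustrative Python; the two images are assumed to be 2-D numpy arrays over the flatmapped surface, with NaN marking pixels outside the cortex):

\begin{verbatim}
import numpy as np

def gradient_similarity(img1, img2):
    """Sum over pixels of
         cos(|angle(grad1) - angle(grad2)|) * (|grad1| + |grad2|)/2
                                            * (value1 + value2)/2,
    so pixels where both images have large values, steep gradients, and
    similarly oriented gradients contribute most.  NaN pixels contribute
    nothing."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle_term = np.cos(np.abs(np.arctan2(gy1, gx1) - np.arctan2(gy2, gx2)))
    magnitude_term = (np.hypot(gx1, gy1) + np.hypot(gx2, gy2)) / 2.0
    value_term = (img1 + img2) / 2.0
    return np.nansum(angle_term * magnitude_term * value_term)
\end{verbatim}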
2.797 - 2.798 -%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} 2.799 -%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} 2.800 - 2.801 -%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface. 2.802 - 2.803 -%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene. 2.804 - 2.805 - 2.806 - 2.807 - 2.808 -%%\vspace{0.3cm}**Feature selection integrated with prediction** 2.809 -%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on the number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning". 2.810 - 2.811 - 2.812 -=== Multivariate supervised learning === 2.813 - 2.814 - 2.815 - 2.816 -\vspace{0.3cm}**Forward stepwise logistic regression** 2.817 -Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes that was found. 2.818 - 2.819 -%%We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene". 2.820 - 2.821 - 2.822 -\vspace{0.3cm}**SVM on all genes at once** 2.823 - 2.824 -In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%\footnote{5-fold cross-validation.}. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
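The forward stepwise wrapper and the all-genes SVM baseline described above could be prototyped along the following lines using scikit-learn. The data layout (a matrix X of surface pixels by genes and a 0/1 label vector y for membership in the target area), the greedy search over at most three genes, and the particular solver settings are assumptions made for this sketch rather than the exact procedure we ran.

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def forward_stepwise_logreg(X, y, max_genes=3, cv=5):
    # Greedily add, one at a time, the gene that most improves the
    # cross-validated accuracy of a logistic regression predicting
    # whether each surface pixel belongs to the target area.
    selected = []
    remaining = set(range(X.shape[1]))
    while len(selected) < max_genes and remaining:
        def cv_score(gene):
            cols = selected + [gene]
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, X[:, cols], y, cv=cv).mean()
        # Exhaustive scan over remaining genes: slow but simple.
        best = max(remaining, key=cv_score)
        selected.append(best)
        remaining.remove(best)
    return selected

def svm_all_genes_accuracy(X, y, cv=5):
    # Baseline: cross-validated accuracy of an SVM that sees every gene.
    return cross_val_score(SVC(kernel="linear"), X, y, cv=cv).mean()
\end{verbatim}

The first function is a stepwise wrapper in the sense used above (feature selection integrated with prediction); the second corresponds to the kind of 5-fold cross-validated accuracy reported for the all-genes classifier.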
2.825 - 2.826 - 2.827 - 2.828 - 2.829 - 2.830 -=== Data-driven redrawing of the cortical map === 2.831 - 2.832 - 2.833 - 2.834 - 2.835 - 2.836 -We have applied the following dimensionality reduction algorithms to the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}. 2.837 - 2.838 - 2.839 - 2.840 -After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. For comparison, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy. 2.841 - 2.842 - 2.843 - 2.844 - 2.845 -\vspace{0.3cm}**Many areas are captured by clusters of genes** 2.846 -We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels. 2.847 - 2.848 - 2.849 -== The approach: what we plan to do == 2.850 - 2.851 - 2.852 - 2.853 - 2.854 -%%\vspace{0.3cm}**Flatmap cortex and segment cortical layers** 2.855 - 2.856 -=== Flatmap cortex and segment cortical layers === 2.857 - 2.858 -%%In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane. 2.859 - 2.860 -%%In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). 2.861 - 2.862 - 2.863 -%%Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Although there is much 2-D organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
2.864 - 2.865 -There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong. 2.866 - 2.867 -We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries. 2.868 - 2.869 -%%\vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions** 2.870 -%%\vspace{0.3cm}**** 2.871 - 2.872 - 2.873 -=== Develop algorithms that find genetic markers for anatomical regions === 2.874 - 2.875 -\vspace{0.3cm}**Scoring measures and feature selection** 2.876 -%%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test). 2.877 -We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target. 2.878 - 2.879 -Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by previous methods which are mentioned in Aim 1 Related Work. 2.880 - 2.881 - 2.882 -Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
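As an illustration of how some of the simpler scoring measures listed above can be computed, here is a sketch of Jaccard similarity, Dice similarity, and conditional entropy on binarized data. The binarization step, the function names, and the integer encoding of gene combinations are assumptions made for this sketch; note that the conditional entropy score accepts several genes at once, which is how the pairwise search described in Preliminary Studies could be expressed.

\begin{verbatim}
import numpy as np

def jaccard(gene_mask, area_mask):
    # Intersection over union of two boolean masks.
    inter = np.logical_and(gene_mask, area_mask).sum()
    union = np.logical_or(gene_mask, area_mask).sum()
    return inter / union if union else 0.0

def dice(gene_mask, area_mask):
    # 2 * |A and B| / (|A| + |B|).
    inter = np.logical_and(gene_mask, area_mask).sum()
    total = gene_mask.sum() + area_mask.sum()
    return 2.0 * inter / total if total else 0.0

def conditional_entropy(area_mask, gene_masks):
    # H(area | genes): remaining uncertainty (in bits) about whether a
    # pixel is in the target area, once the binarized expression of the
    # given genes is known.  Lower is better; accepts one gene or several.
    area_mask = np.asarray(area_mask, dtype=bool).ravel()
    n = area_mask.size
    # Encode each pixel's combination of gene on/off states as an integer.
    states = np.zeros(n, dtype=int)
    for i, g in enumerate(gene_masks):
        states |= np.asarray(g, dtype=bool).ravel().astype(int) << i
    h = 0.0
    for s in np.unique(states):
        idx = states == s
        p_state = idx.sum() / n
        p_in = area_mask[idx].mean()
        for p in (p_in, 1.0 - p_in):
            if p > 0.0:
                h -= p_state * p * np.log2(p)
    return h
\end{verbatim}

Each of these induces a ranking of candidate genes (or, in the conditional entropy case, of small sets of genes) for a given target area.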
2.883 - 2.884 -We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs). 2.885 - 2.886 -Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be robust to error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time. 2.887 - 2.888 -An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable; only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit. 2.889 - 2.890 -A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research on methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset. 2.891 - 2.892 -\vspace{0.3cm}**Classifiers** 2.893 -We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models\cite{paciorek_computational_2007}), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large.
We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks. 2.894 - 2.895 - 2.896 - 2.897 -=== Develop algorithms to suggest a division of a structure into anatomical parts === 2.898 - 2.899 -\vspace{0.3cm}**Dimensionality reduction on gene expression profiles** 2.900 -We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent component analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries. 2.901 - 2.902 -\vspace{0.3cm}**Dimensionality reduction on pixels** 2.903 -Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions. 2.904 - 2.905 -%% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.} 2.906 - 2.907 -\vspace{0.3cm}**Clustering and segmentation on pixels** 2.908 -We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction. 2.909 - 2.910 -\vspace{0.3cm}**Clustering on genes** 2.911 -We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes. 2.912 - 2.913 -In addition to using the cluster expression prototypes directly to identify spatial regions, gene clustering might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles.
One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions. 2.914 - 2.915 -\vspace{0.3cm}**Co-clustering** 2.916 -There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, pixels and genes); for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms. 2.917 - 2.918 -\vspace{0.3cm}**Radial profiles** 2.919 -We will explore the use of the radial profile of gene expression under each pixel. 2.920 - 2.921 - 2.922 \vspace{0.3cm}**Compare different methods** 2.923 In order to tell which method is best for genomic anatomy, for each experimental method we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others. 2.924 2.925 @@ -593,6 +593,8 @@ 2.926 %%\vspace{0.3cm}**Extension to probabilistic maps** 2.927 %%Presently, we do not have a probabilistic atlas which is registered to the ABA space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Aim 1 techniques which can handle probabilistic maps. 2.928 2.929 + 2.930 + 2.931 \vspace{0.3cm}\hrule 2.932 2.933 == Timeline and milestones ==