nsf
changeset 112:dad49a6f95b6
author    bshanks@bshanks.dyndns.org
date      Fri Jul 03 05:17:28 2009 -0700 (16 years ago)
parents   90b0ccb6c7f1
children  5c01dccf2a4a
files     grant.bib grant.html grant.odt grant.pdf grant.txt
line diff
1.1 --- a/grant.bib Fri Apr 24 01:12:36 2009 -0700
1.2 +++ b/grant.bib Fri Jul 03 05:17:28 2009 -0700
1.3 @@ -460,7 +460,7 @@
1.4 @inbook{adamson_tracking_2005,
1.5 series = {Lecture Notes in Computer Science},
1.6 title = {A Tracking Approach to Parcellation of the Cerebral Cortex},
1.7 - volume = {Volume 3749/2005},
1.8 + volume = {3749/2005},
1.9 isbn = {978-3-540-29327-9},
1.10 url = {http://dx.doi.org/10.1007/11566465_37},
1.11 abstract = {The cerebral cortex is composed of regions with distinct laminar structure. Functional neuroimaging results are often reported with respect to these regions, usually by means of a brain “atlas”. Motivated by the need for more precise atlases, and the lack of model-based approaches in prior work in the field, this paper introduces a novel approach to parcellating the cortex into regions of distinct laminar structure, based on the theory of target tracking. The cortical layers are modelled by hidden Markov models and are tracked to determine the Bayesian evidence of layer hypotheses. This model-based parcellation method, evaluated here on a set of histological images of the cortex, is extensible to {3-D} images.},
1.12 @@ -527,4 +527,35 @@
1.13 author = {C Kemp and {JB} Tenenbaum and {TL} Griffiths and T Yamada and N Ueda},
1.14 year = {2006},
1.15 keywords = {infinite,model,relational}
1.16 +},
1.17 +
1.18 +@article{serpico_new_2001,
1.19 + title = {A new search algorithm for feature selection in hyperspectral remote sensing images},
1.20 + volume = {39},
1.21 + issn = {0196-2892},
1.22 + doi = {10.1109/36.934069},
1.23 + abstract = {A new suboptimal search strategy suitable for feature selection in
1.24 +very high-dimensional remote sensing images (e.g., those acquired by
1.25 +hyperspectral sensors) is proposed. Each solution of the feature
1.26 +selection problem is represented as a binary string that indicates which
1.27 +features are selected and which are disregarded. In turn, each binary
1.28 +string corresponds to a point of a multidimensional binary space. Given
1.29 +a criterion function to evaluate the effectiveness of a selected
1.30 +solution, the proposed strategy is based on the search for constrained
1.31 +local extremes of such a function in the above-defined binary space. In
1.32 +particular, two different algorithms are presented that explore the
1.33 +space of solutions in different ways. These algorithms are compared with
1.34 +the classical sequential forward selection and sequential forward
1.35 +floating selection suboptimal techniques, using hyperspectral remote
1.36 +sensing images (acquired by the airborne visible/infrared imaging
1.37 +spectrometer {[AVIRIS]} sensor) as a data set. Experimental results point
1.38 +out the effectiveness of both algorithms, which can be regarded as valid
1.39 +alternatives to classical methods, as they allow interesting tradeoffs
1.40 +between the qualities of selected feature subsets and computational cost},
1.41 + number = {7},
1.42 + journal = {{IEEE} Transactions on Geoscience and Remote Sensing},
1.43 + author = {{S.B.} Serpico and L. Bruzzone},
1.44 + year = {2001},
1.45 + keywords = {algorithm,binary string,feature extraction,feature selection,geophysical measurement technique,geophysical signal processing,geophysical techniques,hyperspectral remote sensing,image processing,land surface,multidimensional signal processing,multispectral remote sensing,optical imaging,remote sensing,suboptimal search strategy,terrain mapping},
1.46 + pages = {1360--1367}
1.47 }
1.48 \ No newline at end of file
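The suboptimal search strategy summarized in the serpico_new_2001 abstract above — binary strings as candidate feature subsets, searched for constrained local extremes of a criterion function — can be sketched as a simple hill-climb. This is only an illustration of the idea, not the paper's exact algorithms; `toy_score` is a hypothetical stand-in for a real class-separability criterion.

```python
# Sketch: local search over binary strings for feature selection, in the
# spirit of the serpico_new_2001 abstract. Each binary string marks which
# features are selected; we hill-climb to a constrained local extreme.

def neighbors(mask):
    """All binary strings at Hamming distance 1 from `mask`."""
    for i in range(len(mask)):
        flipped = list(mask)
        flipped[i] = 1 - flipped[i]
        yield tuple(flipped)

def local_search(score, n_features, max_selected):
    """Hill-climb, constrained to at most `max_selected` selected features."""
    current = tuple([0] * n_features)
    current_score = score(current)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(current):
            if sum(cand) > max_selected:
                continue  # constraint: keep the feature subset small
            s = score(cand)
            if s > current_score:
                current, current_score = cand, s
                improved = True
                break
    return current, current_score

# Toy criterion (hypothetical): features 0 and 2 are informative,
# with a small penalty per selected feature.
toy_score = lambda m: 2.0 * m[0] + 2.0 * m[2] - 0.5 * sum(m)
best, best_score = local_search(toy_score, n_features=5, max_selected=2)
```

The constraint check inside the loop is what makes this a *constrained* local extreme, mirroring the tradeoff the abstract describes between subset quality and computational cost.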
2.1 --- a/grant.html Fri Apr 24 01:12:36 2009 -0700
2.2 +++ b/grant.html Fri Jul 03 05:17:28 2009 -0700
2.3 @@ -1,244 +1,185 @@
2.4 -Specific aims
2.5 -Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ
2.6 -transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many loca-
2.7 -tions to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to
2.8 -anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps
2.9 -based on gene expression patterns. We will validate these methods by applying them to 46 anatomical areas
2.10 -within the cerebral cortex, by using the Allen Mouse Brain Atlas coronal dataset (ABA). This gene expression
2.11 -dataset was generated using ISH, and contains over 4,000 genes. For each gene, a digitized 3-D raster of the
2.12 -expression pattern is available: for each gene, the level of expression at each of 51,533 voxels is recorded.
2.13 -We have three specific aims:
2.14 -(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which
2.15 -selectively target anatomical regions
2.16 -(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions,
2.17 -based on spatial patterns in gene expression
2.18 -(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen
2.19 -Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending
2.20 -the functionality of Caret, an existing open-source scientific imaging program. Use this dataset to validate the
2.21 -methods developed in (1) and (2).
2.22 -Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that
2.23 -the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located
2.24 -in a low-dimensional space. In particular, our method could be applied to genome-wide sequencing data derived
2.25 -from sets of tissues and disease states.
2.26 -In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker
2.27 -genes, and aim (2) is to let the gene profile define the cortical areas. In addition to validating the usefulness
2.28 -of the algorithms, the application of these methods to cortex will produce immediate benefits, because there
2.29 -are currently no known genetic markers for most cortical areas. The results of the project will support the
2.30 -development of new ways to selectively target cortical areas, and it will support the development of a method for
2.31 -identifying the cortical areal boundaries present in small tissue samples.
2.32 -All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well
2.33 -as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
2.34 -The challenge topic
2.35 -This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as
2.36 -in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others,
2.37 -allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated
2.38 -methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific
2.39 +Introduction
2.40 +Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohisto-
2.41 +chemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels
2.42 +of many genes at many locations to be compared. Our goal is to develop automated methods to
2.43 +relate spatial variation in gene expression to anatomy. We want to find marker genes for specific
2.44 anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
2.45 -______________
2.46 - The Challenge and Potential impact
2.47 -Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for
2.48 -thinking about the task. Next we will discuss related work, and then summarize why our strategy is different from
2.49 -what has been done before. After we have discussed all three aims, we will describe the potential impact.
2.50 - Aim 1: Given a map of regions, find genes that mark the regions
2.51 -Machine learning terminology: classifiers The task of looking for marker genes for known anatomical regions
2.52 -means that one is looking for a set of genes such that, if the expression level of those genes is known, then the
2.53 -locations of the regions can be inferred.
2.54 - If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that
2.55 -we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification
2.56 -task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship
2.57 -between the combination of their expression levels and the locations of the regions may be expressed as a
2.58 -function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is
2.59 -the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function
2.60 -a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class
2.61 -label).
2.62 - The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for
2.63 -determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene
2.64 -expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The
2.65 -initial gene expression dataset used in the construction of the classifier is called training data. In the machine
2.66 -learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in
2.67 -which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances
2.68 -(voxels) for which the labels (regions) are known.
2.69 - Each gene expression level is called a feature, and the selection of which genes1 to include is called feature
2.70 -selection. Feature selection is one component of the task of learning a classifier. Some methods for learning
2.71 -classifiers start out with a separate feature selection phase, whereas other methods combine feature selection
2.72 -with other aspects of training.
2.73 - One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked
2.74 -genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a
2.75 -single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the
2.76 -selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
2.77 - Although the classifier itself may only look at the gene expression data within each voxel before classifying
2.78 -that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize
2.79 -score-based feature selection methods depending on how the score is calculated. Often the score calculation
2.80 -consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the
2.81 -aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to
2.82 -calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is
2.83 -used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
2.84 - Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects
2.85 -have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure
2.86 -gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical
2.87 -atlas are “wrong” in that they do not have the same shape as the natural domains of gene expression to which
2.88 -they correspond. These sources of error can affect the displacement and the shape of both the gene expression
2.89 -data and the anatomical target areas. Therefore, it is important to use feature selection methods which are
2.90 -robust to these kinds of errors.
2.91 - 1Strictly speaking, the features are gene expression levels, but we’ll call them genes.
2.92 -Our strategy for Aim 1
2.93 -Key questions when choosing a learning method are: What are the instances? What are the features? How are
2.94 -the features chosen? Here are four principles that outline our answers to these questions.
2.95 -Principle 1: Combinatorial gene expression
2.96 -It is too much to hope that every anatomical region of interest will be identified by a single gene. For example,
2.97 -in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas
2.98 -(ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes
2.99 -(an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies,
2.100 -Figure 4). Therefore, each instance should contain multiple features (genes).
2.101 -Principle 2: Only look at combinations of small numbers of genes
2.102 -When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have
2.103 -been selected as features. The more data that are available to a classifier, the better that it can do. For example,
2.104 -perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every
2.105 -gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to
2.106 -gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for
2.107 -some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the
2.108 -expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that
2.109 -checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on
2.110 -tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we
2.111 -must select only a few genes as features.
2.112 -The requirement to find combinations of only a small number of genes limits us from straightforwardly ap-
2.113 -plying many of the most simple techniques from the field of supervised machine learning. In the parlance of
2.114 -machine learning, our task combines feature selection with supervised learning.
2.115 -Principle 3: Use geometry in feature selection
2.116 -When doing feature selection with score-based methods, the simplest thing to do would be to score the per-
2.117 -formance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach
2.118 -is to also use information about the geometric relations between each voxel and its neighbors; this requires non-
2.119 -pointwise, local scoring methods. See Preliminary Studies, figure 3 for evidence of the complementary nature of
2.120 -pointwise and local scoring methods.
2.121 -Principle 4: Work in 2-D whenever possible
2.122 -There are many anatomical structures which are commonly characterized in terms of a two-dimensional
2.123 -manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be
2.124 -improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for
2.125 -humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels,
2.126 -not voxels.
2.127 -Related work
2.128 -There is a substantial body of work on the analysis of gene expression data, most of this concerns gene expres-
2.129 -sion data which are not fundamentally spatial2.
2.130 -As noted above, there has been much work on both supervised and unsupervised learning, and there are many available
2.131 -algorithms for each. However, the algorithms require the scientist to provide a framework for representing the
2.132 -problem domain, and the way that this framework is set up has a large impact on performance. Creating a
2.133 -good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical
2.134 -“fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such
2.135 -as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best
2.136 -results in this application.
2.137 -_________________________________________
2.138 - 2By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates;
2.139 -not just data which have only a few different locations or which is indexed by anatomical label.
2.140 -We now turn to efforts to find marker genes using spatial gene expression data using automated methods.
2.141 -GeneAtlas[3] and EMAGE [19] allow the user to construct a search query by demarcating regions and then
2.142 -specifying either the strength of expression or the name of another gene or dataset whose expression pattern
2.143 -is to be matched. Neither GeneAtlas nor EMAGE allow one to search for combinations of genes that define a
2.144 -region in concert but not separately.
2.145 -[12] describes AGEA, “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: The
2.146 -user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, (2) yields a list of
2.147 -genes which are overexpressed in that cluster. Correlation: The user selects a seed voxel and the system then
2.148 -shows the user how much correlation there is between the gene expression profile of the seed voxel and every
2.149 -other voxel. Clusters: will be described later. [4] looks at the mean expression level of genes within anatomical
2.150 -regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression
2.151 -level of a gene is significantly higher in the target region. [12] and [4] differ from our Aim 1 in at least three
2.152 -ways. First, [12] and [4] find only single genes, whereas we will also look for combinations of genes. Second,
2.153 -[12] and [4] can only use overexpression as a marker, whereas we will also search for underexpression. Third,
2.154 -[12] and [4] use scores based on pointwise expression levels, whereas we will also use geometric scores such
2.155 -as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section
2.156 -contain evidence that each of our three choices is the right one.
2.157 -[8] describes a technique to find combinations of marker genes to pick out an anatomical region. They use
2.158 -an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to
2.159 -match a target image.
2.160 -In summary, there has been fruitful work on finding marker genes, but only one of the previous projects
2.161 -explores combinations of marker genes, and none of these publications compare the results obtained by using
2.162 -different algorithms or scoring methods.
2.163 -Aim 2: From gene expression data, discover a map of regions
2.164 +We will validate these methods by applying them to 46 anatomical areas within the cerebral cortex,
2.165 +by using the Allen Mouse Brain Atlas coronal dataset (ABA).
2.166 + This project has three primary goals:
2.167 + (1) develop an algorithm to screen spatial gene expression data for combinations of marker
2.168 +genes which selectively target anatomical regions.
2.169 + (2) develop an algorithm to suggest new ways of carving up a structure into anatomically dis-
2.170 +tinct regions, based on spatial patterns in gene expression.
2.171 + (3) adapt our tools for the analysis of multi/hyperspectral imaging data from the Geographic
2.172 +Information Systems (GIS) community.
2.173 + We will create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened
2.174 +version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical
2.175 +areas. We will use this dataset to validate the methods developed in (1) and (2). In addition to
2.176 +its use in neuroscience, this dataset will be useful as a sample dataset for the machine learning
2.177 +community.
2.178 + Although our particular application involves the 3D spatial distribution of gene expression, the
2.179 +methods we will develop will generalize to any high-dimensional data over points located in a low-
2.180 +dimensional space. In particular, our methods could be applied to the analysis of multi/hyperspectral
2.181 +imaging data, or alternately to genome-wide sequencing data derived from sets of tissues and dis-
2.182 +ease states.
2.183 + All algorithms that we develop will be implemented in a GPL open-source software toolkit. The
2.184 +toolkit and the datasets will be published and freely available for others to use.
2.185 +__________________
2.186 + Background and related work
2.187 +Cortical anatomy
2.188 + The cortex is divided into areas and layers. Because of the cortical columnar organization, the
2.189 +parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the
2.190 +third dimension, the boundaries between the areas continue downwards into the cortical depth,
2.191 +perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an
2.192 +area of the cortex as a slice of a six-layered cake1.
2.193 + It is known that different cortical areas have distinct roles in both normal functioning and in
2.194 +disease processes, yet there are no known marker genes for most cortical areas. When it is nec-
2.195 +essary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled
2.196 +human to combine multiple visual cues and interpret them in the context of their approximate
2.197 +location upon the cortical surface.
2.198 + 1Outside of isocortex, the number of layers varies.
2.201 + Even the questions of how many areas should be recognized in cortex, and what their arrange-
2.202 +ment is, are still not completely settled. A proposed division of the cortex into areas is called a
2.203 +cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the
2.204 +recent maps given by Swanson[21] on the one hand, and Paxinos and Franklin[16] on the other.
2.205 +While the maps are certainly very similar in their general arrangement, significant differences re-
2.206 +main.
2.207 + The Allen Mouse Brain Atlas dataset
2.208 + The Allen Mouse Brain Atlas (ABA) data[13] were produced by doing in-situ hybridization on
2.209 +slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slice,
2.210 +and these pictures were semi-automatically analyzed to create a digital measurement of gene
2.211 +expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved.
2.212 +Using this method, a single physical slice can only be used to measure one single gene; many
2.213 +different mouse brains were needed in order to measure the expression of many genes.
2.214 + Mus musculus is thought to contain about 22,000 protein-coding genes[26]. The ABA contains
2.215 +data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured
2.216 +in coronal sections. Our dataset is derived from only the coronal subset of the ABA2. An auto-
2.217 +mated nonlinear alignment procedure located the 2D data from the various slices in a single 3D
2.218 +coordinate system. In the final 3D coordinate system, voxels are cubes with 200 microns on a
2.219 +side. There are 67x41x58 = 159,326 voxels, of which 51,533 are in the brain[15]. For each voxel
2.220 +and each gene, the expression energy[13] within that voxel is made available.
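The voxel grid just described is a dense 3-D raster, so locating a voxel's expression-energy value reduces to index arithmetic. The sketch below assumes a row-major layout; the function and constant names are our own, not the ABA's.

```python
# Flattened indexing for the 67 x 41 x 58 voxel grid described above.
# Each voxel is a cube 200 microns on a side; (x, y, z) indexes map to
# a single flat offset in the usual row-major layout for a dense raster.

DIMS = (67, 41, 58)       # grid shape of the ABA coronal dataset
VOXEL_SIZE_UM = 200       # microns per voxel side

def voxel_index(x, y, z, dims=DIMS):
    """Row-major flat offset of voxel (x, y, z)."""
    nx, ny, nz = dims
    assert 0 <= x < nx and 0 <= y < ny and 0 <= z < nz
    return (x * ny + y) * nz + z

total = DIMS[0] * DIMS[1] * DIMS[2]   # 159,326 voxels in the bounding box
```

Of the 159,326 voxels in the bounding box, only the 51,533 inside the brain carry expression data, so in practice a mask over the flat index would accompany this layout.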
2.221 + The ABA is not the only large public spatial gene expression dataset[8][25][5][14][24][4][23][20][3].
2.222 +However, with the exception of the ABA, GenePaint[25], and EMAGE[24], most of the other re-
2.223 +sources have not (yet) extracted the expression intensity from the ISH images and registered the
2.224 +results into a single 3-D space.
2.225 + The remainder of the background section will be divided into three parts, one for each major
2.226 +goal.
2.227 + Goal 1, From Areas to Genes: Given a map of regions, find genes that mark those regions
2.228 +Machine learning terminology: classifiers The task of looking for marker genes for known
2.229 +anatomical regions means that one is looking for a set of genes such that, if the expression level
2.230 +of those genes is known, then the locations of the regions can be inferred.
2.231 + If we define the regions so that they cover the entire anatomical structure to be subdivided,
2.232 +and restrict ourselves to looking at one voxel at a time, we may say that we are using gene
2.233 +expression in each voxel to assign that voxel to the proper area. We call this a classification
2.234 +task, because each voxel is being assigned to a class (namely, its region). An understanding
2.235 +of the relationship between the combination of gene expression levels and the locations of the
2.236 +regions may be expressed as a function. The input to this function is a voxel, along with the gene
2.237 +expression levels within that voxel; the output is the regional identity of the target voxel, that is, the
2.238 +region to which the target voxel belongs. We call this function a classifier. In general, the input to
2.239 +a classifier is called an instance, and the output is called a label (or a class label).
2.240 +____________________________________
2.241 + 2The sagittal data do not cover the entire cortex, and also have greater registration error[15]. Genes were selected
2.242 +by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc
2.243 +identification of a marked non-ubiquitous expression pattern”[15].
2.246 + Our goal is not to produce a single classifier, but rather to develop an automated method for
2.247 +determining a classifier for any known anatomical structure. Therefore, we seek a procedure by
2.248 +which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to
2.249 +produce a classifier. The initial gene expression dataset used in the construction of the classifier
2.250 +is called training data. In the machine learning literature, this sort of procedure may be thought
2.251 +of as a supervised learning task, defined as a task in which the goal is to learn a mapping from
2.252 +instances to labels, and the training data consists of a set of instances (voxels) for which the labels
2.253 +(regions) are known.
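The instance/label vocabulary above can be made concrete with a minimal supervised-learning sketch. The nearest-centroid rule used here is only an illustration of "learning a mapping from instances to labels" from training voxels, not the classifier the proposal will develop; the toy data are invented.

```python
# Minimal supervised-learning illustration: instances are voxels
# (vectors of gene expression levels), labels are region names.
# Training learns one mean expression vector (centroid) per region;
# classification assigns a voxel to the region with the nearest centroid.

def train_centroids(instances, labels):
    """Compute the mean expression vector of each region's training voxels."""
    sums, counts = {}, {}
    for vec, region in zip(instances, labels):
        acc = sums.setdefault(region, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[region] = counts.get(region, 0) + 1
    return {r: [s / counts[r] for s in acc] for r, acc in sums.items()}

def classify(centroids, vec):
    """Label a voxel with the region whose centroid is nearest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda r: dist2(centroids[r], vec))

# Toy training data: two genes, two regions "A" and "B".
training = [([0.9, 0.1], "A"), ([1.0, 0.2], "A"),
            ([0.1, 0.8], "B"), ([0.2, 1.0], "B")]
instances = [v for v, _ in training]
labels = [r for _, r in training]
model = train_centroids(instances, labels)
```

Here the training data play exactly the role described in the text: a set of instances (voxels) whose labels (regions) are known.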
2.254 + Each gene expression level is called a feature, and the selection of which genes3 to look at is
2.255 +called feature selection. Feature selection is one component of the task of learning a classifier.
2.256 + One class of feature selection methods assigns some sort of score to each candidate gene.
2.257 +The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of
2.258 +selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which
2.259 +features are added and subtracted from the selected set depending on how much they raise the
2.260 +score. Such procedures are called “stepwise” or “greedy”.
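The "stepwise"/"greedy" procedure just described can be sketched as forward selection: starting from the empty set, repeatedly add whichever gene most raises the set score. The scoring function below is a hypothetical stand-in for a real set-scoring measure.

```python
# Sketch of greedy forward feature selection: add the gene that most
# improves the score of the selected set, stopping at k genes or when
# no addition helps. `score_fn` scores a *set* of genes, as in the text.

def greedy_forward_selection(n_genes, score_fn, k):
    """Pick up to k genes by greedy forward selection."""
    selected = set()
    current = score_fn(selected)
    while len(selected) < k:
        best_gene, best_score = None, current
        for g in range(n_genes):
            if g in selected:
                continue
            s = score_fn(selected | {g})
            if s > best_score:
                best_gene, best_score = g, s
        if best_gene is None:   # no remaining gene improves the score
            break
        selected.add(best_gene)
        current = best_score
    return selected, current

# Toy set score (hypothetical): genes 1 and 3 are informative,
# every other gene slightly hurts.
weights = {1: 3.0, 3: 2.0}
toy = lambda s: sum(weights.get(g, -0.1) for g in s)
chosen, final = greedy_forward_selection(5, toy, k=2)
```

A full stepwise procedure would also try subtracting genes at each round; the forward-only version is shown for brevity.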
2.261 + Although the classifier itself may only look at the gene expression data within each voxel be-
2.262 +fore classifying that voxel, the algorithm which constructs the classifier may look over the entire
2.263 +dataset. We can categorize score-based feature selection methods depending on how the score
2.264 +is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and
2.265 +then aggregating these sub-scores into a final score. If only information from nearby voxels is
2.266 +used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information
2.267 +from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring
2.268 +method.
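The pointwise/local distinction above can be sketched on a 1-D strip of voxels. The sub-score used here (agreement between an expression level and a target region mask) is only illustrative; both variants aggregate sub-scores by summing, as the text describes.

```python
# Pointwise vs. local scoring on a 1-D strip of voxels.
# Pointwise: each voxel's sub-score uses that voxel alone.
# Local: each voxel's sub-score also uses its immediate neighbors.

def pointwise_score(expression, target):
    """Sub-score per voxel from that voxel only; aggregate by summing."""
    return sum(1.0 - abs(e - t) for e, t in zip(expression, target))

def local_score(expression, target, radius=1):
    """Sub-score per voxel averaged over a small neighborhood, so the
    score reflects how the expression *pattern* matches the region
    locally, not just voxel-by-voxel agreement."""
    n = len(expression)
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = range(lo, hi)
        total += sum(1.0 - abs(expression[j] - target[j])
                     for j in window) / len(window)
    return total

target = [0, 0, 1, 1, 1, 0]            # region mask
expr   = [0.1, 0.0, 0.9, 1.0, 0.8, 0.2]  # toy expression levels
```

A geometric measure such as gradient similarity would replace the sub-score inside `local_score` with one that compares local spatial derivatives.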
2.269 + Our Strategy for Goal 1
2.270 +Key questions when choosing a learning method are: What are the instances? What are the
2.271 +features? How are the features chosen? Here are four principles that outline our answers to these
2.272 +questions.
2.273 + Principle 1: Combinatorial gene expression
2.274 + It is too much to hope that every anatomical region of interest will be identified by a single
2.275 +gene. For example, in the cortex, there are some areas which are not clearly delineated by any
2.276 +gene included in the ABA coronal dataset. However, at least some of these areas can be delin-
2.277 +eated by looking at combinations of genes (an example of an area for which multiple genes are
2.278 +necessary and sufficient is provided in Preliminary Results, Figure 4). Therefore, each instance
2.279 +should contain multiple features (genes).
2.280 + Principle 2: Only look at combinations of small numbers of genes
2.281 + When the classifier classifies a voxel, it is only allowed to look at the expression of the genes
2.282 +which have been selected as features. The more data that are available to a classifier, the better
2.283 +that it can do. Why not include every gene as a feature? The reason is that we wish to employ the
2.284 +classifier in situations in which it is not feasible to gather data about every gene. For example, if we
2.285 +want to use the expression of marker genes as a trigger for some regionally-targeted intervention,
2.286 +then our intervention must contain a molecular mechanism to check the expression level of each
2.287 +marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks
2.288 +the level of more than a handful of genes. Therefore, we must select only a few genes as features.
2.289 +____________________________________
2.290 + 3Strictly speaking, the features are gene expression levels, but we’ll call them genes.
2.293 + The requirement to find combinations of only a small number of genes limits us from straightfor-
2.294 +wardly applying many of the most simple techniques from the field of supervised machine learning.
2.295 +In the parlance of machine learning, our task combines feature selection with supervised learning.
2.296 + Principle 3: Use geometry in feature selection
2.297 + When doing feature selection with score-based methods, the simplest thing to do would be
2.298 +to score the performance of each voxel by itself and then combine these scores (pointwise scor-
2.299 +ing). A more powerful approach is to also use information about the geometric relations between
2.300 +each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary
2.301 +Results, figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
2.302 + Principle 4: Work in 2-D whenever possible
2.303 + There are many anatomical structures which are commonly characterized in terms of a two-
2.304 +dimensional manifold. When it is known that the structure that one is looking for is two-dimensional,
2.305 +the results may be improved by allowing the analysis algorithm to take advantage of this prior
2.306 +knowledge. In addition, it is easier for humans to visualize and work with 2-D data.
2.307 + Goal 2, From Genes to Areas: given gene expression data, discover a map of regions
2.308 Machine learning terminology: clustering
2.309 -If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is
2.310 -referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a
2.311 -dataset is to group instances together. A set of similar instances is called a cluster, and the activity of finding
2.312 -grouping the data into clusters is called clustering or cluster analysis.
2.313 -The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The
2.314 -instances are once again voxels (or pixels) along with their associated gene expression profiles. We make
2.315 -the assumption that voxels from the same anatomical region have similar gene expression profiles, at least
2.316 -compared to the other regions. This means that clustering voxels is the same as finding potential regions; we
2.317 -seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
2.318 -It is desirable to determine not just one set of regions, but also how these regions relate to each other. The
2.319 -outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition
2.320 -the voxels. This is called hierarchical clustering.
2.321 -Similarity scores A crucial choice when designing a clustering method is how to measure similarity, across
2.322 -either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature
2.323 -selection (discussed above under Aim 1) and scoring methods for similarity.
2.324 -Spatially contiguous clusters; image segmentation We have shown that aim 2 is a type of clustering
2.325 -task. In fact, it is a special type of clustering task because we have an additional constraint on clusters; voxels
2.326 -grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get
2.327 -reasonable results without enforcing this constraint; however, we plan to compare these results against other
2.328 -methods which guarantee contiguous clusters.
2.329 -Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous
2.330 -clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are
2.331 -thousands of color channels (one for each gene), rather than just three3. A more crucial difference is that there
2.332 -are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not
2.333 -appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation
2.334 -algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these
2.335 -algorithms are specialized for visual images.
2.336 -Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression
2.337 -feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying
2.338 -data.
2.339 -Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion
2.340 -in the instances. However, some clustering algorithms perform better on small numbers of features4. There are
2.341 -techniques which “summarize” a larger number of features using a smaller number of features; these techniques
2.342 -go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique
2.343 -yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily
2.344 -correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
2.345 -Clustering genes rather than voxels Although the ultimate goal is to cluster the instances (voxels or pixels),
2.346 -one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes
2.347 -could be used.
2.348 -Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene,
2.349 -we could have one reduced feature for each gene cluster.
2.350 -Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have
2.351 -an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following
2.352 -procedure: cluster together genes which pick out similar regions, and then to use the more popular common
2.353 -regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized
2.354 -cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with
2.355 -gene clusters in this fashion.
2.356 -Related work
2.357 -Some researchers have attempted to parcellate cortex on the basis of non-gene expression data. For example,
2.358 -[15 ], [2 ], [16], and [1] associate spots on the cortex with the radial profile5 of response to some stain ([10] uses
2.359 -MRI), extract features from this profile, and then use similarity between surface pixels to cluster.
2.360 -[18 ] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual
2.361 -analysis, two clustering methods were employed, a modified Non-negative Matrix Factorization (NNMF), and
2.362 -a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper
2.363 -yielded impressive results, proving the usefulness of computational genomic anatomy. We have run NNMF on
2.364 -the cortical dataset
2.365 -AGEA[12] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with
2.366 -correlation as the similarity metric. EMAGE[19] allows the user to select a dataset from among a large number
2.367 -of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters
2.368 -via hierarchical complete linkage clustering.
2.369 -[4 ] clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the
2.370 -genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
2.371 -[8 ] applies their technique for finding combinations of marker genes for the purpose of clustering genes
2.372 -around a “seed gene”.
2.373 -In summary, although these projects obtained clusterings, there has not been much comparison between
2.374 -different algorithms or scoring methods, so it is likely that the best clustering method for this application has not
2.375 -yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile
2.376 -of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering
2.377 -_________________________________________
2.378 - 3There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are
2.379 -often used to process satellite imagery.
2.380 - 4First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering
2.381 -algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
2.382 - 5A radial profile is a profile along a line perpendicular to the cortical surface.
2.383 -pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and
2.384 -none used co-clustering algorithms.
2.385 -Aim 3: apply the methods developed to the cerebral cortex
2.386 -
2.387 -
2.388 + If one is given a dataset consisting merely of instances, with no class labels, then analysis of
2.389 +the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing
2.390 +that you can do with such a dataset is to group instances together. A set of similar instances is
2.391 +called a cluster, and the activity of grouping the data into clusters is called clustering or cluster
2.392 +analysis.
2.393 + The task of deciding how to carve up a structure into anatomical regions can be put into these
2.394 +terms. The instances are once again voxels (or pixels) along with their associated gene expression
2.395 +profiles. We make the assumption that voxels from the same anatomical region have similar gene
2.396 +expression profiles, at least compared to the other regions. This means that clustering voxels is
2.397 +the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into
2.398 +clusters of voxels with similar gene expression.
2.399 + It is desirable to determine not just one set of regions, but also how these regions relate to
2.400 +each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single
2.401 +set of clusters which partition the voxels. This is called hierarchical clustering.
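Hierarchical clustering of voxels by expression profile can be sketched as follows (a toy illustration, not the project's code; the voxel matrix is hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy instances: 6 voxels x 4 genes, forming two clear expression groups.
voxels = np.array([
    [5.0, 5.1, 0.1, 0.0],
    [4.9, 5.0, 0.0, 0.2],
    [5.1, 4.8, 0.2, 0.1],
    [0.1, 0.0, 5.0, 5.2],
    [0.0, 0.2, 4.9, 5.0],
    [0.2, 0.1, 5.1, 4.9],
])

# Build the full hierarchical tree with average linkage on
# correlation distance, then cut it into 2 flat clusters.
tree = linkage(voxels, method="average", metric="correlation")
labels = fcluster(tree, t=2, criterion="maxclust")
```

The intermediate `tree` is the hierarchy itself; cutting it at different levels yields coarser or finer candidate region maps.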
2.402 + Similarity scores A crucial choice when designing a clustering method is how to measure
2.403 +similarity, across either pairs of instances, or clusters, or both. There is much overlap between
2.404 +scoring methods for feature selection (discussed above under Goal 1) and scoring methods for
2.405 +similarity.
2.406 + Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene
2.407 +expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial
2.409 +
2.410 +dimension of the underlying data.
2.411 +
2.412 +
2.413 Figure 1: Top row: Genes Nfic
2.414 -and A930001M12Rik are the most
2.415 +and A930001M12Rik are the most
2.416 correlated with area SS (somatosen-
2.417 -sory cortex). Bottom row: Genes
2.418 +sory cortex). Bottom row: Genes
2.419 C130038G02Rik and Cacna1i are
2.420 -those with the best fit using logistic
2.421 +those with the best fit using logistic
2.422 regression. Within each picture, the
2.423 vertical axis roughly corresponds to
2.424 anterior at the top and posterior at the
2.425 @@ -248,570 +189,709 @@
2.426 the boundary of region SS. Pixels are
2.427 colored according to correlation, with
2.428 red meaning high correlation and blue
2.429 -meaning low. Background
2.430 - The cortex is divided into areas and layers. Because of the cortical
2.431 - columnar organization, the parcellation of the cortex into areas can be
2.432 - drawn as a 2-D map on the surface of the cortex. In the third dimension,
2.433 - the boundaries between the areas continue downwards into the cortical
2.434 - depth, perpendicular to the surface. The layer boundaries run parallel
2.435 - to the surface. One can picture an area of the cortex as a slice of a
2.436 - six-layered cake6.
2.437 - It is known that different cortical areas have distinct roles in both
2.438 - normal functioning and in disease processes, yet there are no known
2.439 - marker genes for most cortical areas. When it is necessary to divide a
2.440 - tissue sample into cortical areas, this is a manual process that requires
2.441 - a skilled human to combine multiple visual cues and interpret them in
2.442 - the context of their approximate location upon the cortical surface.
2.443 - Even the questions of how many areas should be recognized in
2.444 - cortex, and what their arrangement is, are still not completely settled.
2.445 - A proposed division of the cortex into areas is called a cortical map.
2.446 - In the rodent, the lack of a single agreed-upon map can be seen by
2.447 - contrasting the recent maps given by Swanson[17] on the one hand,
2.448 - and Paxinos and Franklin[14] on the other. While the maps are cer-
2.449 - tainly very similar in their general arrangement, significant differences
2.450 - remain.
2.451 - The Allen Mouse Brain Atlas dataset
2.452 - The Allen Mouse Brain Atlas (ABA) data[11] were produced by do-
2.453 - ing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse
2.454 - brains. Pictures were taken of the processed slice, and these pictures
2.455 - were semi-automatically analyzed to create a digital measurement of
2.456 - gene expression levels at each location in each slice. Per slice, cellular
2.457 - spatial resolution is achieved. Using this method, a single physical slice
2.458 -can only be used to measure one single gene; many different mouse brains were needed in order to measure
2.459 -the expression of many genes.
2.460 -Mus musculus is thought to contain about 22,000 protein-coding genes[20]. The ABA contains data on
2.461 -about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections.
2.462 -Our dataset is derived from only the coronal subset of the ABA7. An automated nonlinear alignment procedure
2.463 -located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system,
2.464 -voxels are cubes with 200 microns on a side. There are 67x41x58 = 159,326 voxels, of which 51,533 are in the
2.465 -brain[12]. For each voxel and each gene, the expression energy[11] within that voxel is made available.
2.466 -The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA,
2.467 -GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the
2.468 -ISH images and registered the results into a single 3-D space.
2.469 -Related work
2.470 -[12 ] describes the application of AGEA to the cortex. The paper describes interesting results on the structure
2.471 -of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort
2.472 -_________________________________________
2.473 - 6Outside of isocortex, the number of layers varies.
2.474 - 7The sagittal data do not cover the entire cortex, and also have greater registration error[12]. Genes were selected by the Allen
2.475 -Institute for coronal sectioning based on, “classes of known neuroscientific interest... or through post hoc identification of a marked
2.476 -non-ubiquitous expression pattern”[12].
2.477 -of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical
2.478 -map based on gene expression data. Neither of the other components of AGEA can be applied to cortical
2.479 -areas; AGEA’s Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA’s hierarchical
2.480 -clustering does not produce clusters corresponding to the cortical areas8.
2.481 -In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes,
2.482 -(b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no
2.483 -work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will
2.484 -yield a map of cortical areas de novo from gene expression data.
2.485 -Our project is guided by a concrete application with a well-specified criterion of success (how well we can
2.486 -find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing
2.487 -different methods.
2.488 -Significance
2.489 -
2.490 -Figure 2: Gene Pitx2
2.491 -is selectively underex-
2.492 -pressed in area SS. The method developed in aim (1) will be applied to each cortical area to find a set of
2.493 - marker genes such that the combinatorial expression pattern of those genes uniquely
2.494 - picks out the target area. Finding marker genes will be useful for drug discovery as
2.495 - well as for experimentation because marker genes can be used to design interventions
2.496 - which selectively target individual cortical areas.
2.497 - The application of the marker gene finding algorithm to the cortex will also support
2.498 - the development of new neuroanatomical methods. In addition to finding markers for
2.499 - each individual cortical areas, we will find a small panel of genes that can find many of
2.500 - the areal boundaries at once. This panel of marker genes will allow the development of
2.501 - an ISH protocol that will allow experimenters to more easily identify which anatomical
2.502 - areas are present in small samples of cortex.
2.503 -The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation
2.504 -of a better map. The development of present-day cortical maps was driven by the application of histological
2.505 -stains. If a different set of stains had been available which identified a different set of features, then today’s
2.506 -cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns
2.507 -in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to
2.508 -incorporate what we can learn from looking at the patterns of gene expression.
2.509 -While we do not here propose to analyze human gene expression data, it is conceivable that the methods
2.510 -we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the
2.511 -methods we will develop will be applicable to other datasets beyond the brain.
2.512 -_______________________________
2.513 - The approach: Preliminary Studies
2.514 - Format conversion between SEV, MATLAB, NIFTI
2.515 -We have created software to (politely) download all of the SEV files9 from the Allen Institute website. We have
2.516 -also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s
2.517 -file formats.
2.518 - Flatmap of cortex
2.519 -We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex.
2.520 -We divided the cortex into hemispheres. Using Caret[5], we created a mesh representation of the surface of the
2.521 -selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression
2.522 -of the voxels “underneath” that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We
2.523 -sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this
2.524 -grid into a MATLAB matrix. We manually traced the boundaries of each of 46 cortical areas from the ABA coronal
2.525 -reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the
2.526 - 8In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer
2.527 -are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a
2.528 -pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.
2.529 - 9SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.
2.530 -mesh surface. We projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the
2.531 -region data into MATLAB format.
2.532 -At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries
2.533 -representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent
2.534 -the regional label associated with each surface pixel. And for each gene, there is a 2-D matrix whose entries
2.535 -represent the average expression level underneath each surface pixel. We created a normalized version of the
2.536 -gene expression data by subtracting each gene’s mean expression level (over all surface pixels) and dividing the
2.537 -expression level of each gene by its standard deviation. The features and the target area are both functions on
2.538 -the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can
2.539 -be thought of as images which can be displayed on the flatmapped surface.
2.540 -To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix
2.541 -for each cortical layer to represent the average expression level within that layer. Cortical layers are found at
2.542 -different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have
2.543 -extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the
2.544 -Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually
2.545 -demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
2.546 -Feature selection and scoring methods
2.547 -
2.548 -
2.549 +meaning low. Unlike Goal 1, there is no externally-imposed need to
2.550 + select only a handful of informative genes for inclusion
2.551 + in the instances. However, some clustering algorithms
2.552 + perform better on small numbers of features4. There are
2.553 + techniques which “summarize” a larger number of fea-
2.554 + tures using a smaller number of features; these tech-
2.555 + niques go by the name of feature extraction or dimen-
2.556 + sionality reduction. The small set of features that such a
2.557 + technique yields is called the reduced feature set. Note
2.558 + that the features in the reduced feature set do not neces-
2.559 + sarily correspond to genes; each feature in the reduced
2.560 + set may be any function of the set of gene expression
2.561 + levels.
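One standard dimensionality-reduction technique of this kind is principal components analysis; the sketch below (toy data, hypothetical names, not the project's pipeline) replaces 20 correlated "gene" features per pixel with 2 reduced features that are linear functions of all the genes:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project an (n_pixels, n_genes) matrix onto its first
    n_components principal components (via SVD of the centered data),
    yielding a reduced feature vector per pixel."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy data: 20 correlated "genes" generated from 2 latent patterns.
rng = np.random.default_rng(1)
latent = rng.normal(size=(50, 2))
mixing = rng.normal(size=(2, 20))
X = latent @ mixing + 0.01 * rng.normal(size=(50, 20))
Z = pca_reduce(X, 2)
# Fraction of the total variance captured by the 2 reduced features:
captured = (Z ** 2).sum() / ((X - X.mean(axis=0)) ** 2).sum()
```

Note that each reduced feature mixes all genes, which is exactly why reduced features cannot serve as the marker-gene panels sought in Goal 1.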
2.562 + Clustering genes rather than voxels Although the
2.563 + ultimate goal is to cluster the instances (voxels or pixels),
2.564 + one strategy to achieve this goal is to first cluster the
2.565 + features (genes). There are two ways that clusters of
2.566 + genes could be used.
2.567 + Gene clusters could be used as part of dimensionality
2.568 + reduction: rather than have one feature for each gene,
2.569 + we could have one reduced feature for each gene cluster.
2.570 + Gene clusters could also be used to directly yield a
2.571 + clustering on instances. This is because many genes
2.572 + have an expression pattern which seems to pick out a
2.573 + single, spatially contiguous region. This suggests the fol-
2.574 + lowing procedure: cluster together genes which pick out
2.575 + similar regions, and then use the more popular com-
2.576 + mon regions as the final clusters. In Preliminary Results,
2.577 + Figure 7, we show that a number of anatomically recog-
2.578 +nized cortical regions, as well as some “superregions” formed by lumping together a few regions,
2.579 +are associated with gene clusters in this fashion.
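The gene-clusters-to-region-clusters procedure can be sketched as follows (toy 1-D data with hypothetical expression patterns; not the analysis used for Figure 7):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: 6 "genes" over a 1-D strip of 40 pixels; genes 0-2 pick out
# a left region, genes 3-5 a right region.
rng = np.random.default_rng(2)
pixels = 40
left = np.zeros(pixels)
left[:15] = 1.0
right = np.zeros(pixels)
right[25:] = 1.0
genes = np.vstack([left] * 3 + [right] * 3) + 0.05 * rng.normal(size=(6, pixels))

# Step 1: cluster the genes by the similarity of their expression maps.
gene_labels = fcluster(linkage(genes, method="average", metric="correlation"),
                       t=2, criterion="maxclust")

# Step 2: each gene cluster yields a candidate region: average its
# members' maps and threshold to get a set of pixels.
regions = []
for c in np.unique(gene_labels):
    proto = genes[gene_labels == c].mean(axis=0)
    regions.append(proto > 0.5)
```

Each entry of `regions` is a boolean pixel mask; the most popular such masks would then be taken as the final region clusters.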
2.580 + Goal 3: interoperability with multi/hyperspectral imaging analysis software
2.581 +A typical color image associates each pixel with a vector of three values. Multispectral and hyper-
2.582 +spectral images, however, are images which associate each pixel with a vector containing many
2.583 +values. The different positions in the vector correspond to different bands of electromagnetic
2.584 +wavelengths5.
2.585 + Some analysis techniques for hyperspectral imaging, especially preprocessing and calibration
2.586 +techniques, make use of the information that the different values captured at each pixel represent
2.587 +____________________________________
2.588 + 4First, because the number of features in the reduced dataset is less than in the original dataset, the running time of
2.589 +clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results
2.590 +on reduced data.
2.591 + 5In hyperspectral imaging, the bands are adjacent, and the number of different bands is larger. For conciseness, we
2.592 +discuss only hyperspectral imaging, but our methods are also well suited to multispectral imaging with many bands.
2.594 +
2.595 +adjacent wavelengths of light, which can be combined to make a spectrum. Other analysis tech-
2.596 +niques ignore the interpretation of the values measured, and their relationship to each other within
2.597 +the electromagnetic spectrum, instead treating them blindly as completely separate features.
2.598 + With both hyperspectral imaging and spatial gene expression data, each location in space
2.599 +is associated with more than three numerical feature values. The analysis of hyperspectral im-
2.600 +ages can involve supervised classification and unsupervised learning. Often hyperspectral images
2.601 +come from satellites looking at the Earth, and it is desirable to classify what sort of objects occupy
2.602 +a given area of land. Sometimes detailed training data is not available, in which case it is desirable
2.603 +at least to cluster together those regions of land which contain similar objects.
2.604 +We believe that it may be possible for these two different fields to share some common compu-
2.605 +tational tools. To this end, we intend to make use of existing hyperspectral imaging software when
2.606 +possible, and to develop new software in such a way so as to make it easy to use for the purpose
2.607 +of hyperspectral image analysis, as well as for our primary purpose of spatial gene expression
2.608 +data analysis.
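The shared-tooling idea rests on a simple observation: once a hyperspectral cube's per-pixel band vector is treated blindly as a feature vector, the same code clusters either kind of data. A minimal sketch (generic k-means, hypothetical function name; not a claim about any particular hyperspectral package):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_pixels(cube, k, seed=0):
    """Cluster the pixels of any (rows, cols, channels) data cube with
    k-means, ignoring what the channels mean. Works identically whether
    the channels are spectral bands or per-gene expression levels."""
    rows, cols, ch = cube.shape
    flat = cube.reshape(rows * cols, ch)
    _, labels = kmeans2(flat, k, seed=seed, minit="++")
    return labels.reshape(rows, cols)

# Toy "cube": the left and right halves have distinct channel profiles.
cube = np.zeros((4, 8, 5))
cube[:, :4, :] = [1, 0, 0, 0, 1]
cube[:, 4:, :] = [0, 1, 1, 1, 0]
labels = cluster_pixels(cube, k=2)
```

Nothing in `cluster_pixels` knows whether the five channels are wavelength bands or genes, which is the sense in which the two fields' tools are interchangeable.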
2.609 + Related work
2.610 +
2.611 +Figure 2: Gene Pitx2
2.612 +is selectively underex-
2.613 +pressed in area SS. As noted above, the GIS community has developed tools for supervised
2.614 + classification and unsupervised clustering in the context of the analysis
2.615 + of hyperspectral imaging data. One tool is Spectral Python6. Spectral
2.616 + Python implements various supervised and unsupervised classification
2.617 + methods, as well as utility functions for loading, viewing, and saving
2.618 + spatial data. Although Spectral Python has feature extraction methods
2.619 + (such as principal components analysis) which create a small set of
2.620 + new features computed based on the original features, it does not have
2.621 + feature selection methods, that is, methods to select a small subset
2.622 + out of the original features (although feature selection in hyperspectral
2.623 + imaging has been investigated by others[19]).
2.624 + There is a substantial body of work on the analysis of gene expression data. Most of this con-
2.625 +cerns gene expression data which are not fundamentally spatial7. Here we review only that work
2.626 +which concerns the automated analysis of spatial gene expression data with respect to anatomy.
2.627 + Relating to Goal 1, GeneAtlas[5] and EMAGE [24] allow the user to construct a search query by
2.628 +demarcating regions and then specifying either the strength of expression or the name of another
2.629 +gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allow
2.630 +one to search for combinations of genes that define a region in concert.
2.631 + Relating to Goal 2, EMAGE[24] allows the user to select a dataset from among a large number
2.632 +of alternatives, or by running a search query, and then to cluster the genes within that dataset.
2.633 +EMAGE clusters via hierarchical complete linkage clustering.
2.634 + [15] describes AGEA, “Anatomic Gene Expression Atlas”. AGEA has three components. Gene
2.635 +Finder: The user selects a seed voxel and the system (1) chooses a cluster which includes the
2.636 +seed voxel, (2) yields a list of genes which are overexpressed in that cluster. Correlation: The user
2.637 +selects a seed voxel and the system then shows the user how much correlation there is between
2.638 +the gene expression profile of the seed voxel and every other voxel. Clusters: AGEA includes a
2.639 +____________________________________
2.640 + 6http://spectralpython.sourceforge.net/
2.641 + 7By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by
2.642 +spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.
2.644 +
2.645 +preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation
2.646 +as the similarity metric. AGEA has been applied to the cortex. The paper describes interesting
2.647 +results on the structure of correlations between voxel gene expression profiles within a handful of
2.648 +cortical areas. However, that analysis neither looks for genes marking cortical areas, nor does it
2.649 +suggest a cortical map based on gene expression data. Neither of the other components of AGEA
2.650 +can be applied to cortical areas; AGEA’s Gene Finder cannot be used to find marker genes for the
2.651 +cortical areas; and AGEA’s hierarchical clustering does not produce clusters corresponding to the
2.652 +cortical areas8.
2.653 +
2.654 +
2.655 Figure 3: The top row shows the two
2.656 genes which (individually) best predict
2.657 area AUD, according to logistic regres-
2.658 -sion. The bottom row shows the two
2.659 -genes which (individually) best match
2.660 -area AUD, according to gradient sim-
2.661 +sion. The bottom row shows the two
2.662 +genes which (individually) best match
2.663 +area AUD, according to gradient sim-
2.664 ilarity. From left to right and top to
2.665 bottom, the genes are Ssr1, Efcbp1,
2.666 -Ptk7, and Aph1a. Underexpression of a gene can serve as a marker Underexpression
2.667 - of a gene can sometimes serve as a marker. See, for example, Figure
2.668 - 2.
2.669 - Correlation Recall that the instances are surface pixels, and con-
2.670 - sider the problem of attempting to classify each instance as either a
2.671 - member of a particular anatomical area, or not. The target area can be
2.672 - represented as a boolean mask over the surface pixels.
2.673 - We calculated the correlation between each gene and each cortical
2.674 - area. The top row of Figure 1 shows the three genes most correlated
2.675 - with area SS.
2.676 - Conditional entropy
2.677 - For each region, we created and ran a forward stepwise procedure
2.678 - which attempted to find pairs of gene expression boolean masks such
2.679 - that the conditional entropy of the target area’s boolean mask, condi-
2.680 - tioned upon the pair of gene expression boolean masks, is minimized.
2.681 - This finds pairs of genes which are most informative (at least at
2.682 - these discretization thresholds) relative to the question, “Is this surface
2.683 - pixel a member of the target area?”. Its advantage over linear methods
2.684 - such as logistic regression is that it takes account of arbitrarily nonlin-
2.685 - ear relationships; for example, if the XOR of two variables predicts the
2.686 - target, conditional entropy would notice, whereas linear methods would
2.687 - not.
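The conditional-entropy search over gene pairs can be sketched as follows (toy boolean masks with hypothetical names; not the project's code). The XOR example shows why this score catches relationships that linear methods miss:

```python
import numpy as np
from itertools import combinations

def conditional_entropy(target, features):
    """H(target | features) in bits, for boolean arrays; `features`
    is a list of boolean masks, conditioned on jointly."""
    n = len(target)
    # Encode the joint feature values of each pixel as one integer.
    joint = np.zeros(n, dtype=int)
    for f in features:
        joint = joint * 2 + f.astype(int)
    h = 0.0
    for v in np.unique(joint):
        sel = target[joint == v]
        p_v = len(sel) / n
        for p in (sel.mean(), 1 - sel.mean()):
            if p > 0:
                h -= p_v * p * np.log2(p)
    return h

# Toy pixels: the area mask is the XOR of gene masks 0 and 1;
# gene 2 is irrelevant noise.
rng = np.random.default_rng(3)
g = rng.integers(0, 2, size=(3, 400)).astype(bool)
area = g[0] ^ g[1]

# Exhaustive version of the pair search: pick the gene pair that
# minimizes the conditional entropy of the area mask.
best = min(combinations(range(3), 2),
           key=lambda pair: conditional_entropy(area, [g[i] for i in pair]))
```

Genes 0 and 1 jointly determine the area exactly, so their conditional entropy is zero and the search selects them, even though neither gene is individually correlated with the area.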
2.688 -Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found
2.689 -genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed
2.690 -a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a
2.691 -boundary whose shape is similar to the shape of the target region. We call this scoring method “gradient
2.692 -similarity”. The formula is:
2.693 -    Σ_{pixel ∈ pixels} cos(abs(∠∇1 − ∠∇2)) ⋅ ((|∇1| + |∇2|) / 2) ⋅ ((pixel_value1 + pixel_value2) / 2)
2.697 -where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the
2.698 -gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and
2.699 -pixel valuei is the value of the current pixel in image i.
2.700 -The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders
2.701 -are similar, then both images will have corresponding pixels with large gradients (because this is a border) which
2.702 -are oriented in a similar direction (because the borders are similar).
2.703 -Gradient similarity provides information complementary to correlation
2.704 -
2.705 -
2.706 +Ptk7, and Aph1a. [6] looks at the mean expression level of genes within
2.707 + anatomical regions, and applies a Student’s t-test to de-
2.708 + termine whether the mean expression level of a gene is
2.709 + significantly higher in the target region. This relates to
2.710 + our Goal 1. [6] also clusters genes, relating to our Goal
2.711 + 2. For each cluster, prototypical spatial expression pat-
2.712 + terns were created by averaging the genes in the cluster.
2.713 + The prototypes were analyzed manually, without cluster-
2.714 + ing voxels.
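The t-test scoring described for [6] can be sketched concretely. The fragment below is only an illustration on hypothetical expression values, not code from [6]; we assume Welch's unequal-variance form of the t statistic:

```python
import math

def t_statistic(inside, outside):
    """Welch's t statistic comparing a gene's mean expression inside a
    target region against its mean expression outside it (larger values
    mean the gene is more strongly overexpressed inside)."""
    n1, n2 = len(inside), len(outside)
    m1, m2 = sum(inside) / n1, sum(outside) / n2
    v1 = sum((x - m1) ** 2 for x in inside) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in outside) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical per-voxel expression levels for one gene:
inside_region = [5.1, 4.8, 5.3, 5.0, 4.9]
outside_region = [1.2, 1.0, 1.4, 0.9, 1.1, 1.3]
score = t_statistic(inside_region, outside_region)
```

A large positive score indicates that mean expression is significantly higher in the target region, which is the criterion [6] uses to nominate marker genes.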
2.715 + These related works differ from our strategy for Goal
2.716 + 1 in at least three ways. First, they find only single genes,
2.717 + whereas we will also look for combinations of genes.
2.718 + Second, they usually can only use overexpression as
2.719 + a marker, whereas we will also search for underexpres-
2.720 + sion. Third, they use scores based on pointwise expres-
2.721 + sion levels, whereas we will also use geometric scores
2.722 + such as gradient similarity (described in Preliminary Re-
2.723 + sults). Figures 4, 2, and 3 in the Preliminary Results
2.724 + section contain evidence that each of our three choices
2.725 + is the right one.
2.726 + [10] describes a technique to find combinations of
2.727 + marker genes to pick out an anatomical region. They
2.728 +use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded)
2.729 +images in order to match a target image. They apply their technique for finding combinations of
2.730 +marker genes for the purpose of clustering genes around a “seed gene”.
2.731 + Relating to our Goal 2, some researchers have attempted to parcellate cortex on the basis of
2.732 +non-gene expression data. For example, [17], [2], [18], and [1] associate spots on the cortex with
2.733 +the radial profile9 of response to some stain ([12] uses MRI), extract features from this profile, and
2.734 +then use similarity between surface pixels to cluster.
2.735 + [22] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In
2.736 +addition to manual analysis, two clustering methods were employed, a modified Non-negative
2.737 +Matrix Factorization (NNMF), and a hierarchical bifurcation clustering scheme using correlation as
2.738 +____________________________________
2.739 + 8In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but
2.740 +the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers
2.741 +but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing
2.742 +cortical layers, not areas.
2.743 + 9A radial profile is a profile along a line perpendicular to the cortical surface.
2.745 +
2.746 +similarity. The paper yielded impressive results, demonstrating the usefulness of computational genomic
2.747 +anatomy. We have run NNMF on the cortical dataset, and while the results are promising, other
2.748 +methods may perform as well or better (see Preliminary Results, Figure 6).
2.749 + Comparing previous work with our Goal 1, there has been fruitful work on finding marker genes,
2.750 +but only one of the projects explored combinations of marker genes, and none of them compared
2.751 +the results obtained by using different algorithms or scoring methods. Comparing previous work
2.752 +with Goal 2, although some projects obtained clusterings, there has not been much comparison
2.753 +between different algorithms or scoring methods, so it is likely that the best clustering method for
2.754 +this application has not yet been found. Also, none of these projects did a separate dimensionality
2.755 +reduction step before clustering pixels, or tried to cluster genes first in order to guide automated
2.756 +clustering of pixels into spatial regions, or used co-clustering algorithms.
2.757 + In summary, (a) only one of the previous projects explores combinations of marker genes, (b)
2.758 +there has been almost no comparison of different algorithms or scoring methods, and (c) there
2.759 +has been no work on computationally finding marker genes applied to cortical areas, or on finding
2.760 +a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
2.761 + Our project is guided by a concrete application with a well-specified criterion of success (how
2.762 +well we can find marker genes for / reproduce the layout of cortical areas), which will provide a
2.763 +solid basis for comparing different methods.
2.765 + Data sharing plan
2.766 +
2.767 +
2.768 Figure 4: Upper left: wwc1. Upper
2.769 right: mtif2. Lower left: wwc1 + mtif2
2.770 (each pixel’s value on the lower left is
2.771 the sum of the corresponding pixels in
2.772 -the upper row). To show that gradient similarity can provide useful information that
2.773 - cannot be detected via pointwise analyses, consider Fig. 3. The
2.774 - pointwise method in the top row identifies genes which express more
2.775 - strongly in AUD than outside of it; its weakness is that this includes
2.776 - many areas which don’t have a salient border matching the areal bor-
2.777 - der. The geometric method identifies genes whose salient expression
2.778 - border seems to partially line up with the border of AUD; its weakness
2.779 - is that this includes genes which don’t express over the entire area.
2.780 - Areas which can be identified by single genes Using gradient
2.781 - similarity, we have already found single genes which roughly identify
2.782 - some areas and groupings of areas. For each of these areas, an ex-
2.783 - ample of a gene which roughly identifies it is shown in Figure 5. We
2.784 - have not yet cross-verified these genes in other atlases.
2.785 - In addition, there are a number of areas which are almost identified
2.786 - by single genes: COAa+NLOT (anterior part of cortical amygdalar area,
2.787 - nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral
2.788 - anterior cingulate), VIS (visual), AUD (auditory).
2.789 - These results validate our expectation that the ABA dataset can be
2.790 -exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring
2.791 -method, gradient similarity.
2.792 -Combinations of multiple genes are useful and necessary for some areas
2.793 -In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which
2.794 -can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for
2.795 -predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left
2.796 -picture in Figure 4 shows wwc1’s spatial expression pattern over the cortex. The lower-right boundary of MO is
2.797 -represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D
2.798 -representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex.
2.799 -MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO’s upper-left
2.800 -boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding
2.801 -together the values at each pixel in these two figures, we get the lower-left image. This combination captures
2.802 -area MO much better than any single gene.
2.803 -This shows that our proposal to develop a method to find combinations of marker genes is both possible and
2.804 -necessary.
2.805 -Multivariate supervised learning
2.806 -Forward stepwise logistic regression Logistic regression is a popular method for predictive modeling of cate-
2.807 -gorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise
2.808 -logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identify. This is
2.809 -an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes
2.810 -found were shown in various figures throughout this document, and Figure 4 shows a combination of genes
2.811 -which was found.
2.812 -SVM on all genes at once
2.813 -In order to see how well one can do when looking at all genes at once, we ran a support vector machine to
2.814 -classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of
2.815 -about 81%10. This shows that the genes included in the ABA dataset are sufficient to define much of cortical
2.816 -anatomy. However, as noted above, a classifier that looks at all the genes at once isn’t as practically useful as a
2.817 -classifier that uses only a few genes.
2.818 -Data-driven redrawing of the cortical map
2.819 -
2.820 -
2.821 -
2.822 -
2.823 +the upper row). We are enthusiastic about the sharing of methods and
2.824 + data, and at the conclusion of the project, we will make
2.825 + all of our data and computer source code publicly avail-
2.826 + able, either in supplemental attachments to publications,
2.827 + or on a website. The source code will be released under
2.828 + the GNU General Public License. We intend to include a soft-
2.829 + ware program which, when run, will take as input the
2.830 + Allen Brain Atlas raw data, and produce as output all
2.831 + numbers and charts found in publications resulting from
2.832 + the project. Source code to be released will include ex-
2.833 + tensions to Caret[7], an existing open-source scientific
2.834 + imaging program, and to Spectral Python. Data to be
2.835 + released will include the 2-D “flat map” dataset. This
2.836 + dataset will be submitted to a machine learning dataset
2.837 + repository.
2.838 + Broader impacts
2.839 + In addition to validating the usefulness of the algorithms,
2.840 + the application of these methods to cortex will produce
2.841 +immediate benefits, because there are currently no known genetic markers for most cortical areas.
2.842 + The method developed in Goal 1 will be applied to each cortical area to find a set of marker
2.843 +genes such that the combinatorial expression pattern of those genes uniquely picks out the target
2.844 +area. Finding marker genes will be useful for drug discovery as well as for experimentation be-
2.845 +cause marker genes can be used to design interventions which selectively target individual cortical
2.846 +areas.
2.848 +
2.849 + The application of the marker gene finding algorithm to the cortex will also support the develop-
2.850 +ment of new neuroanatomical methods. In addition to finding markers for each individual cortical
2.851 +area, we will find a small panel of genes that can reveal many of the areal boundaries at once.
2.852 + The method developed in Goal 2 will provide a genoarchitectonic viewpoint that will contribute
2.853 +to the creation of a better cortical map.
2.854 + The methods we will develop will be applicable to other datasets beyond the brain, and even to
2.855 +datasets outside of biology. The software we develop will be useful for the analysis of hyperspectral
2.856 +images. Our project will draw attention to this area of overlap between neuroscience and GIS, and
2.857 +may lead to future collaborations between these two fields. The cortical dataset that we produce
2.858 +will be useful in the machine learning community as a sample dataset that new algorithms can be
2.859 +tested against. The availability of this sample dataset to the machine learning community may lead
2.860 +to more interest in the design of machine learning algorithms to analyze spatial gene expression.
2.862 + Preliminary Results
2.863 + Format conversion between SEV, MATLAB, NIFTI
2.864 +We have created software to (politely) download all of the SEV files10 from the Allen Institute
2.865 +website. We have also created software to convert between the SEV, MATLAB, and NIFTI file
2.866 +formats, as well as some of Caret’s file formats.
2.867 + Flatmap of cortex
2.868 +We downloaded the ABA data and selected only those voxels which belong to cerebral cortex.
2.869 +We divided the cortex into hemispheres. Using Caret[7], we created a mesh representation of the
2.870 +surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an
2.871 +average of the gene expression of the voxels “underneath” that mesh node. We then flattened
2.872 +the cortex, creating a two-dimensional mesh. We converted this grid into a MATLAB matrix. We
2.873 +manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas
2.874 +slides, and converted this region data into MATLAB format.
2.875 + At this point, the data are in the form of a number of 2-D matrices, all in registration, with the
2.876 +matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D
2.877 +matrix whose entries represent the regional label associated with each surface pixel. And for each
2.878 +gene, there is a 2-D matrix whose entries represent the average expression level underneath each
2.879 +surface pixel. The features and the target area are both functions on the surface pixels. They can
2.880 +be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of
2.881 +as images which can be displayed on the flatmapped surface.
2.882 + Feature selection and scoring methods
2.883 +Underexpression of a gene can serve as a marker Underexpression of a gene can sometimes
2.884 +serve as a marker. For example, see Figure 2.
2.885 + Correlation Recall that the instances are surface pixels, and consider the problem of attempt-
2.886 +ing to classify each instance as either a member of a particular anatomical area, or not. The target
2.887 +area can be represented as a boolean mask over the surface pixels.
2.888 + 10SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.
2.890 +
2.891 + We calculated the correlation between each gene and each cortical area. The top row of Figure
2.892 +1 shows the three genes most correlated with area SS.
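As a sketch of this score, the correlation between one gene's expression vector and a boolean area mask can be computed as follows. This is a pure-Python illustration on hypothetical data; our actual pipeline runs in MATLAB:

```python
import math

def pearson_correlation(gene, mask):
    """Pearson correlation between a gene's expression values and a
    boolean area mask, both given per surface pixel."""
    n = len(gene)
    mask = [float(m) for m in mask]
    mg, mm = sum(gene) / n, sum(mask) / n
    cov = sum((g - mg) * (m - mm) for g, m in zip(gene, mask))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gene))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mask))
    return cov / (sg * sm)

# Hypothetical data: 6 surface pixels, the first 3 inside the target area.
area_mask = [True, True, True, False, False, False]
gene_a = [0.9, 0.8, 1.0, 0.1, 0.2, 0.0]   # overexpresses inside the area
gene_b = [0.1, 0.2, 0.0, 0.9, 0.8, 1.0]   # underexpresses inside the area
```

Ranking genes by this score surfaces overexpression markers at the positive end; strongly negative correlations (like gene_b here) are the underexpression markers discussed above.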
2.893 + Conditional entropy
2.894 + For each region, we created and ran a forward stepwise procedure which attempted to find
2.895 +pairs of genes such that the conditional entropy of the target area’s boolean mask, conditioned
2.896 +upon the gene pair’s thresholded expression levels, is minimized.
2.897 + This finds pairs of genes which are most informative (at least at these threshold levels) relative
2.898 +to the question, “Is this surface pixel a member of the target area?”. The advantage over linear
2.899 +methods such as logistic regression is that this takes account of arbitrarily nonlinear relationships;
2.900 +for example, if the XOR of two variables predicts the target, conditional entropy would notice,
2.901 +whereas linear methods would not.
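A minimal sketch of this score, on hypothetical thresholded masks, shows the XOR case mentioned above, which linear methods cannot capture:

```python
from collections import Counter
from math import log2

def conditional_entropy(target, g1, g2):
    """H(target | g1, g2) in bits, for boolean masks over surface pixels.
    Zero means the gene pair fully determines area membership."""
    n = len(target)
    joint = Counter(zip(g1, g2, target))
    cond = Counter(zip(g1, g2))
    h = 0.0
    for (a, b, t), c in joint.items():
        p_joint = c / n
        p_given = c / cond[(a, b)]
        h -= p_joint * log2(p_given)
    return h

# Hypothetical thresholded expression masks; the target area is the XOR
# of the two genes, a relationship invisible to each gene alone.
g1 = [0, 0, 1, 1, 0, 0, 1, 1]
g2 = [0, 1, 0, 1, 0, 1, 0, 1]
target = [a ^ b for a, b in zip(g1, g2)]
```

Here either gene alone leaves one full bit of uncertainty about the target mask, while the pair reduces the conditional entropy to zero.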
2.902 + Gradient similarity We noticed that the previous two scoring methods, which are pointwise,
2.903 +often found genes whose pattern of expression did not look similar in shape to the target region.
2.904 +For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern
2.905 +of expression which looked like it had a boundary whose shape is similar to the shape of the target
2.906 +region. We call this scoring method “gradient similarity”. The formula is:
2.907 +    Σ_{pixel ∈ pixels} cos(∠∇1 − ∠∇2) ⋅ ((|∇1| + |∇2|) / 2) ⋅ ((pixel_value1 + pixel_value2) / 2)
2.911 + where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the
2.912 +angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image
2.913 +i at the current pixel; and pixel_valuei is the value of the current pixel in image i.
2.914 + The intuition is that we want to see if the borders of the pattern in the two images are similar; if
2.915 +the borders are similar, then both images will have corresponding pixels with large gradients (be-
2.916 +cause this is a border) which are oriented in a similar direction (because the borders are similar).
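The score can be sketched in code as follows. This is a minimal pure-Python illustration on tiny hypothetical images, using central differences for the gradients; it is not our production implementation, which is in MATLAB:

```python
from math import atan2, hypot, cos

def gradients(img):
    """Central-difference gradient (dy, dx) at each interior pixel."""
    h, w = len(img), len(img[0])
    out = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            dx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            out[(y, x)] = (dy, dx)
    return out

def gradient_similarity(img1, img2):
    """Sum over pixels of cos(gradient angle difference) times the mean
    gradient magnitude times the mean pixel value, per the formula above."""
    g1, g2 = gradients(img1), gradients(img2)
    score = 0.0
    for (y, x), (dy1, dx1) in g1.items():
        dy2, dx2 = g2[(y, x)]
        m1, m2 = hypot(dy1, dx1), hypot(dy2, dx2)
        if m1 == 0.0 or m2 == 0.0:
            continue  # no border at this pixel in one of the images
        angle_diff = atan2(dy1, dx1) - atan2(dy2, dx2)
        score += (cos(angle_diff) * (m1 + m2) / 2.0
                  * (img1[y][x] + img2[y][x]) / 2.0)
    return score

# Hypothetical 4x4 expression images:
a = [[0, 0, 1, 1]] * 4   # vertical border down the middle
b = [[0, 0, 1, 1]] * 4   # same border, same orientation
c = [[1, 1, 0, 0]] * 4   # same border location, opposite orientation
```

Images a and b have coinciding, similarly oriented borders and score positively; a and c share a border location but with opposing gradients, so the cosine term drives the score negative.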
2.917 + Gradient similarity provides information complementary to correlation
2.918 + To show that gradient similarity can provide useful information that cannot be detected via
2.919 +pointwise analyses, consider Fig. 3. The pointwise method in the top row identifies genes which
2.920 +express more strongly in AUD than outside of it; its weakness is that this includes many areas
2.921 +which don’t have a salient border matching the areal border. The geometric method identifies
2.922 +genes whose salient expression border seems to partially line up with the border of AUD; its
2.923 +weakness is that this includes genes which don’t express over the entire area.
2.924 + Areas which can be identified by single genes Using gradient similarity, we have already
2.925 +found single genes which roughly identify some areas and groupings of areas. For each of these
2.926 +areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet
2.927 +cross-verified these genes in other atlases.
2.928 + In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT
2.929 +(anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal),
2.930 +ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory).
2.931 + These results validate our expectation that the ABA dataset can be exploited to find marker
2.932 +genes for many cortical areas, while also validating the relevancy of our new scoring method,
2.933 +gradient similarity.
2.935 +
2.936 +
2.937 +
2.938 +
2.939 +
2.940 Figure 5: From left to right and top
2.941 to bottom, single genes which roughly
2.942 identify areas SS (somatosensory pri-
2.943 -mary + supplemental), SSs (supple-
2.944 +mary + supplemental), SSs (supple-
2.945 mental somatosensory), PIR (piriform),
2.946 -FRP (frontal pole), RSP (retrosple-
2.947 -nial), COApm (Cortical amygdalar, pos-
2.948 -terior part, medial zone). Grouping
2.949 -some areas together, we have also
2.950 -found genes to identify the groups
2.951 +FRP (frontal pole), RSP (retrosplenial),
2.952 +COApm (Cortical amygdalar, poste-
2.953 +rior part, medial zone). Grouping
2.954 +some areas together, we have also
2.955 +found genes to identify the groups
2.956 ACA+PL+ILA+DP+ORB+MO (anterior
2.957 cingulate, prelimbic, infralimbic, dor-
2.958 sal peduncular, orbital, motor), poste-
2.959 -rior and lateral visual (VISpm, VISpl,
2.960 +rior and lateral visual (VISpm, VISpl,
2.961 VISl, VISp; posteromedial, posterolat-
2.962 -eral, lateral, and primary visual; the
2.963 +eral, lateral, and primary visual; the
2.964 posterior and lateral visual area is dis-
2.965 tinguished from its neighbors, but not
2.966 from the entire rest of the cortex). The
2.967 -genes are Pitx2, Aldh1a2, Ppfibp1,
2.968 -Slco1a5, Tshz2, Trhr, Col12a1, Ets1. We have applied the following dimensionality reduction algorithms
2.969 - to reduce the dimensionality of the gene expression profile associ-
2.970 - ated with each pixel: Principal Components Analysis (PCA), Simple
2.971 - PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian
2.972 - eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Em-
2.973 - bedding, Fast Maximum Variance Unfolding, Non-negative Matrix Fac-
2.974 - torization (NNMF). Space constraints prevent us from showing many of
2.975 - the results, but as a sample, PCA, NNMF, and landmark Isomap are
2.976 - shown in the first, second, and third rows of Figure 6.
2.977 - After applying the dimensionality reduction, we ran clustering algo-
2.978 - rithms on the reduced data. To date we have tried k-means and spec-
2.979 - tral clustering. The results of k-means after PCA, NNMF, and landmark
2.980 - Isomap are shown in the last row of Figure 6. To compare, the leftmost
2.981 - picture on the bottom row of Figure 6 shows some of the major subdivi-
2.982 - sions of cortex. These results clearly show that different dimensionality
2.983 - reduction techniques capture different aspects of the data and lead to
2.984 - different clusterings, indicating the utility of our proposal to produce a
2.985 - detailed comparison of these techniques as applied to the domain of
2.986 - genomic anatomy.
2.987 - Many areas are captured by clusters of genes We also clustered
2.988 - the genes using gradient similarity to see if the spatial regions defined
2.989 - by any clusters matched known anatomical regions. Figure 7 shows, for
2.990 - ten sample gene clusters, each cluster’s average expression pattern,
2.991 - compared to a known anatomical boundary. This suggests that it is
2.992 - worth attempting to cluster genes, and then to use the results to cluster
2.993 - pixels.
2.994 - The approach: what we plan to do
2.995 - Flatmap cortex and segment cortical layers
2.996 - There are multiple ways to flatten 3-D data into 2-D. We will compare
2.997 - mappings from manifolds to planes which attempt to preserve size
2.998 - (such as the one used by Caret[5]) with mappings which preserve an-
2.999 - gle (conformal maps). Our method will include a statistical test that
2.1000 - warns the user if the assumption of 2-D structure seems to be wrong.
2.1001 - We have not yet made use of radial profiles. While the radial pro-
2.1002 - files may be used “raw”, for laminar structures like the cortex another
2.1003 - strategy is to group together voxels in the same cortical layer; each sur-
2.1004 - face pixel would then be associated with one expression level per gene
2.1005 - per layer. We will develop a segmentation algorithm to automatically
2.1006 - identify the layer boundaries.
2.1007 - Develop algorithms that find genetic markers for anatomical re-
2.1008 - gions
2.1009 - Scoring measures and feature selection We will develop scoring
2.1010 - methods for evaluating how good individual genes are at marking ar-
2.1011 - eas. We will compare pointwise, geometric, and information-theoretic
2.1012 -_________________________________________
2.1013 - 105-fold cross-validation.
2.1014 -measures. We already developed one entirely new scoring method (gradient similarity), but we may develop
2.1015 -more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, con-
2.1016 -ditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such
2.1017 -as Student’s t-test, and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a
2.1018 -scoring measure on genes by taking the prediction error when using that gene to predict the target.
2.1019 -
2.1020 -
2.1021 -
2.1022 -
2.1023 +genes are Pitx2, Aldh1a2, Ppfibp1,
2.1024 +Slco1a5, Tshz2, Trhr, Col12a1, Ets1. Combinations of multiple genes are useful and
2.1025 + necessary for some areas
2.1026 + In Figure 4, we give an example of a cortical area
2.1027 + which is not marked by any single gene, but which can be
2.1028 + identified combinatorially. According to logistic regres-
2.1029 + sion, gene wwc1 is the best fit single gene for predicting
2.1030 + whether or not a pixel on the cortical surface belongs to
2.1031 + the motor area (area MO). The upper-left picture in Fig-
2.1032 + ure 4 shows wwc1’s spatial expression pattern over the
2.1033 + cortex. The lower-right boundary of MO is represented
2.1034 + reasonably well by this gene, but the gene overshoots
2.1035 + the upper-left boundary. This flattened 2-D representa-
2.1036 + tion does not show it, but the area corresponding to the
2.1037 + overshoot is the medial surface of the cortex. MO is only
2.1038 + found on the dorsal surface. Gene mtif2 is shown in the
2.1039 + upper-right. Mtif2 captures MO’s upper-left boundary, but
2.1040 + not its lower-right boundary. Mtif2 does not express very
2.1041 + much on the medial surface. By adding together the val-
2.1042 + ues at each pixel in these two figures, we get the lower-
2.1043 + left image. This combination captures area MO much
2.1044 + better than any single gene.
2.1045 + This shows that our proposal to develop a method to
2.1046 + find combinations of marker genes is both possible and
2.1047 + necessary.
2.1048 + Multivariate supervised learning
2.1049 + Forward stepwise logistic regression Logistic regres-
2.1050 + sion is a popular method for predictive modeling of cat-
2.1051 + egorical data. As a pilot run, for five cortical areas (SS,
2.1052 + AUD, RSP, VIS, and MO), we performed forward step-
2.1053 + wise logistic regression to find single genes, pairs of
2.1054 +                                                           genes, and triplets of genes which predict areal identity.
2.1055 + This is an example of feature selection integrated with
2.1056 + prediction using a stepwise wrapper. Some of the sin-
2.1057 + gle genes found were shown in various figures through-
2.1058 + out this document, and Figure 4 shows a combination of
2.1059 + genes which was found.
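The stepwise wrapper can be sketched generically. In the illustration below, a simple threshold-accuracy score stands in for the logistic-regression deviance we actually used, and the gene vectors are hypothetical:

```python
def stepwise_select(genes, target, k, score):
    """Greedy forward selection: at each step, add the gene whose
    inclusion yields the best score against the target mask."""
    chosen, remaining = [], list(genes)
    for _ in range(k):
        best = max(remaining, key=lambda g: score(chosen + [g], target))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def sum_threshold_accuracy(subset, target):
    """Stand-in score: classify each pixel by thresholding the summed
    expression of the chosen genes at its midpoint; report accuracy."""
    sums = [sum(col) for col in zip(*subset)]
    thresh = (max(sums) + min(sums)) / 2.0
    preds = [s > thresh for s in sums]
    return sum(p == t for p, t in zip(preds, target)) / len(target)

# Hypothetical data: the target area needs both genes (each alone
# covers only half of it), as in the wwc1 + mtif2 example.
target = [True, True, False, False]
g_a = [1.0, 0.0, 0.0, 0.0]      # marks half the area
g_b = [0.0, 1.0, 0.0, 0.0]      # marks the other half
g_noise = [0.5, 0.4, 0.6, 0.5]  # uninformative
picked = stepwise_select([g_a, g_b, g_noise], target, 2, sum_threshold_accuracy)
```

The wrapper is agnostic to the scorer, so the same loop supports logistic regression, conditional entropy, or any of the other measures discussed above.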
2.1060 + SVM on all genes at once
2.1061 + In order to see how well one can do when looking at
2.1062 + all genes at once, we ran a support vector machine to
2.1063 + classify cortical surface pixels based on their gene ex-
2.1064 + pression profiles. We achieved classification accuracy of
2.1065 + about 81%11. However, as noted above, a classifier that
2.1066 +____________________________________
2.1067 + 115-fold cross-validation.
2.1069 +
2.1070 + looks at all the genes at once isn’t as practically useful
2.1071 + as a classifier that uses only a few genes.
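The 5-fold cross-validation protocol behind the reported accuracy can be sketched as follows. A nearest-centroid classifier stands in here for the SVM, and the data are hypothetical:

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Stand-in classifier: assign x to the class whose training
    centroid is nearest in squared Euclidean distance."""
    cents = {}
    for label in set(train_y):
        rows = [r for r, l in zip(train_X, train_y) if l == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return min(cents,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(x, cents[l])))

def cross_val_accuracy(X, y, k=5):
    """k-fold cross-validation: hold out every k-th instance, train on
    the rest, and report overall held-out accuracy."""
    correct = 0
    for fold in range(k):
        test_idx = set(range(fold, len(X), k))
        tr_X = [x for i, x in enumerate(X) if i not in test_idx]
        tr_y = [l for i, l in enumerate(y) if i not in test_idx]
        for i in test_idx:
            correct += nearest_centroid_predict(tr_X, tr_y, X[i]) == y[i]
    return correct / len(X)

# Hypothetical, cleanly separable expression profiles for two areas:
X = [[1.0, 0.0]] * 10 + [[0.0, 1.0]] * 10
y = ["SS"] * 10 + ["MO"] * 10
```

Because every pixel is scored exactly once while held out of training, the resulting accuracy estimate is not inflated by overfitting to the training pixels.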
2.1072 + Data-driven redrawing of the cortical map
2.1073 +We have applied the following dimensionality reduction algorithms to reduce the dimensionality
2.1074 +of the gene expression profile associated with each pixel: Principal Components Analysis (PCA),
2.1075 +Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local
2.1076 +Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding,
2.1077 +Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of
2.1078 +the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second,
2.1079 +and third rows of Figure 6.
2.1080 + After applying the dimensionality reduction, we ran clustering algorithms on the reduced data.
2.1081 +To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF,
2.1082 +and landmark Isomap are shown in the bottom row of Figure 6. To compare, the leftmost picture
2.1083 +on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results show
2.1084 +that different dimensionality reduction techniques capture different aspects of the data and lead
2.1085 +to different clusterings, indicating the utility of our proposal to produce a detailed comparison of
2.1086 +these techniques as applied to the domain of genomic anatomy.
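The clustering stage can be sketched as plain k-means (Lloyd's algorithm) on the reduced per-pixel feature vectors. The coordinates below are hypothetical, and a deterministic seed replaces the usual random initialization:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on reduced per-pixel feature vectors;
    returns a cluster label per point."""
    centroids = [list(p) for p in points[:k]]   # deterministic seed
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared distance.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Update step: move each centroid to its members' mean.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

# Hypothetical reduced coordinates (2 dimensions) for 6 surface pixels:
reduced = [[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
           [5.0, 5.1], [5.1, 5.0], [5.0, 5.0]]
labels = kmeans(reduced, 2)
```

Since the dimensionality reduction step determines the geometry that k-means partitions, different reductions of the same expression data naturally yield the different clusterings seen in Figure 6.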
2.1087 + Many areas are captured by clusters of genes We also clustered the genes using gradient
2.1088 +similarity to see if the spatial regions defined by any clusters matched known anatomical regions.
2.1089 +Figure 7 shows, for ten sample gene clusters, each cluster’s average expression pattern, com-
2.1090 +pared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes,
2.1091 +and then to use the results to cluster pixels.
2.1092 + Our plan: what remains to be done
2.1093 + Flatmap cortex and segment cortical layers
2.1094 +There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to
2.1095 +planes which attempt to preserve size (such as the one used by Caret[7]) with mappings which
2.1096 +preserve angle (conformal maps). We will also develop a segmentation algorithm to automatically
2.1097 +identify the layer boundaries.
2.1098 + Develop algorithms that find genetic markers for anatomical regions
2.1099 +Scoring measures and feature selection We will develop scoring methods for evaluating how
2.1100 +good individual genes are at marking areas. We will compare pointwise, geometric, and information-
2.1101 +theoretic measures. We already developed one entirely new scoring method (gradient similarity),
2.1102 +but we may develop more. Scoring measures that we will explore will include the L1 norm, cor-
2.1103 +relation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice
2.1104 +similarity, Hough transform, and statistical tests such as Student’s t-test and the Mann-Whitney
2.1105 +U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by
2.1106 +taking the prediction error when using that gene to predict the target.
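Two of the simpler set-overlap measures on this list, Jaccard and Dice similarity, can be sketched directly on boolean masks (hypothetical data):

```python
def jaccard(a, b):
    """Jaccard similarity between two boolean masks: |A∩B| / |A∪B|."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

def dice(a, b):
    """Dice similarity between two boolean masks: 2|A∩B| / (|A|+|B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

# Hypothetical masks over 6 surface pixels:
area = [1, 1, 1, 0, 0, 0]           # target area
gene = [1, 1, 0, 0, 0, 1]           # thresholded expression of one gene
```

Both measures compare a thresholded expression mask directly against the target area's mask, so unlike correlation they are insensitive to expression levels far from the boundary.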
2.1107 + Using some combination of these measures, we will develop a procedure to find single marker
2.1108 +genes for anatomical regions: for each cortical area, we will rank the genes by their ability to
2.1109 +delineate that area. We will quantitatively compare the list of single genes generated by our
2.1110 +method to the lists generated by methods which are mentioned in Related Work.
2.1112 +
2.1113 +
2.1114 Figure 6: First row: the first 6 reduced dimensions, using PCA. Sec-
2.1115 -ond row: the first 6 reduced dimensions, using NNMF. Third row:
2.1116 -the first six reduced dimensions, using landmark Isomap. Bottom
2.1117 -row: examples of kmeans clustering applied to reduced datasets
2.1118 -to find 7 clusters. Left: 19 of the major subdivisions of the cortex.
2.1119 -Second from left: PCA. Third from left: NNMF. Right: Landmark
2.1120 -Isomap. Additional details: In the third and fourth rows, 7 dimen-
2.1121 -sions were found, but only 6 displayed. In the last row: for PCA,
2.1122 -50 dimensions were used; for NNMF, 6 dimensions were used; for
2.1123 -landmark Isomap, 7 dimensions were used. Using some combination of these mea-
2.1124 - sures, we will develop a procedure to
2.1125 - find single marker genes for anatomical
2.1126 - regions: for each cortical area, we will
2.1127 - rank the genes by their ability to delineate
2.1128 - each area. We will quantitatively compare
2.1129 - the list of single genes generated by our
2.1130 - method to the lists generated by previous
2.1131 - methods which are mentioned in Aim 1 Re-
2.1132 - lated Work.
-    Some cortical areas have no single marker genes but can be identified by combinatorial
-coding. This requires multivariate scoring measures and feature selection procedures. Many of
-the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student’s t,
-and Mann-Whitney U, are univariate. We will extend these scoring measures for use in
-multivariate feature selection, that is, for scoring how well combinations of genes, rather than
-individual genes, can distinguish a target area. There are existing multivariate forms of some of
-the univariate scoring measures; for example, Hotelling’s T-square is a multivariate analog of
-Student’s t.
-    We will develop a feature selection procedure for choosing the best small set of marker
-genes for a given anatomical area. In addition to using the scoring measures that we develop,
-we will also explore (a) feature selection using a stepwise wrapper over “vanilla” classifiers such
-as logistic regression, (b) supervised learning methods such as decision trees which
-incrementally/greedily combine single gene markers into sets, and (c) supervised learning
-methods which use soft constraints to minimize the number of features used, such as sparse
-support vector machines (SVMs).
-    Since errors of displacement and of shape may cause genes and target areas to match less
-well than they should, we will consider the robustness of feature selection methods in the
-presence of error. Some of these methods, such as the Hough transform, are designed to be
-robust to error, but many are not. We will consider extensions to scoring measures that may
-improve their robustness; for example, a wrapper that runs a scoring method on small
-displacements and distortions of the data adds robustness to registration error at the expense
-of computation time.
-    An area may be difficult to identify because the boundaries are misdrawn in the atlas, or
-because the shape of the natural domain of gene expression corresponding to the area is
-different from the shape of the area as recognized by anatomists. We will extend our procedure
-to handle difficult areas by combining areas or redrawing their boundaries. We will develop
-extensions to our procedure which (a) detect when a difficult area could be fit if its boundary
-were redrawn slightly[11], and (b) detect when a difficult area could be combined with adjacent
-areas to create a larger area which can be fit.
-_________________________________________
-    [11] Not just any redrawing is acceptable, only those which appear to be justified as a natural
-spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to
-detect “natural spatial domains of gene expression” in a data-driven fashion means that the
-methods of Aim 2 might be useful in achieving Aim 1, as well – particularly discriminative
-dimensionality reduction.
-    A future publication on the method that we develop in Aim 1 will review the scoring
-measures and quantitatively compare their performance, in order to provide a foundation for
-future research on methods of marker gene finding. We will measure the robustness of the
-scoring measures as well as their absolute performance on our dataset.
-    Classifiers We will explore and compare different classifiers. As noted above, this activity is
-not separate from the previous one, because some supervised learning algorithms include
-feature selection, and any classifier can be combined with a stepwise wrapper for use as a
-feature selection method. We will explore logistic regression (including spatial models[13]),
-decision trees[12], sparse SVMs, generative mixture models (including naive Bayes), kernel
-density estimation, instance-based learning methods (such as k-nearest neighbor), genetic
-algorithms, and artificial neural networks.
2.1183 -Develop algorithms to suggest a division of a structure into anatomical parts
2.1184 -
-Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity.
-Region boundaries for the region that most matches each prototype are overlaid.
-
-    Dimensionality reduction on gene expression profiles We have already described the
-application of ten dimensionality reduction algorithms for the purpose of replacing the gene
-expression profiles, which are vectors of about 4000 gene expression levels, with a smaller
-number of features. We plan to further explore and interpret these results, as well as to apply
-other unsupervised learning algorithms, including independent components analysis,
-self-organizing maps, and generative models such as deep Boltzmann machines. We will
-explore ways to quantitatively compare the relevance of the different dimensionality reduction
-methods for identifying cortical areal boundaries.
2.1202 -Dimensionality reduction on pixels Instead of applying dimensionality reduction to the gene expression
2.1203 -profiles, the same techniques can be applied to the pixels. It is possible that the features generated in
2.1204 -this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.
2.1205 -Clustering and segmentation on pixels We will explore clustering and segmentation algorithms in order to
2.1206 -segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving[7], recursive division
2.1207 -clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transforma-
2.1208 -tions, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with
2.1209 -various linkage functions. These methods can be combined with dimensionality reduction.
2.1210 -Clustering on genes We have already shown that the procedure of clustering genes according to gradient
2.1211 -similarity, and then creating an averaged prototype of each cluster’s expression pattern, yields some spatial
2.1212 -patterns which match cortical areas. We will further explore the clustering of genes.
2.1213 -In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful
2.1214 -as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then
2.1215 -replacing their expression levels with a single average expression level, thereby removing some redundancy from
2.1216 -the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality
2.1217 -reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would
2.1218 -help or hurt the ultimate goal of identifying interesting spatial regions.
-    Co-clustering There are some algorithms which simultaneously incorporate clustering on
-instances and on features (in our case, genes and pixels), for example, IRM[9]. These are
-called co-clustering or biclustering algorithms.
-_________________________________________
-    [12] Actually, we have already begun to explore decision trees. For each cortical area, we
-have used the C4.5 algorithm to find a decision tree for that area. We achieved good
-classification accuracy on our training set, but the number of genes that appeared in each tree
-was too large. We plan to implement a pruning procedure to generate trees that use fewer
-genes.
2.1226 -Radial profiles We will explore the use of the radial profile of gene expression under each pixel.
2.1227 -Compare different methods In order to tell which method is best for genomic anatomy, for each experimental
2.1228 -method we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen
2.1229 -Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings
2.1230 -are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
2.1231 -Discriminative dimensionality reduction In addition to using a purely data-driven approach to identify
2.1232 -spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number
2.1233 -of features, even if those features are chosen by using knowledge of the regions. For example, linear discriminant
2.1234 -analysis could be used as a dimensionality reduction technique in order to identify a few features which are the
2.1235 -best linear summary of gene expression profiles for the purpose of discriminating between regions. This reduced
2.1236 -feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the
2.1237 -reference atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
2.1238 -Apply the new methods to the cortex
2.1239 -Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify
2.1240 -that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
2.1241 -Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible for an unrelated
2.1242 -combination of genes to seem to identify an area when in fact it is only coincidence. There are two ways we will
2.1243 -validate our marker genes to guard against this. First, we will confirm that putative combinations of marker genes
2.1244 -express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene
2.1245 -expression datasets such as EMAGE, GeneAtlas, and GENSAT[6].
2.1246 -Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify
2.1247 -and explain how the statistical structure in the gene expression data led to any unexpected or interesting features
2.1248 -of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of
2.1249 -areas, which are discovered.
2.1250 -____________________________________________________________________________
2.1251 - Timeline and milestones
2.1252 -Finding marker genes
2.1253 -September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers
2.1254 -November 2009 (milestone): Have completed construction of a flatmapped, cortical dataset with information
2.1255 -for each layer
2.1256 -October 2009-April 2010: Develop scoring and supervised learning methods.
2.1257 -January 2010 (milestone): Submit a publication on single marker genes for cortical areas
2.1258 -February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Extend tech-
2.1259 -niques for robustness. Compare the performance of techniques. Validate marker genes. Prepare software
2.1260 -toolbox for Aim 1.
2.1261 -June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release toolbox.
2.1262 -July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a
2.1263 -small number of marker genes that can, in combination, define most of the areas at once
2.1264 - Revealing new ways to parcellate a structure into regions
2.1265 -June 2010-March 2011: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt
2.1266 -clustering algorithms to use radial profile information. Compare the performance of techniques.
2.1267 -March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release toolbox.
2.1268 -February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex,
2.1269 -interpret the results. Prepare software toolbox for Aim 2.
2.1270 -May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in
2.1271 -Aim 2
2.1272 -May-August 2011: Revisit Aim 1 to see if what was learned during Aim 2 can improve the methods for Aim 1.
2.1273 -Possibly submit another paper.
2.1274 -Bibliography & References Cited
2.1275 -[1]Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking
2.1276 -Approach to Parcellation of the Cerebral Cortex, volume 3749/2005 of Lecture Notes in Computer
2.1277 -Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
2.1278 -[2]J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification
2.1279 -of cortical areas. NeuroImage, 21(1):15–26, 2004.
2.1280 -[3]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe
2.1281 -Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome.
2.1282 -PLoS Comput Biol, 1(4):e41, 2005.
2.1283 -[4]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy,
2.1284 -Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of
2.1285 -expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August
2.1286 -2007.
2.1287 -[5]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite
2.1288 -for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association:
2.1289 -JAMIA, 8(5):443–59, 2001. PMID: 11522765.
2.1290 -[6]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Scham-
2.1291 -bra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A
2.1292 -gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature,
2.1293 -425(6961):917–925, October 2003.
2.1294 -[7]Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan,
2.1295 -David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying distinct sets of genes with
2.1296 -similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
2.1297 -[8]Jano Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Ex-
2.1298 -pression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361.
2.1299 -Springer Berlin Heidelberg, 2008.
2.1300 -[9]C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts with an infinite
2.1301 -relational model. In AAAI, 2006.
2.1302 -[10]F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical
2.1303 -fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
2.1304 -[11]Ed S. Lein, Michael J. Hawrylycz, Nancy Ao, Mikael Ayres, Amy Bensinger, Amy Bernard, Andrew F. Boe,
2.1305 -Mark S. Boguski, Kevin S. Brockway, Emi J. Byrnes, Lin Chen, Li Chen, Tsuey-Ming Chen, Mei Chi Chin,
2.1306 -Jimmy Chong, Brian E. Crook, Aneta Czaplinska, Chinh N. Dang, Suvro Datta, Nick R. Dee, Aimee L.
2.1307 -Desaki, Tsega Desta, Ellen Diep, Tim A. Dolbeare, Matthew J. Donelan, Hong-Wei Dong, Jennifer G.
2.1308 -Dougherty, Ben J. Duncan, Amanda J. Ebbert, Gregor Eichele, Lili K. Estin, Casey Faber, Benjamin A.
2.1309 -Facer, Rick Fields, Shanna R. Fischer, Tim P. Fliss, Cliff Frensley, Sabrina N. Gates, Katie J. Glattfelder,
2.1310 -Kevin R. Halverson, Matthew R. Hart, John G. Hohmann, Maureen P. Howell, Darren P. Jeung, Rebecca A.
2.1311 -Johnson, Patrick T. Karr, Reena Kawal, Jolene M. Kidney, Rachel H. Knapik, Chihchau L. Kuan, James H.
2.1312 -Lake, Annabel R. Laramee, Kirk D. Larsen, Christopher Lau, Tracy A. Lemon, Agnes J. Liang, Ying Liu,
2.1313 -Lon T. Luong, Jesse Michaels, Judith J. Morgan, Rebecca J. Morgan, Marty T. Mortrud, Nerick F. Mosqueda,
2.1314 -Lydia L. Ng, Randy Ng, Geralyn J. Orta, Caroline C. Overly, Tu H. Pak, Sheana E. Parry, Sayan D. Pathak,
2.1315 -Owen C. Pearson, Ralph B. Puchalski, Zackery L. Riley, Hannah R. Rockett, Stephen A. Rowland, Joshua J.
2.1316 -Royall, Marcos J. Ruiz, Nadia R. Sarno, Katherine Schaffnit, Nadiya V. Shapovalova, Taz Sivisay, Clif-
2.1317 -ford R. Slaughterbeck, Simon C. Smith, Kimberly A. Smith, Bryan I. Smith, Andy J. Sodt, Nick N. Stewart,
2.1318 -Kenda-Ruth Stumpf, Susan M. Sunkin, Madhavi Sutram, Angelene Tam, Carey D. Teemer, Christina Thaller,
2.1319 -Carol L. Thompson, Lee R. Varnam, Axel Visel, Ray M. Whitlock, Paul E. Wohnoutka, Crissa K. Wolkey,
2.1320 -Victoria Y. Wong, Matthew Wood, Murat B. Yaylaoglu, Rob C. Young, Brian L. Youngstrom, Xu Feng Yuan,
2.1321 -Bin Zhang, Theresa A. Zwingman, and Allan R. Jones. Genome-wide atlas of gene expression in the adult
2.1322 -mouse brain. Nature, 445(7124):168–176, 2007.
2.1323 -[12]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Su-
2.1324 -san M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann,
2.1325 -David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas
2.1326 -of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
2.1327 -[13]Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Com-
2.1328 -putational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
2.1329 -[14]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2
2.1330 -edition, July 2001.
2.1331 -[15]A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and
2.1332 -K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Em-
2.1333 -bryology, 210(5):373–386, December 2005.
2.1334 -[16]Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical
2.1335 -analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
2.1336 -[17]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
2.1337 -[18]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T.
2.1338 -Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H.
2.1339 -Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the
2.1340 -hippocampus. Neuron, 60(6):1010–1021, December 2008.
2.1341 -[19]Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton,
2.1342 -Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen.
2.1343 -EMAGE edinburgh mouse atlas of gene expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–
2.1344 -865, 2008.
2.1345 -[20]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa
2.1346 -Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood,
2.1347 -Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer
2.1348 -Bork, Marc Botcherby, Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John
2.1349 -Burton, Jonathan Butler, Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T
2.1350 -Chinwalla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R
2.1351 -Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy
2.1352 -Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens,
2.1353 -Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes,
2.1354 -Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N
2.1355 -Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo
2.1356 -Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green,
2.1357 -Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki,
2.1358 -LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard,
2.1359 -Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy,
2.1360 -Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn
2.1361 -Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David
2.1362 -Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd,
2.1363 -Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer,
2.1364 -Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim,
2.1365 -Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery,
2.1366 -Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson,
2.1367 -Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen
2.1368 -Oliver, Emma Overton-Larty, Lior Pachter, Gens Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner,
2.1369 -Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail,
2.1370 -Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos,
2.1371 -Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven
2.1372 -Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer,
2.1373 -Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles
2.1374 -Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp,
2.1375 -Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie
2.1376 -Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond
2.1377 -Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Win-
2.1378 -ter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and
2.1379 -Eric S Lander. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–
2.1380 -62, December 2002. PMID: 12466850.
2.1381 -
2.1382 -
2.1383 +ond row: the first 6 reduced dimensions, using NNMF. Third row: the
2.1384 +first six reduced dimensions, using landmark Isomap. Bottom row:
2.1385 +examples of k-means clustering applied to reduced datasets to find
2.1386 +7 clusters. Left: 19 of the major subdivisions of the cortex. Sec-
2.1387 +ond from left: PCA. Third from left: NNMF. Right: Landmark Isomap.
2.1388 +Additional details: In the third and fourth rows, 7 dimensions were
2.1389 +found, but only 6 displayed. In the last row: for PCA, 50 dimensions
2.1390 +were used; for NNMF, 6 dimensions were used; for landmark Isomap,
+7 dimensions were used.
+
+    Some cortical areas have no single marker genes but can be identified by combinatorial
+coding. This requires multivariate scoring measures and feature selection procedures. Many of
+the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student’s t,
+and Mann-Whitney U, are univariate. We will extend these scoring measures for use in
+multivariate feature selection, that is, for scoring how well combinations of genes, rather than
+individual genes, can distinguish a target area. There are existing multivariate forms of some of
+the univariate scoring measures; for example, Hotelling’s T-square is a multivariate analog of
+Student’s t.
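Hotelling’s T-square, named above, can be computed directly from the pixels inside and outside a candidate area. This is a minimal numpy sketch of the standard two-sample statistic; the data layout (pixels as rows, the candidate gene combination as columns) is our assumption:

```python
import numpy as np

def hotelling_t2(x_in, x_out):
    """Two-sample Hotelling's T^2: scores how well a *set* of genes jointly
    separates pixels inside a target area from pixels outside it.
    Rows are pixels, columns are the genes in the candidate combination."""
    n1, n2 = len(x_in), len(x_out)
    d1 = x_in - x_in.mean(axis=0)
    d2 = x_out - x_out.mean(axis=0)
    pooled = (d1.T @ d1 + d2.T @ d2) / (n1 + n2 - 2)   # pooled covariance
    diff = x_in.mean(axis=0) - x_out.mean(axis=0)
    return float(n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(pooled, diff))
```

For a single gene this reduces (up to the squared t statistic) to the univariate case, which is what makes it a natural multivariate extension here.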
2.1418 + We will develop a fea-
2.1419 +ture selection procedure for choosing the best small set of marker genes for a given anatomical
2.1420 +area. In addition to using the scoring measures that we develop, we will also explore (a) feature
2.1421 +selection using a stepwise wrapper over “vanilla” classifiers such as logistic regression, (b) super-
2.1422 +vised learning methods such as decision trees which incrementally/greedily combine single gene
2.1423 +markers into sets, and (c) supervised learning methods which use soft constraints to minimize
2.1424 +the number of features used, such as sparse support vector machines (SVMs).
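A stepwise wrapper of the kind described in (a), in its simplest greedy-forward form, might look like the following sketch. `mean_separation` is a toy stand-in for whichever multivariate scoring measure is being tested; all names here are ours:

```python
import numpy as np

def forward_select(X, y, score_fn, k=3):
    """Greedy forward selection: start from the best single gene and repeatedly
    add the gene that most improves the (multivariate) score of the chosen set."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda g: score_fn(X[:, chosen + [g]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

def mean_separation(Xs, y):
    """Toy multivariate score: squared distance between class means, scaled
    by total spread; a placeholder for the measures discussed above."""
    diff = Xs[y == 1].mean(axis=0) - Xs[y == 0].mean(axis=0)
    return float((diff ** 2).sum() / (Xs.std(axis=0).sum() + 1e-9))
```

The same loop wraps any classifier by using its held-out accuracy as `score_fn`.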
2.1425 + Since errors of displacement and of shape may cause genes and target areas to match less
2.1426 +than they should, we will consider the robustness of feature selection methods in the presence of
2.1427 +error. Some of these methods, such as the Hough transform, are designed to be resistant in the
2.1428 +presence of error, but many are not.
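One simple way to probe that robustness, offered here only as an illustration, is to re-evaluate a scoring measure on small random translations of the expression image and compare the mean and worst-case scores to the unshifted score:

```python
import numpy as np

def jaccard(expr_map, area_mask, thresh=0.5):
    """Jaccard overlap of thresholded expression with the area mask."""
    on = expr_map > thresh
    union = (on | area_mask).sum()
    return (on & area_mask).sum() / union if union else 0.0

def score_under_jitter(score_fn, expr_map, area_mask, max_shift=2, n=20, seed=0):
    """Re-evaluate a scoring measure on small random translations of the
    expression image; the gap between the unshifted and worst-case score
    indicates sensitivity to registration error."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        scores.append(score_fn(np.roll(expr_map, (dy, dx), axis=(0, 1)), area_mask))
    return float(np.mean(scores)), float(np.min(scores))
```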
+    An area may be difficult to identify because the boundaries are misdrawn in the atlas, or
+because the shape of the natural domain of gene expression corresponding to the area is
+different from the shape of the area as recognized by anatomists. We will develop extensions to
+our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn
+slightly[12], and (b) detect when a difficult area could be combined with adjacent areas to
+create a larger area which can be fit.
+____________________________________
+    [12] Not just any redrawing is acceptable, only those which appear to be justified as a
+natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the
+need to detect “natural spatial domains of gene expression” in a data-driven fashion means that
+the methods of Goal 2 might be useful in achieving Goal 1, as well – particularly discriminative
+dimensionality reduction.
2.1441 + A future publication on the method that we develop in Goal 1 will review the scoring measures
2.1442 +and quantitatively compare their performance in order to provide a foundation for future research
2.1443 +on methods of marker gene finding. We will measure the robustness of the scoring measures as
2.1444 +well as their absolute performance on our dataset.
2.1445 + Develop algorithms to suggest a division of a structure into anatomical parts
2.1446 +
+Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity.
+Region boundaries for the region that most matches each prototype are overlaid.
+
+    Dimensionality reduction on gene expression profiles We have already described the
+application of ten dimensionality reduction algorithms for the purpose of replacing the gene
+expression profiles, which are vectors of about 4000 gene expression levels, with a smaller
+number of features. We plan to further explore and interpret these results, as well as to apply
+other unsupervised learning algorithms, including independent components analysis, self-
2.1462 +organizing maps, and generative models such as deep Boltzmann machines. We will explore
2.1463 +ways to quantitatively compare the relevance of the different dimensionality reduction methods for
2.1464 +identifying cortical areal boundaries.
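As a concrete reference point, the PCA variant of this step reduces to a few lines of numpy; the other reduction methods slot into the same pixels-by-genes interface:

```python
import numpy as np

def pca_reduce(profiles, k):
    """Project per-pixel expression profiles (pixels x genes) onto the top-k
    principal components; a minimal stand-in for the PCA step above."""
    centered = profiles - profiles.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T        # pixels x k reduced features
```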
2.1465 + Dimensionality reduction on pixels Instead of applying dimensionality reduction to the gene
2.1466 +expression profiles, the same techniques can be applied to the pixels. It is possible that
2.1467 +the features generated in this way by some dimensionality reduction techniques will directly corre-
2.1468 +spond to interesting spatial regions.
2.1469 + Clustering and segmentation on pixels We will explore clustering and image segmentation
2.1470 +algorithms in order to segment the pixels into regions. We will explore k-means, spectral cluster-
2.1471 +ing, gene shaving[9], recursive division clustering, multivariate generalizations of edge detectors,
2.1472 +multivariate generalizations of watershed transformations, region growing, active contours, graph
2.1473 +partitioning methods, and recursive agglomerative clustering with various linkage functions. These
2.1474 +methods can be combined with dimensionality reduction.
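The plainest of these options, k-means via Lloyd’s algorithm, can serve as a baseline sketch; the cluster count and input features are placeholders:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm, the simplest of the pixel-clustering options
    listed above. X holds one row per pixel (e.g. a reduced expression
    profile); returns a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pixel to its nearest center, then move the centers
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```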
2.1475 + Clustering on genes We have already shown that the procedure of clustering genes according
2.1476 +to gradient similarity, and then creating an averaged prototype of each cluster’s expression pattern,
2.1477 +yields some spatial patterns which match cortical areas (Figure 7). We will further explore the
2.1478 +clustering of genes.
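The proposal does not spell out its exact “gradient similarity” formula, so the following is one plausible illustrative form: magnitude-weighted cosine agreement between the spatial gradients of two expression maps. Pairwise scores of this kind yield the similarity matrix on which genes can then be clustered:

```python
import numpy as np

def gradient_similarity(map_a, map_b):
    """Magnitude-weighted cosine agreement between the spatial gradients of
    two 2-D expression maps. NOTE: an illustrative stand-in; the proposal's
    precise "gradient similarity" definition may differ."""
    gy_a, gx_a = np.gradient(map_a)
    gy_b, gx_b = np.gradient(map_b)
    dot = gx_a * gx_b + gy_a * gy_b
    weight = np.hypot(gx_a, gy_a) * np.hypot(gx_b, gy_b)
    return float(dot.sum() / (weight.sum() + 1e-12))
```

The score is +1 for identical maps and -1 for maps whose expression gradients point in opposite directions everywhere.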
2.1479 + In addition to using the cluster expression prototypes directly to identify spatial regions, this
2.1480 +might be useful as a component of dimensionality reduction. For example, one could imagine
2.1481 +clustering similar genes and then replacing their expression levels with a single average expression
2.1485 +
2.1486 +level, thereby removing some redundancy from the gene expression profiles. One could then
2.1487 +perform clustering on pixels (possibly after a second dimensionality reduction step) in order to
2.1488 +identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt
2.1489 +the ultimate goal of identifying interesting spatial regions.
2.1490 + Co-clustering We will explore some algorithms which simultaneously incorporate clustering
2.1491 +on instances and on features (in our case, pixels and genes), for example, IRM[11]. These are
2.1492 +called co-clustering or biclustering algorithms.
2.1493 + Compare different methods In order to tell which method is best for genomic anatomy, for
2.1494 +each experimental method we will compare the cortical map found by unsupervised learning to a
2.1495 +cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics
2.1496 +that purport to measure how similar two clusterings are, such as Jaccard, Rand index, Fowlkes-
2.1497 +Mallows, variation of information, Larsen, Van Dongen, and others.
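Of these, the Rand index is the easiest to state: the fraction of pixel pairs on which the two clusterings agree about “same cluster or not”. A direct, quadratic-time sketch:

```python
from itertools import combinations

def rand_index(u, v):
    """Rand index between two clusterings of the same pixels: the fraction
    of pixel pairs on which the clusterings agree about whether the pair
    is together or apart. (Adjusted variants correct this for chance.)"""
    pairs = list(combinations(range(len(u)), 2))
    agree = sum((u[i] == u[j]) == (v[i] == v[j]) for i, j in pairs)
    return agree / len(pairs)
```

Because it compares pairs rather than labels, it is invariant to relabeling the clusters, which is exactly what is needed when comparing a learned map to the reference atlas.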
2.1498 + Discriminative dimensionality reduction In addition to using a purely data-driven approach
2.1499 +to identify spatial regions, it might be useful to see how well the known regions can be recon-
2.1500 +structed from a small number of features, even if those features are chosen by using knowledge of
2.1501 +the regions. For example, linear discriminant analysis could be used as a dimensionality reduction
2.1502 +technique in order to identify a few features which are the best linear summary of gene expression
2.1503 +profiles for the purpose of discriminating between regions. This reduced feature set could then be
2.1504 +used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the reference
2.1505 +atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
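A minimal numpy sketch of the LDA idea: compute within-region and between-region scatter from the labeled pixels and keep the top discriminant directions as the reduced feature set. The small ridge term is our addition to keep the solve well-posed when genes outnumber pixels:

```python
import numpy as np

def lda_directions(X, labels, k):
    """Fisher discriminant directions: maximize between-region scatter
    relative to within-region scatter, then keep the top k directions as
    a reduced, region-aware feature set."""
    overall_mean = X.mean(axis=0)
    p = X.shape[1]
    s_within, s_between = np.zeros((p, p)), np.zeros((p, p))
    for c in np.unique(labels):
        Xc = X[labels == c]
        d = Xc - Xc.mean(axis=0)
        s_within += d.T @ d
        m = Xc.mean(axis=0) - overall_mean
        s_between += len(Xc) * np.outer(m, m)
    # ridge keeps s_within invertible when genes outnumber pixels
    evals, evecs = np.linalg.eig(np.linalg.solve(s_within + 1e-6 * np.eye(p), s_between))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:k]].real    # columns are discriminant directions
```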
2.1506 + Apply the new methods to the cortex
2.1507 +Using the methods developed in Goal 1, we will present, for each cortical area, a short list of
2.1508 +markers to identify that area; and we will also present lists of “panels” of genes that can be used
2.1509 +to delineate many areas at once.
2.1510 + Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible
2.1511 +for an unrelated combination of genes to seem to identify an area when in fact it is only coinci-
2.1512 +dence. There are three ways we will validate our marker genes to guard against this. First, we
2.1513 +will confirm that putative combinations of marker genes express the same pattern in both hemi-
2.1514 +spheres. Second, we will manually validate our final results on other gene expression datasets
2.1515 +such as EMAGE, GeneAtlas, and GENSAT[8]. Third, we may conduct ISH experiments jointly with
2.1516 +collaborators to get further data on genes of particular interest.
2.1517 + Using the methods developed in Goal 2, we will present one or more hierarchical cortical
2.1518 +maps. We will identify and explain how the statistical structure in the gene expression data led to
2.1519 +any unexpected or interesting features of these maps, and we will provide biological hypotheses
2.1520 +to interpret any new cortical areas, or groupings of areas, which are discovered.
2.1521 + Apply the new methods to hyperspectral datasets
2.1522 +Our software will be able to read and write file formats common in the hyperspectral imaging
2.1523 +community such as Erdas LAN and ENVI, and it will be able to convert between the SEV and NIFTI
2.1524 +formats from neuroscience and the ENVI format from GIS. The methods developed in Goals 1 and
2.1525 +2 will be implemented either as part of Spectral Python or as a separate tool that interoperates
2.1526 +with Spectral Python. The methods will be run on hyperspectral satellite image datasets, and their
2.1527 +performance will be compared to existing hyperspectral analysis techniques.
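The correspondence between the two domains can be made concrete: a hyperspectral cube reshaped to a pixels-by-bands matrix has exactly the instances-by-features shape of the voxels-by-genes matrix, which is why the same methods apply. The numbers below are synthetic (the band count is merely AVIRIS-like, not a property of any particular dataset).

```python
# Conceptual sketch with synthetic data: flattening a hyperspectral cube
# yields the same instances-by-features layout as voxels x genes.
import numpy as np

rng = np.random.default_rng(3)
cube = rng.random((100, 80, 224))           # rows x cols x spectral bands
pixels = cube.reshape(-1, cube.shape[-1])   # instances x features
print(pixels.shape)
```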
2.1530 + References Cited
2.1531 + [1] Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan.
2.1532 + A Tracking Approach to Parcellation of the Cerebral Cortex, volume 3749/2005 of Lecture
2.1533 + Notes in Computer Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
2.1534 + [2] J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the struc-
2.1535 + tural classification of cortical areas. NeuroImage, 21(1):15–26, 2004.
2.1536 + [3] Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos
2.1537 + Evangelista, Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI
2.1538 + GEO: mining tens of millions of expression profiles–database and tools update. Nucl. Acids
2.1539 + Res., 35(suppl_1):D760–765, 2007.
2.1540 + [4] George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in
2.1541 + situ hybridization gene expression screen in chicken embryos. Developmental Dynamics,
2.1542 + 229(3):677–687, 2004.
2.1543 + [5] James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C
2.1544 + Crair, Joe Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse
2.1545 + brain transcriptome. PLoS Comput Biol, 1(4):e41, 2005.
2.1546 + [6] Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline,
2.1547 + Shawn Levy, Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith.
2.1548 + A genome-scale map of expression for a mouse brain section obtained using voxelation.
2.1549 + Physiol. Genomics, 30(3):313–321, August 2007.
2.1550 + [7] D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated
2.1551 + software suite for surface-based analyses of cerebral cortex. Journal of the American Medical
2.1552 + Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
2.1553 + [8] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B.
2.1554 + Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and
2.1555 + Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacte-
2.1556 + rial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
2.1557 + [9] Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt,
2.1558 + Wing Chan, David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying dis-
2.1559 + tinct sets of genes with similar expression patterns. Genome Biology, 1(2):research0003.1–
2.1560 + research0003.21, 2000.
2.1561 +[10] Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interact-
2.1562 + ing Gene Expression Patterns, volume 13 of Communications in Computer and Information
2.1563 + Science, pages 347–361. Springer Berlin Heidelberg, 2008.
2.1564 +[11] C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts
2.1565 + with an infinite relational model. In AAAI, 2006.
2.1566 +[12] F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the
2.1567 + neocortical fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
2.1570 +[13] Ed S. Lein, Michael J. Hawrylycz, Nancy Ao, Mikael Ayres, Amy Bensinger, Amy Bernard,
2.1571 + Andrew F. Boe, Mark S. Boguski, Kevin S. Brockway, Emi J. Byrnes, Lin Chen, Li Chen,
2.1572 + Tsuey-Ming Chen, Mei Chi Chin, Jimmy Chong, Brian E. Crook, Aneta Czaplinska, Chinh N.
2.1573 + Dang, Suvro Datta, Nick R. Dee, Aimee L. Desaki, Tsega Desta, Ellen Diep, Tim A. Dolbeare,
2.1574 + Matthew J. Donelan, Hong-Wei Dong, Jennifer G. Dougherty, Ben J. Duncan, Amanda J.
2.1575 + Ebbert, Gregor Eichele, Lili K. Estin, Casey Faber, Benjamin A. Facer, Rick Fields, Shanna R.
2.1576 + Fischer, Tim P. Fliss, Cliff Frensley, Sabrina N. Gates, Katie J. Glattfelder, Kevin R. Halverson,
2.1577 + Matthew R. Hart, John G. Hohmann, Maureen P. Howell, Darren P. Jeung, Rebecca A. John-
2.1578 + son, Patrick T. Karr, Reena Kawal, Jolene M. Kidney, Rachel H. Knapik, Chihchau L. Kuan,
2.1579 + James H. Lake, Annabel R. Laramee, Kirk D. Larsen, Christopher Lau, Tracy A. Lemon,
2.1580 + Agnes J. Liang, Ying Liu, Lon T. Luong, Jesse Michaels, Judith J. Morgan, Rebecca J. Mor-
2.1581 + gan, Marty T. Mortrud, Nerick F. Mosqueda, Lydia L. Ng, Randy Ng, Geralyn J. Orta, Car-
2.1582 + oline C. Overly, Tu H. Pak, Sheana E. Parry, Sayan D. Pathak, Owen C. Pearson, Ralph B.
2.1583 + Puchalski, Zackery L. Riley, Hannah R. Rockett, Stephen A. Rowland, Joshua J. Royall,
2.1584 + Marcos J. Ruiz, Nadia R. Sarno, Katherine Schaffnit, Nadiya V. Shapovalova, Taz Sivisay,
2.1585 + Clifford R. Slaughterbeck, Simon C. Smith, Kimberly A. Smith, Bryan I. Smith, Andy J. Sodt,
2.1586 + Nick N. Stewart, Kenda-Ruth Stumpf, Susan M. Sunkin, Madhavi Sutram, Angelene Tam,
2.1587 + Carey D. Teemer, Christina Thaller, Carol L. Thompson, Lee R. Varnam, Axel Visel, Ray M.
2.1588 + Whitlock, Paul E. Wohnoutka, Crissa K. Wolkey, Victoria Y. Wong, Matthew Wood, Murat B.
2.1589 + Yaylaoglu, Rob C. Young, Brian L. Youngstrom, Xu Feng Yuan, Bin Zhang, Theresa A. Zwing-
2.1590 + man, and Allan R. Jones. Genome-wide atlas of gene expression in the adult mouse brain.
2.1591 + Nature, 445(7124):168–176, 2007.
2.1592 +[14] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew
2.1593 + Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M.
2.1594 + Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Cur-
2.1595 + ran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult
2.1596 + mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
2.1597 +[15] Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan,
2.1598 + Sayan Pathak, Susan M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P
2.1599 + Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S Lein, Allan R Jones, and Michael
2.1600 + Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci,
2.1601 + 12(3):356–362, March 2009.
2.1602 +[16] George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Aca-
2.1603 + demic Press, 2 edition, July 2001.
2.1604 +[17] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos,
2.1605 + K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical
2.1606 + mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
2.1607 +[18] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing
2.1608 + statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
2.1609 +[19] S.B. Serpico and L. Bruzzone. A new search algorithm for feature selection in hyperspec-
2.1610 + tral remote sensing images. Geoscience and Remote Sensing, IEEE Transactions on,
2.1611 + 39(7):1360–1367, 2001.
2.1614 +[20] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T.
2.1615 + Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expres-
2.1616 + sion database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
2.1617 +[21] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November
2.1618 + 2003.
2.1619 +[22] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPher-
2.1620 + son, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard,
2.1621 + Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz,
2.1622 + and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, Decem-
2.1623 + ber 2008.
2.1624 +[23] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu,
2.1625 + Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Cel-
2.1626 + niker, and Gerald M Rubin. Systematic determination of patterns of gene expression during
2.1627 + Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–research0088.14, 2002. PMCID: PMC151190.
2.1628 +[24] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson,
2.1629 + Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson,
2.1630 + and Jeffrey H. Christiansen. EMAGE edinburgh mouse atlas of gene expression: 2008 up-
2.1631 + date. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
2.1632 +[25] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression
2.1633 + patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
2.1634 +[26] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj
2.1635 + Agarwal, Richa Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E
2.1636 + Antonarakis, John Attwood, Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck,
2.1637 + Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby, Nicolas Bray, Michael R
2.1638 + Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler,
2.1639 + Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chin-
2.1640 + walla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook,
2.1641 + Richard R Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts,
2.1642 + Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Der-
2.1643 + mitzakis, Colin Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak,
2.1644 + Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes, Pallavi Eswara, Eduardo
2.1645 + Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N Frankel, Lu-
2.1646 + cinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo
2.1647 + Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves,
2.1648 + Eric D Green, Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler,
2.1649 + Yoshihide Hayashizaki, LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer,
2.1650 + Fan Hsu, Axin Hua, Tim Hubbard, Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven John-
2.1651 + son, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal, Elinor K Karlsson, Donna
2.1652 + Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent, An-
2.1653 + drew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David Kulp,
2.1654 + Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Chris-
2.1655 + tine Lloyd, Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan
2.1658 + Mauceli, John H Mayer, Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten
2.1659 + McLay, John D McPherson, Jim Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller, Tra-
2.1660 + cie L Miner, Emmanuel Mongin, Kate T Montgomery, Michael Morgan, Richard Mott, James C
2.1661 + Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson, Michael N Nhan, Robert Nicol,
2.1662 + Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen Oliver, Emma
2.1663 + Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner,
2.1664 + Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter,
2.1665 + Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alis-
2.1666 + tair G Rust, Ralph Santos, Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz,
2.1667 + Scott Schwartz, Carol Scott, Steven Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan,
2.1668 + Ratna Shownkeen, Sarah Sims, Jonathan B Singer, Guy Slater, Arian Smit, Douglas R Smith,
2.1669 + Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles Sugnet, Mikita Suyama,
2.1670 + Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp, Cather-
2.1671 + ine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade,
2.1672 + Melanie Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris
2.1673 + Wetterstrand, Raymond Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie
2.1674 + Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-
2.1675 + Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and
2.1676 + comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002.
2.1677 + PMID: 12466850.
2.1680 +
3.1 Binary file grant.odt has changed
4.1 Binary file grant.pdf has changed
5.1 --- a/grant.txt Fri Apr 24 01:12:36 2009 -0700
5.2 +++ b/grant.txt Fri Jul 03 05:17:28 2009 -0700
5.3 @@ -1,12 +1,30 @@
5.4 -\documentclass[11pt]{nih-blank}
5.5 +\documentclass[11pt,letterpaper]{article}
5.6 +
5.7 +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
5.8 +\pagestyle{plain} %%
5.9 +%%%%%%%%%% EXACT 1in MARGINS %%%%%%% %%
5.10 +\setlength{\textwidth}{6.5in} %% %%
5.11 +\setlength{\oddsidemargin}{0in} %% (It is recommended that you %%
5.12 +\setlength{\evensidemargin}{0in} %% not change these parameters, %%
5.13 +\setlength{\textheight}{8.5in} %% at the risk of having your %%
5.14 +\setlength{\topmargin}{0in} %% proposal dismissed on the basis %%
5.15 +\setlength{\headheight}{0in} %% of incorrect formatting!!!) %%
5.16 +\setlength{\headsep}{0in} %% %%
5.17 +\setlength{\footskip}{.5in} %% %%
5.18 +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%
5.19 +\newcommand{\required}[1]{\section*{\hfil #1\hfil}} %%
5.20 +\renewcommand{\refname}{\hfil References Cited\hfil} %%
5.21 +\bibliographystyle{plain} %%
5.22 +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
5.23
5.24 \usepackage[small,compact]{titlesec}
5.25
5.26 -%%\piname{Stevens, Charles F.}
5.27 -
5.28 -%%\usepackage{floatflt}
5.29 \usepackage{wrapfig}
5.30
5.31 +%% does this change the font?
5.32 +\usepackage{helvet}
5.33 +\renewcommand{\familydefault}{\sfdefault}
5.34 +
5.35 %%\renewcommand{\rmdefault}{phv} %% Arial
5.36 %%\renewcommand{\sfdefault}{phv} %% Arial
5.37
5.38 @@ -26,36 +44,64 @@
5.39 \begin{document}
5.40
5.41
5.42 -== Specific aims ==
5.43 -
5.44 -Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We will validate these methods by applying them to 46 anatomical areas within the cerebral cortex, by using the Allen Mouse Brain Atlas coronal dataset (ABA). This gene expression dataset was generated using ISH, and contains over 4,000 genes. For each gene, a digitized 3-D raster of the expression pattern is available: for each gene, the level of expression at each of 51,533 voxels is recorded.
5.45 -
5.46 -We have three specific aims:\\
5.47 -
5.48 -(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions\\
5.49 -
5.50 -(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression\\
5.51 -
5.52 -(3) create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. Use this dataset to validate the methods developed in (1) and (2).\\
5.53 -
5.54 -Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our method could be applied to genome-wide sequencing data derived from sets of tissues and disease states.
5.55 -
5.56 -In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene profile define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and it will support the development of a method for identifying the cortical areal boundaries present in small tissue samples.
5.57 -
5.58 -All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
5.59 -
5.60 -
5.61 -\newpage
5.62 -
5.63 -== Analysis of high dimensional data for genomic anatomy in the brain ==
5.64 -This application addresses broad Challenge Area (06) __Enabling Technologies__ and specific Challenge Topic, 06-HG-101: __New computational and statistical methods for the analysis of large data sets from next-generation sequencing technologies__. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
5.65 +== Introduction ==
5.66 +
5.67 +Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We will validate these methods by applying them to 46 anatomical areas within the cerebral cortex, by using the Allen Mouse Brain Atlas coronal dataset (ABA). %%This gene expression dataset was generated using ISH, and contains over 4,000 genes. For each gene, a digitized 3-D raster of the expression pattern is available: for each gene, the level of expression at each of 51,533 voxels is recorded.
5.68 +
5.69 +This project has three primary goals:\\
5.70 +
5.71 +(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions.\\
5.72 +
5.73 +(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression.\\
5.74 +
5.75 +(3) adapt our tools for the analysis of multi/hyperspectral imaging data from the Geographic Information Systems (GIS) community.\\
5.76 +
5.77 +We will create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. We will use this dataset to validate the methods developed in (1) and (2). In addition to its use in neuroscience, this dataset will be useful as a sample dataset for the machine learning community.
5.78 +
5.79 +Although our particular application involves the 3D spatial distribution of gene expression, the methods we will develop will generalize to any high-dimensional data over points located in a low-dimensional space. In particular, our methods could be applied to the analysis of multi/hyperspectral imaging data, or alternately to genome-wide sequencing data derived from sets of tissues and disease states.
5.80 +
5.81 +All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit and the datasets will be published and freely available for others to use.
5.82 +
5.83 +
5.84 +%%=== Contents ===
5.85 +%%First we will discuss background, then related work, then our data sharing plan and broader impacts, next the preliminary results that we have already achieved, and finally our plan to complete the project.
5.86
5.87 \vspace{0.3cm}\hrule
5.88 -== The Challenge and Potential impact ==
5.89 -
5.90 -Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for thinking about the task. Next we will discuss related work, and then summarize why our strategy is different from what has been done before. After we have discussed all three aims, we will describe the potential impact.
5.91 -
5.92 -=== Aim 1: Given a map of regions, find genes that mark the regions ===
5.93 +
5.94 +== Background and related work ==
5.95 +\vspace{0.3cm}**Cortical anatomy**
5.96 +
5.97 +The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}.
5.98 +
5.99 +It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
5.100 +
5.101 +Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
5.102 +
5.103 +\vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**
5.104 +
5.105 +The Allen Mouse Brain Atlas (ABA) data\cite{lein_genome-wide_2007} were produced by performing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Cellular spatial resolution is achieved within each slice. Because this method can measure only one gene per physical slice, many different mouse brains were needed in order to measure the expression of many genes.
5.106 +
5.107 +Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on, "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}. An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes with 200 microns on a side. There are 67x41x58 \= 159,326 voxels, of which 51,533 are in the brain\cite{ng_anatomic_2009}. For each voxel and each gene, the expression energy\cite{lein_genome-wide_2007} within that voxel is made available.
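As a sanity check on the grid figures above, and to show the natural in-memory layout for such data, consider this hedged sketch (the numbers come from the text; the random mask merely stands in for the real brain mask):

```python
# Back-of-envelope check of the ABA grid: a 67 x 41 x 58 array of
# 200-micron voxels, 51,533 of which fall inside the brain.
import numpy as np

grid = (67, 41, 58)
assert int(np.prod(grid)) == 159_326       # total voxels, as stated

rng = np.random.default_rng(0)
brain_mask = np.zeros(grid, dtype=bool)    # placeholder, not the real mask
brain_mask.ravel()[rng.choice(159_326, size=51_533, replace=False)] = True

# one gene's "expression energy" volume, restricted to brain voxels
energy = rng.random(grid)
in_brain = energy[brain_mask]
print(in_brain.shape)
```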
5.108 +
5.109 +
5.110 +
5.111 +%%The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/} GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
5.112 +
5.113 +%%The ABA is not the only large public spatial gene expression dataset\cite{gong_gene_2003}\cite{visel_genepaint.org:atlas_2004}\cite{carson_digital_2005}\cite{magdaleno_bgem:in_2006}\cite{venkataraman_emage_2008}\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}\footnote{http://mamep.molgen.mpg.de/index.php}\footnote{http://xenbase.org/}\cite{sprague_zebrafish_2006}\footnote{http://aniseed-ibdm.univ-mrs.fr/}\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}\cite{bell_geishawhole-mount_2004}\cite{tomancak_systematic_2002}\footnote{http://compare.ibdml.univ-mrs.fr/}\cite{smith_mouse_2007}\cite{barrett_ncbi_2007}.
5.114 +
5.115 +The ABA is not the only large public spatial gene expression dataset\cite{gong_gene_2003}\cite{visel_genepaint.org:atlas_2004}\cite{carson_digital_2005}\cite{magdaleno_bgem:in_2006}\cite{venkataraman_emage_2008}\cite{bell_geishawhole-mount_2004}\cite{tomancak_systematic_2002}\cite{smith_mouse_2007}\cite{barrett_ncbi_2007}. However, with the exception of the ABA, GenePaint\cite{visel_genepaint.org:atlas_2004}, and EMAGE\cite{venkataraman_emage_2008}, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.
5.116 +
5.117 +
5.118 +%%, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
5.119 +
5.120 +%% \footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)}
5.121 +
5.122 +The remainder of the background section will be divided into three parts, one for each major goal.
5.123 +
5.124 +
5.125 +\vspace{0.3cm}
5.126 +=== Goal 1, From Areas to Genes: Given a map of regions, find genes that mark those regions ===
5.127 +
5.128
5.129 \vspace{0.3cm}**Machine learning terminology: classifiers** The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the regions can be inferred.
5.130
5.131 @@ -65,65 +111,174 @@
5.132
5.133 %%Therefore, an understanding of the relationship between the combination of their expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
5.134
5.135 -If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of their expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
5.136 +If we define the regions so that they cover the entire anatomical structure to be subdivided, and restrict ourselves to looking at one voxel at a time, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
5.137
5.138 %% The construction of the classifier is called __training__ (also __learning__), and
5.139
5.140 -The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called __training data__. In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
5.141 -
5.142 -Each gene expression level is called a __feature__, and the selection of which genes\footnote{Strictly speaking, the features are gene expression levels, but we'll call them genes.} to include is called __feature selection__. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
5.143 +Our goal is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called __training data__. In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
5.144 +
5.145 +Each gene expression level is called a __feature__, and the selection of which genes\footnote{Strictly speaking, the features are gene expression levels, but we'll call them genes.} to look at is called __feature selection__. Feature selection is one component of the task of learning a classifier. %%Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
5.146
5.147 One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added to and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
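Such a stepwise procedure can be sketched in a few lines. This is an illustrative sketch only: the scoring function below (nearest-centroid classification accuracy over the selected genes) is a hypothetical stand-in for whatever set-valued scoring measure is actually chosen.

```python
import numpy as np

def score_gene_set(expr, labels, genes):
    """Toy set-valued score: fraction of voxels whose nearest class
    centroid, in the space of the selected genes' expression levels,
    matches their true region label.  (Hypothetical measure, for
    illustration; any set-valued score could be substituted.)"""
    X = expr[:, genes]
    classes = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in classes])
    # Squared distance from each voxel to each class centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[np.argmin(d, axis=1)]
    return (pred == labels).mean()

def greedy_select(expr, labels, n_genes):
    """Stepwise ("greedy") forward selection: repeatedly add the gene
    that most improves the score of the selected set."""
    selected = []
    remaining = list(range(expr.shape[1]))
    for _ in range(n_genes):
        best = max(remaining,
                   key=lambda g: score_gene_set(expr, labels, selected + [g]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

A fuller stepwise procedure would also try removing previously selected genes at each step; the forward-only version above is the simplest variant.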
5.148
5.149 -Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score of calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
5.150 -
5.151 -Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are "wrong" in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.
5.152 -
5.153 -
5.154 -=== Our strategy for Aim 1 ===
5.155 +%%Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score of calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
5.156 +
5.157 +Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score. If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
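The distinction can be made concrete with a small sketch. Both measures below are illustrative stand-ins (simple threshold agreement between a gene's expression image and a binary region mask), not the project's actual scoring measures; the point is only where each voxel's sub-score draws its information from.

```python
import numpy as np

def pointwise_score(expr_img, region_mask, thresh=0.5):
    """Pointwise scoring: each voxel's sub-score uses only that voxel
    (does the thresholded expression match the region label?); the
    sub-scores are aggregated by averaging."""
    pred = expr_img > thresh
    return (pred == region_mask).mean()

def local_score(expr_img, region_mask, thresh=0.5):
    """Local scoring: each voxel's sub-score also looks at its four
    neighbors, by thresholding the neighborhood-averaged expression.
    (Illustrative; gradient-based local measures are another option.)"""
    padded = np.pad(expr_img, 1)  # zero padding at the borders
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:] + expr_img) / 5.0
    pred = neigh > thresh
    return (pred == region_mask).mean()
```

On a pattern with an isolated speckle of expression outside the region, the local measure discounts the speckle (its neighborhood average falls below threshold) while the pointwise measure penalizes it, which is one sense in which the two kinds of measure are complementary.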
5.158 +
5.159 +%%Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are "wrong" in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.
5.160 +
5.161 +
5.162 +=== Our Strategy for Goal 1 ===
5.163
5.164 Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
5.165
5.166
5.167 \vspace{0.3cm}**Principle 1: Combinatorial gene expression**
5.168
5.169 -It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure \ref{MOcombo}). Therefore, each instance should contain multiple features (genes).
5.170 +It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the ABA coronal dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results, Figure \ref{MOcombo}). Therefore, each instance should contain multiple features (genes).
5.171
5.172
5.173 \vspace{0.3cm}**Principle 2: Only look at combinations of small numbers of genes**
5.174
5.175 -
5.176 -When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better that it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
5.177 +When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. Why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Therefore, we must select only a few genes as features.
5.178 +
5.179 +%%When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better that it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
5.180
5.181 The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
5.182
5.183
5.184 \vspace{0.3cm}**Principle 3: Use geometry in feature selection**
5.185
5.186 -When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, figure \ref{AUDgeometry} for evidence of the complementary nature of pointwise and local scoring methods.
5.187 +When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results, figure \ref{AUDgeometry} for evidence of the complementary nature of pointwise and local scoring methods.
5.188
5.189
5.190
5.191 \vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
5.192
5.193
5.194 -There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
5.195 -
5.196 -
5.197 -=== Related work ===
5.198 -
5.199 -There is a substantial body of work on the analysis of gene expression data, most of this concerns gene expression data which are not fundamentally spatial\footnote{By "__fundamentally__ spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which is indexed by anatomical label.}.
5.200 -
5.201 -As noted above, there has been much work on both supervised learning and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.
5.202 -
5.203 -We now turn to efforts to find marker genes using spatial gene expression data using automated methods.
5.204 +There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. %%Therefore, when possible, the instances should represent pixels, not voxels.
5.205 +
5.206 +
5.207 +
5.208 +
5.209 +\vspace{0.3cm}
5.210 +=== Goal 2, From Genes to Areas: given gene expression data, discover a map of regions ===
5.211 +
5.212 +
5.213 +\begin{wrapfigure}{L}{0.4\textwidth}\centering
5.214 +%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps}
5.215 +%%\\
5.216 +%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps}
5.217 +%%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
5.218 +
5.219 +\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}
5.220 +\\
5.221 +\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}
5.222 +
5.223 +\caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
5.224 +\label{SScorrLr}\end{wrapfigure}
5.225 +
5.226 +
5.227 +\vspace{0.3cm}**Machine learning terminology: clustering**
5.228 +
5.229 +If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that can be done with such a dataset is to group similar instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called __clustering__ or __cluster analysis__.
5.230 +
5.231 +The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
5.232 +
5.233 +%%It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
5.234 +
5.235 +It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
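A minimal sketch of hierarchical clustering on voxel expression profiles, using SciPy's agglomerative clustering routines (the data below are synthetic stand-ins, two regions with distinct expression profiles, purely for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in data: 20 voxels x 6 genes, drawn from two regions
# with distinct mean expression profiles (values are illustrative only).
rng = np.random.default_rng(1)
region_a = rng.normal(0.0, 0.1, size=(10, 6))
region_b = rng.normal(1.0, 0.1, size=(10, 6))
X = np.vstack([region_a, region_b])

# Agglomerative (hierarchical) clustering: the linkage matrix Z encodes
# an entire tree of clusters, not just one partition.
Z = linkage(X, method='ward')

# Cutting the tree at different heights yields partitions at different
# scales; here we cut it into two clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
```

Cutting the same tree with a larger `t` would recover finer subdivisions, which is the sense in which the tree captures relationships between regions across spatial scales.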
5.236 +
5.237 +
5.238 +\vspace{0.3cm}**Similarity scores**
5.239 +A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Goal 1) and scoring methods for similarity.
5.240 +
5.241 +
5.242 +
5.243 +%%\vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
5.244 +%%We have shown that Goal 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters; voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
5.245 +
5.246 +%%%%Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Goal 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
5.247 +
5.248 +%%Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Goal 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
5.249 +
5.250 +
5.251 +
5.252 +\vspace{0.3cm}**Dimensionality reduction**
5.253 +In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.
5.254 +
5.255 +%% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels.
5.256 +
5.257 +Unlike Goal 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
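As a concrete instance of feature extraction, here is a sketch using principal components analysis, one standard dimensionality-reduction technique (illustrative only; other techniques could be substituted). Note that, as stated above, each reduced feature is a linear combination of all gene expression levels, not a single gene:

```python
import numpy as np

def reduce_features(expr, k):
    """Feature extraction by principal components analysis.
    expr: (n_voxels, n_genes) matrix of expression levels.
    Returns an (n_voxels, k) matrix whose k columns are the reduced
    features, ordered by decreasing variance captured."""
    centered = expr - expr.mean(axis=0)
    # SVD of the centered data; the right singular vectors give the
    # directions of maximal variance in gene-expression space.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:k].T
```

The reduced instances (voxels described by k features instead of thousands of genes) can then be passed to a clustering algorithm in place of the original instances.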
5.258 +
5.259 +%%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering.
5.260 +
5.261 +%%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
5.262 +
5.263 +
5.264 +\vspace{0.3cm}**Clustering genes rather than voxels**
5.265 +Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
5.266 +
5.267 +Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
5.268 +
5.269 +Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters. In Preliminary Results, Figure \ref{geneClusters}, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.
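A greedy sketch of this procedure, grouping genes whose thresholded expression patterns pick out overlapping regions (the Jaccard similarity measure and the first-member-as-prototype rule are illustrative choices, not the project's final algorithm):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two binary expression images:
    |intersection| / |union|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def cluster_genes_by_region(images, threshold=0.5):
    """Greedily cluster genes by the regions they pick out.  Each
    element of `images` is the thresholded (binary) expression pattern
    of one gene; a gene joins the first cluster whose prototype
    (first member) its pattern overlaps sufficiently, else it starts
    a new cluster.  (Illustrative procedure.)"""
    clusters = []
    for img in images:
        for cl in clusters:
            if jaccard(img, cl[0]) >= threshold:
                cl.append(img)
                break
        else:
            clusters.append([img])
    return clusters
```

The largest resulting clusters correspond to the "most common regions" mentioned above, and their prototype patterns would serve as the final candidate regions.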
5.270 +
5.271 +%% Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in Goal 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.}.
5.272 +
5.273 +%%The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
5.274 +
5.275 +
5.276 +
5.277 +
5.278 +
5.279 +
5.280 +
5.281 +
5.282 +
5.283 +=== Goal 3: interoperability with multi/hyperspectral imaging analysis software ===
5.284 +%%Whereas a typical color image associated each pixel with a vector of three values, multispectral and hyperspectral images associate each pixel with a vector containing many values. The different positions in the vector correspond to different bands of electromagnetic wavelengths.
5.285 +A typical color image associates each pixel with a vector of three values. Multispectral and hyperspectral images, however, associate each pixel with a vector containing many values. The different positions in the vector correspond to different bands of electromagnetic wavelengths\footnote{In hyperspectral imaging, the bands are adjacent, and the number of different bands is larger. For conciseness, we discuss only hyperspectral imaging, but our methods are also well suited to multispectral imaging with many bands.}.
5.286 +
5.287 +%%Typically multispectral imaging captures a few broad bands of wavelengths, whereas hyperspectral imaging captures a large number of adjacent narrow bands. Some analysis techniques for hyperspectral imaging, especially preprocessing and calibration techniques, make use of the information that the different values captured at each pixel represent adjacent bands of wavelengths of light, which can be combined to make a spectrum. Other analysis techniques ignore the interpretation of the values measured, and their relationship to each other within the electromagnetic spectrum, instead treating them blindly as completely separate features.
5.288 +
5.289 +Some analysis techniques for hyperspectral imaging, especially preprocessing and calibration techniques, make use of the information that the different values captured at each pixel represent adjacent wavelengths of light, which can be combined to make a spectrum. Other analysis techniques ignore the interpretation of the values measured, and their relationship to each other within the electromagnetic spectrum, instead treating them blindly as completely separate features.
5.290 +
5.291 +With both hyperspectral imaging and spatial gene expression data, each location in space is associated with more than three numerical feature values. The analysis of hyperspectral images can involve supervised classification and unsupervised learning. Often hyperspectral images come from satellites looking at the Earth, and it is desirable to classify what sort of objects occupy a given area of land. Sometimes detailed training data are not available, in which case it is desirable at least to cluster together those regions of land which contain similar objects.
5.292 +
5.293 +%% The analogy is perhaps closer with hyperspectral imagining, in which the number of feature values tends to be large, which is the case with spatial gene expression data.
5.294 +
5.295 +
5.296 +
5.297 +%%These tasks are similar to our goals for the analysis of spatial gene expression data. Starting with a satellite image and a list of known terrain types, and classifying pixels of the image according to which terrain it represents, is similar to starting with a spatial gene expression dataset and a set of known anatomical regions, and classifying locations according to which region they are within. Starting with a satellite image and clustering pixels together into groupings that represent regions of similar terrain is much like starting with a spatial gene expression dataset and clustering locations together into hypothetical anatomical regions.
5.298 +
5.299 +We believe that it may be possible for these two different fields to share some common computational tools. To this end, we intend to make use of existing hyperspectral imaging software when possible, and to develop new software in such a way as to make it easy to use for the purpose of hyperspectral image analysis, as well as for our primary purpose of spatial gene expression data analysis.
5.300 +
5.301 +
5.302 +%% We now turn to efforts to find marker genes using spatial gene expression data using automated methods.
5.303 +
5.304 +
5.305 +== Related work ==
5.306 +
5.307 +\begin{wrapfigure}{L}{0.25\textwidth}\centering
5.308 +\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps}
5.309 +\caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
5.310 +\label{hole}\end{wrapfigure}
5.311 +
5.312 +%%As noted above, there has been much work in the machine learning literature on both supervised and unsupervised learning and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Results) may be necessary in order to achieve the best results in this application. So, the project involves more than the blind application of existing machine learning analysis programs to a new dataset.
5.313 +
5.314 +As noted above, the GIS community has developed tools for supervised classification and unsupervised clustering in the context of the analysis of hyperspectral imaging data. One tool is Spectral Python\footnote{http://spectralpython.sourceforge.net/}. Spectral Python implements various supervised and unsupervised classification methods, as well as utility functions for loading, viewing, and saving spatial data. Although Spectral Python has feature extraction methods (such as principal components analysis), which create a small set of new features computed from the original features, it does not have feature selection methods, that is, methods to select a small subset out of the original features (although feature selection in hyperspectral imaging has been investigated by others\cite{serpico_new_2001}). %%We intend to extend Spectral Python's repertoire of supervised and unsupervised machine learning methods, as well as to add feature selection methods.
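To make the distinction concrete, here is a minimal sketch of feature selection on toy hyperspectral data; the variance-ranking criterion, the toy data, and all names are illustrative choices of ours, not part of Spectral Python's API:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy hyperspectral "image": 100 pixels x 8 spectral bands (features).
pixels = rng.normal(size=(100, 8))
pixels[:, 2] *= 5.0   # make band 2 carry the most variance

def select_top_variance(X, k):
    """Feature selection: keep the k ORIGINAL bands with the highest
    variance. Unlike feature extraction (e.g. PCA), no new composite
    features are constructed; we simply choose a subset of columns."""
    order = np.argsort(X.var(axis=0))[::-1]
    return np.sort(order[:k])

kept_bands = select_top_variance(pixels, 3)
```

In a real application the ranking criterion would reflect class separability rather than raw variance, and a search strategy such as the one in \cite{serpico_new_2001} would explore subsets of bands rather than rank them independently.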
5.315 +
5.316 +There is a substantial body of work on the analysis of gene expression data. Most of this concerns gene expression data which are not fundamentally spatial\footnote{By "__fundamentally__ spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.}. Here we review only that work which concerns the automated analysis of spatial gene expression data with respect to anatomy.
5.317 +
5.318
5.319 %%GeneAtlas\cite{carson_digital_2005} allows the user to construct a search query by freely demarcating one or two 2-D regions on sagittal slices, and then to specify either the strength of expression or the name of another gene whose expression pattern is to be matched.
5.320
5.321 %% \footnote{For the similiarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel (actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity) whose expression is within four discretization levels. EMAGE uses Jaccard similarity (the number of true pixels in the intersection of the two images, divided by the number of pixels in their union).}
5.322 %% \cite{lee_high-resolution_2007} mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene's spatial region.
5.323
5.324 -GeneAtlas\cite{carson_digital_2005} and EMAGE \cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allow one to search for combinations of genes that define a region in concert but not separately.
5.325 +Relating to Goal 1, GeneAtlas\cite{carson_digital_2005} and EMAGE \cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert.
5.326 +
5.327 +Relating to Goal 2, EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering. %% with un-centered correlation as the similarity score.
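For reference, complete linkage merges at each step the two clusters whose farthest pair of members is closest. A minimal pure-NumPy sketch of this rule (our own toy implementation on synthetic data, not EMAGE's code):

```python
import numpy as np

def complete_linkage_clusters(X, n_clusters):
    """Agglomerative clustering with complete (farthest-point) linkage:
    repeatedly merge the two clusters whose MAXIMUM pairwise member
    distance is smallest, until n_clusters remain."""
    clusters = [[i] for i in range(len(X))]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].max()
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge cluster b into cluster a
    return clusters

# Two well-separated groups of "genes" (rows = expression vectors).
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
groups = complete_linkage_clusters(X, 2)
```

A production implementation would use an optimized routine (e.g. SciPy's hierarchical clustering) rather than this O(n^3) loop.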
5.328
5.329 \cite{ng_anatomic_2009} describes AGEA, "Anatomic Gene Expression
5.330 Atlas". AGEA has three
5.331 @@ -131,215 +286,14 @@
5.332 cluster which includes the seed voxel, (2) yields a list of genes
5.333 which are overexpressed in that cluster. **Correlation**: The user selects a seed voxel and the system
5.334 then shows the user how much correlation there is between the gene
5.335 -expression profile of the seed voxel and every other voxel. **Clusters**: will be described later. \cite{chin_genome-scale_2007} looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} differ from our Aim 1 in at least three ways. First, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} find only single genes, whereas we will also look for combinations of genes. Second, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} can only use overexpression as a marker, whereas we will also search for underexpression. Third, \cite{ng_anatomic_2009} and \cite{chin_genome-scale_2007} use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures \ref{MOcombo}, \ref{hole}, and \ref{AUDgeometry} in the Preliminary Studies section contain evidence that each of our three choices is the right one.
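The per-region test used by \cite{chin_genome-scale_2007} can be sketched as follows. The data are synthetic, and for brevity we use Welch's t statistic with a normal approximation to the t distribution (an assumption that is reasonable for large samples):

```python
import math
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / math.sqrt(va + vb)

def two_sided_p(t):
    # Normal approximation to the t distribution (fine for large samples).
    return math.erfc(abs(t) / math.sqrt(2))

rng = np.random.default_rng(1)
n_genes = 50
inside = rng.normal(size=(n_genes, 30))    # expression at voxels inside the region
outside = rng.normal(size=(n_genes, 30))   # expression at voxels outside
inside[7] += 3.0                           # gene 7 is overexpressed inside

pvals = np.array([two_sided_p(welch_t(inside[g], outside[g]))
                  for g in range(n_genes)])
alpha = 0.05
significant = np.where(pvals < alpha / n_genes)[0]   # Bonferroni correction
```

Note that this one-sided-per-gene design only detects overexpression of single genes, which is exactly the limitation discussed above.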
5.336 -
5.337 -\cite{hemert_matching_2008} describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. %%Their match score is Jaccard similarity.
5.338 -
5.339 -
5.340 -In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
5.341 -
5.342 -
5.343 -
5.344 -
5.345 -=== Aim 2: From gene expression data, discover a map of regions ===
5.346 -
5.347 -
5.348 -
5.349 -\vspace{0.3cm}**Machine learning terminology: clustering**
5.350 -
5.351 -If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called __clustering__ or __cluster analysis__.
5.352 -
5.353 -The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
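As an illustration of clustering voxels into candidate regions, here is a minimal k-means sketch on synthetic expression profiles; k-means is only one of many possible choices, and the deterministic initialization is our own, chosen for reproducibility:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic initialization: spread starting centers across the rows.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each voxel to its nearest center, then recompute centers.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
# Toy voxels: 5-gene expression profiles drawn from two "regions".
voxels = np.vstack([rng.normal(0.0, 0.1, size=(40, 5)),
                    rng.normal(1.0, 0.1, size=(40, 5))])
labels = kmeans(voxels, 2)
```

Note that nothing here enforces spatial contiguity of the resulting clusters; that constraint is discussed below.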
5.354 -
5.355 -%%It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
5.356 -
5.357 -It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
5.358 -
5.359 -
5.360 -\vspace{0.3cm}**Similarity scores**
5.361 -A crucial choice when designing a clustering method is how to measure similarity, whether between pairs of instances, between clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
5.362 -
5.363 -
5.364 -
5.365 -\vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
5.366 -We have shown that Aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
5.367 -
5.368 -%%Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
5.369 -
5.370 -Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task, there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. The second, more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
5.371 -
5.372 -
5.373 -
5.374 -\vspace{0.3cm}**Dimensionality reduction**
5.375 -In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.
5.376 -
5.377 -%% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels.
5.378 -
5.379 -Unlike Aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
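As a sketch of feature extraction, PCA via the singular value decomposition produces a reduced feature set in which each new feature is a linear combination of all genes. The data below are synthetic and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# 200 voxels x 1000 genes whose expression is driven by 2 latent factors.
factors = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 1000))
expression = factors @ loadings + 0.01 * rng.normal(size=(200, 1000))

def pca_reduce(X, k):
    """Feature extraction: each of the k reduced features is a linear
    combination of ALL genes (in contrast to feature selection)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()   # variance captured
    return Xc @ Vt[:k].T, explained

reduced, explained = pca_reduce(expression, 2)
```

The reduced instances (200 voxels x 2 features, here capturing nearly all the variance) could then be passed to a clustering algorithm in place of the full expression profiles.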
5.380 -
5.381 -%%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering.
5.382 -
5.383 -%%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
5.384 -
5.385 -
5.386 -\vspace{0.3cm}**Clustering genes rather than voxels**
5.387 -Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
5.388 -
5.389 -Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
5.390 -
5.391 -Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common of these regions as the final clusters. In Preliminary Studies, Figure \ref{geneClusters}, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.
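This procedure can be sketched as follows, using Jaccard overlap between thresholded expression masks; the greedy grouping rule and the 0.6 threshold are illustrative assumptions of ours, not a method from the cited work:

```python
import numpy as np

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def cluster_genes_by_region(masks, threshold=0.6):
    """Greedy grouping: each gene joins the first existing cluster whose
    prototype mask it overlaps (Jaccard) above threshold, else it starts
    a new cluster. The prototype is the pixelwise majority vote."""
    clusters = []   # list of lists of gene indices
    for i, m in enumerate(masks):
        for c in clusters:
            proto = np.mean([masks[j] for j in c], axis=0) >= 0.5
            if jaccard(m, proto) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy 1-D "cortex" of 20 pixels; two groups of genes pick out two regions.
region_a = np.arange(20) < 10
region_b = ~region_a
masks = [region_a, region_a.copy(), region_b, region_b.copy(), region_a.copy()]
gene_groups = cluster_genes_by_region(masks)
```

The prototype masks of the resulting gene groups would then serve as the candidate anatomical regions.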
5.392 -
5.393 -%% Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.}.
5.394 -
5.395 -%%The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
5.396 -
5.397 -=== Related work ===
5.398 -
5.399 -
5.400 -Some researchers have attempted to parcellate cortex on the basis of non-gene expression data. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then cluster surface pixels according to the similarity of these features.
5.401 -
5.402 -
5.403 -
5.404 -%%Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
5.405 -
5.406 -\cite{thompson_genomic_2008} describes an analysis of the anatomy of
5.407 -the hippocampus using the ABA dataset. In addition to manual analysis,
5.408 -two clustering methods were employed, a modified Non-negative Matrix
5.409 -Factorization (NNMF), and a hierarchical bifurcation clustering scheme using correlation as the similarity measure. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset, and while the results are promising, other methods may perform as well or better (see Preliminary Studies, Figure \ref{dimReduc}).
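For reference, a bare-bones NNMF using Lee-Seung multiplicative updates is sketched below on synthetic data; this is our own toy version and omits the spatial modification used in \cite{thompson_genomic_2008}:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Plain NNMF via Lee-Seung multiplicative updates:
    X (voxels x genes) ~ W (voxels x k) @ H (k x genes), all non-negative."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 0.1
    H = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        # Updates preserve non-negativity and monotonically reduce
        # the Frobenius reconstruction error.
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

rng = np.random.default_rng(4)
X = rng.random((30, 8)) @ rng.random((8, 40))   # exactly rank-8, non-negative
W, H = nmf(X, 8)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the voxel-clustering setting, each row of W gives a voxel's loadings on the k components, and each voxel can be assigned to its strongest component.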
5.410 -
5.411 -%% \footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.} and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure \ref{dimReduc}).
5.412 -
5.413 -%% In addition, this paper described a visual screening of the data, specifically, a visual analysis of 6000 genes with the primary purpose of observing how the spatial pattern of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations.
5.414 -
5.415 -%% \cite{thompson_genomic_2008} reports that both mNNMF and hierarchical mNNMF clustering were useful, and that hierarchical recursive bifurcation gave similar results.
5.416 -
5.417 -
5.418 -AGEA\cite{ng_anatomic_2009} includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering. %% with un-centered correlation as the similarity score.
5.419 -
5.420 -%%\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
5.421 -
5.422 -\cite{chin_genome-scale_2007} clusters genes. For each cluster, a prototypical spatial expression pattern is created by averaging the genes in the cluster. The prototypes are analyzed manually, without clustering voxels.
5.423 -
5.424 -\cite{hemert_matching_2008} applies their technique for finding combinations of marker genes to the task of clustering genes around a "seed gene". %%They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if a gene is expressed in this voxel, then it is probably also expressed in that voxel". This could be useful as part of a procedure for clustering voxels.
5.425 -
5.426 -In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
5.427 -
5.428 -
5.429 -
5.430 -=== Aim 3: apply the methods developed to the cerebral cortex ===
5.431 -\begin{wrapfigure}{L}{0.35\textwidth}\centering
5.432 -%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps}
5.433 -%%\\
5.434 -%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps}
5.435 -%%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
5.436 -
5.437 -\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}
5.438 -\\
5.439 -\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}
5.440 -
5.441 -\caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
5.442 -\label{SScorrLr}\end{wrapfigure}
5.443 -
5.444 -
5.445 -\vspace{0.3cm}**Background**
5.446 -
5.447 -The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}.
5.448 -
5.449 -It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
5.450 -
5.451 -Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
5.452 -
5.453 -\vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**
5.454 -
5.455 -%%The Allen Mouse Brain Atlas (ABA) data\cite{lein_genome-wide_2007}
5.456 -
5.457 -The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
5.458 -
5.459 -%%Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}.
5.460 -
5.461 -Mus musculus is thought to contain about 22,000 protein-coding genes. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}. An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels, of which 51,533 are in the brain\cite{ng_anatomic_2009}. For each voxel and each gene, the expression energy within that voxel is made available.
5.462 -
5.463 -%% For each voxel and each gene, the expression energy\cite{lein_genome-wide_2007} within that voxel is made available.
5.464 -
5.465 -
5.466 -
5.467 -%%The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/} GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
5.468 -
5.469 -The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.
5.470 -
5.471 -%%, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
5.472 -
5.473 -%% \footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)}
5.474 -
5.475 -
5.476 -
5.477 -=== Related work ===
5.478 -
5.479 -
5.480 -
5.481 -\cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}.
5.482 +expression profile of the seed voxel and every other voxel. **Clusters**: AGEA includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. AGEA has been applied to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, that analysis neither looks for genes marking cortical areas, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}.
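AGEA's Correlation mode amounts to computing, for a chosen seed voxel, the Pearson correlation between its expression profile and every other voxel's. A sketch on synthetic layer-dominated data (our own toy construction, illustrating why layer identity rather than areal identity tends to dominate such correlations):

```python
import numpy as np

def correlation_map(expr, seed_index):
    """Pearson correlation between the seed voxel's gene expression
    profile and the profile of every voxel. expr: voxels x genes."""
    Z = expr - expr.mean(axis=1, keepdims=True)   # center each profile
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize
    return Z @ Z[seed_index]                       # cosine of centered = Pearson

rng = np.random.default_rng(5)
# Three shared "layer" profiles, 20 voxels each; layer identity dominates.
layers = rng.normal(size=(3, 50))
expr = np.repeat(layers, 20, axis=0) + 0.1 * rng.normal(size=(60, 50))
corr = correlation_map(expr, seed_index=0)
```

On these data, voxels sharing the seed's layer profile correlate far more strongly with the seed than any voxel from another layer, mirroring the behavior described in the footnote.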
5.483
5.484 %% (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
5.485
5.486 %% Most of the projects which have been discussed have been done by the same groups that develop the public datasets. Although these projects make their algorithms available for use on their own website, none of them have released an open-source software toolkit; instead, users are restricted to using the provided algorithms only on their own dataset.
5.487
5.488 -In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
5.489 -
5.490 -Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for \begin{latex}/\end{latex} reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
5.491 -
5.492 -
5.493 -== Significance ==
5.494 -\begin{wrapfigure}{L}{0.2\textwidth}\centering
5.495 -\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps}
5.496 -\caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
5.497 -\label{hole}\end{wrapfigure}
5.498 -
5.499 -
5.500 -
5.501 -The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
5.502 -
5.503 -The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol with which experimenters can more easily identify which anatomical areas are present in small samples of cortex.
5.504 -
5.505 -
5.506 -%% Since the number of classes of stains is small compared to the number of genes,
5.507 -
5.508 -The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
5.509 -
5.510 -While we do not here propose to analyze human gene expression data, it
5.511 -is conceivable that the methods we propose to develop could be used to
5.512 -suggest modifications to the human cortical map as well. In fact, the
5.513 -methods we will develop will be applicable to other datasets beyond
5.514 -the brain.
5.515 -
5.516 -
5.517 -
5.518 -
5.519 -
5.520 -\vspace{0.3cm}\hrule
5.521 -
5.522 -== The approach: Preliminary Studies ==
5.523 -
5.524 -=== Format conversion between SEV, MATLAB, NIFTI ===
5.525 -We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
5.526 -
5.527 -
5.528 -=== Flatmap of cortex ===
5.529 -
5.530 -
5.531 -We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the region data into MATLAB format.
5.532 -
5.533 -At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel. And for each gene, there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
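The normalization step described above can be sketched as follows (a minimal stand-in written in Python rather than our MATLAB code; the array shapes and toy data are illustrative assumptions):

```python
import numpy as np

def normalize_genes(expr):
    """Z-score each gene's expression image over all surface pixels:
    subtract the gene's mean and divide by its standard deviation."""
    out = np.empty(expr.shape, dtype=float)
    for g in range(expr.shape[0]):
        img = expr[g].astype(float)
        out[g] = (img - img.mean()) / img.std()
    return out

# illustrative stand-in data: 3 "genes" over an 8x8 grid of surface pixels
rng = np.random.default_rng(0)
expr = rng.normal(5.0, 2.0, size=(3, 8, 8))
z = normalize_genes(expr)
```

After this step every gene is on a comparable scale, which matters for the scoring and clustering methods described later.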
5.534 -
5.535 -To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
5.536 -
5.537 -
5.538 -
5.539 -
5.540 -
5.541 -
5.542 -
5.543 -=== Feature selection and scoring methods ===
5.544 -\begin{wrapfigure}{L}{0.35\textwidth}\centering
5.545 +
5.546 +\begin{wrapfigure}{L}{0.4\textwidth}\centering
5.547 %%\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_3_420_jet.eps}
5.548 %%
5.549 %%\includegraphics[scale=.27]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_2_420_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_3_2072_jet.eps}
5.550 @@ -351,122 +305,182 @@
5.551 \label{AUDgeometry}\end{wrapfigure}
5.552
5.553
5.554 -
5.555 -
5.556 -\vspace{0.3cm}**Underexpression of a gene can serve as a marker**
5.557 -Underexpression of a gene can sometimes serve as a marker. See, for example, Figure \ref{hole}.
5.558 -
5.559 -
5.560 -
5.561 -
5.562 -\vspace{0.3cm}**Correlation**
5.563 -Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
5.564 -
5.565 -We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
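This correlation score can be sketched as follows (Python rather than MATLAB; the toy images below are illustrative stand-ins, not ABA expression data):

```python
import numpy as np

def correlation_score(gene_img, area_mask):
    """Pearson correlation between a gene's expression image and the
    boolean area mask, both flattened over the surface pixels."""
    g = gene_img.ravel().astype(float)
    m = area_mask.ravel().astype(float)
    return np.corrcoef(g, m)[0, 1]

# toy data: one "gene" expresses inside the target area, the other is noise
rng = np.random.default_rng(1)
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True
gene_in = mask * 1.0 + rng.normal(0, 0.1, mask.shape)
gene_noise = rng.normal(0, 1, mask.shape)
scores = [correlation_score(g, mask) for g in (gene_in, gene_noise)]
```

Genes are then ranked by this score; the top-ranked genes are the candidate markers.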
5.566 -
5.567 -%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.
5.568 -
5.569 -%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
5.570 -
5.571 -
5.572 -
5.573 -\vspace{0.3cm}**Conditional entropy**
5.574 -%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
5.575 -
5.576 -%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations.
5.577 -
5.578 -%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
5.579 -
5.580 -For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
5.581 -
5.582 -This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
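A minimal sketch of the conditional-entropy score (Python; the XOR toy below is illustrative, and the forward stepwise search over pairs is omitted):

```python
import numpy as np

def cond_entropy(target, masks):
    """H(target | masks): entropy in bits of the boolean target over the
    population of pixels, conditioned on a list of boolean gene masks."""
    t = target.ravel()
    keys = np.stack([m.ravel() for m in masks]).T  # one row of booleans per pixel
    h = 0.0
    for key in {tuple(row) for row in keys.tolist()}:
        sel = np.all(keys == key, axis=1)
        p_key = sel.mean()
        p1 = t[sel].mean()
        for p in (p1, 1 - p1):
            if p > 0:
                h -= p_key * p * np.log2(p)
    return h

# XOR toy: neither gene alone is informative, but the pair is decisive
gA = np.array([0, 0, 1, 1], dtype=bool)
gB = np.array([0, 1, 0, 1], dtype=bool)
target = gA ^ gB
h_single = cond_entropy(target, [gA])  # 1 bit of uncertainty remains
h_pair = cond_entropy(target, [gA, gB])  # 0 bits: target fully determined
```

The XOR case shows why the pairwise score can succeed where any single-gene linear score fails.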
5.583 -
5.584 -
5.585 -
5.586 -
5.587 -\vspace{0.3cm}**Gradient similarity**
5.588 -We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is:
5.589 -
5.590 -%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction.
5.591 -
5.592 -
5.593 -
5.594 -\begin{align*}
5.595 -\sum_{pixel \in pixels} cos(abs(\angle \nabla_1 - \angle \nabla_2)) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2}
5.596 -\end{align*}
5.597 -
5.598 -where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$.
5.599 -
5.600 -The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
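A direct transcription of this formula (Python; using np.gradient as the discrete gradient is an implementation choice, and the test images are synthetic):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Sum over pixels of cos(angle between the two gradients), weighted by
    the mean gradient magnitude and the mean pixel value (the formula above)."""
    gy1, gx1 = np.gradient(img1.astype(float))
    gy2, gx2 = np.gradient(img2.astype(float))
    ang1 = np.arctan2(gy1, gx1)
    ang2 = np.arctan2(gy2, gx2)
    mag1 = np.hypot(gx1, gy1)
    mag2 = np.hypot(gx2, gy2)
    return np.sum(np.cos(np.abs(ang1 - ang2))
                  * (mag1 + mag2) / 2
                  * (img1 + img2) / 2)

# a soft-edged blob scores higher against itself than against a shifted copy,
# because its border gradients line up in both location and orientation
x = np.linspace(-1, 1, 20)
X, Y = np.meshgrid(x, x)
blob = np.exp(-8 * (X**2 + Y**2))
shifted = np.roll(blob, 6, axis=1)
s_same = gradient_similarity(blob, blob)
s_diff = gradient_similarity(blob, shifted)
```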
5.601 -
5.602 -\vspace{0.3cm}**Gradient similarity provides information complementary to correlation**
5.603 -\begin{wrapfigure}{L}{0.35\textwidth}\centering
5.604 +\cite{chin_genome-scale_2007} looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test to determine whether the mean expression level of a gene is significantly higher in the target region. This relates to our Goal 1. \cite{chin_genome-scale_2007} also clusters genes, relating to our Goal 2. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
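In essence, that score compares within-region to outside-region expression. A hedged sketch (the Welch unequal-variance form and the synthetic data are our assumptions, not details taken from the cited paper):

```python
import numpy as np

def welch_t(inside, outside):
    """Welch's t statistic comparing mean expression inside vs outside a region."""
    inside = np.asarray(inside, dtype=float)
    outside = np.asarray(outside, dtype=float)
    vi, vo = inside.var(ddof=1), outside.var(ddof=1)
    return (inside.mean() - outside.mean()) / np.sqrt(vi / inside.size + vo / outside.size)

# synthetic stand-in: a "marker" expressed strongly in the region, and a non-marker
rng = np.random.default_rng(2)
t_marker = welch_t(rng.normal(3.0, 1.0, 200),   # voxels inside the region
                   rng.normal(0.0, 1.0, 500))   # voxels outside
t_nonmarker = welch_t(rng.normal(0.0, 1.0, 200),
                      rng.normal(0.0, 1.0, 500))
```

Large positive t marks genes whose mean expression is significantly higher in the target region.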
5.605 +
5.606 +These related works differ from our strategy for Goal 1 in at least three ways. First, they find only single genes, whereas we will also look for combinations of genes. Second, they usually can only use overexpression as a marker, whereas we will also search for underexpression. Third, they use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Results). Figures \ref{MOcombo}, \ref{hole}, and \ref{AUDgeometry} in the Preliminary Results section contain evidence that each of our three choices is the right one.
5.607 +
5.608 +\cite{hemert_matching_2008} describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.
5.609 +%%Their match score is Jaccard similarity.
5.610 +They apply their technique for finding combinations of marker genes to the task of clustering genes around a "seed gene". %%They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if a gene is expressed in this voxel, then it is probably also expressed in that voxel". This could be useful as part of a procedure for clustering voxels.
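As a sketch of this kind of search, here is a Jaccard match score applied to a boolean combination of thresholded gene images (toy masks of our own construction; the evolutionary search itself is omitted):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean images: |A ∩ B| / |A ∪ B|."""
    a, b = a.ravel(), b.ravel()
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# the target is exactly the AND of two thresholded gene masks
g1 = np.zeros((8, 8), dtype=bool); g1[:, :5] = True   # "gene 1": left band
g2 = np.zeros((8, 8), dtype=bool); g2[:5, :] = True   # "gene 2": top band
target = g1 & g2                                       # top-left block
scores = {"g1": jaccard(g1, target),
          "g2": jaccard(g2, target),
          "g1 AND g2": jaccard(g1 & g2, target)}
```

The combined mask matches the target perfectly while either gene alone overshoots it, which is the situation a search over logical combinations is designed to exploit.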
5.611 +
5.612 +
5.613 +Relating to our Goal 2, some researchers have attempted to parcellate the cortex on the basis of data other than gene expression. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.
5.614 +
5.615 +
5.616 +
5.617 +%%Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
5.618 +
5.619 +\cite{thompson_genomic_2008} describes an analysis of the anatomy of
5.620 +the hippocampus using the ABA dataset. In addition to manual analysis,
5.621 +two clustering methods were employed, a modified Non-negative Matrix
5.622 +Factorization (NNMF), and a hierarchical bifurcation clustering scheme using correlation as similarity. The paper reported impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset, and while the results are promising, other methods may perform as well or better (see Preliminary Results, Figure \ref{dimReduc}).
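As a sketch of the "vanilla" NNMF we ran (Lee-Seung multiplicative updates in Python; the tiny synthetic matrix is illustrative, and this omits the cited paper's spatial-contiguity modification):

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Multiplicative-update NMF minimizing ||V - W H||_F.
    V: nonnegative (pixels x genes) matrix; returns W (pixels x k), H (k x genes)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 0.1
    H = rng.random((k, V.shape[1])) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy data: two blocks of "pixels", each sharing a gene signature
rng = np.random.default_rng(3)
V = np.vstack([np.outer(np.ones(20), [5, 0, 1, 0]),
               np.outer(np.ones(20), [0, 4, 0, 2])]) + rng.random((40, 4)) * 0.01
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Each row of H is a spatial-expression prototype and each column of W says how strongly a pixel loads on each prototype, which is what makes the factors interpretable as candidate regions.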
5.623 +
5.624 +%% \footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.} and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Results, Figure \ref{dimReduc}).
5.625 +
5.626 +%% In addition, this paper described a visual screening of the data, specifically, a visual analysis of 6000 genes with the primary purpose of observing how the spatial pattern of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations.
5.627 +
5.628 +%% \cite{thompson_genomic_2008} reports that both mNNMF and hierarchical mNNMF clustering were useful, and that hierarchical recursive bifurcation gave similar results.
5.629 +
5.630 +
5.631 +
5.632 +
5.633 +%%\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
5.634 +
5.635 +Comparing previous work with our Goal 1, there has been fruitful work on finding marker genes, but only one of the projects explored combinations of marker genes, and none of them compared the results obtained by using different algorithms or scoring methods. Comparing previous work with Goal 2, although some projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, or tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, or used co-clustering algorithms.
5.636 +
5.637 +%%The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression.
5.638 +
5.639 +In summary, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
5.640 +
5.641 +Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for \begin{latex}/\end{latex} reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
5.642 +
5.643 +
5.644 +
5.645 +
5.646 +
5.647 +
5.648 +
5.649 +
5.650 +
5.651 +
5.652 +
5.653 +
5.654 +
5.655 +
5.656 +\vspace{0.3cm}\hrule
5.657 +
5.658 +== Data sharing plan ==
5.659 +
5.660 +\begin{wrapfigure}{L}{0.4\textwidth}\centering
5.661 \includegraphics[scale=.27]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.27]{MO_vs_Mtif2_jet.eps}
5.662
5.663 \includegraphics[scale=.27]{MO_vs_Wwc1_plus_Mtif2_jet.eps}
5.664 \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).}
5.665 \label{MOcombo}\end{wrapfigure}
5.666
5.667 -
5.668 -
5.669 -
5.670 -To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area.
5.671 -
5.672 -%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
5.673 -
5.674 -%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.}
5.675 -
5.676 -%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers.
5.677 -
5.678 -\vspace{0.3cm}**Areas which can be identified by single genes**
5.679 -Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases.
5.680 -
5.681 -In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory).
5.682 -
5.683 -These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity.
5.684 -
5.685 -
5.686 -
5.687 -\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas**
5.688 -
5.689 -In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.
5.690 -
5.691 -This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
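The logic of Figure \ref{MOcombo} can be sketched with hypothetical masks (these are not the wwc1/mtif2 data; each toy "gene" covers one half of the target plus an overshoot, and their pixelwise sum fits the whole target better than either alone):

```python
import numpy as np

# hypothetical target area and two hypothetical gene-expression images
target = np.zeros((10, 10)); target[3:7, 2:8] = 1.0
geneA = np.zeros((10, 10)); geneA[3:7, 5:10] = 1.0  # right half + overshoot
geneB = np.zeros((10, 10)); geneB[3:7, 0:5] = 1.0   # left half + overshoot

def corr(a, b):
    """Pearson correlation of two images as a simple goodness-of-fit score."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

combo = geneA + geneB  # pixelwise sum, as in the figure
r_combo, r_A, r_B = corr(combo, target), corr(geneA, target), corr(geneB, target)
```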
5.692 -
5.693 -%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652}
5.694 -%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784}
5.695 -
5.696 -%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface.
5.697 -
5.698 -%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene.
5.699 -
5.700 -
5.701 -
5.702 -
5.703 -%%\vspace{0.3cm}**Feature selection integrated with prediction**
5.704 -%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning".
5.705 -
5.706 -
5.707 -=== Multivariate supervised learning ===
5.708 -
5.709 -
5.710 -\vspace{0.3cm}**Forward stepwise logistic regression**
5.711 -Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
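A sketch of the forward stepwise wrapper (Python, with a plain gradient-descent logistic fit standing in for whatever solver was actually used; the data are synthetic, with the target driven by two of three "genes"):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=1000):
    """Plain gradient-descent logistic regression; returns weights (with bias)
    and the final log-likelihood, used here as the stepwise score."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    p = np.clip(1 / (1 + np.exp(-Xb @ w)), 1e-9, 1 - 1e-9)
    return w, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def forward_stepwise(X, y, n_select):
    """Greedily add the feature whose inclusion most improves log-likelihood."""
    chosen = []
    while len(chosen) < n_select:
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        ll = {j: fit_logistic(X[:, chosen + [j]], y)[1] for j in remaining}
        chosen.append(max(ll, key=ll.get))
    return chosen

# synthetic pilot: areal identity depends on genes 0 and 1; gene 2 is noise
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
y = ((X[:, 0] + X[:, 1]) > 0).astype(float)
picked = forward_stepwise(X, y, 2)
```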
5.712 -
5.713 -%%We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
5.714 -
5.715 -
5.716 -\vspace{0.3cm}**SVM on all genes at once**
5.717 -
5.718 -In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%\footnote{5-fold cross-validation.}. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
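A sketch of this experiment (synthetic data; a Pegasos-style linear SVM written out in numpy stands in for the SVM implementation actually used, and the accuracy here is not the 81% figure):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=30, seed=0):
    """Pegasos-style subgradient descent on the regularized hinge loss.
    Labels y must be in {-1, +1}; returns weights w and bias b."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1]); b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:          # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def cv_accuracy(X, y, k=5):
    """Plain k-fold cross-validation accuracy."""
    folds = np.array_split(np.arange(len(y)), k)
    accs = []
    for f in folds:
        train = np.setdiff1d(np.arange(len(y)), f)
        w, b = train_linear_svm(X[train], y[train])
        accs.append(np.mean(np.sign(X[f] @ w + b) == y[f]))
    return float(np.mean(accs))

# synthetic stand-in: 40 "genes" per pixel, identity driven by a few of them
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 40))
y = np.where(X[:, 0] + 0.8 * X[:, 3] - X[:, 7] > 0, 1, -1)
acc = cv_accuracy(X, y)
```

The point of the experiment is an upper bound: with all genes available the classifier does well, which shows the information is in the dataset even though a small-panel classifier is what is practically useful.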
5.719 -
5.720 -
5.721 -
5.722 -
5.723 -
5.724 -=== Data-driven redrawing of the cortical map ===
5.725 -
5.726 -\begin{wrapfigure}{L}{0.35\textwidth}\centering
5.727 +We are enthusiastic about the sharing of methods and data, and at the conclusion of the project, we will make all of our data and computer source code publicly available, either in supplemental attachments to publications, or on a website. The source code will be released under the GNU General Public License. We intend to include a software program which, when run, will take as input the Allen Brain Atlas raw data, and produce as output all numbers and charts found in publications resulting from the project. Source code to be released will include extensions to Caret\cite{van_essen_integrated_2001}, an existing open-source scientific imaging program, and to Spectral Python. Data to be released will include the 2-D "flat map" dataset. This dataset will be submitted to a machine learning dataset repository.
5.728 +
5.729 +%% Our goal is that replicating our results, or applying the methods we develop to other targets, will be quick and easy for other investigators.
5.730 +
5.731 +
5.732 +
5.733 +
5.734 +== Broader impacts ==
5.735 +
5.736 +In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. %%The results of the project will support the development of new ways to selectively target cortical areas, and it will support the development of a method for identifying the cortical areal boundaries present in small tissue samples.
5.737 +
5.738 +The method developed in Goal 1 will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
5.739 +
5.740 +The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once.
5.741 +
5.742 +%%This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
5.743 +
5.744 +%% Since the number of classes of stains is small compared to the number of genes,
5.745 +The method developed in Goal 2 will provide a genoarchitectonic viewpoint that will contribute to the creation of a better cortical map. %%The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
5.746 +
5.747 +
5.748 +The methods we will develop will be applicable to other datasets beyond
5.749 +the brain, and even to datasets outside of biology. The software we develop will be useful for the analysis of hyperspectral images. Our project will draw attention to this area of overlap between neuroscience and GIS, and may lead to future collaborations between these two fields. The cortical dataset that we produce will be useful in the machine learning community as a sample dataset that new algorithms can be tested against. The availability of this sample dataset to the machine learning community may lead to more interest in the design of machine learning algorithms to analyze spatial gene expression.
5.750 +
5.751 +%%, which would benefit neuroscience down the road as the efforts of more machine learning researchers are focused on a problem of biological interest.
5.752 +
5.753 +
5.754 +
5.755 +
5.756 +\vspace{0.3cm}\hrule
5.757 +
5.758 +== Preliminary Results ==
5.759 +
5.760 +
5.761 +
5.762 +=== Format conversion between SEV, MATLAB, NIFTI ===
5.763 +We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
5.764 +
5.765 +
5.766 +=== Flatmap of cortex ===
5.767 +
5.768 +
5.769 +We downloaded the ABA data and selected only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh, sampled the mesh nodes to create a regular grid of pixel values, and converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides, and converted this region data into MATLAB format.
5.770 +
5.771 +%%We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface and then into region data in MATLAB format.
5.772 +
5.773 +%% We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values.
5.774 +%% We projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the region data into MATLAB format.
5.775 +
5.776 +At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. One 2-D matrix holds the regional label associated with each surface pixel, and for each gene there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. The features and the target area are both functions on the surface pixels. They can be regarded as scalar fields over the space of surface pixels; alternatively, they can be thought of as images which can be displayed on the flatmapped surface.
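As a concrete sketch of this data layout (Python/NumPy rather than MATLAB; a hypothetical 4x4 miniature grid and a random "gene" stand in for the real matrices):

```python
import numpy as np

# Hypothetical miniature version of the registered 2-D matrices: a 4x4
# grid of surface pixels. `region_labels` gives each pixel's areal label;
# `expression["geneA"]` gives that gene's average expression underneath
# each pixel (random numbers stand in for real data).
region_labels = np.array([[1, 1, 2, 2],
                          [1, 1, 2, 2],
                          [3, 3, 2, 2],
                          [3, 3, 3, 3]])
expression = {"geneA": np.random.default_rng(0).random((4, 4))}

# A target area (here, region 2) as a boolean mask over surface pixels --
# the form in which areas serve as classification targets.
target_mask = (region_labels == 2)
assert target_mask.shape == expression["geneA"].shape   # in registration
print(int(target_mask.sum()))  # number of surface pixels in the target area
```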
5.777 +
5.778 +%% We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation.
5.779 +
5.780 +%%To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
5.781 +
5.782 +
5.783 +
5.784 +
5.785 +
5.786 +
5.787 +
5.788 +=== Feature selection and scoring methods ===
5.789 +
5.790 +
5.791 +
5.792 +\vspace{0.3cm}**Underexpression of a gene can serve as a marker**
5.793 +Sometimes it is the absence, rather than the presence, of a gene's expression that distinguishes an area; for an example, see Figure \ref{hole}.
5.794 +
5.795 +
5.796 +
5.797 +
5.798 +\vspace{0.3cm}**Correlation**
5.799 +Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
5.800 +
5.801 +We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
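A minimal sketch of this per-gene correlation score (Python/NumPy; the toy gene image and area mask here are invented for illustration):

```python
import numpy as np

def area_gene_correlation(gene_img, area_mask):
    """Pearson correlation between a gene's expression image and the
    boolean mask of a cortical area, computed over all surface pixels."""
    return np.corrcoef(gene_img.ravel().astype(float),
                       area_mask.ravel().astype(float))[0, 1]

# Toy example: a gene expressing strongly inside an invented area and
# weakly outside correlates highly with the area's mask.
rng = np.random.default_rng(1)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
gene = np.where(mask, 0.9, 0.1) + 0.01 * rng.random((4, 4))
print(area_gene_correlation(gene, mask))  # close to 1 for this toy gene
```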
5.802 +
5.803 +%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.
5.804 +
5.805 +%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS.
5.806 +
5.807 +
5.808 +
5.809 +\vspace{0.3cm}**Conditional entropy**
5.810 +%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
5.811 +
5.812 +%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations.
5.813 +
5.814 +%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
5.815 +
5.816 +For each region, we created and ran a forward stepwise procedure which attempted to find pairs of genes such that the conditional entropy of the target area's boolean mask, conditioned upon the gene pair's thresholded expression levels, is minimized.
5.817 +
5.818 +This finds pairs of genes which are most informative (at least at these threshold levels) relative to the question, "Is this surface pixel a member of the target area?". The advantage over linear methods such as logistic regression is that this takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
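The pair search can be sketched as follows (Python/NumPy; each synthetic "gene" here is binarized at a single threshold, whereas our procedure used five thresholds per gene, and the XOR toy target illustrates the nonlinearity argument above):

```python
import itertools
import numpy as np

def cond_entropy(target, features):
    """Empirical conditional entropy H(target | features), in bits, for a
    boolean target and a tuple of boolean feature arrays."""
    # encode the joint value of the boolean features as an integer per pixel
    codes = sum(f.ravel().astype(int) << i for i, f in enumerate(features))
    t, h = target.ravel(), 0.0
    for c in np.unique(codes):
        cell = t[codes == c]
        w, p = cell.size / t.size, cell.mean()
        for q in (p, 1.0 - p):
            if q > 0:
                h -= w * q * np.log2(q)
    return h

# Toy target equal to the XOR of two binarized "genes": linear methods see
# no signal, but the conditional entropy of the right pair drops to zero.
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.random(200) > 0.5 for _ in range(3))
target = g1 ^ g2
best = min(itertools.combinations((g1, g2, g3), 2),
           key=lambda pair: cond_entropy(target, pair))
print(cond_entropy(target, best))  # 0.0: the XOR pair determines the target
```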
5.819 +
5.820 +
5.821 +
5.822 +
5.823 +\vspace{0.3cm}**Gradient similarity**
5.824 +We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method, "gradient similarity", to detect when a gene's expression pattern has a boundary whose shape is similar to that of the target region. The formula is:
5.825 +
5.826 +%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction.
5.827 +
5.828 +
5.829 +
5.830 +\begin{align*}
5.831 +\sum_{pixel \in pixels} \cos(\angle \nabla_1 - \angle \nabla_2) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2}
5.832 +\end{align*}
5.833 +
5.834 +where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$.
5.835 +
5.836 +The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
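The score defined by the formula above can be sketched as follows (Python/NumPy; `np.gradient` stands in for whatever discrete gradient estimator the actual implementation uses):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity of two images (scalar fields), following the
    formula above: cosine of the gradient angle difference, weighted by
    the mean gradient magnitude and the mean pixel value."""
    gy1, gx1 = np.gradient(img1)   # np.gradient returns (d/axis0, d/axis1)
    gy2, gx2 = np.gradient(img2)
    angle_diff = np.arctan2(gy1, gx1) - np.arctan2(gy2, gx2)
    mean_mag = (np.hypot(gx1, gy1) + np.hypot(gx2, gy2)) / 2
    mean_val = (img1 + img2) / 2
    return float(np.sum(np.cos(angle_diff) * mean_mag * mean_val))

# A ramp image scores higher against itself than against its mirror image,
# whose gradients point in a different direction.
ramp = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
assert gradient_similarity(ramp, ramp) > gradient_similarity(ramp, ramp[:, ::-1])
```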
5.837 +
5.838 +
5.839 +\begin{wrapfigure}{L}{0.4\textwidth}\centering
5.840 \includegraphics[scale=.27]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.27]{singlegene_example_371_Aldh1a2_SSs_jet.eps}
5.841 \includegraphics[scale=.27]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.27]{singlegene_example_3310_Slco1a5_FRP_jet.eps}
5.842 \includegraphics[scale=.27]{singlegene_example_3709_Tshz2_RSP_jet.eps}\includegraphics[scale=.27]{singlegene_example_3674_Trhr_COApm_jet.eps}
5.843 @@ -475,6 +489,67 @@
5.844 \caption{From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), COApm (Cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor), posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual group is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are $Pitx2$, $Aldh1a2$, $Ppfibp1$, $Slco1a5$, $Tshz2$, $Trhr$, $Col12a1$, $Ets1$.}
5.845 \label{singleSoFar}\end{wrapfigure}
5.846
5.847 +\vspace{0.3cm}**Gradient similarity provides information complementary to correlation**
5.848 +
5.849 +
5.850 +
5.851 +
5.852 +
5.853 +To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that it includes genes whose expression lacks a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that it includes genes which don't express over the entire area.
5.854 +
5.855 +%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
5.856 +
5.857 +%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.}
5.858 +
5.859 +%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers.
5.860 +
5.861 +\vspace{0.3cm}**Areas which can be identified by single genes**
5.862 +Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases.
5.863 +
5.864 +In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory).
5.865 +
5.866 +These results support our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, and they validate the relevance of our new scoring method, gradient similarity.
5.867 +
5.868 +
5.869 +
5.870 +\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas**
5.871 +
5.872 +In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.
5.873 +
5.874 +This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
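The additive combination can be illustrated with invented data (Python/NumPy; the two synthetic "genes" mimic wwc1 and mtif2 in that each overshoots one boundary of the target, and correlation with the area mask is used as a stand-in match score):

```python
import numpy as np

def corr(img, mask):
    """Pearson correlation between an image and a boolean area mask."""
    return np.corrcoef(img.ravel(), mask.ravel().astype(float))[0, 1]

# Invented miniature version of the situation in Figure MOcombo: the target
# square is matched imperfectly by each "gene" alone (one overshoots
# rightward, one overshoots upward) but well by their pixelwise sum.
target = np.zeros((8, 8), dtype=bool)
target[2:6, 2:6] = True
geneA = np.zeros((8, 8)); geneA[2:6, 2:8] = 1.0   # overshoots to the right
geneB = np.zeros((8, 8)); geneB[0:6, 2:6] = 1.0   # overshoots upward

combo = geneA + geneB   # pixelwise sum of the two expression images
assert corr(combo, target) > max(corr(geneA, target), corr(geneB, target))
```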
5.875 +
5.876 +%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652}
5.877 +%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784}
5.878 +
5.879 +%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface.
5.880 +
5.881 +%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in figure the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene.
5.882 +
5.883 +
5.884 +
5.885 +
5.886 +%%\vspace{0.3cm}**Feature selection integrated with prediction**
5.887 +%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning".
5.888 +
5.889 +
5.890 +=== Multivariate supervised learning ===
5.891 +
5.892 +
5.893 +\vspace{0.3cm}**Forward stepwise logistic regression**
5.894 +Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
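A sketch of the forward stepwise wrapper (Python with scikit-learn; this is not the code used in the pilot run, and the synthetic matrix stands in for the surface-pixels-by-genes data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def forward_stepwise(X, y, n_select=2):
    """Greedy forward selection: at each step, add the gene (column of X)
    whose inclusion gives the best training accuracy for a logistic
    regression fit on the genes chosen so far."""
    chosen = []
    for _ in range(n_select):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        def training_accuracy(j):
            cols = chosen + [j]
            clf = LogisticRegression().fit(X[:, cols], y)
            return clf.score(X[:, cols], y)
        chosen.append(max(remaining, key=training_accuracy))
    return chosen

# Synthetic stand-in for the (surface pixels x genes) matrix: areal
# membership here depends on genes 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
print(sorted(forward_stepwise(X, y)))  # expected to recover genes 0 and 2
```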
5.895 +
5.896 +%%We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
5.897 +
5.898 +
5.899 +\vspace{0.3cm}**SVM on all genes at once**
5.900 +
5.901 +In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81\%\footnote{5-fold cross-validation.}. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
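The all-genes-at-once experiment has this general shape (Python with scikit-learn as an assumed stand-in for our actual pipeline; the ~81\% figure above comes from the real ABA-derived data, not from this toy example):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the (surface pixels x genes) expression matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))           # 200 "pixels", 30 "genes"
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # areal identity from a few genes
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)  # 5-fold CV
print(scores.mean())
```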
5.902 +
5.903 +%% This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy.
5.904 +
5.905 +
5.906 +
5.907 +=== Data-driven redrawing of the cortical map ===
5.908
5.909
5.910
5.911 @@ -485,7 +560,7 @@
5.912
5.913
5.914
5.915 -After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. To compare, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
5.916 +After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the bottom row of Figure \ref{dimReduc}. To compare, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
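The reduce-then-cluster pipeline can be sketched as follows (Python with scikit-learn as an assumed stand-in for our MATLAB pipeline; synthetic data, and only the PCA variant is shown, not NNMF or landmark Isomap):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Three synthetic "regions" of surface pixels with distinct expression
# profiles stand in for the real data; PCA reduces the (pixels x genes)
# matrix before k-means clusters the reduced coordinates.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 20))
               for m in (0.0, 2.0, 4.0)])
Z = PCA(n_components=5).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
# Each synthetic region should fall into a single cluster.
print([len(set(labels[i:i + 50])) for i in (0, 50, 100)])
```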
5.917
5.918
5.919
5.920 @@ -494,7 +569,7 @@
5.921 We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
5.922
5.923
5.924 -== The approach: what we plan to do ==
5.925 +== Our plan: what remains to be done ==
5.926
5.927 %%\vspace{0.3cm}**Flatmap cortex and segment cortical layers**
5.928
5.929 @@ -507,16 +582,18 @@
5.930
5.931 %%Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Although there is much 2-D organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
5.932
5.933 -There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
5.934 -
5.935 -We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
5.936 +There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). We will also develop a segmentation algorithm to automatically identify the layer boundaries.
5.937 +
5.938 +%%Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
5.939 +
5.940 +%%We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
5.941
5.942 %%\vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions**
5.943 %%\vspace{0.3cm}****
5.944
5.945
5.946 === Develop algorithms that find genetic markers for anatomical regions ===
5.947 -\begin{wrapfigure}{L}{0.6\textwidth}\centering
5.948 +\begin{wrapfigure}{L}{0.7\textwidth}\centering
5.949 \includegraphics[scale=1]{merge3_norm_hv_PCA_ndims50_prototypes_collage_sm_border.eps}
5.950 \includegraphics[scale=.98]{nnmf_ndims7_collage_border.eps}
5.951 \includegraphics[scale=1]{merge3_norm_hv_k150_LandmarkIsomap_ndims7_prototypes_collage_sm_border.eps}
5.952 @@ -529,30 +606,32 @@
5.953 %%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), and we plan to develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test).
5.954 We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
5.955
5.956 -Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate each area. We will quantitatively compare the list of single genes generated by our method to the lists generated by previous methods which are mentioned in Aim 1 Related Work.
5.957 +Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by methods which are mentioned in Related Work.
5.958
5.959
5.960 Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures, for example, Hotelling's T-square is a multivariate analog of Student's t.
5.961
5.962 We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize number of features used, such as sparse support vector machines (SVMs).
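Option (c) can be illustrated with an L1-penalized logistic regression, a close relative of the sparse SVM (Python with scikit-learn; synthetic data, parameter values illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic (pixels x genes) data in which membership in the target area
# depends on genes 3 and 17 only.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))
y = (2 * X[:, 3] - 2 * X[:, 17] > 0).astype(int)

# The L1 penalty drives most coefficients to exactly zero, so the fitted
# classifier doubles as a small marker-gene panel.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(selected)  # a small set of genes, including genes 3 and 17
```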
5.963
5.964 -Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant in the presence of error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
5.965 -
5.966 -An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
5.967 -
5.968 -A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
5.969 +Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be robust to error, but many are not. %%We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
5.970 +
5.971 +%% We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries.
5.972 +
5.973 +An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Goal 2 might be useful in achieving Goal 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
5.974 +
5.975 +A future publication on the method that we develop in Goal 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
5.976
5.977 %% (including spatial models\cite{paciorek_computational_2007})
5.978
5.979 -\vspace{0.3cm}**Classifiers**
5.980 -We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks.
5.981 +%%\vspace{0.3cm}**Classifiers**
5.982 +%%We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks.
5.983
5.984
5.985
5.986
5.987 === Develop algorithms to suggest a division of a structure into anatomical parts ===
5.988
5.989 -\begin{wrapfigure}{L}{0.5\textwidth}\centering
5.990 +\begin{wrapfigure}{L}{0.6\textwidth}\centering
5.991 \includegraphics[scale=.2]{cosine_similarity1_rearrange_colorize.eps}
5.992 \caption{Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.}
5.993 \label{geneClusters}\end{wrapfigure}
5.994 @@ -566,20 +645,18 @@
5.995 %% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.}
5.996
5.997 \vspace{0.3cm}**Clustering and segmentation on pixels**
5.998 -We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction.
5.999 +We will explore clustering and image segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction.
5.1000
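The pixel-clustering step above can be sketched as follows. This is a minimal k-means on toy data, where rows are pixels and columns are genes; the real input would be the flatmapped ABA expression matrix, and in practice a library implementation would likely be used rather than this hand-rolled loop.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means: rows of X are pixels, columns are genes."""
    rng = np.random.default_rng(seed)
    # initialize centers at k distinct randomly chosen pixels
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# toy example: 20 "pixels" in a 3-gene expression space, forming two blobs
rng = np.random.default_rng(1)
X = np.vstack([np.zeros((10, 3)), np.ones((10, 3))]) + 0.01 * rng.standard_normal((20, 3))
labels, centers = kmeans(X, k=2)
```

The same matrix viewed column-wise supports the gene-clustering variant discussed below.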
5.1001 \vspace{0.3cm}**Clustering on genes**
5.1002 -We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes.
5.1003 +We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas (Figure \ref{geneClusters}). We will further explore the clustering of genes.
5.1004
5.1005 In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions.
5.1006
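The redundancy-removal idea above (replacing a cluster of similar genes with one averaged expression level) can be sketched as follows. The cluster assignments are assumed given; in practice they would come from gradient-similarity clustering, and the function name here is illustrative.

```python
import numpy as np

def collapse_gene_clusters(expr, gene_cluster):
    """expr: pixels x genes matrix; gene_cluster: cluster id per gene.
    Returns a pixels x n_clusters matrix of cluster-averaged prototypes."""
    clusters = np.unique(gene_cluster)
    # average the expression columns belonging to each gene cluster
    return np.column_stack(
        [expr[:, gene_cluster == c].mean(axis=1) for c in clusters])

# toy example: genes 0 and 1 fall in one cluster, gene 2 stands alone
expr = np.array([[1.0, 3.0, 10.0],
                 [2.0, 4.0, 20.0]])
proto = collapse_gene_clusters(expr, np.array([0, 0, 1]))
# proto -> [[2., 10.], [3., 20.]]
```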
5.1007 \vspace{0.3cm}**Co-clustering**
5.1008 -There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example, IRM. These are called co-clustering or biclustering algorithms.
5.1009 -
5.1010 -%%IRM\cite{kemp_learning_2006}.
5.1011 -
5.1012 -\vspace{0.3cm}**Radial profiles**
5.1013 -We wil explore the use of the radial profile of gene expression under each pixel.
5.1014 +We will explore some algorithms which simultaneously incorporate clustering on instances and on features (in our case, pixels and genes), for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms.
5.1015 +
5.1016 +%%\vspace{0.3cm}**Radial profiles**
5.1017 +%%We will explore the use of the radial profile of gene expression under each pixel.
5.1018
5.1019 \vspace{0.3cm}**Compare different methods**
5.1020 In order to tell which method is best for genomic anatomy, for each experimental method we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
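One of the partition-similarity metrics named above, the Rand index, has a simple pairwise definition: the fraction of pixel pairs on which two clusterings agree (both together or both apart). A minimal sketch, for illustration only:

```python
from itertools import combinations

def rand_index(a, b):
    """a, b: cluster labels per pixel. Returns pairwise agreement in [0, 1]."""
    pairs = list(combinations(range(len(a)), 2))
    # a pair agrees if both clusterings put it together, or both apart
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

# identical partitions up to relabeling score 1.0
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

The metric is invariant to relabeling of clusters, which is the property needed when comparing an unsupervised map against the Allen Reference Atlas.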
5.1021 @@ -590,13 +667,11 @@
5.1022
5.1023
5.1024 === Apply the new methods to the cortex ===
5.1025 -Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
5.1026 -
5.1027 -%% GENSAT\cite{gong_gene_2003}
5.1028 -
5.1029 -Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible for an unrelated combination of genes to seem to identify an area when in fact it is only coincidence. There are two ways we will validate our marker genes to guard against this. First, we will confirm that putative combinations of marker genes express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT.
5.1030 -
5.1031 -Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
5.1032 +Using the methods developed in Goal 1, we will present, for each cortical area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
5.1033 +
5.1034 +Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible for an unrelated combination of genes to seem to identify an area when in fact it is only coincidence. There are three ways we will validate our marker genes to guard against this. First, we will confirm that putative combinations of marker genes express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT\cite{gong_gene_2003}. Third, we may conduct ISH experiments jointly with collaborators to get further data on genes of particular interest.
5.1035 +
5.1036 +Using the methods developed in Goal 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
5.1037
5.1038
5.1039
5.1040 @@ -605,29 +680,16 @@
5.1041 %%# note: slice artifact
5.1042
5.1043 %%\vspace{0.3cm}**Extension to probabalistic maps**
5.1044 -%%Presently, we do not have a probabalistic atlas which is registered to the ABA space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Aim 1 techniques which can handle probabalistic maps.
5.1045 -
5.1046 -
5.1047 -
5.1048 -\vspace{0.3cm}\hrule
5.1049 -
5.1050 -== Timeline and milestones ==
5.1051 -
5.1052 -\vspace{0.3cm}**Finding marker genes**
5.1053 -\\ **September-November 2009**: Develop an automated mechanism for segmenting the cortical voxels into layers
5.1054 -\\ **November 2009 (milestone)**: Have completed construction of a flatmapped, cortical dataset with information for each layer
5.1055 -\\ **October 2009-April 2010**: Develop scoring and supervised learning methods.
5.1056 -\\ **January 2010 (milestone)**: Submit a publication on single marker genes for cortical areas
5.1057 -\\ **February-July 2010**: Continue to develop scoring methods and supervised learning frameworks. Extend techniques for robustness. Compare the performance of techniques. Validate marker genes. Prepare software toolbox for Aim 1.
5.1058 -\\ **June 2010 (milestone)**: Submit a paper describing a method fulfilling Aim 1. Release toolbox.
5.1059 -\\ **July 2010 (milestone)**: Submit a paper describing combinations of marker genes for each cortical area, and a small number of marker genes that can, in combination, define most of the areas at once
5.1060 -
5.1061 -\vspace{0.3cm}**Revealing new ways to parcellate a structure into regions**
5.1062 -\\ **June 2010-March 2011**: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt clustering algorithms to use radial profile information. Compare the performance of techniques.
5.1063 -\\ **March 2011 (milestone)**: Submit a paper describing a method fulfilling Aim 2. Release toolbox.
5.1064 -\\ **February-May 2011**: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex, interpret the results. Prepare software toolbox for Aim 2.
5.1065 -\\ **May 2011 (milestone)**: Submit a paper on the genomic anatomy of the cortex, using the methods developed in Aim 2
5.1066 -\\ **May-August 2011**: Revisit Aim 1 to see if what was learned during Aim 2 can improve the methods for Aim 1. Possibly submit another paper.
5.1067 +%%Presently, we do not have a probabilistic atlas which is registered to the ABA space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Goal 1 techniques which can handle probabilistic maps.
5.1068 +
5.1069 +
5.1070 +
5.1071 +=== Apply the new methods to hyperspectral datasets ===
5.1072 +Our software will be able to read and write file formats common in the hyperspectral imaging community such as Erdas LAN and ENVI, and it will be able to convert between the SEV and NIFTI formats from neuroscience and the ENVI format from GIS. The methods developed in Goals 1 and 2 will be implemented either as part of Spectral Python or as a separate tool that interoperates with Spectral Python. The methods will be run on hyperspectral satellite image datasets, and their performance will be compared to existing hyperspectral analysis techniques.
5.1073 +
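The ENVI format mentioned above pairs a small text header (samples/lines/bands, interleave, data type) with a raw binary cube. The raw-cube round trip can be sketched as below, assuming band-sequential (BSQ) interleave and float32 samples; real files would be handled through Spectral Python's readers rather than by hand, and the function names here are illustrative.

```python
import os
import tempfile
import numpy as np

def write_bsq(path, cube):
    """Write a (bands, lines, samples) cube band-sequentially as float32."""
    np.asarray(cube, dtype=np.float32).tofile(path)

def read_bsq(path, bands, lines, samples):
    """Read a raw BSQ cube back into a (bands, lines, samples) array."""
    return np.fromfile(path, dtype=np.float32).reshape(bands, lines, samples)

# toy cube: 2 bands over a 3 x 4 pixel grid
cube = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
path = os.path.join(tempfile.gettempdir(), "toy.bsq")
write_bsq(path, cube)
restored = read_bsq(path, 2, 3, 4)
```

Viewed this way, a hyperspectral cube is the same object as the per-pixel gene-expression matrix of Goals 1 and 2, with spectral bands playing the role of genes.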
5.1074 +
5.1075 +
5.1076 +
5.1077
5.1078 \newpage
5.1079
5.1080 @@ -636,10 +698,11 @@
5.1081
5.1082 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
5.1083
5.1084 -%%if we need citations for aim 3 significance, http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WSS-4V70FHY-9&_user=4429&_coverDate=12%2F26%2F2008&_rdoc=1&_fmt=full&_orig=na&_cdi=7054&_docanchor=&_acct=C000059602&_version=1&_urlVersion=0&_userid=4429&md5=551eccc743a2bfe6e992eee0c3194203#app2 has examples of genetic targeting to specific anatomical regions
5.1085 -
5.1086 -
5.1087 +
5.1088 +
5.1089 +%% todo: postdoc mentoring plan
5.1090
5.1091
5.1092
5.1093 \end{document}
5.1094 +