annotate grant.html @ 85:da8f81785211
author bshanks@bshanks.dyndns.org
date Tue Apr 21 03:36:06 2009 -0700
Specific aims
Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:
(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions
(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression
(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space.
In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profile define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.
All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and made freely available for others to use.
Background and significance
Aim 1: Given a map of regions, find genes that mark the regions
Machine learning terminology The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.
If we define the regions so that they cover the entire anatomical structure to be divided, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
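To make the terminology concrete, here is a toy sketch of this supervised-learning framing. The data and the nearest-centroid rule are purely illustrative stand-ins; the actual classifier learned for a given atlas could take many forms:

```python
# Toy illustration: instances are voxels (vectors of gene expression levels),
# labels are region names. The data and the nearest-centroid rule are
# hypothetical stand-ins, not the method proposed here.

def train_nearest_centroid(training_data):
    """training_data: list of (expression_vector, region_label) pairs."""
    sums, counts = {}, {}
    for vec, label in training_data:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    centroids = {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

    def classifier(vec):
        # assign the voxel to the region whose mean expression profile is closest
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(centroids[lab], vec)))
    return classifier

# two regions, two genes: region "A" expresses gene 1, region "B" gene 2
training = [([0.9, 0.1], "A"), ([0.8, 0.2], "A"),
            ([0.1, 0.9], "B"), ([0.2, 0.8], "B")]
classify = train_nearest_centroid(training)
```

Here the training data are the labeled voxels, and the returned function is the classifier that maps an unseen instance to a label.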
Each gene expression level is called a feature, and the selection of which genes1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
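A greedy forward-selection loop of the kind just described can be sketched as follows; the scoring function below is a hypothetical stand-in for whatever set-score is actually used:

```python
# Sketch of greedy ("stepwise") forward feature selection: repeatedly add
# whichever candidate gene most improves the score of the selected set.
# `score_fn` may be any function from a gene set to a number; the example
# score below is hypothetical.

def greedy_select(candidates, score_fn, max_features):
    selected = []
    while len(selected) < max_features:
        best_gene, best_score = None, score_fn(selected)
        for g in candidates:
            if g in selected:
                continue
            s = score_fn(selected + [g])
            if s > best_score:
                best_gene, best_score = g, s
        if best_gene is None:   # no remaining gene improves the score
            break
        selected.append(best_gene)
    return selected

# hypothetical set-score: credit for covering "useful" genes, with a small
# penalty per gene so that smaller sets are preferred
useful = {"g1", "g4"}
score = lambda genes: len(useful & set(genes)) - 0.1 * len(genes)
```

Because the score is evaluated on the whole set at each step, this loop can reward genes that only help in combination, which single-gene ranking cannot.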
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods according to how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
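The distinction can be illustrated on a one-dimensional strip of voxels. The particular sub-score used here (does thresholded expression agree with region membership?) is a hypothetical example:

```python
# Contrast between pointwise and local sub-scores on a 1-D strip of voxels.

def pointwise_subscores(expression, in_region, threshold=0.5):
    # each voxel's sub-score uses only that voxel
    return [1.0 if (e > threshold) == r else 0.0
            for e, r in zip(expression, in_region)]

def local_subscores(expression, in_region, threshold=0.5):
    # each voxel's sub-score also looks at its immediate neighbours, so
    # isolated disagreements drag down the scores of nearby voxels too
    point = pointwise_subscores(expression, in_region, threshold)
    return [sum(point[max(0, i - 1): i + 2]) / len(point[max(0, i - 1): i + 2])
            for i in range(len(point))]

expr = [0.9, 0.8, 0.2, 0.7, 0.1]            # expression along the strip
region = [True, True, True, False, False]   # region membership
total_pointwise = sum(pointwise_subscores(expr, region))
total_local = sum(local_subscores(expr, region))
```

Under the pointwise aggregate the two disagreements cost the same wherever they occur; the local aggregate additionally penalizes the scattered errors near the region border.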
Our strategy for Aim 1
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression
It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).
Principle 2: Only look at combinations of small numbers of genes
When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
_________________________________________
1Strictly speaking, the features are gene expression levels, but we’ll call them genes.
Principle 3: Use geometry in feature selection
When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible
There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work
There is a substantial body of work on the analysis of gene expression data; most of this work concerns gene expression data which are not fundamentally spatial2.
As noted above, there has been much work on both supervised learning and feature selection, and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.
We are aware of six existing efforts to use automated methods to find marker genes in spatial gene expression data.
[11] mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene’s spatial region.
GeneAtlas[5] and EMAGE[23] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel3 whose expression falls within each of four discretization levels. EMAGE uses Jaccard similarity4. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
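For reference, the Jaccard similarity used by EMAGE follows directly from its definition; the images below are toy examples, and this is the textbook formula rather than EMAGE’s own code:

```python
# Jaccard similarity between two boolean images: true pixels in the
# intersection of the two images, divided by true pixels in their union.

def jaccard(img_a, img_b):
    a = [bool(p) for row in img_a for p in row]
    b = [bool(p) for row in img_b for p in row]
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

query = [[1, 1],
         [0, 0]]
gene  = [[1, 0],
         [0, 0]]
similarity = jaccard(query, gene)   # intersection 1 pixel, union 2 pixels
```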
[13] describes AGEA, “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel, and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster (note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures). Correlation: the user selects a seed voxel, and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel. Clusters: this component will be described later.
Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score5, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.
[6] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA, this is a pointwise measure (only the mean expression level per pixel is analyzed); it is not used to look for underexpression, and it does not look for combinations of genes.
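The multiple-comparison logic of such an approach can be sketched as follows. The p-values here are hypothetical; in practice each would come from a Student’s t-test comparing a gene’s mean expression inside versus outside the target region:

```python
# Sketch of the Bonferroni logic: with one significance test per gene, the
# correction divides the significance threshold by the number of tests.

def bonferroni_significant(p_values, alpha=0.05):
    threshold = alpha / len(p_values)   # corrected per-test threshold
    return [p <= threshold for p in p_values]

p_vals = [0.0001, 0.004, 0.02, 0.6]     # one hypothetical p-value per gene
flags = bonferroni_significant(p_vals)  # only the first two survive correction
```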
[9] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their match score is Jaccard similarity.
In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
Aim 2: From gene expression data, discover a map of regions
Machine learning terminology: clustering
_________________________________________
2By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.
3Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.
4The number of true pixels in the intersection of the two images, divided by the number of pixels in their union.
5“Expression energy ratio”, which captures overexpression.
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
Similarity scores A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Spatially contiguous clusters; image segmentation We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three6. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.
Unlike in aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features7. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
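As one concrete example of dimensionality reduction, each voxel’s gene expression vector can be projected onto the first principal component (PCA with one component). Power iteration is used here only to keep the sketch dependency-free, and the data are hypothetical:

```python
# Project each instance's feature vector onto the first principal component,
# turning many correlated features into a single reduced feature.

def first_principal_component(data, iters=50):
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - means[k] for k in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # multiply v by the (unnormalized) covariance matrix, then renormalize
        w = [sum(c[k] * sum(c[m] * v[m] for m in range(d)) for c in centered)
             for k in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, means

# three "genes" whose expression levels vary together: one reduced feature
# captures all of the variation
data = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
component, means = first_principal_component(data)
reduced = [sum((row[k] - means[k]) * component[k] for k in range(3))
           for row in data]
```

Note that the reduced feature is a weighted combination of all three genes, illustrating the point above that reduced features need not correspond to individual genes.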
Clustering genes rather than voxels Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out8. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
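The “cluster genes first” procedure can be sketched as follows; the overlap threshold, the single-pass grouping, the majority-vote prototype, and the toy boolean expression maps are all hypothetical choices:

```python
# Group genes whose boolean expression maps overlap heavily, then take each
# group's majority map as a candidate region.

def overlap(m1, m2):
    inter = sum(1 for a, b in zip(m1, m2) if a and b)
    union = sum(1 for a, b in zip(m1, m2) if a or b)
    return inter / union if union else 0.0

def gene_clusters_to_regions(gene_maps, min_overlap=0.6):
    clusters = []
    for gmap in gene_maps:
        for cl in clusters:
            if overlap(gmap, cl[0]) >= min_overlap:
                cl.append(gmap)
                break
        else:
            clusters.append([gmap])
    # candidate region for a cluster: voxels marked by a majority of its genes
    return [[sum(col) > len(cl) / 2 for col in zip(*cl)] for cl in clusters]

# five voxels; genes g1 and g2 pick out roughly the same region, g3 another
g1 = [1, 1, 1, 0, 0]
g2 = [1, 1, 0, 0, 0]
g3 = [0, 0, 0, 1, 1]
regions = gene_clusters_to_regions([g1, g2, g3])
```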
________________________________
6There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.
7First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
8This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
Related work
Some researchers have attempted to parcellate cortex on the basis of non-gene-expression data. For example, [15], [2], [16], and [1] associate spots on the cortex with the radial profile9 of response to some stain ([10] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster. Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
[20] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme with correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset10, and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure 6).
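For concreteness, here is a minimal “vanilla” NMF by multiplicative updates (Lee-Seung style), the general kind of factorization referred to above; it carries no spatial constraint, and the toy matrix is hypothetical:

```python
# Factor a (voxels x genes) matrix X into non-negative W (voxels x k) and
# H (k x genes) by alternating multiplicative updates.

def nmf(X, k, iters=2000, eps=1e-9):
    n, m = len(X), len(X[0])
    # small deterministic perturbations break the symmetry between components
    W = [[0.5 + 0.01 * ((i + r) % 3) for r in range(k)] for i in range(n)]
    H = [[0.5 + 0.01 * ((r + j) % 3) for j in range(m)] for r in range(k)]
    for _ in range(iters):
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(m)]
              for i in range(n)]
        H = [[H[r][j] * (sum(W[i][r] * X[i][j] for i in range(n)) + eps)
              / (sum(W[i][r] * WH[i][j] for i in range(n)) + eps)
              for j in range(m)] for r in range(k)]
        WH = [[sum(W[i][r] * H[r][j] for r in range(k)) for j in range(m)]
              for i in range(n)]
        W = [[W[i][r] * (sum(H[r][j] * X[i][j] for j in range(m)) + eps)
              / (sum(H[r][j] * WH[i][j] for j in range(m)) + eps)
              for r in range(k)] for i in range(n)]
    return W, H

# four voxels x four genes, built from two non-negative spatial patterns
X = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
W, H = nmf(X, k=2)
recon = [[sum(W[i][r] * H[r][j] for r in range(2)) for j in range(4)]
         for i in range(4)]
```

The columns of W can be read as reduced features per voxel, and the rows of H as the corresponding spatial expression patterns.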
AGEA[13] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE[23] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete-linkage clustering with uncentered correlation as the similarity score.
[6] clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: “the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric”. The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
[9] applies their technique for finding combinations of marker genes to the task of clustering genes around a “seed gene”. They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method[22] for finding “association rules” such as: if a gene is expressed in this voxel, then the same gene is probably also expressed in that voxel. This could be useful as part of a procedure for clustering voxels.
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3: apply the methods developed to the cerebral cortex
Background
The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake11.
It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[19] on the one hand, and Paxinos and Franklin[14] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
The Allen Mouse Brain Atlas dataset
__
9A radial profile is a profile along a line perpendicular to the cortical surface.
10We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
11Outside of isocortex, the number of layers varies.
The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular spatial resolution is achieved. Using this method, a single physical slice can be used to measure only a single gene; many different mouse brains were needed in order to measure the expression of many genes.
An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain[13].
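These numbers can be checked directly; the derived figures below (the brain occupying roughly a third of the bounding grid, about 412 mm³ at this voxel size) follow from the quoted voxel count and size rather than being stated in the text:

```python
# Checking the quoted dimensions of the ABA reference space: a 67 x 41 x 58
# grid of voxels, each a cube 200 microns (0.2 mm) on a side, of which
# 51,533 voxels lie inside the brain.

dims = (67, 41, 58)
total_voxels = dims[0] * dims[1] * dims[2]       # voxels in the bounding grid
brain_voxels = 51_533
brain_fraction = brain_voxels / total_voxels     # share of the grid that is brain
brain_volume_mm3 = brain_voxels * 0.2 ** 3       # each voxel is 0.008 mm^3
```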
Mus musculus is thought to contain about 22,000 protein-coding genes[25]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA12.
The ABA is not the only large public spatial gene expression dataset13. With the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only the ABA and EMAGE make this form of data available for public download from the website14. Many of these resources focus on developmental gene expression.
Significance
The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.
The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
bshanks@85 271 The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a
bshanks@85 272 better map. The development of present-day cortical maps was driven by the application of histological stains. If a different
set of stains had been available which identified a different set of features, then today's cortical maps might have come out
bshanks@85 274 differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been
bshanks@85 275 captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of
bshanks@85 276 gene expression.
bshanks@63 277 While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to
bshanks@63 278 develop could be used to suggest modifications to the human cortical map as well.
bshanks@63 279 Related work
[13] describes the application of AGEA to the cortex. The paper reports interesting results on the structure of correlations
bshanks@63 281 between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either
bshanks@46 282 of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of
bshanks@46 283 the other components of AGEA can be applied to cortical areas; AGEA’s Gene Finder cannot be used to find marker genes
for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas15.
bshanks@46 285 In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has
bshanks@43 286 been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally
finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo
bshanks@43 288 from gene expression data.
bshanks@53 289 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker
bshanks@53 290 genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
bshanks@53 291 _________________________________________
bshanks@85 292 12The sagittal data do not cover the entire cortex, and also have greater registration error[13]. Genes were selected by the Allen Institute for
bshanks@85 293 coronal sectioning based on, “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression
bshanks@85 294 pattern”[13].
bshanks@85 295 13Other such resources include GENSAT[8], GenePaint[24], its sister project GeneAtlas[5], BGEM[12], EMAGE[23], EurExpress (http://www.
bshanks@85 296 eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.
bshanks@85 297 html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN[18], Aniseed (http://aniseed-ibdm.
bshanks@85 298 univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA[4],
bshanks@85 299 Fruitfly.org[21], COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD[17], GEO[3] (GXD and GEO contain spatial data but also non-spatial
bshanks@85 300 data. All GXD spatial data are also in EMAGE.)
bshanks@85 301 14without prior offline registration
bshanks@85 302 15In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger
bshanks@85 303 than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation
bshanks@85 304 clustering algorithm will tend to create clusters representing cortical layers, not areas (there may be clusters which presumably correspond to the
bshanks@85 305 intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of
these). The reason that Gene Finder cannot find the marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder
bshanks@85 307 chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
bshanks@84 308 Preliminary Studies
bshanks@85 309
bshanks@85 310
bshanks@85 311 Figure 1: Top row: Genes Nfic and
bshanks@85 312 A930001M12Rik are the most correlated
bshanks@85 313 with area SS (somatosensory cortex). Bot-
bshanks@85 314 tom row: Genes C130038G02Rik and
bshanks@85 315 Cacna1i are those with the best fit using
bshanks@85 316 logistic regression. Within each picture, the
bshanks@85 317 vertical axis roughly corresponds to anterior
bshanks@85 318 at the top and posterior at the bottom, and
bshanks@85 319 the horizontal axis roughly corresponds to
bshanks@85 320 medial at the left and lateral at the right.
bshanks@85 321 The red outline is the boundary of region
bshanks@85 322 SS. Pixels are colored according to correla-
bshanks@85 323 tion, with red meaning high correlation and
blue meaning low.
Format conversion between SEV, MATLAB, NIFTI
bshanks@85 325 We have created software to (politely) download all of the SEV files16 from
bshanks@85 326 the Allen Institute website. We have also created software to convert between
bshanks@85 327 the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s file
bshanks@85 328 formats.
bshanks@85 329 Flatmap of cortex
bshanks@85 330 We downloaded the ABA data and applied a mask to select only those voxels
bshanks@85 331 which belong to cerebral cortex. We divided the cortex into hemispheres.
bshanks@85 332 Using Caret[7], we created a mesh representation of the surface of the se-
bshanks@85 333 lected voxels. For each gene, for each node of the mesh, we calculated an
bshanks@85 334 average of the gene expression of the voxels “underneath” that mesh node. We
bshanks@85 335 then flattened the cortex, creating a two-dimensional mesh.
bshanks@85 336 We sampled the nodes of the irregular, flat mesh in order to create a regular
bshanks@85 337 grid of pixel values. We converted this grid into a MATLAB matrix.
bshanks@85 338 We manually traced the boundaries of each of 49 cortical areas from the
bshanks@85 339 ABA coronal reference atlas slides. We then converted these manual traces
bshanks@85 340 into Caret-format regional boundary data on the mesh surface. We projected
bshanks@85 341 the regions onto the 2-d mesh, and then onto the grid, and then we converted
bshanks@85 342 the region data into MATLAB format.
bshanks@85 343 At this point, the data are in the form of a number of 2-D matrices, all in
bshanks@85 344 registration, with the matrix entries representing a grid of points (pixels) over
bshanks@85 345 the cortical surface:
bshanks@85 346 ∙ A 2-D matrix whose entries represent the regional label associated with each
bshanks@85 347 surface pixel
bshanks@85 348 ∙ For each gene, a 2-D matrix whose entries represent the average expression
bshanks@85 349 level underneath each surface pixel
bshanks@75 350
Figure 2: Gene Pitx2 is selectively underexpressed in area SS.
We created a normalized version of the gene expression data by subtracting each gene's mean
bshanks@84 354 expression level (over all surface pixels) and dividing the expression level of each gene by its
bshanks@84 355 standard deviation.
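The normalization step can be sketched as follows (a Python/NumPy illustration with toy numbers; our actual pipeline manipulates these matrices in MATLAB):

```python
import numpy as np

def normalize_genes(expr):
    """Z-score each gene over all surface pixels.

    expr: (n_genes, n_pixels) array of expression levels.
    Returns an array of the same shape in which each gene has
    mean 0 and standard deviation 1 across the surface pixels.
    """
    mean = expr.mean(axis=1, keepdims=True)
    std = expr.std(axis=1, keepdims=True)
    return (expr - mean) / std

# toy example: two genes measured at four surface pixels
expr = np.array([[1.0, 2.0, 3.0, 4.0],
                 [10.0, 10.0, 30.0, 30.0]])
z = normalize_genes(expr)
```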
bshanks@75 356 The features and the target area are both functions on the surface pixels. They can be referred
bshanks@75 357 to as scalar fields over the space of surface pixels; alternately, they can be thought of as images
bshanks@75 358 which can be displayed on the flatmapped surface.
bshanks@75 359 To move beyond a single average expression level for each surface pixel, we plan to create a
bshanks@75 360 separate matrix for each cortical layer to represent the average expression level within that layer.
bshanks@75 361 Cortical layers are found at different depths in different parts of the cortex. In preparation for
bshanks@75 362 extracting the layer-specific datasets, we have extended Caret with routines that allow the depth
bshanks@75 363 of the ROI for volume-to-surface projection to vary.
bshanks@75 364 In the Research Plan, we describe how we will automatically locate the layer depths. For
bshanks@85 365 validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
bshanks@77 366 Feature selection and scoring methods
bshanks@75 367 Underexpression of a gene can serve as a marker Underexpression of a gene can sometimes serve as a marker. See,
bshanks@75 368 for example, Figure 2.
bshanks@75 369 Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance
bshanks@75 370 as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the
bshanks@75 371 surface pixels.
bshanks@84 372 One class of feature selection scoring methods contains methods which calculate some sort of “match” between each gene
bshanks@84 373 image and the target image. Those genes which match the best are good candidates for features.
bshanks@85 374 _________________________________________
bshanks@85 375 16SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.
bshanks@75 376 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between
each gene and each cortical area. The top row of Figure 1 shows the two genes most correlated with area SS.
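As a sketch of this scoring method (Python/NumPy with made-up data; the gene names below are placeholders, not genes from the ABA):

```python
import numpy as np

def correlation_score(gene_img, target_mask):
    """Pearson correlation between a gene's expression image and the
    boolean mask of the target area, over all surface pixels."""
    g = np.asarray(gene_img, dtype=float).ravel()
    t = np.asarray(target_mask, dtype=float).ravel()
    return np.corrcoef(g, t)[0, 1]

# toy data: a square "area" on an 8x8 grid of surface pixels
rng = np.random.default_rng(0)
target = np.zeros((8, 8), dtype=bool)
target[2:5, 2:5] = True
genes = {"geneA": rng.random((8, 8)),
         "geneB": rng.random((8, 8)),
         "geneC": target + 0.1 * rng.random((8, 8))}  # near-perfect marker

# rank candidate genes by how well they match the target area
ranking = sorted(genes, key=lambda n: correlation_score(genes[n], target),
                 reverse=True)
```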
bshanks@85 378
bshanks@85 379
bshanks@85 380 Figure 3: The top row shows the two genes
bshanks@85 381 which (individually) best predict area AUD,
bshanks@85 382 according to logistic regression. The bot-
bshanks@85 383 tom row shows the two genes which (indi-
bshanks@85 384 vidually) best match area AUD, according
bshanks@85 385 to gradient similarity. From left to right and
bshanks@85 386 top to bottom, the genes are Ssr1, Efcbp1,
Ptk7, and Aph1a.
Conditional entropy An information-theoretic scoring method is to find
bshanks@85 388 features such that, if the features (gene expression levels) are known, uncer-
bshanks@85 389 tainty about the target (the regional identity) is reduced. Entropy measures
bshanks@85 390 uncertainty, so what we want is to find features such that the conditional dis-
bshanks@85 391 tribution of the target has minimal entropy. The distribution to which we are
bshanks@85 392 referring is the probability distribution over the population of surface pixels.
bshanks@85 393 The simplest way to use information theory is on discrete data, so we
discretized our gene expression data: for each gene, we created five thresholded
boolean masks of its expression levels, one at each of these thresholds: the
mean of that gene, the mean minus one standard deviation, the mean minus
two standard deviations, the mean plus one standard deviation, and the mean
plus two standard deviations.
bshanks@85 399 Now, for each region, we created and ran a forward stepwise procedure
bshanks@85 400 which attempted to find pairs of gene expression boolean masks such that the
bshanks@85 401 conditional entropy of the target area’s boolean mask, conditioned upon the
bshanks@85 402 pair of gene expression boolean masks, is minimized.
bshanks@85 403 This finds pairs of genes which are most informative (at least at these dis-
bshanks@85 404 cretization thresholds) relative to the question, “Is this surface pixel a member
bshanks@85 405 of the target area?”. Its advantage over linear methods such as logistic regres-
bshanks@85 406 sion is that it takes account of arbitrarily nonlinear relationships; for example,
bshanks@85 407 if the XOR of two variables predicts the target, conditional entropy would
bshanks@85 408 notice, whereas linear methods would not.
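A minimal sketch of the conditional-entropy score (Python/NumPy; in the actual procedure the feature masks are the five thresholded masks per gene described above). The XOR case below shows why this score sees relationships that linear methods miss:

```python
import numpy as np

def conditional_entropy(target, feature_masks):
    """H(target | features) in bits. target and each feature mask are
    boolean arrays over the surface pixels; probabilities are taken
    over the population of pixels."""
    t = np.asarray(target).ravel()
    # encode each pixel's combination of feature values as one integer
    code = np.zeros(t.size, dtype=int)
    for f in feature_masks:
        code = 2 * code + np.asarray(f).ravel().astype(int)
    h = 0.0
    for c in np.unique(code):
        sel = code == c
        p_c = sel.mean()        # P(features = c)
        p_t = t[sel].mean()     # P(target | features = c)
        for p in (p_t, 1.0 - p_t):
            if p > 0:
                h -= p_c * p * np.log2(p)
    return h

# XOR example: neither mask alone is informative, but the pair is
a = np.array([0, 0, 1, 1], dtype=bool)
b = np.array([0, 1, 0, 1], dtype=bool)
target = a ^ b
```

Here `conditional_entropy(target, [a])` is 1 bit (no information), while `conditional_entropy(target, [a, b])` is 0 bits: knowing the pair pins down the target exactly.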
bshanks@85 409 Gradient similarity We noticed that the previous two scoring methods,
bshanks@85 410 which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For
bshanks@85 411 this reason we designed a non-pointwise local scoring method to detect when a gene had a pattern of expression which
bshanks@85 412 looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method “gradient
bshanks@85 413 similarity”.
bshanks@85 414
bshanks@85 415
bshanks@85 416 Figure 4: Upper left: wwc1. Upper right:
bshanks@85 417 mtif2. Lower left: wwc1 + mtif2 (each
bshanks@85 418 pixel’s value on the lower left is the sum of
the corresponding pixels in the upper row).
One might say that gradient similarity attempts to measure how much the
bshanks@85 420 border of the area of gene expression and the border of the target region over-
bshanks@85 421 lap. However, since gene expression falls off continuously rather than jumping
bshanks@85 422 from its maximum value to zero, the spatial pattern of a gene’s expression often
bshanks@85 423 does not have a discrete border. Therefore, instead of looking for a discrete
bshanks@85 424 border, we look for large gradients. Gradient similarity is a symmetric function
over two images (i.e. two scalar fields). It is high to the extent that corresponding pixels in both images have large values
and large gradients, with the gradients oriented in a similar direction. The formula is:
∑_{pixel ∈ pixels} cos(abs(∠∇1 − ∠∇2)) ⋅ ((|∇1| + |∇2|) / 2) ⋅ ((pixel_value1 + pixel_value2) / 2)
bshanks@85 432 where &#x2207;1 and &#x2207;2 are the gradient vectors of the two images at the current
bshanks@85 433 pixel; &#x2220;&#x2207;i is the angle of the gradient of image i at the current pixel; |&#x2207;i| is
bshanks@85 434 the magnitude of the gradient of image i at the current pixel; and pixel_valuei
bshanks@85 435 is the value of the current pixel in image i.
bshanks@85 436 The intuition is that we want to see if the borders of the pattern in the
bshanks@85 437 two images are similar; if the borders are similar, then both images will have
bshanks@85 438 corresponding pixels with large gradients (because this is a border) which are
bshanks@85 439 oriented in a similar direction (because the borders are similar).
bshanks@69 440 Most of the genes in Figure 5 were identified via gradient similarity.
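A direct transcription of the gradient similarity formula into code might look like this (a Python/NumPy sketch; `np.gradient` stands in for whatever gradient estimator is used on the flatmap grid):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two images (scalar fields over the
    surface pixels): at each pixel, the cosine of the angle between the
    two gradients, weighted by the mean gradient magnitude and the mean
    pixel value, summed over all pixels."""
    gy1, gx1 = np.gradient(np.asarray(img1, dtype=float))
    gy2, gx2 = np.gradient(np.asarray(img2, dtype=float))
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    term = (np.cos(np.abs(angle1 - angle2))
            * (mag1 + mag2) / 2.0
            * (np.asarray(img1) + np.asarray(img2)) / 2.0)
    return float(term.sum())

# a blob-shaped "area": the score of an image with itself is high,
# and the function is symmetric in its two arguments
blob = np.zeros((10, 10))
blob[3:7, 3:7] = 1.0
shifted = np.roll(blob, 4, axis=1)
```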
bshanks@43 441 Gradient similarity provides information complementary to correlation
bshanks@41 442 To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider
Fig. 3. The top row of Fig. 3 displays the two genes which most match area AUD, according to a pointwise method17. The
bshanks@85 444 _________________________________________
17For each gene, we ran a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the
predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms
of how well they predict area AUD.
bottom row displays the two genes which most match AUD according to a method which considers local geometry18. The
bshanks@46 449 pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is
bshanks@46 450 that this includes many areas which don&#8217;t have a salient border matching the areal border. The geometric method identifies
bshanks@46 451 genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes
bshanks@46 452 genes which don&#8217;t express over the entire area. Genes which have high rankings using both pointwise and border criteria,
bshanks@46 453 such as Aph1a in the example, may be particularly good markers. None of these genes are, individually, a perfect marker
bshanks@46 454 for AUD; we deliberately chose a &#8220;difficult&#8221; area in order to better contrast pointwise with geometric methods.
bshanks@85 455
bshanks@85 456
bshanks@85 457
bshanks@85 458
bshanks@85 459 Figure 5: From left to right and top
bshanks@85 460 to bottom, single genes which roughly
bshanks@85 461 identify areas SS (somatosensory primary
bshanks@85 462 + supplemental), SSs (supplemental so-
bshanks@85 463 matosensory), PIR (piriform), FRP (frontal
bshanks@85 464 pole), RSP (retrosplenial), COApm (Corti-
bshanks@85 465 cal amygdalar, posterior part, medial zone).
bshanks@85 466 Grouping some areas together, we have
bshanks@85 467 also found genes to identify the groups
bshanks@85 468 ACA+PL+ILA+DP+ORB+MO (anterior
bshanks@85 469 cingulate, prelimbic, infralimbic, dorsal pe-
bshanks@85 470 duncular, orbital, motor), posterior and lat-
bshanks@85 471 eral visual (VISpm, VISpl, VISI, VISp; pos-
bshanks@85 472 teromedial, posterolateral, lateral, and pri-
bshanks@85 473 mary visual; the posterior and lateral vi-
bshanks@85 474 sual area is distinguished from its neigh-
bshanks@85 475 bors, but not from the entire rest of the
bshanks@85 476 cortex). The genes are Pitx2, Aldh1a2,
bshanks@85 477 Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1,
Ets1.
Areas which can be identified by single genes Using gradient similarity, we have already found single genes which roughly identify some areas
bshanks@85 480 and groupings of areas. For each of these areas, an example of a gene which
bshanks@85 481 roughly identifies it is shown in Figure 5. We have not yet cross-verified these
bshanks@85 482 genes in other atlases.
bshanks@85 483 In addition, there are a number of areas which are almost identified by single
bshanks@85 484 genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the
bshanks@85 485 lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate),
bshanks@85 486 VIS (visual), AUD (auditory).
bshanks@85 487 These results validate our expectation that the ABA dataset can be ex-
bshanks@85 488 ploited to find marker genes for many cortical areas, while also validating the
bshanks@85 489 relevancy of our new scoring method, gradient similarity.
bshanks@85 490 Combinations of multiple genes are useful and necessary for some
bshanks@85 491 areas
bshanks@85 492 In Figure 4, we give an example of a cortical area which is not marked by
any single gene, but which can be identified combinatorially. According to
bshanks@85 494 logistic regression, gene wwc1 is the best fit single gene for predicting whether
bshanks@85 495 or not a pixel on the cortical surface belongs to the motor area (area MO).
bshanks@85 496 The upper-left picture in Figure 4 shows wwc1&#8217;s spatial expression pattern over
bshanks@85 497 the cortex. The lower-right boundary of MO is represented reasonably well by
bshanks@85 498 this gene, but the gene overshoots the upper-left boundary. This flattened 2-D
bshanks@85 499 representation does not show it, but the area corresponding to the overshoot is
bshanks@85 500 the medial surface of the cortex. MO is only found on the dorsal surface. Gene
bshanks@85 501 mtif2 is shown in the upper-right. Mtif2 captures MO&#8217;s upper-left boundary,
bshanks@85 502 but not its lower-right boundary. Mtif2 does not express very much on the
bshanks@85 503 medial surface. By adding together the values at each pixel in these two figures,
bshanks@85 504 we get the lower-left image. This combination captures area MO much better
bshanks@85 505 than any single gene.
This shows that the method we propose to develop, which finds combinations of
marker genes, is both possible and necessary.
bshanks@85 508 Feature selection integrated with prediction As noted earlier, in gen-
bshanks@85 509 eral, any predictive method can be used for feature selection by running it
bshanks@85 510 inside a stepwise wrapper. Also, some predictive methods integrate soft con-
bshanks@85 511 straints on number of features used. Examples of both of these will be seen in
bshanks@85 512 the section &#8220;Multivariate Predictive methods&#8221;.
bshanks@85 513 Multivariate Predictive methods
bshanks@85 514 Forward stepwise logistic regression Logistic regression is a popular
method for predictive modeling of categorical data. As a pilot run, for five
bshanks@85 516 cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise
bshanks@85 517 logistic regression to find single genes, pairs of genes, and triplets of genes
which predict areal identity. This is an example of feature selection integrated
bshanks@85 519 with prediction using a stepwise wrapper. Some of the single genes found
bshanks@85 520 were shown in various figures throughout this document, and Figure 4 shows
bshanks@85 521 a combination of genes which was found.
bshanks@85 522 We felt that, for single genes, gradient similarity did a better job than
bshanks@85 523 logistic regression at capturing our subjective impression of a &#8220;good gene&#8221;.
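The stepwise wrapper can be sketched as follows (Python with scikit-learn rather than our MATLAB implementation; the data here are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def forward_stepwise(X, y, n_select=3):
    """Greedy forward selection: at each step, add the gene whose
    inclusion gives the lowest logistic-regression deviance (log loss)
    on the training data. X: (n_pixels, n_genes); y: 0/1 labels."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        def deviance(g):
            cols = chosen + [g]
            model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            return log_loss(y, model.predict_proba(X[:, cols])[:, 1])
        best = min(remaining, key=deviance)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy data in which "gene" 2 perfectly predicts area membership
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = (X[:, 2] > 0.5).astype(int)
selected = forward_stepwise(X, y, n_select=2)
```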
bshanks@85 524 _________________________________________
bshanks@85 525 18For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD,
bshanks@84 526 was calculated, and this was used to rank the genes.
bshanks@84 527
bshanks@60 528
bshanks@69 529
bshanks@69 530
bshanks@69 531
bshanks@69 532 Figure 6: First row: the first 6 reduced dimensions, using PCA. Second
bshanks@69 533 row: the first 6 reduced dimensions, using NNMF. Third row: the first
bshanks@69 534 six reduced dimensions, using landmark Isomap. Bottom row: examples
bshanks@69 535 of kmeans clustering applied to reduced datasets to find 7 clusters. Left:
bshanks@69 536 19 of the major subdivisions of the cortex. Second from left: PCA. Third
bshanks@69 537 from left: NNMF. Right: Landmark Isomap. Additional details: In the
bshanks@69 538 third and fourth rows, 7 dimensions were found, but only 6 displayed. In
bshanks@69 539 the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions
were used; for landmark Isomap, 7 dimensions were used.
SVM on all genes at once
bshanks@85 541 In order to see how well one can do when
bshanks@85 542 looking at all genes at once, we ran a support
bshanks@85 543 vector machine to classify cortical surface pix-
bshanks@85 544 els based on their gene expression profiles. We
bshanks@85 545 achieved classification accuracy of about 81%19.
bshanks@85 546 This shows that the genes included in the ABA
bshanks@85 547 dataset are sufficient to define much of cortical
bshanks@85 548 anatomy. However, as noted above, a classifier
bshanks@85 549 that looks at all the genes at once isn&#8217;t as prac-
bshanks@85 550 tically useful as a classifier that uses only a few
bshanks@85 551 genes.
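A sketch of this experiment (Python with scikit-learn; the data below are synthetic stand-ins for the real per-pixel expression profiles — the 81% figure above comes from the actual ABA-derived dataset):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-in data: each row is one surface pixel's expression
# profile across all genes; y is that pixel's cortical-area label
rng = np.random.default_rng(0)
n_pixels, n_genes, n_areas = 300, 50, 5
y = rng.integers(0, n_areas, size=n_pixels)
area_profiles = rng.normal(size=(n_areas, n_genes))
X = area_profiles[y] + 0.5 * rng.normal(size=(n_pixels, n_genes))

# 5-fold cross-validated classification accuracy, as in the pilot run
accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
```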
bshanks@85 552 Data-driven redrawing of the cor-
bshanks@85 553 tical map
bshanks@85 554 We have applied the following dimensional-
bshanks@85 555 ity reduction algorithms to reduce the dimen-
bshanks@85 556 sionality of the gene expression profile associ-
bshanks@85 557 ated with each voxel: Principal Components
bshanks@85 558 Analysis (PCA), Simple PCA (SPCA), Multi-
bshanks@85 559 Dimensional Scaling (MDS), Isomap, Land-
bshanks@85 560 mark Isomap, Laplacian eigenmaps, Local Tan-
bshanks@85 561 gent Space Alignment (LTSA), Hessian locally
bshanks@85 562 linear embedding, Diffusion maps, Stochastic
bshanks@85 563 Neighbor Embedding (SNE), Stochastic Prox-
bshanks@85 564 imity Embedding (SPE), Fast Maximum Vari-
bshanks@85 565 ance Unfolding (FastMVU), Non-negative Ma-
bshanks@85 566 trix Factorization (NNMF). Space constraints
bshanks@85 567 prevent us from showing many of the results,
bshanks@85 568 but as a sample, PCA, NNMF, and landmark
bshanks@85 569 Isomap are shown in the first, second, and third
bshanks@85 570 rows of Figure 6.
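For illustration, two of these reductions can be run as follows (Python with scikit-learn on toy data; we actually used the MATLAB implementations of these algorithms):

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

# X: one row per surface pixel, one column per gene (toy non-negative
# data here; NNMF requires non-negative input, as real expression is)
rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(200, 40)))

X_pca = PCA(n_components=6).fit_transform(X)    # first 6 components
X_nnmf = NMF(n_components=6, init="nndsvda",
             max_iter=500).fit_transform(X)     # 6 non-negative parts
```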
bshanks@71 571
bshanks@85 572 Figure 7: Prototypes corresponding to sample gene clusters,
bshanks@85 573 clustered by gradient similarity. Region boundaries for the
region that most matches each prototype are overlaid.
After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried
bshanks@85 576 k-means and spectral clustering. The results of k-means af-
bshanks@85 577 ter PCA, NNMF, and landmark Isomap are shown in the
bshanks@85 578 last row of Figure 6. To compare, the leftmost picture on
bshanks@85 579 the bottom row of Figure 6 shows some of the major sub-
bshanks@85 580 divisions of cortex. These results clearly show that differ-
bshanks@85 581 ent dimensionality reduction techniques capture different as-
bshanks@85 582 pects of the data and lead to different clusterings, indicating
the utility of our proposal to produce a detailed comparison
bshanks@85 584 of these techniques as applied to the domain of genomic
bshanks@85 585 anatomy.
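The reduce-then-cluster pipeline can be sketched as (Python with scikit-learn on toy data; the real input is the pixel-by-gene expression matrix):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# reduce the per-pixel expression profiles, then cluster the pixels
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 40))                  # pixel-by-gene (toy)
X_reduced = PCA(n_components=7).fit_transform(X)
labels = KMeans(n_clusters=7, n_init=10,
                random_state=0).fit_predict(X_reduced)
```

Each cluster of pixels is then a candidate region; the last row of Figure 6 was produced this way.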
bshanks@85 586 Many areas are captured by clusters of genes We
bshanks@85 587 also clustered the genes using gradient similarity to see if
bshanks@85 588 the spatial regions defined by any clusters matched known
bshanks@85 589 anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster&#8217;s average expression pattern, compared to
bshanks@85 590 a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to
bshanks@85 591 cluster voxels.
bshanks@85 592 _____________________________
bshanks@85 593 195-fold cross-validation.
bshanks@84 594 Research Design and Methods
bshanks@42 595 Further work on flatmapping
bshanks@85 596 Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the
bshanks@85 597 surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve
bshanks@85 598 size (such as the one used by Caret[7]) with mappings which preserve angle (conformal maps). Although there is much 2-D
bshanks@85 599 organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a
bshanks@85 600 statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
bshanks@85 601 Automatic segmentation of cortical layers
Extension to probabilistic maps Presently, we do not have a probabilistic atlas which is registered to the ABA
space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Aim 1 techniques
which can handle probabilistic maps.
bshanks@30 605 Develop algorithms that find genetic markers for anatomical regions
bshanks@30 606 1.Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise,
bshanks@30 607 geometric, and information-theoretic measures.
bshanks@30 608 2.Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining
bshanks@30 609 the scoring measures developed, we will rank the genes by their ability to delineate each area.
bshanks@30 610 3.Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any
bshanks@30 611 single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily
bshanks@30 612 combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised
bshanks@33 613 learning techniques which use soft constraints to minimize the number of features, such as sparse support vector
bshanks@30 614 machines.
bshanks@33 615 4.Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult
bshanks@33 616 to identify because the boundaries are misdrawn, or because it does not &#8220;really&#8221; exist as a single area, at least on the
bshanks@30 617 genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its
bshanks@30 618 boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create
bshanks@30 619 a larger area which can be fit.
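As a sketch of approach (b), an L1-penalized logistic regression (a close relative of the sparse SVM mentioned above; Python with scikit-learn, synthetic data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sparse (L1-penalized) logistic regression as a soft constraint on the
# number of marker genes used by the classifier.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 30))             # 30 candidate "genes"
y = (X[:, 4] + X[:, 9] > 0).astype(int)    # area defined by genes 4 and 9

model = LogisticRegression(penalty="l1", solver="liblinear",
                           C=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_[0])  # genes with nonzero weight
```

Decreasing `C` strengthens the penalty, shrinking the panel of genes the classifier relies on.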
bshanks@51 620 # Linear discriminant analysis
Decision trees20.
bshanks@30 623 Apply these algorithms to the cortex
bshanks@30 624 1.Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert
bshanks@30 625 between SEV, NIFTI and MATLAB formats.
bshanks@30 626 2.Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
bshanks@30 627 3.Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
bshanks@30 628 4.Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify
bshanks@30 629 that area; and we will also present lists of &#8220;panels&#8221; of genes that can be used to delineate many areas at once.
bshanks@30 630 Develop algorithms to suggest a division of a structure into anatomical parts
bshanks@60 631 # mixture models, etc
bshanks@30 632 1.Explore dimensionality reduction algorithms applied to pixels: including TODO
bshanks@30 633 2.Explore dimensionality reduction algorithms applied to genes: including TODO
bshanks@30 634 3.Explore clustering algorithms applied to pixels: including TODO
bshanks@85 635 _________________________________________
bshanks@85 636 20Already, for each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy
bshanks@85 637 on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate
trees that use fewer genes.
bshanks@30 639 4.Explore clustering algorithms applied to genes: including gene shaving, TODO
5.Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
6.Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
bshanks@51 642 # Linear discriminant analysis
bshanks@51 643 # jbt, coclustering
bshanks@51 644 # self-organizing map
bshanks@85 645 # confirm with EMAGE, GeneAtlas, GENSAT, etc, to fight overfitting, two hemis
bshanks@53 646 # compare using clustering scores
bshanks@64 647 # multivariate gradient similarity
bshanks@66 648 # deep belief nets
bshanks@66 649 # note: slice artifact
bshanks@33 650 Bibliography &amp; References Cited
bshanks@85 651 [1]Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking Approach to
Parcellation of the Cerebral Cortex, volume 3749/2005 of Lecture Notes in Computer Science, pages 294–301.
bshanks@85 653 Springer Berlin / Heidelberg, 2005.
bshanks@85 654 [2]J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification of
bshanks@85 655 cortical areas. NeuroImage, 21(1):15&#8211;26, 2004.
bshanks@85 656 [3]Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F.
bshanks@53 657 Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression
bshanks@53 658 profiles&#8211;database and tools update. Nucl. Acids Res., 35(suppl_1):D760&#8211;765, 2007.
bshanks@85 659 [4]George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene
bshanks@53 660 expression screen in chicken embryos. Developmental Dynamics, 229(3):677&#8211;687, 2004.
bshanks@85 661 [5]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah
bshanks@53 662 Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41,
bshanks@53 663 2005.
bshanks@85 664 [6]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W.
bshanks@53 665 Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse
bshanks@53 666 brain section obtained using voxelation. Physiol. Genomics, 30(3):313&#8211;321, August 2007.
bshanks@85 667 [7]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-
bshanks@33 668 based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443&#8211;59, 2001.
bshanks@33 669 PMID: 11522765.
bshanks@85 670 [8]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J.
bshanks@44 671 Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the
bshanks@44 672 central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917&#8211;925, October 2003.
bshanks@85 673 [9]Jano Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Pat-
bshanks@46 674 terns, volume 13 of Communications in Computer and Information Science, pages 347&#8211;361. Springer Berlin Heidelberg,
bshanks@46 675 2008.
bshanks@85 676 [10]F. Kruggel, M. K. Brckner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical fine-structure.
bshanks@85 677 Medical Image Analysis, 7(3):251&#8211;264, September 2003.
bshanks@85 678 [11]Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A High-Resolution anatomical framework of the neonatal mouse brain
bshanks@53 679 for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
bshanks@85 680 [12]Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung,
bshanks@44 681 Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep
bshanks@44 682 Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic
bshanks@44 683 and adult mouse nervous system. PLoS Biology, 4(4):e86 EP &#8211;, April 2006.
bshanks@85 684 [13]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M
bshanks@44 685 Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson,
bshanks@44 686 Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat
bshanks@44 687 Neurosci, 12(3):356&#8211;362, March 2009.
[14] George Paxinos and Keith B. J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[15] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
[16] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
[17] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
[18] Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
[19] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[20] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[21] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002. PMC151190.
[22] Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414/2007 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
[23] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
[24] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
[25] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, et al. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.