Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions

(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).

Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our method could be applied to genome-wide sequencing data derived from sets of tissues and disease states.

In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profile define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and will support the development of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.

The challenge topic

This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.

The Challenge and Potential Impact

Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for thinking about the task, and we will present our strategy for solving it. Next we will discuss related work. At the conclusion of each section, we will summarize why our strategy is different from what has been done before. At the end of this section, we will describe the potential impact.

Aim 1: Given a map of regions, find genes that mark the regions

Machine learning terminology: classifiers The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the genes' combined expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).

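To make the terminology concrete, the following is a minimal sketch of a classifier in this sense. The data, the nearest-centroid rule, and all names are invented for illustration; this is not the method we propose, only the instance/label vocabulary in runnable form.

```python
import numpy as np

# Toy training data: 6 voxels x 3 genes; each voxel is labeled with its region.
# Region "A" voxels overexpress gene 0; region "B" voxels overexpress gene 1.
X_train = np.array([
    [0.9, 0.1, 0.5],
    [0.8, 0.2, 0.4],
    [0.7, 0.0, 0.6],
    [0.1, 0.9, 0.5],
    [0.2, 0.8, 0.3],
    [0.0, 0.7, 0.5],
])
y_train = np.array(["A", "A", "A", "B", "B", "B"])

def train_nearest_centroid(X, y):
    """Learn one mean expression profile (centroid) per region."""
    labels = np.unique(y)
    centroids = np.array([X[y == lab].mean(axis=0) for lab in labels])
    return labels, centroids

def classify(voxel, labels, centroids):
    """The classifier: map one instance (a voxel's gene expression
    vector) to a label (the region whose centroid is closest)."""
    dists = np.linalg.norm(centroids - voxel, axis=1)
    return labels[np.argmin(dists)]

labels, centroids = train_nearest_centroid(X_train, y_train)
print(classify(np.array([0.85, 0.15, 0.5]), labels, centroids))  # prints A
```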
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consist of a set of instances (voxels) for which the labels (regions) are known.

Each gene expression level is called a feature, and the selection of which genes^1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.

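The stepwise procedure just described can be sketched as follows. The set-level score here (separation of the two regions' mean expression vectors) is a deliberately simple stand-in for the scoring measures discussed elsewhere in this proposal, and the data are invented.

```python
import numpy as np

# Toy data: 4 voxels x 3 genes; y holds each voxel's region (0 or 1).
X = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.9],
              [0.0, 0.1, 0.1],
              [0.0, 0.1, 0.0]])
y = np.array([0, 0, 1, 1])

def score(X, y, feature_idx):
    """Set-level score for a candidate gene set: distance between the two
    regions' mean expression vectors, restricted to the selected genes."""
    a = X[y == 0][:, feature_idx].mean(axis=0)
    b = X[y == 1][:, feature_idx].mean(axis=0)
    return float(np.linalg.norm(a - b))

def greedy_select(X, y, k):
    """Stepwise ("greedy") forward selection: repeatedly add whichever
    gene most raises the score of the selected set."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: score(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

print(greedy_select(X, y, 2))  # prints [2, 1]: gene 2 is the strongest marker
```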
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.

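As a concrete toy example of the distinction, consider one candidate gene over a 1-D row of voxels. Both sub-score definitions below are illustrative inventions, not the measures we will ultimately use: the pointwise sub-score uses only the voxel itself, while the local sub-score also consults the voxel's neighbor.

```python
import numpy as np

# Toy 1-D "image": one gene's expression along a row of voxels,
# and the binary target region that the gene should mark.
expression = np.array([0.1, 0.2, 0.9, 1.0, 0.8, 0.1])
target     = np.array([0,   0,   1,   1,   1,   0  ])

def pointwise_score(expr, target):
    """Each voxel's sub-score uses only that voxel: agreement between
    thresholded expression and region membership; final score = mean."""
    sub = 1.0 - np.abs((expr > 0.5).astype(float) - target)
    return float(sub.mean())

def local_score(expr, target):
    """Each voxel's sub-score also uses its neighbor: compare local
    differences (a crude 1-D gradient) of expression and target."""
    sub = 1.0 - np.abs(np.abs(np.diff(expr)) - np.abs(np.diff(target.astype(float))))
    return float(sub.mean())

print(pointwise_score(expression, target))  # prints 1.0: perfect pointwise match
```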
Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are “wrong” in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.

Our strategy for Aim 1

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.

_________________________________________
^1 Strictly speaking, the features are gene expression levels, but we’ll call them genes.

Principle 1: Combinatorial gene expression

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).

Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.

Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.

Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.

Related work

There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which are not fundamentally spatial^2.

As noted above, there has been much work on both supervised learning and feature selection, and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.

We are aware of six existing efforts to find marker genes in spatial gene expression data using automated methods.

[12] mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene’s spatial region.

GeneAtlas[5] and EMAGE[25] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel^3 whose expression is within four discretization levels. EMAGE uses Jaccard similarity^4. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.

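For reference, the Jaccard similarity used by EMAGE is simple to compute; a minimal sketch over boolean images (the example images are invented):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean images: the number of true
    pixels in their intersection divided by the number in their union."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty images: treat as identical
    return np.logical_and(a, b).sum() / union

query = np.array([[1, 1, 0],
                  [0, 1, 0]])
gene  = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(jaccard(query, gene))  # prints 0.5: 2 shared pixels, 4 in the union
```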
[14] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster (note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures). Correlation: the user selects a seed voxel and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. Clusters: described later, under Aim 2.

_________________________________________
^2 By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed only by anatomical label.
^3 Actually, many of these projects use quadrilaterals instead of square pixels, but we will refer to them as pixels for simplicity.
^4 The number of true pixels in the intersection of the two images, divided by the number of pixels in their union.

Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score^5, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.

[6] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA, this is a pointwise measure (only the mean expression level per pixel is analyzed); it is not used to look for underexpression, and it does not look for combinations of genes.

[10] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their match score is Jaccard similarity.

In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.

Aim 2: From gene expression data, discover a map of regions

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.

The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to voxels from other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.

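A minimal illustration of agglomerative hierarchical clustering, written out directly rather than through a library. The merge sequence forms the hierarchy; for brevity this sketch stops at a requested number of clusters instead of returning the full tree, and complete linkage is just one common choice of cluster-level similarity.

```python
import numpy as np

def complete_linkage_cluster(X, n_clusters):
    """Naive agglomerative (hierarchical) clustering with complete linkage:
    start with each voxel as its own cluster, then repeatedly merge the
    pair of clusters whose farthest pair of members is closest."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: cluster distance = max pairwise distance
                d = max(np.linalg.norm(X[a] - X[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge j into i
        del clusters[j]
    return [sorted(c) for c in clusters]

# Four "voxels" with a single expression feature: two tight groups.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
print(complete_linkage_cluster(X, 2))  # prints [[0, 1], [2, 3]]
```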
Similarity scores A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.

Spatially contiguous clusters; image segmentation We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three^6. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.

Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features^7. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

_________________________________________
^5 “Expression energy ratio”, which captures overexpression.
^6 There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.
^7 First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.

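One concrete instance of dimensionality reduction, relevant because non-negative matrix factorization (NNMF) comes up under Related work below, is a bare-bones NMF using the Lee-Seung multiplicative updates. This is a sketch only, on an invented toy matrix; a real analysis would use a vetted implementation.

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Bare-bones non-negative matrix factorization via Lee-Seung
    multiplicative updates: X (voxels x genes) ~= W @ H with W, H >= 0.
    Each of the k rows of H is one "reduced feature" (a non-negative
    combination of genes); W holds each voxel's loadings on them."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1   # positive random init
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                     # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A toy "voxels x genes" matrix with exact rank-2 structure.
X = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 3.0, 3.0]])
W, H = nmf(X, 2, iters=2000)
print(W.shape, H.shape)  # prints (3, 2) (2, 3)
```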
Clustering genes rather than voxels Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out^8. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common of these regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.

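The procedure just suggested can be sketched in toy form. The greedy grouping rule and the example maps are invented for illustration; the real pipeline would use the similarity scores and clustering methods discussed above.

```python
import numpy as np

def cluster_genes_to_regions(gene_maps, threshold=0.5):
    """Toy version of the procedure above: greedily group genes whose
    (boolean) expression maps overlap strongly (Jaccard > threshold),
    then take each group's majority-vote map as a candidate region."""
    def jaccard(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0
    groups = []
    for g in gene_maps:
        for group in groups:
            if jaccard(g, group[0]) > threshold:  # compare to group's first gene
                group.append(g)
                break
        else:
            groups.append([g])
    # candidate region = pixels marked by a majority of the group's genes
    return [np.mean(group, axis=0) > 0.5 for group in groups]

# Four genes over a 4-pixel strip; two pick out the left half, two the right.
gene_maps = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])
regions = cluster_genes_to_regions(gene_maps)
print(len(regions))  # prints 2
```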
The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.

Related work

Some researchers have attempted to parcellate cortex on the basis of data other than gene expression. For example, [17], [2], [18], and [1] associate spots on the cortex with the radial profile^9 of response to some stain ([11] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster. Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.

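To illustrate what extracting features from a radial profile might look like, here is a toy feature extractor based on the first statistical moments of a profile. The particular features are illustrative inventions, not those used in the cited work.

```python
import numpy as np

def radial_profile_features(profile):
    """Summarize a radial profile (e.g. stain intensity along a line
    perpendicular to the cortical surface) by simple moments, treating
    depth as a random variable weighted by intensity."""
    profile = np.asarray(profile, dtype=float)
    depth = np.arange(len(profile))
    w = profile / profile.sum()               # intensity-weighted depths
    mean = (w * depth).sum()                  # center of mass of the stain
    var = (w * (depth - mean) ** 2).sum()     # spread across the laminae
    return {"total": profile.sum(), "mean_depth": mean, "depth_var": var}

# A uniform profile vs. one concentrated at the surface (depth 0).
print(radial_profile_features([1, 1, 1, 1])["mean_depth"])  # prints 1.5
```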
[22] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset^10, and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure 6).

AGEA[14] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE[25] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering with un-centred correlation as the similarity score.

[6] clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: “the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric”. The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

[10] applies their technique for finding combinations of marker genes for the purpose of clustering genes around a “seed gene”. They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method[24] for finding “association rules” such as, “if a gene is expressed in this voxel, then the same gene is probably also expressed in that voxel”. This could be useful as part of a procedure for clustering voxels.

In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.

_________________________________________
^8 This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
^9 A radial profile is a profile along a line perpendicular to the cortical surface.
^10 We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.

Aim 3: Apply the methods developed to the cerebral cortex

Background

The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake^11.

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[21] on the one hand, and Paxinos and Franklin[16] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.

bshanks@36 | 262 The Allen Mouse Brain Atlas dataset
|
bshanks@84 | 263 The Allen Mouse Brain Atlas (ABA) data were produced by performing in situ hybridization on slices of male, 56-day-old
|
bshanks@36 | 264 C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed
|
bshanks@85 | 265 to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular
|
bshanks@85 | 266 spatial resolution is achieved. Because a single physical slice can be used to measure only one gene, many different
|
bshanks@85 | 267 mouse brains were needed to measure the expression of many genes.
|
bshanks@85 | 268 An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate
|
bshanks@36 | 269 system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326
|
bshanks@94 | 270 voxels in the 3D coordinate system, of which 51,533 are in the brain[14].
|
bshanks@94 | 271 Mus musculus is thought to contain about 22,000 protein-coding genes[27]. The ABA contains data on about 20,000
|
bshanks@85 | 272 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from
|
bshanks@85 | 273 only the coronal subset of the ABA12.
|
bshanks@85 | 274 The ABA is not the only large public spatial gene expression dataset13. However, apart from the ABA, GenePaint, and
|
bshanks@85 | 275 EMAGE, the other resources have not (yet) extracted the expression intensity from the ISH images and registered
|
bshanks@85 | 276 the results into a single 3-D space, and to our knowledge only the ABA and EMAGE make this form of data available for
|
bshanks@85 | 277 public download from their websites14. Many of these resources focus on developmental gene expression.
|
bshanks@63 | 278 Related work
|
bshanks@94 | 279 [14] describes the application of AGEA to the cortex. The paper reports interesting results on the structure of correlations
|
bshanks@63 | 280 between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either
|
bshanks@46 | 281 of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of
|
bshanks@46 | 282 the other components of AGEA can be applied to cortical areas; AGEA’s Gene Finder cannot be used to find marker genes
|
bshanks@85 | 283 for the cortical areas; and AGEA’s hierarchical clustering does not produce clusters corresponding to the cortical areas15.
|
bshanks@46 | 284 In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has
|
bshanks@43 | 285 been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally
|
bshanks@43 | 286 finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo
|
bshanks@43 | 287 from gene expression data.
|
bshanks@94 | 288 ___________________
|
bshanks@94 | 289 11Outside of isocortex, the number of layers varies.
|
bshanks@94 | 290 12The sagittal data do not cover the entire cortex, and also have greater registration error[14]. Genes were selected by the Allen Institute for
|
bshanks@85 | 291 coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression
|
bshanks@94 | 292 pattern”[14].
|
bshanks@94 | 293 13Other such resources include GENSAT[8], GenePaint[26], its sister project GeneAtlas[5], BGEM[13], EMAGE[25], EurExpress
|
bshanks@94 | 294 (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html),
|
bshanks@94 | 295 MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN[20], Aniseed (http://aniseed-ibdm.univ-mrs.fr/),
|
bshanks@94 | 296 VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene; includes data from some of the other listed data sources), GEISHA[4],
|
bshanks@94 | 297 Fruitfly.org[23], COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD[19], GEO[3] (GXD and GEO contain spatial and
|
bshanks@94 | 298 non-spatial data; all GXD spatial data are also in EMAGE).
|
bshanks@85 | 299 14Without prior offline registration.
|
bshanks@85 | 300 15In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger
|
bshanks@85 | 301 than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation
|
bshanks@85 | 302 clustering algorithm will tend to create clusters representing cortical layers, not areas (there may be clusters which presumably correspond to the
|
bshanks@85 | 303 intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of
|
bshanks@85 | 304 these). The reason that Gene Finder cannot find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder
|
bshanks@85 | 305 chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
|
bshanks@94 | 306 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker
|
bshanks@94 | 307 genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
|
bshanks@94 | 308 Significance
|
bshanks@85 | 309
|
bshanks@85 | 310
|
bshanks@85 | 311 Figure 1: Top row: Genes Nfic and
|
bshanks@85 | 312 A930001M12Rik are the most correlated
|
bshanks@85 | 313 with area SS (somatosensory cortex). Bot-
|
bshanks@85 | 314 tom row: Genes C130038G02Rik and
|
bshanks@85 | 315 Cacna1i are those with the best fit using
|
bshanks@85 | 316 logistic regression. Within each picture, the
|
bshanks@85 | 317 vertical axis roughly corresponds to anterior
|
bshanks@85 | 318 at the top and posterior at the bottom, and
|
bshanks@85 | 319 the horizontal axis roughly corresponds to
|
bshanks@85 | 320 medial at the left and lateral at the right.
|
bshanks@85 | 321 The red outline is the boundary of region
|
bshanks@85 | 322 SS. Pixels are colored according to correla-
|
bshanks@85 | 323 tion, with red meaning high correlation and
|
bshanks@93 | 324 blue meaning low.
|
bshanks@93 | 324 The method developed in aim (1) will be applied to each cortical area to find
|
bshanks@93 | 325 a set of marker genes such that the combinatorial expression pattern of those
|
bshanks@93 | 326 genes uniquely picks out the target area. Finding marker genes will be useful
|
bshanks@93 | 327 for drug discovery as well as for experimentation because marker genes can be
|
bshanks@93 | 328 used to design interventions which selectively target individual cortical areas.
|
bshanks@93 | 329 The application of the marker gene finding algorithm to the cortex will
|
bshanks@93 | 330 also support the development of new neuroanatomical methods. In addition
|
bshanks@93 | 331 to finding markers for each individual cortical area, we will find a small panel
|
bshanks@93 | 332 of genes that can find many of the areal boundaries at once. This panel of
|
bshanks@93 | 333 marker genes will support the development of an ISH protocol that allows
|
bshanks@93 | 334 experimenters to more easily identify which anatomical areas are present in
|
bshanks@93 | 335 small samples of cortex.
|
bshanks@93 | 336 The method developed in aim (2) will provide a genoarchitectonic viewpoint
|
bshanks@93 | 337 that will contribute to the creation of a better map. The development of
|
bshanks@93 | 338 present-day cortical maps was driven by the application of histological stains.
|
bshanks@93 | 339 If a different set of stains had been available which identified a different set of
|
bshanks@93 | 340 features, then today’s cortical maps might have come out differently. It is likely
|
bshanks@93 | 341 that there are many repeated, salient spatial patterns in the gene expression
|
bshanks@93 | 342 which have not yet been captured by any stain. Therefore, cortical anatomy
|
bshanks@93 | 343 needs to incorporate what we can learn from looking at the patterns of gene
|
bshanks@93 | 344 expression.
|
bshanks@93 | 345 While we do not here propose to analyze human gene expression data, it is
|
bshanks@93 | 346 conceivable that the methods we propose to develop could be used to suggest
|
bshanks@94 | 347 modifications to the human cortical map as well. In fact, the methods we will
|
bshanks@94 | 348 develop will be applicable to other datasets beyond the brain. We will provide
|
bshanks@94 | 349 an open-source toolbox to allow other researchers to easily use our methods.
|
bshanks@94 | 350 With these methods, researchers with gene expression data for any area of the body
|
bshanks@94 | 351 will be able to efficiently find marker genes for anatomical regions, or to use
|
bshanks@94 | 352 gene expression to discover new anatomical patterning. As described above,
|
bshanks@94 | 353 marker genes have a variety of uses in the development of drugs and experimental manipulations, and in the anatomical
|
bshanks@94 | 354 characterization of tissue samples. The discovery of new ways to carve up anatomical structures into regions may lead to
|
bshanks@94 | 355 the discovery of new anatomical subregions in various structures, which will widely impact all areas of biology.
|
bshanks@75 | 356
|
bshanks@78 | 357 Figure 2: Gene Pitx2
|
bshanks@75 | 358 is selectively underexpressed in area SS.
|
bshanks@93 | 359 Although our particular application involves the 3D spatial distribution of gene expression, we
|
bshanks@93 | 360 anticipate that the methods developed in aims (1) and (2) will not be limited to gene expression
|
bshanks@93 | 361 data, but rather will generalize to any sort of high-dimensional data over points located in a
|
bshanks@93 | 362 low-dimensional space.
|
bshanks@93 | 363 The approach: Preliminary Studies
|
bshanks@93 | 364 Format conversion between SEV, MATLAB, NIFTI
|
bshanks@93 | 365 We have created software to (politely) download all of the SEV files16 from the Allen Institute
|
bshanks@93 | 366 website. We have also created software to convert between the SEV, MATLAB, and NIFTI file
|
bshanks@93 | 367 formats, as well as some of Caret’s file formats.
|
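To make the volume conversions concrete, the sketch below expands a sparse voxel list into a dense 3-D array of the ABA grid dimensions, which is the representation a MATLAB- or NIFTI-style volume needs. This is only an illustration: the `sparse_to_dense` helper and the generic (x, y, z, value) records are our assumptions, not the actual SEV file layout.

```python
import numpy as np

def sparse_to_dense(indices, values, shape, fill=0.0):
    """Expand sparse (x, y, z) voxel records into a dense 3-D array."""
    dense = np.full(shape, fill, dtype=float)
    idx = np.asarray(indices)
    dense[idx[:, 0], idx[:, 1], idx[:, 2]] = values
    return dense

# Toy example on the ABA grid (67 x 41 x 58 voxels):
vol = sparse_to_dense([(0, 0, 0), (66, 40, 57)], [1.5, 2.5], (67, 41, 58))
```

Going the other way (dense back to sparse) is simply `np.nonzero` on the volume plus a lookup of the corresponding values.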
bshanks@93 | 368 Flatmap of cortex
|
bshanks@93 | 369 We downloaded the ABA data and applied a mask to select only those voxels which belong to
|
bshanks@93 | 370 cerebral cortex. We divided the cortex into hemispheres.
|
bshanks@94 | 371 Using Caret[7], we created a mesh representation of the surface of the selected voxels. For each gene, and for each node
|
bshanks@94 | 372 of the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened
|
bshanks@93 | 373 the cortex, creating a two-dimensional mesh.
|
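The volume-to-surface step can be sketched as follows. We assume the mapping from each mesh node to the voxels “underneath” it has already been computed (in our pipeline Caret provides it), so the illustrative helper below only does the averaging.

```python
import numpy as np

def surface_average(expr_volume, node_voxels):
    """For each mesh node, average the expression of the voxels beneath it.
    expr_volume: dense 3-D array; node_voxels: one (x, y, z) list per node."""
    return np.array([np.mean([expr_volume[x, y, z] for (x, y, z) in vox])
                     for vox in node_voxels])

# Toy volume with two mesh nodes, each averaging two voxels:
vol = np.zeros((2, 2, 2))
vol[0, 0, 0] = 2.0
vol[1, 1, 1] = 4.0
vals = surface_average(vol, [[(0, 0, 0), (0, 0, 1)], [(1, 1, 1), (1, 1, 0)]])
```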
bshanks@94 | 374 ____
|
bshanks@93 | 375 16SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.
|
bshanks@94 | 376
|
bshanks@85 | 377
|
bshanks@85 | 378
|
bshanks@85 | 379 Figure 3: The top row shows the two genes
|
bshanks@85 | 380 which (individually) best predict area AUD,
|
bshanks@85 | 381 according to logistic regression. The bot-
|
bshanks@85 | 382 tom row shows the two genes which (indi-
|
bshanks@85 | 383 vidually) best match area AUD, according
|
bshanks@85 | 384 to gradient similarity. From left to right and
|
bshanks@85 | 385 top to bottom, the genes are Ssr1, Efcbp1,
|
bshanks@94 | 386 Ptk7, and Aph1a.
|
bshanks@94 | 386 We sampled the nodes of the irregular, flat mesh in order to create a regular
|
bshanks@94 | 387 grid of pixel values. We converted this grid into a MATLAB matrix.
|
bshanks@94 | 388 We manually traced the boundaries of each of 49 cortical areas from the
|
bshanks@94 | 389 ABA coronal reference atlas slides. We then converted these manual traces
|
bshanks@94 | 390 into Caret-format regional boundary data on the mesh surface. We projected
|
bshanks@94 | 391 the regions onto the 2-d mesh, and then onto the grid, and then we converted
|
bshanks@94 | 392 the region data into MATLAB format.
|
bshanks@94 | 393 At this point, the data are in the form of a number of 2-D matrices, all in
|
bshanks@94 | 394 registration, with the matrix entries representing a grid of points (pixels) over
|
bshanks@94 | 395 the cortical surface:
|
bshanks@94 | 396 ∙ A 2-D matrix whose entries represent the regional label associated with
|
bshanks@94 | 397 each surface pixel
|
bshanks@94 | 398 ∙ For each gene, a 2-D matrix whose entries represent the average expres-
|
bshanks@94 | 399 sion level underneath each surface pixel
|
bshanks@94 | 400 We created a normalized version of the gene expression data by subtracting
|
bshanks@93 | 401 each gene’s mean expression level (over all surface pixels) and dividing the
|
bshanks@93 | 402 expression level of each gene by its standard deviation.
|
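The normalization just described is per-gene z-scoring over surface pixels; a minimal sketch (the matrix layout, genes as rows, is our illustrative choice):

```python
import numpy as np

def normalize_genes(expr):
    """Subtract each gene's mean expression (over all surface pixels)
    and divide by its standard deviation. expr: (n_genes, n_pixels)."""
    mu = expr.mean(axis=1, keepdims=True)
    sigma = expr.std(axis=1, keepdims=True)
    return (expr - mu) / sigma

expr = np.array([[1.0, 2.0, 3.0],
                 [10.0, 10.0, 40.0]])
z = normalize_genes(expr)
```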
bshanks@93 | 403 The features and the target area are both functions on the surface pix-
|
bshanks@93 | 404 els. They can be referred to as scalar fields over the space of surface pixels;
|
bshanks@93 | 405 alternatively, they can be thought of as images which can be displayed on the
|
bshanks@93 | 406 flatmapped surface.
|
bshanks@93 | 407 To move beyond a single average expression level for each surface pixel, we
|
bshanks@94 | 408 plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical
|
bshanks@94 | 409 layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets,
|
bshanks@94 | 410 we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary.
|
bshanks@94 | 411 In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually
|
bshanks@94 | 412 demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
|
bshanks@94 | 413 Feature selection and scoring methods
|
bshanks@94 | 414 Underexpression of a gene can serve as a marker Not only overexpression but also underexpression of a gene can
|
bshanks@94 | 415 sometimes serve as a marker. See, for example, Figure 2.
|
bshanks@93 | 416
|
bshanks@93 | 417
|
bshanks@93 | 418 Figure 4: Upper left: wwc1. Upper right:
|
bshanks@93 | 419 mtif2. Lower left: wwc1 + mtif2 (each
|
bshanks@93 | 420 pixel’s value on the lower left is the sum of
|
bshanks@94 | 421 the corresponding pixels in the upper row).
|
bshanks@94 | 421 Correlation Recall that the instances are surface pixels, and consider the
|
bshanks@94 | 422 problem of attempting to classify each instance as either a member of a partic-
|
bshanks@94 | 423 ular anatomical area, or not. The target area can be represented as a boolean
|
bshanks@94 | 424 mask over the surface pixels.
|
bshanks@94 | 425 One class of feature selection scoring methods contains methods which cal-
|
bshanks@94 | 426 culate some sort of “match” between each gene image and the target image.
|
bshanks@94 | 427 Those genes which match the best are good candidates for features.
|
bshanks@94 | 428 One of the simplest methods in this class is to use correlation as the match
|
bshanks@94 | 429 score. We calculated the correlation between each gene and each cortical area.
|
bshanks@94 | 430 The top row of Figure 1 shows the two genes most correlated with area SS.
|
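As an illustration, the correlation score can be computed by treating each gene image and the target mask as vectors over surface pixels. The helper and toy data below are ours, not the production pipeline:

```python
import numpy as np

def correlation_scores(genes, target_mask):
    """Pearson correlation of each gene image with the target area mask.
    genes: (n_genes, n_pixels); target_mask: boolean (n_pixels,)."""
    t = target_mask.astype(float)
    t = (t - t.mean()) / t.std()
    g = (genes - genes.mean(axis=1, keepdims=True)) / genes.std(axis=1, keepdims=True)
    return (g * t).mean(axis=1)

# Toy data: gene 0 matches the target mask exactly; gene 1 is anti-correlated.
mask = np.array([True, True, False, False])
genes = np.array([[2.0, 2.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0, 2.0]])
scores = correlation_scores(genes, mask)
```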
bshanks@94 | 431 Conditional entropy An information-theoretic scoring method is to find
|
bshanks@85 | 432 features such that, if the features (gene expression levels) are known, uncer-
|
bshanks@85 | 433 tainty about the target (the regional identity) is reduced. Entropy measures
|
bshanks@85 | 434 uncertainty, so what we want is to find features such that the conditional dis-
|
bshanks@85 | 435 tribution of the target has minimal entropy. The distribution to which we are
|
bshanks@85 | 436 referring is the probability distribution over the population of surface pixels.
|
bshanks@85 | 437 The simplest way to use information theory is on discrete data, so we
|
bshanks@85 | 438 discretized our gene expression data by creating, for each gene, five thresholded
|
bshanks@94 | 439 boolean masks, one at each of the following thresholds: the gene’s mean, the mean plus or minus one standard
|
bshanks@94 | 440 deviation, and the mean plus or minus two standard deviations.
|
bshanks@94 | 442 Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression
|
bshanks@94 | 443 boolean masks such that the conditional entropy of the target area’s boolean mask, conditioned upon the pair of gene
|
bshanks@94 | 444 expression boolean masks, is minimized.
|
bshanks@94 | 445 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question,
|
bshanks@94 | 446 “Is this surface pixel a member of the target area?”. Its advantage over linear methods such as logistic regression is that it
|
bshanks@94 | 447 takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional
|
bshanks@94 | 448 entropy would notice, whereas linear methods would not.
|
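The scoring step can be sketched in a few lines of Python. The helper and the XOR toy data below are illustrative, not our production code; note how the conditional entropy drops to zero when the pair of features jointly determines the target, even though each feature alone is uninformative:

```python
from collections import Counter
from math import log2

def conditional_entropy(target, feat_a, feat_b):
    """H(target | feat_a, feat_b) over a population of surface pixels.
    All three arguments are equal-length sequences of booleans."""
    n = len(target)
    joint = Counter(zip(feat_a, feat_b, target))    # counts of (a, b, t)
    cond = Counter(zip(feat_a, feat_b))             # counts of (a, b)
    h = 0.0
    for (a, b, t), c in joint.items():
        h -= (c / n) * log2(c / cond[(a, b)])
    return h

# XOR example: the pair determines the target, so H(target | a, b) = 0.
a = [False, False, True, True]
b = [False, True, False, True]
target = [x != y for x, y in zip(a, b)]
h = conditional_entropy(target, a, b)
```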
bshanks@85 | 449
|
bshanks@85 | 450
|
bshanks@85 | 451
|
bshanks@85 | 452
|
bshanks@85 | 453 Figure 5: From left to right and top
|
bshanks@85 | 454 to bottom, single genes which roughly
|
bshanks@85 | 455 identify areas SS (somatosensory primary
|
bshanks@85 | 456 + supplemental), SSs (supplemental so-
|
bshanks@85 | 457 matosensory), PIR (piriform), FRP (frontal
|
bshanks@85 | 458 pole), RSP (retrosplenial), COApm (Corti-
|
bshanks@85 | 459 cal amygdalar, posterior part, medial zone).
|
bshanks@85 | 460 Grouping some areas together, we have
|
bshanks@85 | 461 also found genes to identify the groups
|
bshanks@85 | 462 ACA+PL+ILA+DP+ORB+MO (anterior
|
bshanks@85 | 463 cingulate, prelimbic, infralimbic, dorsal pe-
|
bshanks@85 | 464 duncular, orbital, motor), posterior and lat-
|
bshanks@85 | 465 eral visual (VISpm, VISpl, VISl, VISp; pos-
|
bshanks@85 | 466 teromedial, posterolateral, lateral, and pri-
|
bshanks@85 | 467 mary visual; the posterior and lateral vi-
|
bshanks@85 | 468 sual area is distinguished from its neigh-
|
bshanks@85 | 469 bors, but not from the entire rest of the
|
bshanks@85 | 470 cortex). The genes are Pitx2, Aldh1a2,
|
bshanks@85 | 471 Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1,
|
bshanks@93 | 472 Ets1.
|
bshanks@93 | 472 Gradient similarity We noticed that the previous two scoring methods,
|
bshanks@93 | 473 which are pointwise, often found genes whose pattern of expression did not
|
bshanks@93 | 474 look similar in shape to the target region. For this reason we designed a
|
bshanks@93 | 475 non-pointwise local scoring method to detect when a gene had a pattern of
|
bshanks@93 | 476 expression which looked like it had a boundary whose shape is similar to the
|
bshanks@93 | 477 shape of the target region. We call this scoring method “gradient similarity”.
|
bshanks@93 | 478 One might say that gradient similarity attempts to measure how much the
|
bshanks@93 | 479 border of the area of gene expression and the border of the target region over-
|
bshanks@93 | 480 lap. However, since gene expression falls off continuously rather than jumping
|
bshanks@93 | 481 from its maximum value to zero, the spatial pattern of a gene’s expression often
|
bshanks@93 | 482 does not have a discrete border. Therefore, instead of looking for a discrete
|
bshanks@93 | 483 border, we look for large gradients. Gradient similarity is a symmetric function
|
bshanks@93 | 484 over two images (i.e. two scalar fields). It is high to the extent that matching
|
bshanks@93 | 485 pixels which have large values and large gradients also have gradients which
|
bshanks@93 | 486 are oriented in a similar direction. The formula is:
|
bshanks@93 | 487 ∑_{pixel ∈ pixels} cos(abs(∠∇1 − ∠∇2)) ⋅ ((|∇1| + |∇2|)/2) ⋅ ((pixel_value1 + pixel_value2)/2)
|
bshanks@93 | 491 where ∇1 and ∇2 are the gradient vectors of the two images at the current
|
bshanks@93 | 492 pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is
|
bshanks@93 | 493 the magnitude of the gradient of image i at the current pixel; and pixel_valuei
|
bshanks@93 | 494 is the value of the current pixel in image i.
|
bshanks@93 | 495 The intuition is that we want to see if the borders of the pattern in the
|
bshanks@93 | 496 two images are similar; if the borders are similar, then both images will have
|
bshanks@93 | 497 corresponding pixels with large gradients (because this is a border) which are
|
bshanks@93 | 498 oriented in a similar direction (because the borders are similar).
|
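For concreteness, a minimal Python version of gradient similarity might look like this; it uses numpy’s finite-difference gradients, and our actual discretization choices may differ:

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two images (scalar fields) of equal shape:
    high where pixels that are large and have large gradients in both images
    also have gradients oriented in similar directions."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    return float(np.sum(np.cos(np.abs(angle1 - angle2))
                        * (mag1 + mag2) / 2.0
                        * (img1 + img2) / 2.0))

# Sanity check: a step image scores higher against itself than against its
# mirror image, whose border gradients point the opposite way.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
s_same = gradient_similarity(img, img)
s_flip = gradient_similarity(img, img[:, ::-1])
```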
bshanks@93 | 499 Most of the genes in Figure 5 were identified via gradient similarity.
|
bshanks@93 | 500 Gradient similarity provides information complementary to cor-
|
bshanks@93 | 501 relation
|
bshanks@93 | 502 To show that gradient similarity can provide useful information that cannot
|
bshanks@93 | 503 be detected via pointwise analyses, consider Fig. 3. The top row of Fig. 3
|
bshanks@93 | 504 displays the two genes which most match area AUD, according to a pointwise
|
bshanks@93 | 505 method17. The bottom row displays the two genes which most match AUD ac-
|
bshanks@93 | 506 cording to a method which considers local geometry18. The pointwise method
|
bshanks@93 | 507 in the top row identifies genes which express more strongly in AUD than out-
|
bshanks@93 | 508 side of it; its weakness is that this includes many areas which don’t have a
|
bshanks@93 | 509 salient border matching the areal border. The geometric method identifies
|
bshanks@93 | 510 genes whose salient expression border seems to partially line up with the bor-
|
bshanks@93 | 511 der of AUD; its weakness is that this includes genes which don’t express over
|
bshanks@93 | 512 the entire area. Genes which have high rankings using both pointwise and bor-
|
bshanks@93 | 513 der criteria, such as Aph1a in the example, may be particularly good markers.
|
bshanks@93 | 514 None of these genes are, individually, a perfect marker for AUD; we deliberately
|
bshanks@93 | 515 chose a “difficult” area in order to better contrast pointwise with geometric
|
bshanks@93 | 516 methods.
|
bshanks@93 | 517 Areas which can be identified by single genes Using gradient simi-
|
bshanks@94 | 518 larity, we have already found single genes which roughly identify some areas
|
bshanks@94 | 519 and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We
|
bshanks@94 | 520 have not yet cross-verified these genes in other atlases.
|
bshanks@92 | 521 _________________________________________
|
bshanks@93 | 522 17For each gene, we fit a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor
|
bshanks@93 | 523 variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well
|
bshanks@93 | 524 they predict area AUD.
|
bshanks@93 | 525 18For each gene, the gradient similarity between (a) a map of that gene’s expression on the cortical surface and (b) the shape of area AUD
|
bshanks@93 | 526 was calculated, and this was used to rank the genes.
|
bshanks@94 | 527 In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of
|
bshanks@94 | 528 cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS
|
bshanks@94 | 529 (visual), AUD (auditory).
|
bshanks@94 | 530 These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical
|
bshanks@94 | 531 areas, while also validating the relevancy of our new scoring method, gradient similarity.
|
bshanks@93 | 532 Combinations of multiple genes are useful and necessary for some areas
|
bshanks@93 | 533 In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified
|
bshanks@93 | 534 combinatorially. According to logistic regression, gene wwc1 is the best single gene for predicting whether or not a
|
bshanks@93 | 535 pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1’s spatial
|
bshanks@93 | 536 expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the
|
bshanks@93 | 537 gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding
|
bshanks@93 | 538 to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the
|
bshanks@93 | 539 upper-right. Mtif2 captures MO’s upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much
|
bshanks@93 | 540 on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This
|
bshanks@93 | 541 combination captures area MO much better than any single gene.
|
bshanks@93 | 542 This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
|
bshanks@94 | 543 Feature selection integrated with prediction As noted earlier, in general, any classifier can be used for feature
|
bshanks@94 | 544 selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on the number of
|
bshanks@94 | 545 features used. Examples of both of these will be seen in the section “Multivariate supervised learning”.
|
bshanks@94 | 546 Multivariate supervised learning
|
bshanks@60 | 547
|
bshanks@69 | 548
|
bshanks@69 | 549
|
bshanks@69 | 550
|
bshanks@69 | 551 Figure 6: First row: the first 6 reduced dimensions, using PCA. Second
|
bshanks@69 | 552 row: the first 6 reduced dimensions, using NNMF. Third row: the first
|
bshanks@69 | 553 six reduced dimensions, using landmark Isomap. Bottom row: examples
|
bshanks@69 | 554 of kmeans clustering applied to reduced datasets to find 7 clusters. Left:
|
bshanks@69 | 555 19 of the major subdivisions of the cortex. Second from left: PCA. Third
|
bshanks@69 | 556 from left: NNMF. Right: Landmark Isomap. Additional details: In the
|
bshanks@69 | 557 third and fourth rows, 7 dimensions were found, but only 6 displayed. In
|
bshanks@69 | 558 the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions
|
bshanks@93 | 559 were used; for landmark Isomap, 7 dimensions were used.
|
bshanks@93 | 559 Forward stepwise logistic regression Lo-
|
bshanks@93 | 560 gistic regression is a popular method for pre-
|
bshanks@93 | 561 dictive modeling of categorical data. As a pi-
|
bshanks@93 | 562 lot run, for five cortical areas (SS, AUD, RSP,
|
bshanks@93 | 563 VIS, and MO), we performed forward stepwise
|
bshanks@93 | 564 logistic regression to find single genes, pairs of
|
bshanks@93 | 565 genes, and triplets of genes which predict areal
|
bshanks@93 | 566 identify. This is an example of feature selec-
|
bshanks@93 | 567 tion integrated with prediction using a stepwise
|
bshanks@93 | 568 wrapper. Some of the single genes found were
|
bshanks@93 | 569 shown in various figures throughout this doc-
|
bshanks@93 | 570 ument, and Figure 4 shows a combination of
|
bshanks@93 | 571 genes which was found.
|
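The stepwise wrapper can be sketched as follows. This is not our production code: the tiny gradient-descent logistic fit, the `forward_stepwise` helper, and the synthetic data are illustrative stand-ins.

```python
import numpy as np

def logistic_nll(X, y, iters=2000, lr=0.5):
    """Fit a logistic regression (with intercept) by gradient descent and
    return the training negative log-likelihood, used here as the score."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    eps = 1e-9
    return -float(np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def forward_stepwise(genes, y, n_select=2):
    """Greedily add the gene whose inclusion best improves the fit."""
    chosen, remaining = [], list(range(genes.shape[0]))
    while len(chosen) < n_select:
        _, best_j = min((logistic_nll(genes[chosen + [j]].T, y), j)
                        for j in remaining)
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# Toy surface: the target area is wherever gene 0 OR gene 1 expresses
# strongly; gene 2 is pure noise and should not be selected.
rng = np.random.default_rng(0)
g = rng.random((3, 200))
y = ((g[0] > 0.7) | (g[1] > 0.7)).astype(float)
chosen = forward_stepwise(g, y, n_select=2)
```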
bshanks@93 | 572 We felt that, for single genes, gradient simi-
|
bshanks@93 | 573 larity did a better job than logistic regression at
|
bshanks@93 | 574 capturing our subjective impression of a “good
|
bshanks@93 | 575 gene”.
|
bshanks@93 | 576 SVM on all genes at once
|
bshanks@93 | 577 In order to see how well one can do when
|
bshanks@93 | 578 looking at all genes at once, we ran a support
|
bshanks@93 | 579 vector machine to classify cortical surface pix-
|
bshanks@93 | 580 els based on their gene expression profiles. We
|
bshanks@93 | 581 achieved classification accuracy of about 81%19.
|
bshanks@93 | 582 This shows that the genes included in the ABA
|
bshanks@93 | 583 dataset are sufficient to define much of cortical
|
bshanks@93 | 584 anatomy. However, as noted above, a classifier
|
bshanks@93 | 585 that looks at all the genes at once isn’t as prac-
|
bshanks@93 | 586 tically useful as a classifier that uses only a few
|
bshanks@93 | 587 genes.
|
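The accuracy estimate can be reproduced in outline with a generic k-fold cross-validation routine. For brevity the sketch plugs in a simple nearest-centroid classifier as a stand-in where the actual experiment used a support vector machine; the toy data are also ours.

```python
import numpy as np

def kfold_accuracy(X, y, fit_predict, k=5, seed=0):
    """k-fold cross-validated classification accuracy.
    fit_predict(Xtrain, ytrain, Xtest) must return predicted labels."""
    order = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(order, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(np.mean(fit_predict(X[train], y[train], X[test]) == y[test]))
    return float(np.mean(accs))

def nearest_centroid(Xtr, ytr, Xte):
    """Stand-in classifier: assign each test pixel to the nearest class centroid."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Two well-separated classes of "pixels" in a 2-gene expression space:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = kfold_accuracy(X, y, nearest_centroid)
```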
bshanks@94 | 588 _________________________________________
|
bshanks@94 | 589 195-fold cross-validation.
|
bshanks@93 | 590 Data-driven redrawing of the cor-
|
bshanks@85 | 591 tical map
|
bshanks@93 | 592 We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression
|
bshanks@93 | 593 profile associated with each voxel: Principal Components Analysis (PCA), Simple PCA (SPCA), Multi-Dimensional Scaling
|
bshanks@93 | 594 (MDS), Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment (LTSA), Hessian locally linear
|
bshanks@93 | 595 embedding, Diffusion maps, Stochastic Neighbor Embedding (SNE), Stochastic Proximity Embedding (SPE), Fast Maximum
|
bshanks@93 | 596 Variance Unfolding (FastMVU), Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing
|
bshanks@93 | 597 many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of
|
bshanks@93 | 598 Figure 6.
|
bshanks@93 | 599 After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried
|
bshanks@93 | 600 k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last
|
bshanks@93 | 601 row of Figure 6. To compare, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of
|
bshanks@93 | 602 cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data
|
bshanks@93 | 603 and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques
|
bshanks@93 | 604 as applied to the domain of genomic anatomy.
|
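As an illustration of the reduce-then-cluster pipeline, here is a numpy-only sketch using PCA followed by k-means. The deterministic initialization and the two-blob toy data are our illustrative choices; our actual runs used the algorithms listed above.

```python
import numpy as np

def pca(X, n_components):
    """Project onto the top principal components via SVD of centered data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=50):
    """Plain k-means; initial centers spread along the first coordinate."""
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels

# Two groups of voxels with distinct 5-gene expression profiles:
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.2, (40, 5)), rng.normal(1, 0.2, (40, 5))])
labels = kmeans(pca(X, 2), 2)
```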
bshanks@71 | 605
|
bshanks@85 | 606 Figure 7: Prototypes corresponding to sample gene clusters,
|
bshanks@85 | 607 clustered by gradient similarity. Region boundaries for the
|
bshanks@92 | 608 region that most matches each prototype are overlaid.
|
bshanks@92 | 608 Many areas are captured by clusters of genes We
|
bshanks@92 | 609 also clustered the genes using gradient similarity to see if
|
bshanks@92 | 610 the spatial regions defined by any clusters matched known
|
bshanks@92 | 611 anatomical regions. Figure 7 shows, for ten sample gene
|
bshanks@92 | 612 clusters, each cluster’s average expression pattern, compared
|
bshanks@92 | 613 to a known anatomical boundary. This suggests that it is
|
bshanks@92 | 614 worth attempting to cluster genes, and then to use the re-
|
bshanks@92 | 615 sults to cluster voxels.
|
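One simple way to turn pairwise gradient-similarity scores into gene clusters is single-linkage grouping over a thresholded similarity graph. The sketch below is illustrative (the threshold, the toy matrix, and the union-find helper are ours), not the clustering we actually ran:

```python
import numpy as np

def cluster_genes(sim, threshold):
    """Group genes by linking every pair whose similarity exceeds the
    threshold, then taking connected components (union-find)."""
    n = len(sim)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]

# Toy similarity matrix: genes 0 and 1 are alike; gene 2 stands apart.
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
labels = cluster_genes(sim, 0.5)
```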
The approach: what we plan to do
Flatmap cortex and segment cortical layers
There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret[7]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
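One way such a warning could work (a toy sketch of our own, not Caret's method; the function name and thresholds are illustrative assumptions) is to estimate local intrinsic dimensionality: run PCA on each point's small neighborhood and check how much local variance two components capture.

```python
import numpy as np

def locally_2d_fraction(points, k=10, var_threshold=0.95):
    """Fraction of points whose k-nearest-neighborhood is captured by 2 PCs."""
    ok = 0
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]       # k nearest, excluding self
        nbrs = nbrs - nbrs.mean(axis=0)
        # eigenvalues of the local scatter matrix, largest first
        ev = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs))[::-1]
        ok += ev[:2].sum() / ev.sum() >= var_threshold
    return ok / len(points)

rng = np.random.default_rng(1)
u, v = rng.uniform(-1, 1, (2, 400))
plane = np.column_stack([u, v, 0.01 * rng.normal(size=400)])  # nearly flat sheet
ball = rng.normal(size=(400, 3))                              # genuinely 3-D
```

A low fraction would trigger the warning that the data are not well described by a 2-D sheet.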
We have not yet made use of radial profiles. While the radial profiles may be used “raw”, for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
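An illustrative sketch of how laminar boundaries might be recovered from a radial profile (this is a stand-in heuristic, not our final algorithm; the data are synthetic): treat each depth sample as a gene expression vector and place boundaries at the largest jumps between adjacent depths.

```python
import numpy as np

def layer_boundaries(profile, n_layers):
    """profile: depth x gene matrix; return indices where a new layer starts."""
    jumps = np.linalg.norm(np.diff(profile, axis=0), axis=1)
    # the n_layers-1 largest jumps are taken to separate the layers
    return sorted(int(i) + 1 for i in np.argsort(jumps)[-(n_layers - 1):])

rng = np.random.default_rng(2)
genes = 30
means = rng.normal(0.0, 2.0, (3, genes))   # one expression signature per layer
profile = np.vstack([m + 0.1 * rng.normal(size=(rows, genes))
                     for m, rows in zip(means, (20, 25, 15))])
# planted layer boundaries at depth indices 20 and 45
```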
Develop algorithms that find genetic markers for anatomical regions
We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. The scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student’s t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes: take the prediction error when using that gene to predict the target.
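A few of the univariate measures listed above can be sketched in a handful of lines (toy 1-D "images"; the binarization rule and names here are our own illustrative choices):

```python
import numpy as np

def scores(expr, region):
    """Score one gene's expression vector against a boolean region mask."""
    above = expr > np.median(expr)            # crude binarization of expression
    inter = np.sum(above & region)
    jaccard = inter / np.sum(above | region)
    dice = 2 * inter / (np.sum(above) + np.sum(region))
    corr = np.corrcoef(expr, region.astype(float))[0, 1]
    a, b = expr[region], expr[~region]        # two-sample (Welch) t statistic
    t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                        + b.var(ddof=1) / len(b))
    return {"jaccard": jaccard, "dice": dice, "corr": corr, "t": t}

rng = np.random.default_rng(3)
region = np.zeros(200, dtype=bool)
region[50:100] = True
marker = rng.normal(size=200); marker[region] += 4.0   # expressed in-region
unrelated = rng.normal(size=200)                        # no spatial pattern
```

A good marker should beat an unrelated gene on every one of these scores.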
Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by the previous methods mentioned in Aim 1 Related Work.
Some cortical areas have no single marker gene but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student’s t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling’s T-square is a multivariate analog of Student’s t.
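To make the Hotelling's T-square example concrete, here is a short sketch (synthetic data; two "genes" whose joint shift distinguishes in-region from out-of-region voxels) of the two-sample statistic computed from a pooled covariance:

```python
import numpy as np

def hotelling_t2(A, B):
    """Two-sample Hotelling T^2 for groups A, B (samples x genes)."""
    n1, n2 = len(A), len(B)
    d = A.mean(axis=0) - B.mean(axis=0)
    # pooled covariance estimate
    S = ((n1 - 1) * np.cov(A, rowvar=False) +
         (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)
    return float(n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d))

rng = np.random.default_rng(4)
inside = rng.normal([1.0, 1.0], 1.0, (80, 2))    # two genes, shifted in-region
outside = rng.normal([0.0, 0.0], 1.0, (120, 2))
null_a, null_b = rng.normal(size=(80, 2)), rng.normal(size=(120, 2))
```

A large T² for a gene pair indicates that the pair jointly separates the area even if neither gene alone is a clean marker.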
We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over “vanilla” classifiers such as logistic regression, (b) supervised learning methods, such as decision trees, which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines.
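Option (a) can be sketched as a greedy forward-selection wrapper around a plain logistic regression (everything here is synthetic and illustrative; in practice columns would be genes and rows voxels, with the target area as the label):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, n_genes):
    """Greedily add the gene that most improves cross-validated accuracy."""
    chosen = []
    for _ in range(n_genes):
        candidates = [g for g in range(X.shape[1]) if g not in chosen]
        best = max(candidates, key=lambda g: cross_val_score(
            LogisticRegression(), X[:, chosen + [g]], y, cv=3).mean())
        chosen.append(best)
    return chosen

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 300)          # in-area / out-of-area labels
X = rng.normal(size=(300, 20))       # 20 candidate "genes"
X[:, 7] += 2.5 * y                   # gene 7 is the planted marker
```

The same wrapper works unchanged with any scikit-learn-style classifier, which is why the feature-selection and classifier-comparison activities overlap.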
Since errors of displacement and of shape may cause genes and target areas to match less well than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant to error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
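The displacement wrapper just mentioned can be sketched directly (toy 2-D masks and a Dice score as the stand-in inner measure; taking the worst case over shifts is one of several reasonable aggregation choices):

```python
import numpy as np

def dice(a, b):
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

def robust_score(score, expr_mask, region_mask, max_shift=2):
    """Worst-case score over all small translations of the region mask."""
    shifts = range(-max_shift, max_shift + 1)
    return min(score(expr_mask, np.roll(region_mask, (dx, dy), axis=(0, 1)))
               for dx in shifts for dy in shifts)

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True    # a 10x10 toy "area" that matches itself exactly
```

Even for a perfect match the robust score is below 1.0, which quantifies how much a given measure degrades under registration error.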
An area may be difficult to identify because its boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area differs from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance, in order to provide a foundation for future research on marker gene finding methods. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
Classifiers
We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models[15]), decision trees20, sparse SVMs, generative mixture models (including naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks.
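The comparisons themselves are routine to set up; for example, two of the classifier families above can be compared by cross-validated accuracy on the same data (synthetic stand-in data here; the real comparison will use the cortical dataset):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
y = rng.integers(0, 2, 400)
X = rng.normal(size=(400, 5))
X[:, 0] += 3.0 * y          # one informative "gene", four noise genes

# mean 5-fold cross-validated accuracy for each classifier
acc = {name: cross_val_score(clf, X, y, cv=5).mean()
       for name, clf in [("knn", KNeighborsClassifier()),
                         ("tree", DecisionTreeClassifier(random_state=0))]}
```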
Application to cortical areas
To guard against overfitting to the ABA dataset, we will confirm candidate markers against other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT, and we will check that markers are consistent across the two hemispheres.
Develop algorithms to suggest a division of a structure into anatomical parts
1. Explore dimensionality reduction algorithms applied to pixels: including TODO
2. Explore dimensionality reduction algorithms applied to genes: including TODO
3. Explore clustering algorithms applied to pixels: including TODO
4. Explore clustering algorithms applied to genes: including gene shaving[9], TODO
5. Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
6. Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
# Linear discriminant analysis
# jbt, coclustering
# self-organizing map
# compare using clustering scores
# multivariate gradient similarity
# deep belief nets
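The core of step 5 in the list above can be sketched with off-the-shelf agglomerative clustering (synthetic "voxels" in a reduced expression space; cutting the same dendrogram at different levels yields a nested, hierarchical parcellation):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(7)
# three planted "areas" in a 4-D reduced expression space, 40 voxels each
X = np.vstack([rng.normal(c, 0.3, (40, 4)) for c in (0.0, 2.0, 4.0)])

Z = linkage(X, method="ward")                    # agglomerative dendrogram
coarse = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 regions
fine = fcluster(Z, t=3, criterion="maxclust")    # cut into 3 nested regions
```

Because both maps come from one dendrogram, the fine regions nest inside the coarse ones by construction, which is exactly the hierarchical structure an anatomical map needs.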
Apply these algorithms to the cortex
Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify that area; we will also present “panels” of genes that can be used to delineate many areas at once. Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
_________________________________________
20 Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.
Timeline and milestones
Finding marker genes
∙ September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers
∙ November 2009 (milestone): Have completed construction of a flatmapped cortical dataset with information for each layer
∙ October 2009-April 2010: Develop scoring methods and test them in various supervised learning frameworks. Also test various dimensionality reduction schemes in combination with supervised learning. Create or extend supervised learning frameworks which use multivariate versions of the best scoring methods.
∙ January 2010 (milestone): Submit a publication on single marker genes for cortical areas
∙ February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Explore the best way to integrate radial profiles with supervised learning. Explore the best way to make supervised learning techniques robust against incorrect labels (i.e. when the areas drawn on the input cortical map are slightly off). Quantitatively compare the performance of different supervised learning techniques. Validate marker genes found in the ABA dataset by checking against other gene expression datasets. Create documentation and unit tests for the Aim 1 software toolbox. Respond to user bug reports for the Aim 1 software toolbox.
∙ June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release toolbox.
∙ July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a small number of marker genes that can, in combination, define most of the areas at once
Revealing new ways to parcellate a structure into regions
∙ June 2010-March 2011: Explore dimensionality reduction algorithms for Aim 2. Explore standard hierarchical clustering algorithms, used in combination with dimensionality reduction, for Aim 2. Explore co-clustering algorithms. Think about how radial profile information can be used for Aim 2. Adapt clustering algorithms to use radial profile information. Quantitatively compare the performance of different dimensionality reduction and clustering techniques. Quantitatively compare the value of different flatmapping methods and ways of representing radial profiles.
∙ March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release toolbox.
∙ February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex. If new ways of organizing the cortex into areas are discovered, read the literature and talk to people to learn about research related to interpreting our results. Create documentation and unit tests for the Aim 2 software toolbox. Respond to user bug reports for the Aim 2 software toolbox.
∙ May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in Aim 2
∙ May-August 2011: Revisit Aim 1 to see whether what was learned during Aim 2 can improve the methods for Aim 1. Follow up on responses to our papers. Possibly submit another paper.
Bibliography & References Cited
[1] Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking Approach to Parcellation of the Cerebral Cortex, volume 3749/2005 of Lecture Notes in Computer Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
[2] J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification of cortical areas. NeuroImage, 21(1):15–26, 2004.
[3] Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
[4] George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
[5] James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41, 2005.
[6] Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August 2007.
[7] D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
[8] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
[9] Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan, David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
[10] Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg, 2008.
[11] F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
[12] Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A high-resolution anatomical framework of the neonatal mouse brain for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
[13] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
[14] Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
[15] Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Computational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
[16] George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[17] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
[18] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
[19] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
[20] Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
[21] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[22] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[23] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002. PMC151190.
[24] Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414/2007 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
[25] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh mouse atlas of gene expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
[26] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
[27] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood, Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby, Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler, Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chinwalla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes, Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green, Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki, LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard, Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd, Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer, Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery, Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson, Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner, Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos, Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer, Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp, Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.