Specific aims
Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We will validate these methods by applying them to 46 anatomical areas within the cerebral cortex, using the Allen Mouse Brain Atlas coronal dataset (ABA). This gene expression dataset was generated using ISH and contains over 4,000 genes. For each gene, a digitized 3-D raster of the expression pattern is available: the level of expression at each of 51,533 voxels is recorded.
We have three specific aims:
(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions;
(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression;
(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
Although our particular application involves the 3-D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our method could be applied to genome-wide sequencing data derived from sets of tissues and disease states.
In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profiles define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, as well as of a method for identifying the cortical areal boundaries present in small tissue samples.
All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
The challenge topic
This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
The Challenge and Potential Impact
Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for thinking about the task. Next we will discuss related work, and then summarize how our strategy differs from what has been done before. After we have discussed all three aims, we will describe the potential impact.
Aim 1: Given a map of regions, find genes that mark the regions
Machine learning terminology: classifiers
The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.
If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
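To make this supervised-learning framing concrete, here is a minimal Python sketch of training and evaluating such a classifier; the placeholder arrays and the choice of scikit-learn's logistic regression are illustrative assumptions, not the specific method we will ultimately adopt.

# Minimal sketch of the supervised-learning framing (illustrative only).
# Assumptions: `expression` is an (n_voxels, n_genes) array of expression
# levels and `region_labels` is an (n_voxels,) array of region IDs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
expression = rng.random((1000, 50))            # placeholder training data
region_labels = rng.integers(0, 4, size=1000)  # placeholder region IDs

# Hold out some voxels to estimate how well the classifier generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    expression, region_labels, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))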
Each gene expression level is called a feature, and the selection of which genes¹ to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

¹ Strictly speaking, the features are gene expression levels, but we'll call them genes.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
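A minimal sketch of such a greedy forward-selection loop follows; the generic score function (for example, a cross-validated classification accuracy using only the selected genes) and the cap on the number of genes are illustrative assumptions.

# Greedy forward selection over genes (illustrative sketch).
# `score` is any function mapping a list of gene indices to a number.
def greedy_select(n_genes, score, max_genes=3):
    selected = []
    best = float("-inf")
    while len(selected) < max_genes:
        candidates = [g for g in range(n_genes) if g not in selected]
        gains = [(score(selected + [g]), g) for g in candidates]
        new_best, g = max(gains)
        if new_best <= best:        # stop when no gene improves the score
            break
        selected.append(g)
        best = new_best
    return selected, best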
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods by how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a pointwise scoring method.
Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are “wrong” in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.
Our strategy for Aim 1
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression
It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).
Principle 2: Only look at combinations of small numbers of genes
When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
Principle 3: Use geometry in feature selection
When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible
There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work
There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which are not fundamentally spatial².

² By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates, not just data which have only a few different locations or which are indexed by anatomical label.
As noted above, there has been much work on both supervised and unsupervised learning, and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.
We now turn to efforts to find marker genes from spatial gene expression data using automated methods.
GeneAtlas [3] and EMAGE [19] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
[12] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. Correlation: the user selects a seed voxel and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. Clusters: this component will be described later. [4] looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. [12] and [4] differ from our Aim 1 in at least three ways. First, [12] and [4] find only single genes, whereas we will also look for combinations of genes. Second, [12] and [4] can only use overexpression as a marker, whereas we will also search for underexpression. Third, [12] and [4] use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.
[8] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.
In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
Aim 2: From gene expression data, discover a map of regions
Machine learning terminology: clustering
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
Similarity scores
A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Spatially contiguous clusters; image segmentation
We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three³. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

³ There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.
Dimensionality reduction
In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.
Unlike aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features⁴. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

⁴ First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
Clustering genes rather than voxels
Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common of these regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
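A rough sketch of this two-stage idea follows; for brevity it clusters genes with plain k-means on their flattened expression images rather than with a domain-specific similarity such as gradient similarity, and the array names and shapes are illustrative assumptions.

# Sketch: cluster genes, then let the gene clusters vote on a pixel clustering.
# Assumes `expr` is an (n_genes, n_pixels) array of flatmapped expression.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expr = rng.random((500, 2000))                     # placeholder data

gene_clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(expr)

# Average expression image ("prototype") for each gene cluster.
prototypes = np.stack([expr[gene_clusters == c].mean(axis=0) for c in range(20)])

# Assign each pixel to the gene cluster whose prototype is strongest there,
# after z-scoring each prototype so clusters are comparable.
z = (prototypes - prototypes.mean(axis=1, keepdims=True)) / prototypes.std(axis=1, keepdims=True)
pixel_labels = z.argmax(axis=0)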
Related work
Some researchers have attempted to parcellate cortex on the basis of data other than gene expression. For example, [15], [2], [16], and [1] associate spots on the cortex with the radial profile⁵ of response to some stain ([10] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.

⁵ A radial profile is a profile along a line perpendicular to the cortical surface.
[18] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified non-negative matrix factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset (see Preliminary Studies).
AGEA [12] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE [19] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete-linkage clustering.
[4] clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
[8] applies their technique for finding combinations of marker genes for the purpose of clustering genes around a “seed gene”.
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3: apply the methods developed to the cerebral cortex
[Figure 1: Top row: genes Nfic and A930001M12Rik are the most correlated with area SS (somatosensory cortex). Bottom row: genes C130038G02Rik and Cacna1i are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.]

Background
The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake⁶.

⁶ Outside of isocortex, the number of layers varies.
It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson [17] on the one hand, and Paxinos and Franklin [14] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
The Allen Mouse Brain Atlas dataset
The Allen Mouse Brain Atlas (ABA) data [11] were produced by performing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within a slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
Mus musculus is thought to contain about 22,000 protein-coding genes [20]. The ABA contains data on about 20,000 genes in sagittal sections, of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA⁷. An automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67 × 41 × 58 = 159,326 voxels, of which 51,533 are in the brain [12]. For each voxel and each gene, the expression energy [11] within that voxel is made available.

⁷ The sagittal data do not cover the entire cortex, and also have greater registration error [12]. Genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern” [12].
The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.
Related work
[12] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas⁸.

⁸ In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.
In summary, for all three aims: (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for, and reproduce the layout of, cortical areas), which will provide a solid basis for comparing different methods.
Significance
[Figure 2: Gene Pitx2 is selectively underexpressed in area SS.]

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.
The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain.
The approach: Preliminary Studies
Format conversion between SEV, MATLAB, NIFTI
We have created software to (politely) download all of the SEV files⁹ from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.

⁹ SEV is a sparse format for spatial data. It is the format in which the ABA data are made available.
Flatmap of cortex
We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret [5], we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 46 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel. And for each gene, there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
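The normalization step is ordinary per-gene z-scoring over surface pixels; a brief Python sketch (the array name and shape are illustrative assumptions, since our pipeline actually works with MATLAB matrices):

# Per-gene z-scoring over surface pixels; `expr` is (n_genes, n_pixels).
import numpy as np
expr = np.random.default_rng(0).random((4000, 2000))   # placeholder data
expr_norm = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)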
To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
Feature selection and scoring methods
[Figure 3: The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Ptk7, and Aph1a.]

Underexpression of a gene can serve as a marker
Underexpression of a gene can sometimes serve as a marker. See, for example, Figure 2.
Correlation
Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
We calculated the correlation between each gene and each cortical area. The top row of Figure 1 shows the genes most correlated with area SS.
Conditional entropy
For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, “Is this surface pixel a member of the target area?”. Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
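A minimal sketch of this score for one pair of thresholded genes is shown below; the boolean-array representation and the function name are illustrative assumptions.

# H(target | gene1, gene2) for boolean masks over surface pixels (sketch).
import numpy as np

def conditional_entropy(target, gene1, gene2):
    """All arguments are boolean arrays of the same length (one entry per pixel)."""
    h = 0.0
    n = len(target)
    for v1 in (False, True):
        for v2 in (False, True):
            cell = (gene1 == v1) & (gene2 == v2)
            p_cell = cell.sum() / n
            if p_cell == 0:
                continue
            p_target = target[cell].mean()   # P(in area | this gene-value cell)
            for p in (p_target, 1 - p_target):
                if p > 0:
                    h -= p_cell * p * np.log2(p)
    return h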
Gradient similarity
We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene has a pattern of expression whose boundary is similar in shape to the boundary of the target region. We call this scoring method “gradient similarity”. The formula is:
\sum_{\mathrm{pixel} \in \mathrm{pixels}} \cos\!\left(\lvert \angle\nabla_1 - \angle\nabla_2 \rvert\right) \cdot \frac{\lvert\nabla_1\rvert + \lvert\nabla_2\rvert}{2} \cdot \frac{\mathrm{pixel\_value}_1 + \mathrm{pixel\_value}_2}{2}
where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_value_i is the value of the current pixel in image i.
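A direct transcription of this formula into Python/numpy is sketched below; the treatment of image borders (numpy's default finite differences) is a simplifying assumption on our part.

# Gradient similarity between two 2-D expression images (sketch of the formula above).
import numpy as np

def gradient_similarity(img1, img2):
    gy1, gx1 = np.gradient(img1)          # per-pixel gradients of each image
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    per_pixel = (np.cos(np.abs(angle1 - angle2))   # agreement of gradient direction
                 * (mag1 + mag2) / 2.0             # average gradient magnitude
                 * (img1 + img2) / 2.0)            # average pixel value
    return per_pixel.sum()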
The intuition is that we want to see whether the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).

Gradient similarity provides information complementary to correlation
[Figure 4: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).]

To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Figure 3. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression pattern does not have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which do not express over the entire area.
Areas which can be identified by single genes
Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet cross-verified these genes in other atlases.
In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of the cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), and AUD (auditory).
These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevance of our new scoring method, gradient similarity.
Combinations of multiple genes are useful and necessary for some areas
In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, the gene wwc1 is the single gene that best predicts whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex; MO is found only on the dorsal surface. The gene mtif2 is shown in the upper right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary, and it does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.
This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
Multivariate supervised learning
Forward stepwise logistic regression
Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction, using a stepwise wrapper. Some of the single genes found are shown in various figures throughout this document, and Figure 4 shows a combination of genes which was found.
SVM on all genes at once
In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved a classification accuracy of about 81%¹⁰. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once is not as practically useful as a classifier that uses only a few genes.

¹⁰ 5-fold cross-validation.
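For reference, the sketch below shows this kind of all-genes baseline with 5-fold cross-validation; the linear kernel and the placeholder data are illustrative assumptions, and we are not asserting these were the exact settings used in our pilot run.

# All-genes SVM baseline with 5-fold cross-validation (illustrative sketch).
# `X` is (n_pixels, n_genes); `y` is the areal label of each surface pixel.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((2000, 500))              # placeholder data
y = rng.integers(0, 46, size=2000)       # placeholder labels for 46 areas

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("mean 5-fold accuracy:", scores.mean())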
Data-driven redrawing of the cortical map
[Figure 5: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), and COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2, Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, and Ets1.]

We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, and Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and Landmark Isomap are shown in the first, second, and third rows of Figure 6.
After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and Landmark Isomap are shown in the last row of Figure 6. For comparison, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
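An outline of this reduce-then-cluster pipeline is sketched below using scikit-learn's PCA and NMF followed by k-means; the dimension and cluster counts echo the figure, but the data and settings are illustrative assumptions rather than the exact runs reported here.

# Reduce-then-cluster pipeline (illustrative sketch).
# `expr` is (n_pixels, n_genes); nonnegative values are assumed for NMF.
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

expr = np.random.default_rng(0).random((2000, 500))   # placeholder data

for name, reducer in [("PCA", PCA(n_components=50)),
                      ("NNMF", NMF(n_components=6, max_iter=500))]:
    reduced = reducer.fit_transform(expr)
    labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(reduced)
    print(name, "cluster sizes:", np.bincount(labels))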
Many areas are captured by clusters of genes
We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
The approach: what we plan to do
Flatmap cortex and segment cortical layers
There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret [5]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
We have not yet made use of radial profiles. While the radial profiles may be used “raw”, for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
Develop algorithms that find genetic markers for anatomical regions
Scoring measures and feature selection
We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
[Figure 6: First row: the first six reduced dimensions, using PCA. Second row: the first six reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using Landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: Landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found but only 6 are displayed; in the last row, 50 dimensions were used for PCA, 6 for NNMF, and 7 for Landmark Isomap.]
Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by the previous methods mentioned in Aim 1 Related Work.
Some cortical areas have no single marker gene but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
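As an illustration of the kind of multivariate extension we have in mind, the sketch below computes the textbook Hotelling's T-square statistic for a small gene set, comparing pixels inside versus outside a target area; the array layout is an illustrative assumption, not part of our finished procedure.

# Hotelling's T-square for a set of genes, inside vs. outside a target area (sketch).
# `X_in` and `X_out` are (n_pixels, n_genes) expression matrices for pixels
# inside and outside the area; at least two genes are assumed.
import numpy as np

def hotelling_t2(X_in, X_out):
    n1, n2 = len(X_in), len(X_out)
    diff = X_in.mean(axis=0) - X_out.mean(axis=0)
    # Pooled covariance of the two groups of pixels.
    pooled = ((n1 - 1) * np.cov(X_in, rowvar=False) +
              (n2 - 1) * np.cov(X_out, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)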
We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over “vanilla” classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single-gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs).
bshanks@96 | 592 Since errors of displacement and of shape may cause genes and target areas to match less than they should,
|
bshanks@96 | 593 we will consider the robustness of feature selection methods in the presence of error. Some of these methods,
|
such as the Hough transform, are designed to be robust to such errors, but many are not. We will
|
bshanks@96 | 595 consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a
|
bshanks@96 | 596 scoring method on small displacements and distortions of the data adds robustness to registration error at the
|
bshanks@96 | 597 expense of computation time.
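The displacement wrapper could look roughly like the following sketch, which re-scores a gene's (hypothetical) 2-D flatmap image under small shifts and keeps the worst score; score_fn, gene_img, and region_mask are placeholder names, and the toy Jaccard-style score is included only for illustration.

```python
# Sketch: robustness wrapper that penalizes genes which only match the
# target area under perfect registration.
import numpy as np
from scipy.ndimage import shift

def jittered_score(score_fn, gene_img, region_mask, max_shift=2):
    scores = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            displaced = shift(gene_img, (dy, dx), order=1, mode="nearest")
            scores.append(score_fn(displaced, region_mask))
    return min(scores)   # worst case over small registration errors

def toy_jaccard(img, mask, thresh=0.5):
    expr = img > thresh
    return (expr & mask).sum() / max((expr | mask).sum(), 1)
```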
|
An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly11, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
_________________________________________
11Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1 as well, particularly discriminative dimensionality reduction.
|
bshanks@96 | 608 A future publication on the method that we develop in Aim 1 will review the scoring measures and quantita-
|
tively compare their performance in order to provide a foundation for future research on methods of marker gene
|
bshanks@96 | 610 finding. We will measure the robustness of the scoring measures as well as their absolute performance on our
|
bshanks@96 | 611 dataset.
|
bshanks@96 | 612 Classifiers We will explore and compare different classifiers. As noted above, this activity is not separate
|
bshanks@96 | 613 from the previous one, because some supervised learning algorithms include feature selection, and any clas-
|
bshanks@96 | 614 sifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic
|
bshanks@106 | 615 regression (including spatial models[13]), decision trees12, sparse SVMs, generative mixture models (including
|
naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic
|
bshanks@96 | 617 algorithms, and artificial neural networks.
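As a sketch of how such a comparison might be organized (again over the hypothetical X and in_region arrays), the snippet below cross-validates a few of the classifier families listed above on a single target area; it is meant as scaffolding, not as our final evaluation protocol.

```python
# Sketch: cross-validated comparison of several candidate classifiers
# on one target area (hypothetical X, in_region as before).
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "naive Bayes": GaussianNB(),
    "k-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
    "sparse linear SVM": LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000),
}

def compare_classifiers(X, in_region):
    return {name: cross_val_score(clf, X, in_region, cv=5).mean()
            for name, clf in candidates.items()}
```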
|
bshanks@30 | 618 Develop algorithms to suggest a division of a structure into anatomical parts
|
Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity. The boundaries of the region that best matches each prototype are overlaid.

Dimensionality reduction on gene expression profiles We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries.
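A minimal sketch of this step, assuming a hypothetical nonnegative matrix X of flatmapped expression energies (pixels x genes): PCA, NMF, and ICA each reduce the roughly 4000-dimensional profiles to a handful of features, and running the same decompositions on the transpose would instead reduce over pixels (see the next paragraph).

```python
# Sketch: reducing per-pixel gene expression profiles to a few features.
# X is a hypothetical nonnegative (n_pixels x n_genes) matrix.
from sklearn.decomposition import PCA, NMF, FastICA

def reduce_profiles(X, n_components=7):
    reduced = {
        "PCA": PCA(n_components=n_components).fit_transform(X),
        "NMF": NMF(n_components=n_components, init="nndsvd",
                   max_iter=500).fit_transform(X),
        "ICA": FastICA(n_components=n_components).fit_transform(X),
    }
    # Each entry is (n_pixels x n_components); applying the same
    # decompositions to X.T would reduce over pixels instead of genes.
    return reduced
```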
|
Dimensionality reduction on pixels Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied to the pixels. It is possible that the features generated in
|
bshanks@98 | 639 this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.
|
bshanks@98 | 640 Clustering and segmentation on pixels We will explore clustering and segmentation algorithms in order to
|
bshanks@106 | 641 segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving[7], recursive division
|
bshanks@98 | 642 clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transforma-
|
bshanks@98 | 643 tions, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with
|
bshanks@98 | 644 various linkage functions. These methods can be combined with dimensionality reduction.
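For example, a pixel-level segmentation could be sketched as k-means over reduced per-pixel features, optionally with down-weighted pixel coordinates appended to encourage spatial contiguity; reduced and coords are hypothetical arrays, and 19 regions is chosen only to mirror the reference parcellation.

```python
# Sketch: clustering flatmap pixels into candidate regions.
# reduced: (n_pixels x n_features) reduced expression features (hypothetical).
# coords:  (n_pixels x 2) flatmap coordinates of each pixel (hypothetical).
import numpy as np
from sklearn.cluster import KMeans

def kmeans_regions(reduced, n_regions=19):
    return KMeans(n_clusters=n_regions, n_init=10).fit_predict(reduced)

def spatially_weighted_regions(reduced, coords, n_regions=19, w=0.1):
    # Appending (down-weighted) coordinates is one simple way to bias
    # the clustering toward spatially contiguous regions.
    feats = np.hstack([reduced, w * coords])
    return KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
```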
|
bshanks@98 | 645 Clustering on genes We have already shown that the procedure of clustering genes according to gradient
|
bshanks@98 | 646 similarity, and then creating an averaged prototype of each cluster’s expression pattern, yields some spatial
|
bshanks@98 | 647 patterns which match cortical areas. We will further explore the clustering of genes.
|
bshanks@96 | 648 In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful
|
bshanks@96 | 649 as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then
|
bshanks@96 | 650 replacing their expression levels with a single average expression level, thereby removing some redundancy from
|
bshanks@96 | 651 the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality
|
bshanks@96 | 652 reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would
|
bshanks@96 | 653 help or hurt the ultimate goal of identifying interesting spatial regions.
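A rough sketch of this gene-clustering step, assuming a hypothetical array imgs of flatmapped expression images (genes x height x width): genes are clustered by the similarity of their gradient fields (here approximated by cosine similarity of the stacked gradients, a stand-in for the gradient similarity measure defined earlier in the proposal), and each cluster is averaged into a prototype pattern.

```python
# Sketch: cluster genes by gradient-field similarity, then average each
# cluster into a prototype expression pattern.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def gradient_features(imgs):
    gy, gx = np.gradient(imgs, axis=(1, 2))
    return np.hstack([gy.reshape(len(imgs), -1), gx.reshape(len(imgs), -1)])

def gene_cluster_prototypes(imgs, n_clusters=20):
    feats = gradient_features(imgs)
    # Cosine affinity compares the shape of the gradient field, not its scale.
    labels = AgglomerativeClustering(n_clusters=n_clusters, metric="cosine",
                                     linkage="average").fit_predict(feats)
    prototypes = np.stack([imgs[labels == k].mean(axis=0)
                           for k in range(n_clusters)])
    return labels, prototypes
```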
|
Co-clustering There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example, IRM[9]. These are called co-clustering or biclustering algorithms.
_________________________________________
12Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.
|
Radial profiles We will explore the use of the radial profile of gene expression under each pixel.
|
Compare different methods In order to determine which method is best for genomic anatomy, we will compare, for each candidate method, the cortical map found by unsupervised learning to a cortical map derived from the Allen
|
bshanks@98 | 664 Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings
|
bshanks@98 | 665 are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
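A sketch of such a comparison, assuming hypothetical per-pixel label arrays ours and atlas (nonnegative integers): adjusted Rand and Fowlkes-Mallows scores come directly from scikit-learn, and variation of information is computed from entropies and mutual information.

```python
# Sketch: comparing a candidate parcellation to the reference-atlas labels.
# ours, atlas: hypothetical 1-D arrays of nonnegative integer labels per pixel.
import numpy as np
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score, mutual_info_score

def variation_of_information(a, b):
    # VI(A, B) = H(A) + H(B) - 2 I(A; B), in nats.
    def entropy(labels):
        p = np.bincount(labels) / len(labels)
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return entropy(a) + entropy(b) - 2 * mutual_info_score(a, b)

def compare_maps(ours, atlas):
    return {
        "adjusted Rand": adjusted_rand_score(atlas, ours),
        "Fowlkes-Mallows": fowlkes_mallows_score(atlas, ours),
        "variation of information": variation_of_information(atlas, ours),
    }
```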
|
bshanks@96 | 666 Discriminative dimensionality reduction In addition to using a purely data-driven approach to identify
|
bshanks@96 | 667 spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number
|
bshanks@96 | 668 of features, even if those features are chosen by using knowledge of the regions. For example, linear discriminant
|
bshanks@96 | 669 analysis could be used as a dimensionality reduction technique in order to identify a few features which are the
|
bshanks@96 | 670 best linear summary of gene expression profiles for the purpose of discriminating between regions. This reduced
|
bshanks@96 | 671 feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the
|
bshanks@96 | 672 reference atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
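A minimal sketch of this idea, with hypothetical X (pixels x genes) and atlas_labels arrays: LDA projects the expression profiles onto a few discriminative axes chosen using the atlas labels, and the pixels are then re-clustered, unsupervised, in that reduced space.

```python
# Sketch: discriminative dimensionality reduction via LDA, followed by
# unsupervised re-clustering of pixels in the reduced space.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def lda_then_cluster(X, atlas_labels, n_components=10, n_regions=19):
    lda = LinearDiscriminantAnalysis(n_components=n_components)
    Z = lda.fit_transform(X, atlas_labels)   # supervised projection
    return KMeans(n_clusters=n_regions, n_init=10).fit_predict(Z)
```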
|
bshanks@96 | 673 Apply the new methods to the cortex
|
bshanks@96 | 674 Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify
|
bshanks@96 | 675 that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
|
Because in most cases the ABA coronal dataset contains only one ISH experiment per gene, an unrelated combination of genes may appear to identify an area purely by coincidence. We will validate our marker genes in two ways to guard against this. First, we will confirm that putative combinations of marker genes
|
bshanks@96 | 679 express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene
|
bshanks@106 | 680 expression datasets such as EMAGE, GeneAtlas, and GENSAT[6].
|
bshanks@99 | 681 Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify
|
bshanks@96 | 682 and explain how the statistical structure in the gene expression data led to any unexpected or interesting features
|
bshanks@96 | 683 of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of
|
bshanks@96 | 684 areas, which are discovered.
|
|
bshanks@101 | 686 Timeline and milestones
|
bshanks@90 | 687 Finding marker genes
|
bshanks@96 | 688 September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers
|
November 2009 (milestone): Have completed construction of a flatmapped cortical dataset with information
|
bshanks@96 | 690 for each layer
|
bshanks@101 | 691 October 2009-April 2010: Develop scoring and supervised learning methods.
|
bshanks@96 | 692 January 2010 (milestone): Submit a publication on single marker genes for cortical areas
|
bshanks@99 | 693 February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Extend tech-
|
bshanks@99 | 694 niques for robustness. Compare the performance of techniques. Validate marker genes. Prepare software
|
bshanks@99 | 695 toolbox for Aim 1.
|
bshanks@96 | 696 June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release toolbox.
|
bshanks@96 | 697 July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a
|
bshanks@96 | 698 small number of marker genes that can, in combination, define most of the areas at once
|
bshanks@101 | 699 Revealing new ways to parcellate a structure into regions
|
bshanks@101 | 700 June 2010-March 2011: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt
|
bshanks@101 | 701 clustering algorithms to use radial profile information. Compare the performance of techniques.
|
bshanks@96 | 702 March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release toolbox.
|
bshanks@101 | 703 February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex,
|
bshanks@101 | 704 interpret the results. Prepare software toolbox for Aim 2.
|
bshanks@96 | 705 May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in
|
bshanks@96 | 706 Aim 2
|
bshanks@96 | 707 May-August 2011: Revisit Aim 1 to see if what was learned during Aim 2 can improve the methods for Aim 1.
|
bshanks@99 | 708 Possibly submit another paper.
|
bshanks@33 | 709 Bibliography & References Cited
|
bshanks@96 | 710 [1]Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking
|
Approach to Parcellation of the Cerebral Cortex, volume 3749 of Lecture Notes in Computer
|
bshanks@96 | 712 Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
|
bshanks@96 | 713 [2]J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification
|
bshanks@96 | 714 of cortical areas. NeuroImage, 21(1):15–26, 2004.
|
bshanks@106 | 715 [3]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe
|
bshanks@96 | 716 Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome.
|
bshanks@96 | 717 PLoS Comput Biol, 1(4):e41, 2005.
|
bshanks@106 | 718 [4]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy,
|
bshanks@96 | 719 Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of
|
bshanks@96 | 720 expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August
|
bshanks@96 | 721 2007.
|
bshanks@106 | 722 [5]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite
|
bshanks@96 | 723 for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association:
|
bshanks@96 | 724 JAMIA, 8(5):443–59, 2001. PMID: 11522765.
|
bshanks@106 | 725 [6]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Scham-
|
bshanks@96 | 726 bra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A
|
bshanks@96 | 727 gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature,
|
bshanks@96 | 728 425(6961):917–925, October 2003.
|
bshanks@106 | 729 [7]Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan,
|
bshanks@96 | 730 David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying distinct sets of genes with
|
bshanks@96 | 731 similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
|
bshanks@106 | 732 [8]Jano Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Ex-
|
bshanks@96 | 733 pression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361.
|
bshanks@96 | 734 Springer Berlin Heidelberg, 2008.
|
bshanks@106 | 735 [9]C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts with an infinite
|
bshanks@96 | 736 relational model. In AAAI, 2006.
|
[10]F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical
|
bshanks@96 | 738 fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
|
bshanks@106 | 739 [11]Ed S. Lein, Michael J. Hawrylycz, Nancy Ao, Mikael Ayres, Amy Bensinger, Amy Bernard, Andrew F. Boe,
|
bshanks@106 | 740 Mark S. Boguski, Kevin S. Brockway, Emi J. Byrnes, Lin Chen, Li Chen, Tsuey-Ming Chen, Mei Chi Chin,
|
bshanks@106 | 741 Jimmy Chong, Brian E. Crook, Aneta Czaplinska, Chinh N. Dang, Suvro Datta, Nick R. Dee, Aimee L.
|
bshanks@106 | 742 Desaki, Tsega Desta, Ellen Diep, Tim A. Dolbeare, Matthew J. Donelan, Hong-Wei Dong, Jennifer G.
|
bshanks@106 | 743 Dougherty, Ben J. Duncan, Amanda J. Ebbert, Gregor Eichele, Lili K. Estin, Casey Faber, Benjamin A.
|
bshanks@106 | 744 Facer, Rick Fields, Shanna R. Fischer, Tim P. Fliss, Cliff Frensley, Sabrina N. Gates, Katie J. Glattfelder,
|
bshanks@106 | 745 Kevin R. Halverson, Matthew R. Hart, John G. Hohmann, Maureen P. Howell, Darren P. Jeung, Rebecca A.
|
bshanks@106 | 746 Johnson, Patrick T. Karr, Reena Kawal, Jolene M. Kidney, Rachel H. Knapik, Chihchau L. Kuan, James H.
|
bshanks@106 | 747 Lake, Annabel R. Laramee, Kirk D. Larsen, Christopher Lau, Tracy A. Lemon, Agnes J. Liang, Ying Liu,
|
bshanks@106 | 748 Lon T. Luong, Jesse Michaels, Judith J. Morgan, Rebecca J. Morgan, Marty T. Mortrud, Nerick F. Mosqueda,
|
bshanks@106 | 749 Lydia L. Ng, Randy Ng, Geralyn J. Orta, Caroline C. Overly, Tu H. Pak, Sheana E. Parry, Sayan D. Pathak,
|
bshanks@106 | 750 Owen C. Pearson, Ralph B. Puchalski, Zackery L. Riley, Hannah R. Rockett, Stephen A. Rowland, Joshua J.
|
bshanks@106 | 751 Royall, Marcos J. Ruiz, Nadia R. Sarno, Katherine Schaffnit, Nadiya V. Shapovalova, Taz Sivisay, Clif-
|
bshanks@106 | 752 ford R. Slaughterbeck, Simon C. Smith, Kimberly A. Smith, Bryan I. Smith, Andy J. Sodt, Nick N. Stewart,
|
bshanks@106 | 753 Kenda-Ruth Stumpf, Susan M. Sunkin, Madhavi Sutram, Angelene Tam, Carey D. Teemer, Christina Thaller,
|
bshanks@106 | 754 Carol L. Thompson, Lee R. Varnam, Axel Visel, Ray M. Whitlock, Paul E. Wohnoutka, Crissa K. Wolkey,
|
bshanks@106 | 755 Victoria Y. Wong, Matthew Wood, Murat B. Yaylaoglu, Rob C. Young, Brian L. Youngstrom, Xu Feng Yuan,
|
bshanks@106 | 756 Bin Zhang, Theresa A. Zwingman, and Allan R. Jones. Genome-wide atlas of gene expression in the adult
|
bshanks@106 | 757 mouse brain. Nature, 445(7124):168–176, 2007.
|
bshanks@106 | 758 [12]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Su-
|
bshanks@96 | 759 san M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann,
|
bshanks@96 | 760 David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas
|
bshanks@96 | 761 of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
|
bshanks@106 | 762 [13]Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Com-
|
bshanks@96 | 763 putational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
|
bshanks@106 | 764 [14]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2
|
bshanks@96 | 765 edition, July 2001.
|
bshanks@106 | 766 [15]A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and
|
bshanks@96 | 767 K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Em-
|
bshanks@96 | 768 bryology, 210(5):373–386, December 2005.
|
[16]Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical
|
bshanks@96 | 770 analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
|
bshanks@106 | 771 [17]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
|
bshanks@106 | 772 [18]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T.
|
bshanks@96 | 773 Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H.
|
bshanks@96 | 774 Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the
|
bshanks@96 | 775 hippocampus. Neuron, 60(6):1010–1021, December 2008.
|
bshanks@106 | 776 [19]Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton,
|
bshanks@96 | 777 Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen.
|
EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl 1):D860–
|
bshanks@96 | 779 865, 2008.
|
bshanks@106 | 780 [20]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa
|
bshanks@96 | 781 Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood,
|
bshanks@96 | 782 Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer
|
bshanks@96 | 783 Bork, Marc Botcherby, Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John
|
bshanks@96 | 784 Burton, Jonathan Butler, Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T
|
bshanks@96 | 785 Chinwalla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R
|
bshanks@96 | 786 Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy
|
bshanks@96 | 787 Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens,
|
bshanks@96 | 788 Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes,
|
bshanks@96 | 789 Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N
|
bshanks@96 | 790 Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo
|
bshanks@96 | 791 Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green,
|
Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki,
|
bshanks@96 | 793 LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard,
|
bshanks@96 | 794 Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy,
|
bshanks@96 | 795 Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn
|
bshanks@96 | 796 Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David
|
bshanks@96 | 797 Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd,
|
bshanks@96 | 798 Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer,
|
bshanks@96 | 799 Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim,
|
bshanks@96 | 800 Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery,
|
bshanks@96 | 801 Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson,
|
bshanks@96 | 802 Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen
|
Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner,
|
bshanks@96 | 804 Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail,
|
bshanks@96 | 805 Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos,
|
Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven
|
bshanks@96 | 807 Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer,
|
bshanks@96 | 808 Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles
|
bshanks@96 | 809 Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp,
|
bshanks@96 | 810 Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie
|
bshanks@96 | 811 Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond
|
bshanks@96 | 812 Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Win-
|
bshanks@96 | 813 ter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and
|
bshanks@96 | 814 Eric S Lander. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–
|
bshanks@96 | 815 62, December 2002. PMID: 12466850.
|
bshanks@33 | 816
|
bshanks@33 | 817
|