Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions

(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. Use this dataset to validate the methods developed in (1) and (2).
|
Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our method could be applied to genome-wide sequencing data derived from sets of tissues and disease states.
|
In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profile define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.
|
All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
|
The challenge topic

This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
|
The Challenge and Potential impact

Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for thinking about the task. Next we will discuss related work, and then summarize why our strategy is different from what has been done before. After we have discussed all three aims, we will describe the potential impact.
|
Aim 1: Given a map of regions, find genes that mark the regions

Machine learning terminology: classifiers

The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combined expression levels of these genes and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
|
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
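To make the terminology concrete, here is a minimal sketch in Python with scikit-learn of the data layout we have in mind; the array names, sizes, and values are illustrative placeholders, not our actual data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data:
#   X[i, j] = expression level of gene j in voxel i   (instances x features)
#   y[i]    = anatomical region of voxel i            (class labels)
rng = np.random.default_rng(0)
X = rng.random((1000, 50))           # 1000 voxels, 50 genes (placeholder values)
y = rng.integers(0, 4, size=1000)    # 4 regions (placeholder labels)

# A classifier is a function from an instance (one voxel's expression levels)
# to a label (that voxel's region); here it is learned from the training data.
classifier = LogisticRegression(max_iter=1000).fit(X, y)
predicted_region = classifier.predict(X[:1, :])   # classify a single voxel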
|
Each gene expression level is called a feature, and the selection of which genes to include (strictly speaking, the features are gene expression levels, but we’ll call them genes) is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
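To illustrate the kind of stepwise procedure we mean, here is a minimal sketch of greedy forward selection in Python; score_gene_set is a placeholder for any set-level scoring measure discussed in this proposal, and a full stepwise procedure would also consider removing previously selected genes.

import numpy as np

def greedy_forward_selection(X, target_mask, score_gene_set, max_genes=3):
    """Greedily grow a small set of genes that maximizes a set-level score.

    X              -- array of shape (n_pixels, n_genes) of expression levels
    target_mask    -- boolean array of shape (n_pixels,) marking the target region
    score_gene_set -- callable(X[:, genes], target_mask) -> float (higher is better)
    """
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_genes:
        # Score every candidate addition and keep the best one.
        best_gene = max(remaining,
                        key=lambda g: score_gene_set(X[:, selected + [g]], target_mask))
        selected.append(best_gene)
        remaining.remove(best_gene)
    return selected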
|
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
|
Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are “wrong” in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.
|
Our strategy for Aim 1

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
|
Principle 1: Combinatorial gene expression

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).
|
Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the levels of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
|
Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
|
Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
|
Related work

There is a substantial body of work on the analysis of gene expression data, but most of it concerns gene expression data which are not fundamentally spatial (by “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates, not just data which have only a few different locations or which are indexed by anatomical label).

As noted above, there has been much work on both supervised learning and clustering, and there are many available algorithms for each. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.
|
We now turn to efforts to find marker genes from spatial gene expression data using automated methods. GeneAtlas [5] and EMAGE [26] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.

[15] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. Correlation: the user selects a seed voxel and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. Clusters: this component will be described later. [6] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. [15] and [6] differ from our Aim 1 in at least three ways. First, [15] and [6] find only single genes, whereas we will also look for combinations of genes. Second, [15] and [6] can only use overexpression as a marker, whereas we will also search for underexpression. Third, [15] and [6] use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.
|
[10] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.

In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
|
Aim 2: From gene expression data, discover a map of regions

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.

The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
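As a minimal illustration of hierarchical clustering of voxels by expression profile, here is a sketch in Python with SciPy; the array name, sizes, and the choice of correlation distance with average linkage are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: one row of gene expression levels per voxel.
rng = np.random.default_rng(0)
expression = rng.random((500, 40))     # 500 voxels, 40 genes (placeholder values)

# Build a hierarchical tree of clusters; correlation distance groups voxels
# whose expression profiles rise and fall together.
tree = linkage(expression, method='average', metric='correlation')

# Cut the tree to obtain one flat partition of the voxels into candidate regions.
regions = fcluster(tree, t=8, criterion='maxclust')   # at most 8 clusters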
|
Similarity scores

A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
|
Spatially contiguous clusters; image segmentation

We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. In our task, there are thousands of color channels (one for each gene), rather than just three (there are imaging tasks which use more than three colors, for example the multispectral and hyperspectral imaging often used to process satellite imagery). A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
|
Dimensionality reduction

In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.

Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features (first, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less; second, it is thought that some clustering algorithms may give better results on reduced data). There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
|
Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters (a sketch of this procedure follows). In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
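A minimal sketch of this gene-cluster-first strategy (Python; the similarity measure, the number of gene clusters, the threshold, and the data are placeholders for whatever choices the final method makes):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: expression[i, j] = expression of gene j at surface pixel i.
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))   # 2000 pixels, 300 genes (placeholder values)

# Step 1: cluster genes whose spatial expression patterns are similar.
gene_tree = linkage(expression.T, method='average', metric='correlation')
gene_cluster = fcluster(gene_tree, t=20, criterion='maxclust')

# Step 2: each gene cluster "votes" for a region; its average expression image,
# thresholded, marks the pixels that the cluster picks out.
candidate_regions = []
for c in np.unique(gene_cluster):
    mean_image = expression[:, gene_cluster == c].mean(axis=1)
    candidate_regions.append(mean_image > mean_image.mean())   # placeholder threshold

# The most frequently recurring candidate regions would then be kept as final clusters.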
|
Related work

Some researchers have attempted to parcellate cortex on the basis of data other than gene expression. For example, [18], [2], [19], and [1] associate spots on the cortex with the radial profile (a profile along a line perpendicular to the cortical surface) of response to some stain ([12] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.

[23] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified non-negative matrix factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset (see Preliminary Studies).

AGEA [15] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE [26] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete-linkage clustering.

[6] clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

[10] applies their technique for finding combinations of marker genes for the purpose of clustering genes around a “seed gene”.
|
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
|
Aim 3: apply the methods developed to the cerebral cortex

Figure 1: Top row: genes Nfic and A930001M12Rik are the most correlated with area SS (somatosensory cortex). Bottom row: genes C130038G02Rik and Cacna1i are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.

Background
|
The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake (outside of isocortex, the number of layers varies).

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
|
Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson [22] on the one hand, and Paxinos and Franklin [17] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
|
The Allen Mouse Brain Atlas dataset

The Allen Mouse Brain Atlas (ABA) data were produced by doing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within a slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.

An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain [15].

Mus musculus is thought to contain about 22,000 protein-coding genes [28]. The ABA contains data on about 20,000 genes in sagittal sections, of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA (the sagittal data do not cover the entire cortex, and also have greater registration error [15]; genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern” [15]).
|
The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.
|
Related work

[15] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA’s Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA’s hierarchical clustering does not produce clusters corresponding to the cortical areas. In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area; therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.

In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.

Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
|
Significance

Figure 2: Gene Pitx2 is selectively underexpressed in area SS.

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can identify many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
|
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today’s cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
|
While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain.
|
The approach: Preliminary Studies

Format conversion between SEV, MATLAB, NIFTI

We have created software to (politely) download all of the SEV files (SEV is a sparse format for spatial data; it is the format in which the ABA data are made available) from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s file formats.
|
Flatmap of cortex

We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret [7], we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
|
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel, and for each gene, there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene’s mean expression level (over all surface pixels) and dividing by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
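For concreteness, the per-gene normalization described above amounts to the following minimal sketch (Python; the array name, sizes, and values are placeholders for the flatmapped dataset).

import numpy as np

# Hypothetical data: expression[i, j] = average expression of gene j under surface pixel i.
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))   # 2000 surface pixels, 300 genes (placeholder values)

# Normalize each gene across all surface pixels: subtract its mean, divide by its standard deviation.
normalized = (expression - expression.mean(axis=0)) / expression.std(axis=0)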
|
To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
|
Feature selection and scoring methods

Figure 3: The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Ptk7, and Aph1a.

Underexpression of a gene can serve as a marker

Underexpression of a gene can sometimes serve as a marker. See, for example, Figure 2.
|
Correlation

Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.

We calculated the correlation between each gene and each cortical area. The top row of Figure 1 shows the genes most correlated with area SS.
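A minimal sketch of this pointwise score (Python; expression and area_mask are hypothetical arrays in the flatmapped layout described above):

import numpy as np

def correlation_scores(expression, area_mask):
    """Pearson correlation between each gene's expression image and a 0/1 area mask."""
    mask = area_mask.astype(float)
    return np.array([np.corrcoef(expression[:, j], mask)[0, 1]
                     for j in range(expression.shape[1])])

# Hypothetical usage.
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))       # pixels x genes (placeholder values)
area_mask = rng.random(2000) < 0.1         # boolean mask of the target area
scores = correlation_scores(expression, area_mask)
top_genes = np.argsort(scores)[::-1][:3]   # indices of the most correlated genes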
|
Conditional entropy

For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area’s boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.

This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, “Is this surface pixel a member of the target area?”. Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
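A minimal sketch of the quantity being minimized, the conditional entropy of the area mask given a pair of thresholded gene masks (Python; the thresholds and data are illustrative assumptions):

import numpy as np

def conditional_entropy(area_mask, gene_mask_1, gene_mask_2):
    """H(area | gene1, gene2) in bits, over boolean masks of equal length."""
    n = len(area_mask)
    h = 0.0
    for g1 in (False, True):
        for g2 in (False, True):
            cell = (gene_mask_1 == g1) & (gene_mask_2 == g2)
            p_cell = cell.sum() / n
            if p_cell == 0:
                continue
            p_area = area_mask[cell].mean()            # P(in area | gene1=g1, gene2=g2)
            for p in (p_area, 1 - p_area):
                if p > 0:
                    h -= p_cell * p * np.log2(p)
    return h

# Hypothetical usage: threshold two genes at their medians and score the pair (lower is better).
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))
area_mask = rng.random(2000) < 0.1
g1 = expression[:, 10] > np.median(expression[:, 10])
g2 = expression[:, 20] > np.median(expression[:, 20])
score = conditional_entropy(area_mask, g1, g2)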
|
Gradient similarity

We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method “gradient similarity”. The formula is:

\sum_{\mathrm{pixel} \in \mathrm{pixels}} \cos\left(\left|\angle\nabla_1 - \angle\nabla_2\right|\right) \cdot \frac{|\nabla_1| + |\nabla_2|}{2} \cdot \frac{\mathrm{pixel\_value}_1 + \mathrm{pixel\_value}_2}{2}

where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_value_i is the value of the current pixel in image i.

The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
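Here is a minimal sketch of gradient similarity as defined above (Python with NumPy); it assumes the two inputs are same-shaped 2-D images, for example a gene’s flatmapped expression and the target area’s mask, and the image contents are placeholders.

import numpy as np

def gradient_similarity(img1, img2):
    """Sum over pixels of cos(|angle difference|) * mean gradient magnitude * mean pixel value."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    return np.sum(np.cos(np.abs(angle1 - angle2))
                  * (mag1 + mag2) / 2
                  * (img1 + img2) / 2)

# Hypothetical usage: compare a gene's expression image with a 0/1 area image.
rng = np.random.default_rng(0)
gene_image = rng.random((64, 64))
area_image = (rng.random((64, 64)) < 0.1).astype(float)
score = gradient_similarity(gene_image, area_image)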
|
Gradient similarity provides information complementary to correlation

Figure 4: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel’s value on the lower left is the sum of the corresponding pixels in the upper row).

To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Figure 3. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression covers many other areas and does not have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don’t express over the entire area.
|
Areas which can be identified by single genes

Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet cross-verified these genes in other atlases.

In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of the cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), and AUD (auditory).

These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity.
|
Combinations of multiple genes are useful and necessary for some areas

In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best-fitting single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1’s spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex; MO is found only on the dorsal surface. Gene mtif2 is shown in the upper right. Mtif2 captures MO’s upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.

This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
|
Multivariate supervised learning

Forward stepwise logistic regression

Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found are shown in various figures throughout this document, and Figure 4 shows a combination of genes which was found.
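A minimal sketch of this stepwise wrapper (Python with scikit-learn); the use of cross-validated accuracy to rank candidate genes is an illustrative choice rather than necessarily the criterion used in the pilot run, and the data are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise_logistic(X, y, max_genes=3):
    """Greedily add the gene that most improves cross-validated accuracy."""
    def cv_accuracy(genes):
        return cross_val_score(LogisticRegression(max_iter=1000), X[:, genes], y, cv=5).mean()
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(max_genes):
        best_gene = max(remaining, key=lambda g: cv_accuracy(selected + [g]))
        selected.append(best_gene)
        remaining.remove(best_gene)
    return selected

# Hypothetical usage: y marks membership in one cortical area (e.g. MO).
rng = np.random.default_rng(0)
X = rng.random((2000, 300))                # pixels x genes (placeholder values)
y = (rng.random(2000) < 0.1).astype(int)   # 1 = inside the target area
genes = forward_stepwise_logistic(X, y)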
|
SVM on all genes at once

In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved a classification accuracy of about 81% (5-fold cross-validation). This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn’t as practically useful as a classifier that uses only a few genes.
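A minimal sketch of this check (Python with scikit-learn); the linear kernel is an assumption, since the kernel is not specified above, and the data are placeholders.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical data: every gene is a feature, every surface pixel is an instance,
# and the label is the pixel's cortical area.
rng = np.random.default_rng(0)
X = rng.random((2000, 300))            # pixels x genes (placeholder values)
y = rng.integers(0, 49, size=2000)     # 49 cortical areas (placeholder labels)

# 5-fold cross-validated accuracy of an SVM trained on all genes at once.
accuracy = cross_val_score(SVC(kernel='linear'), X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")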
|
Data-driven redrawing of the cortical map

Figure 5: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), and COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2, Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, and Ets1.
|
We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, and Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure 6.

After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure 6. For comparison, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
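A minimal sketch of this reduce-then-cluster pipeline for two of the techniques named above (Python with scikit-learn); the numbers of reduced dimensions and clusters, and the data, are placeholders.

import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

# Hypothetical data: expression[i, j] = expression of gene j at surface pixel i.
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))   # 2000 pixels, 300 genes (placeholder values)

for reducer in (PCA(n_components=7), NMF(n_components=7, max_iter=500)):
    reduced = reducer.fit_transform(expression)                # pixels x reduced features
    labels = KMeans(n_clusters=7, n_init=10).fit_predict(reduced)
    # labels assigns each surface pixel to one of 7 candidate regions.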
|
Many areas are captured by clusters of genes

We also clustered the genes using gradient similarity, to see if the spatial regions defined by any clusters matched known anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster’s average expression pattern compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
|
The approach: what we plan to do

Flatmap cortex and segment cortical layers

There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret [7]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.

We have not yet made use of radial profiles. While the radial profiles may be used “raw”, for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
|
Develop algorithms that find genetic markers for anatomical regions

Scoring measures and feature selection

We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student’s t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
|
Figure 6: First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found, but only 6 are displayed. In the last row, for PCA, 50 dimensions were used; for NNMF, 6 dimensions were used; for landmark Isomap, 7 dimensions were used.

Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by the previous methods mentioned in Aim 1 Related Work.
|
Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student’s t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling’s T-square is a multivariate analog of Student’s t.
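As an illustration of one such multivariate extension, here is a minimal sketch of a two-sample Hotelling’s T-square score for a candidate set of genes, comparing pixels inside and outside the target area (Python; the statistic itself is standard, but its use here as a gene-set score and the data are illustrative).

import numpy as np

def hotelling_t2(inside, outside):
    """Two-sample Hotelling's T-square between two groups of multi-gene expression vectors.

    inside, outside -- arrays of shape (n_pixels_in_group, n_selected_genes)
    """
    n1, n2 = len(inside), len(outside)
    diff = inside.mean(axis=0) - outside.mean(axis=0)
    # Pooled covariance of the two groups.
    pooled = ((n1 - 1) * np.cov(inside, rowvar=False)
              + (n2 - 1) * np.cov(outside, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)

# Hypothetical usage: score a combination of three genes for one area.
rng = np.random.default_rng(0)
expression = rng.random((2000, 300))
area_mask = rng.random(2000) < 0.1
genes = [10, 20, 30]                        # placeholder gene indices
score = hotelling_t2(expression[area_mask][:, genes],
                     expression[~area_mask][:, genes])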
|
We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over “vanilla” classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs).
|
bshanks@96 | 588 Since errors of displacement and of shape may cause genes and target areas to match less than they should,
|
bshanks@96 | 589 we will consider the robustness of feature selection methods in the presence of error. Some of these methods,
|
bshanks@96 | 590 such as the Hough transform, are designed to be resistant in the presence of error, but many are not. We will
|
bshanks@96 | 591 consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a
|
bshanks@96 | 592 scoring method on small displacements and distortions of the data adds robustness to registration error at the
|
bshanks@96 | 593 expense of computation time.
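A minimal sketch of such a wrapper, assuming the data have already been flatmapped into a 2-D image per gene (the shapes and the base_score function are hypothetical): it rescores a gene against a target-area mask under several small random shifts and keeps the mean and worst-case scores, so that markers which only match under perfect registration are penalized.

```python
import numpy as np
from scipy.ndimage import shift

def robust_score(gene_img, area_mask, base_score, n_jitter=20, max_shift=2.0, seed=0):
    """Wrap a scoring measure so it is evaluated under small registration errors.

    gene_img   : 2-D flatmapped expression image for one gene
    area_mask  : 2-D boolean image of the target cortical area
    base_score : function(gene_img, area_mask) -> float (any measure from above)
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_jitter):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        jittered = shift(gene_img, (dy, dx), order=1, mode="nearest")
        scores.append(base_score(jittered, area_mask))
    return np.mean(scores), np.min(scores)    # average and worst-case score
```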
|
bshanks@96 | 594 An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape
|
bshanks@96 | 595 of the natural domain of gene expression corresponding to the area is different from the shape of the area as
|
bshanks@96 | 596 recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing
|
bshanks@96 | 597 their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be
|
fit if its boundary were redrawn slightly11, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
_________________________________________
11Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect “natural spatial domains of gene expression” in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1 as well, particularly discriminative dimensionality reduction.
|
A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance, in order to provide a foundation for future research on marker gene finding methods. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
|
bshanks@96 | 608 Classifiers We will explore and compare different classifiers. As noted above, this activity is not separate
|
bshanks@96 | 609 from the previous one, because some supervised learning algorithms include feature selection, and any clas-
|
bshanks@96 | 610 sifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic
|
bshanks@98 | 611 regression (including spatial models[16]), decision trees12, sparse SVMs, generative mixture models (including
|
naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic
|
bshanks@96 | 613 algorithms, and artificial neural networks.
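As one concrete example of a classifier that doubles as a feature selector, the sketch below fits an L1-penalized (sparse) logistic regression for a single area; the nonzero coefficients pick out a small candidate marker set. X and y are the same assumed pixels-by-genes matrix and in-area labels as above, and the regularization strength C is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sparse_logistic_markers(X, y, C=0.1):
    """Fit an L1-penalized logistic regression; genes with nonzero weight
    form a candidate combinatorial marker set for the area labeled by y."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X, y)
    markers = np.flatnonzero(clf.coef_[0])   # indices of genes the model kept
    return markers, clf
```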
|
bshanks@30 | 614 Develop algorithms to suggest a division of a structure into anatomical parts
|
bshanks@104 | 615
|
Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.

Dimensionality reduction on gene expression profiles We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries.
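A minimal sketch of this step, assuming the flatmapped data are held in a pixels-by-genes matrix X (a synthetic stand-in below): each method replaces a pixel's roughly 4000-gene profile with a handful of derived features that later steps can cluster.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF, FastICA

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 40)))    # stand-in for (n_pixels, n_genes) data

# Each row of the results below is a reduced "profile" for one pixel.
pca_features = PCA(n_components=7).fit_transform(X)
nmf_features = NMF(n_components=7, init="nndsvd", max_iter=500).fit_transform(X)
ica_features = FastICA(n_components=7, random_state=0).fit_transform(X)
print(pca_features.shape, nmf_features.shape, ica_features.shape)
```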
|
bshanks@98 | 633 Dimensionality reduction on pixels Instead of applying dimensionality reduction to the gene expression
|
profiles, the same techniques can be applied to the pixels. It is possible that the features generated in
|
bshanks@98 | 635 this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.
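Concretely, the same decomposition applied to the transposed matrix (genes as samples, pixels as features) yields components that are themselves spatial maps; with a non-negative method such as NMF, each component can be read as a candidate spatial region. The matrix and flatmap shape below are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_rows, n_cols, n_genes = 20, 25, 40
X = np.abs(rng.normal(size=(n_rows * n_cols, n_genes)))   # (n_pixels, n_genes)

# Decompose the transpose: samples are genes, features are pixels.
nmf = NMF(n_components=6, init="nndsvd", max_iter=500).fit(X.T)
spatial_components = nmf.components_.reshape(6, n_rows, n_cols)
# Each 2-D slice of spatial_components is a candidate spatial pattern/region.
print(spatial_components.shape)
```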
|
bshanks@98 | 636 Clustering and segmentation on pixels We will explore clustering and segmentation algorithms in order to
|
bshanks@98 | 637 segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving[9], recursive division
|
bshanks@98 | 638 clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transforma-
|
bshanks@98 | 639 tions, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with
|
bshanks@98 | 640 various linkage functions. These methods can be combined with dimensionality reduction.
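A small sketch of the simplest of these options, k-means on (optionally reduced) pixel features, with the cluster labels reshaped back into the 2-D flatmap so they can be read as a candidate parcellation; the data and flatmap shape are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_rows, n_cols, n_genes = 20, 25, 40
X = rng.normal(size=(n_rows * n_cols, n_genes))            # (n_pixels, n_genes)

features = PCA(n_components=7).fit_transform(X)            # optional reduction
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(features)
parcellation = labels.reshape(n_rows, n_cols)              # candidate region map
print(parcellation.shape)
```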
|
bshanks@98 | 641 Clustering on genes We have already shown that the procedure of clustering genes according to gradient
|
bshanks@98 | 642 similarity, and then creating an averaged prototype of each cluster’s expression pattern, yields some spatial
|
bshanks@98 | 643 patterns which match cortical areas. We will further explore the clustering of genes.
|
bshanks@96 | 644 In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful
|
bshanks@96 | 645 as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then
|
bshanks@96 | 646 replacing their expression levels with a single average expression level, thereby removing some redundancy from
|
bshanks@96 | 647 the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality
|
bshanks@96 | 648 reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would
|
bshanks@96 | 649 help or hurt the ultimate goal of identifying interesting spatial regions.
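The sketch below shows this pipeline with hierarchical (average-linkage) clustering of genes followed by prototype averaging. Gradient similarity is this proposal's own measure and is not reimplemented here; a plain correlation distance between the genes' spatial patterns stands in for it, and X is the usual assumed pixels-by-genes matrix (synthetic here).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                  # (n_pixels, n_genes) stand-in

# Distance between genes' spatial patterns (correlation as a stand-in for
# the gradient-similarity measure defined earlier in the proposal).
dist = pdist(X.T, metric="correlation")
clusters = fcluster(linkage(dist, method="average"), t=6, criterion="maxclust")

# Averaged prototype expression pattern for each gene cluster, which also serves
# as a redundancy-reduced profile matrix (one column per cluster, not per gene).
prototypes = np.column_stack([X[:, clusters == c].mean(axis=1)
                              for c in np.unique(clusters)])
print(prototypes.shape)                          # (n_pixels, n_clusters)
```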
|
Co-clustering There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example, IRM[11]. These are called co-clustering or biclustering algorithms.
_________________________________________
12Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.
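As a hedged illustration of the co-clustering idea above (IRM itself is not part of standard toolkits), scikit-learn's spectral co-clustering finds paired groups of pixels and genes from the same matrix; the data below are a synthetic stand-in for the flatmapped expression matrix.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 40)))           # (n_pixels, n_genes) stand-in

model = SpectralCoclustering(n_clusters=6, random_state=0).fit(X)
pixel_clusters = model.row_labels_               # joint grouping of pixels...
gene_clusters = model.column_labels_             # ...and of genes
print(np.bincount(pixel_clusters), np.bincount(gene_clusters))
```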
|
Radial profiles We will explore the use of the radial profile of gene expression under each pixel.
|
Compare different methods In order to tell which method is best for genomic anatomy, we will compare, for each candidate method, the cortical map found by unsupervised learning to a cortical map derived from the Allen
|
bshanks@98 | 660 Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings
|
bshanks@98 | 661 are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
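For instance, two of the listed metrics can be computed as below for a candidate parcellation against the reference-atlas labels (both given per pixel, synthetic stand-ins here); the adjusted Rand index comes straight from scikit-learn, while variation of information is assembled from entropies and mutual information.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, mutual_info_score
from scipy.stats import entropy

def variation_of_information(a, b):
    """VI(a, b) = H(a) + H(b) - 2 I(a; b), in nats; 0 means identical clusterings."""
    h_a = entropy(np.bincount(a))
    h_b = entropy(np.bincount(b))
    return h_a + h_b - 2 * mutual_info_score(a, b)

# Per-pixel labels: candidate parcellation vs. reference-atlas areas (stand-ins).
rng = np.random.default_rng(0)
atlas = rng.integers(0, 7, size=500)
noise = rng.random(500) < 0.2                    # relabel 20% of pixels at random
found = atlas.copy()
found[noise] = rng.integers(0, 7, size=noise.sum())
print(adjusted_rand_score(atlas, found), variation_of_information(atlas, found))
```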
|
bshanks@96 | 662 Discriminative dimensionality reduction In addition to using a purely data-driven approach to identify
|
bshanks@96 | 663 spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number
|
bshanks@96 | 664 of features, even if those features are chosen by using knowledge of the regions. For example, linear discriminant
|
bshanks@96 | 665 analysis could be used as a dimensionality reduction technique in order to identify a few features which are the
|
bshanks@96 | 666 best linear summary of gene expression profiles for the purpose of discriminating between regions. This reduced
|
bshanks@96 | 667 feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the
|
bshanks@96 | 668 reference atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
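A minimal sketch of this idea, assuming per-pixel atlas labels are available (atlas_labels, a hypothetical name, synthetic below): LDA learns a few linear combinations of gene expression chosen to separate the known areas, and clustering the pixels in that reduced space yields regions that can be compared back to the atlas.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                       # (n_pixels, n_genes) stand-in
atlas_labels = rng.integers(0, 7, size=500)          # known atlas areas per pixel

# Supervised reduction: a few linear features chosen to discriminate known areas.
lda = LinearDiscriminantAnalysis(n_components=6)
discriminative_features = lda.fit_transform(X, atlas_labels)

# Unsupervised reclustering of pixels in the discriminative space.
rediscovered = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(
    discriminative_features)
print(adjusted_rand_score(atlas_labels, rediscovered))
```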
|
bshanks@96 | 669 Apply the new methods to the cortex
|
bshanks@96 | 670 Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify
|
bshanks@96 | 671 that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
|
bshanks@96 | 672 Because in most cases the ABA coronal dataset only contains one ISH per gene, it is possible for an unrelated
|
bshanks@96 | 673 combination of genes to seem to identify an area when in fact it is only coincidence. There are two ways we will
|
bshanks@96 | 674 validate our marker genes to guard against this. First, we will confirm that putative combinations of marker genes
|
bshanks@96 | 675 express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene
|
bshanks@98 | 676 expression datasets such as EMAGE, GeneAtlas, and GENSAT[8].
|
bshanks@99 | 677 Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify
|
bshanks@96 | 678 and explain how the statistical structure in the gene expression data led to any unexpected or interesting features
|
bshanks@96 | 679 of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of
|
bshanks@96 | 680 areas, which are discovered.
|
bshanks@101 | 681 ____________________________________________________________________________
|
bshanks@101 | 682 Timeline and milestones
|
bshanks@90 | 683 Finding marker genes
|
bshanks@96 | 684 September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers
|
bshanks@96 | 685 November 2009 (milestone): Have completed construction of a flatmapped, cortical dataset with information
|
bshanks@96 | 686 for each layer
|
bshanks@101 | 687 October 2009-April 2010: Develop scoring and supervised learning methods.
|
bshanks@96 | 688 January 2010 (milestone): Submit a publication on single marker genes for cortical areas
|
bshanks@99 | 689 February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Extend tech-
|
bshanks@99 | 690 niques for robustness. Compare the performance of techniques. Validate marker genes. Prepare software
|
bshanks@99 | 691 toolbox for Aim 1.
|
bshanks@96 | 692 June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release toolbox.
|
bshanks@96 | 693 July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a
|
bshanks@96 | 694 small number of marker genes that can, in combination, define most of the areas at once
|
bshanks@101 | 695 Revealing new ways to parcellate a structure into regions
|
bshanks@101 | 696 June 2010-March 2011: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt
|
bshanks@101 | 697 clustering algorithms to use radial profile information. Compare the performance of techniques.
|
bshanks@96 | 698 March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release toolbox.
|
bshanks@101 | 699 February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex,
|
bshanks@101 | 700 interpret the results. Prepare software toolbox for Aim 2.
|
bshanks@96 | 701 May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in
|
bshanks@96 | 702 Aim 2
|
bshanks@96 | 703 May-August 2011: Revisit Aim 1 to see if what was learned during Aim 2 can improve the methods for Aim 1.
|
bshanks@99 | 704 Possibly submit another paper.
|
bshanks@33 | 705 Bibliography & References Cited
|
bshanks@96 | 706 [1]Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking
|
bshanks@96 | 707 Approach to Parcellation of the Cerebral Cortex, volume Volume 3749/2005 of Lecture Notes in Computer
|
bshanks@96 | 708 Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
|
bshanks@96 | 709 [2]J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification
|
bshanks@96 | 710 of cortical areas. NeuroImage, 21(1):15–26, 2004.
|
bshanks@96 | 711 [3]Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista,
|
bshanks@96 | 712 Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions
|
bshanks@96 | 713 of expression profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
|
bshanks@96 | 714 [4]George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization
|
bshanks@96 | 715 gene expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
|
bshanks@96 | 716 [5]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe
|
bshanks@96 | 717 Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome.
|
bshanks@96 | 718 PLoS Comput Biol, 1(4):e41, 2005.
|
bshanks@96 | 719 [6]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy,
|
bshanks@96 | 720 Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of
|
bshanks@96 | 721 expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August
|
bshanks@96 | 722 2007.
|
bshanks@96 | 723 [7]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite
|
bshanks@96 | 724 for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association:
|
bshanks@96 | 725 JAMIA, 8(5):443–59, 2001. PMID: 11522765.
|
bshanks@96 | 726 [8]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Scham-
|
bshanks@96 | 727 bra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A
|
bshanks@96 | 728 gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature,
|
bshanks@96 | 729 425(6961):917–925, October 2003.
|
bshanks@96 | 730 [9]Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan,
|
bshanks@96 | 731 David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying distinct sets of genes with
|
bshanks@96 | 732 similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
|
[10]Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Ex-
|
bshanks@96 | 734 pression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361.
|
bshanks@96 | 735 Springer Berlin Heidelberg, 2008.
|
bshanks@96 | 736 [11]C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts with an infinite
|
bshanks@96 | 737 relational model. In AAAI, 2006.
|
[12]F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical
|
bshanks@96 | 739 fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
|
bshanks@96 | 740 [13]Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A High-Resolution anatomical framework of the neonatal
|
bshanks@96 | 741 mouse brain for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
|
bshanks@96 | 742 [14]Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony
|
bshanks@96 | 743 Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice,
|
bshanks@96 | 744 Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization
|
bshanks@96 | 745 database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86
|
bshanks@96 | 746 EP –, April 2006.
|
bshanks@96 | 747 [15]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Su-
|
bshanks@96 | 748 san M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann,
|
bshanks@96 | 749 David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas
|
bshanks@96 | 750 of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
|
bshanks@96 | 751 [16]Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Com-
|
bshanks@96 | 752 putational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
|
bshanks@96 | 753 [17]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2
|
bshanks@96 | 754 edition, July 2001.
|
bshanks@96 | 755 [18]A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and
|
bshanks@96 | 756 K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Em-
|
bshanks@96 | 757 bryology, 210(5):373–386, December 2005.
|
[19]Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical
|
bshanks@96 | 759 analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
|
bshanks@96 | 760 [20]Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig,
|
bshanks@96 | 761 James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD):
|
bshanks@96 | 762 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
|
bshanks@96 | 763 [21]Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa
|
bshanks@96 | 764 Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran
|
bshanks@96 | 765 Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information
|
bshanks@96 | 766 network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5,
|
bshanks@96 | 767 2006. PMID: 16381936.
|
bshanks@96 | 768 [22]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
|
bshanks@96 | 769 [23]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T.
|
bshanks@96 | 770 Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H.
|
bshanks@96 | 771 Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the
|
bshanks@96 | 772 hippocampus. Neuron, 60(6):1010–1021, December 2008.
|
bshanks@96 | 773 [24]Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis,
|
bshanks@96 | 774 Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Sys-
|
tematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–88.14, 2002. PMC151190.
|
bshanks@96 | 777 [25]Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume
|
bshanks@96 | 778 4414/2007 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
|
bshanks@96 | 779 [26]Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton,
|
bshanks@96 | 780 Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen.
|
bshanks@96 | 781 EMAGE edinburgh mouse atlas of gene expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–
|
bshanks@96 | 782 865, 2008.
|
bshanks@96 | 783 [27]Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in
|
bshanks@96 | 784 the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
|
bshanks@96 | 785 [28]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa
|
bshanks@96 | 786 Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood,
|
bshanks@96 | 787 Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer
|
bshanks@96 | 788 Bork, Marc Botcherby, Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John
|
bshanks@96 | 789 Burton, Jonathan Butler, Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T
|
bshanks@96 | 790 Chinwalla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R
|
bshanks@96 | 791 Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy
|
bshanks@96 | 792 Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens,
|
bshanks@96 | 793 Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes,
|
bshanks@96 | 794 Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N
|
bshanks@96 | 795 Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo
|
bshanks@96 | 796 Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green,
|
Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki,
|
bshanks@96 | 798 LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard,
|
bshanks@96 | 799 Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy,
|
bshanks@96 | 800 Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn
|
bshanks@96 | 801 Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David
|
bshanks@96 | 802 Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd,
|
bshanks@96 | 803 Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer,
|
bshanks@96 | 804 Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim,
|
bshanks@96 | 805 Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery,
|
bshanks@96 | 806 Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson,
|
bshanks@96 | 807 Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen
|
Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner,
|
bshanks@96 | 809 Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail,
|
bshanks@96 | 810 Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos,
|
Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven
|
bshanks@96 | 812 Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer,
|
bshanks@96 | 813 Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles
|
bshanks@96 | 814 Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp,
|
bshanks@96 | 815 Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie
|
bshanks@96 | 816 Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond
|
bshanks@96 | 817 Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Win-
|
bshanks@96 | 818 ter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and
|
bshanks@96 | 819 Eric S Lander. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–
|
bshanks@96 | 820 62, December 2002. PMID: 12466850.
|