Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporters, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions;

(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression;

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in aims (1) and (2).

Although our particular application involves the 3-D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our methods could be applied to genome-wide sequencing data derived from sets of tissues and disease states.

In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profile define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and made freely available for others to use.
The challenge topic

This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporters, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
The Challenge and Potential Impact

Each of our three aims will be discussed in turn. For each aim, we will develop a conceptual framework for thinking about the task, and we will present our strategy for solving it. Next we will discuss related work. At the conclusion of each section, we will summarize why our strategy is different from what has been done before. At the end of this section, we will describe the potential impact.
Aim 1: Given a map of regions, find genes that mark the regions

Figure 1: Gene Pitx2 is selectively underexpressed in area SS.

Machine learning terminology: classifiers The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and in which the training data consist of a set of instances (voxels) for which the labels (regions) are known.
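To make this framing concrete, here is a minimal sketch in Python (not part of the proposed toolkit; the arrays X and y are random placeholders standing in for a real voxel-by-gene expression matrix and atlas region labels, and logistic regression is just one possible classifier):

```python
# Sketch of the supervised-learning setup: voxels are instances, per-voxel gene
# expression levels are features, and atlas regions are class labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_voxels, n_genes, n_regions = 1000, 50, 4
X = rng.random((n_voxels, n_genes))        # expression level of each gene in each voxel
y = rng.integers(0, n_regions, n_voxels)   # atlas region label of each voxel (training data)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn voxel -> region
print("held-out accuracy:", classifier.score(X_test, y_test))
```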
Each gene expression level is called a feature, and the selection of which genes^1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

^1 Strictly speaking, the features are gene expression levels, but we'll call them genes.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
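As an illustration of the kind of stepwise procedure meant here, the sketch below (hypothetical helper names) greedily adds the gene that most raises the score of the selected set; any set-level scoring measure, such as correlation, conditional entropy, or the gradient similarity score introduced later, could be plugged in as score_set:

```python
# Sketch of greedy ("stepwise") forward feature selection driven by a score
# defined on sets of genes. score_set is any function mapping a candidate
# (voxels x selected genes) submatrix and the target region mask to a number.
import numpy as np

def greedy_forward_selection(X, y, score_set, max_genes=3):
    """X: (voxels x genes) expression matrix; y: boolean mask of the target region."""
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_genes:
        scores = {g: score_set(X[:, selected + [g]], y) for g in remaining}
        best = max(scores, key=scores.get)
        # stop when adding another gene no longer raises the score
        if selected and scores[best] <= score_set(X[:, selected], y):
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```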
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a pointwise scoring method.
Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are “wrong”, in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.
Our strategy for Aim 1

Figure 2: Top row: genes Nfic and A930001M12Rik are the most correlated with area SS (somatosensory cortex). Bottom row: genes C130038G02Rik and Cacna1i are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).
Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the levels of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work

Figure 3: The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Ptk7, and Aph1a.

There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which are not fundamentally spatial^2.

^2 By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates, not just data which have only a few different locations or which are indexed by anatomical label.
As noted above, there has been much work on supervised learning, and many algorithms are available. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.

We now turn to efforts to find marker genes from spatial gene expression data using automated methods.
GeneAtlas[5] and EMAGE[26] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
[15] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel, and the system (1) chooses a cluster which includes the seed voxel and (2) yields a list of genes which are overexpressed in that cluster. Correlation: the user selects a seed voxel, and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel. Clusters: this component will be described later. [6] looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. [15] and [6] differ from our Aim 1 in at least three ways. First, [15] and [6] find only single genes, whereas we will also look for combinations of genes. Second, [15] and [6] can only use overexpression as a marker, whereas we will also search for underexpression. Third, [15] and [6] use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 1, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.
[10] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.

In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
Aim 2: From gene expression data, discover a map of regions

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
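For illustration only, here is a minimal sketch of hierarchical (agglomerative) clustering of voxels by their gene expression profiles, using SciPy and random placeholder data; the similarity measure and linkage rule shown are not necessarily the ones we will adopt:

```python
# Sketch: build a hierarchical tree of voxel clusters from gene expression
# profiles, then cut the tree to obtain a flat partition into candidate regions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((200, 30))    # 200 voxels x 30 genes (placeholder data)

tree = linkage(X, method="average", metric="correlation")   # hierarchical tree of clusters
labels = fcluster(tree, t=5, criterion="maxclust")           # cut the tree into 5 clusters
print(np.bincount(labels)[1:])                                # voxels per cluster
```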
Similarity scores A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Figure 4: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).

Spatially contiguous clusters; image segmentation We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. In our task, there are thousands of color channels (one for each gene), rather than just three^3. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

^3 There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.
Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.

Unlike aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features^4. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

^4 First, because the number of features in the reduced dataset is smaller than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
Clustering genes rather than voxels Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
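A hedged sketch of this procedure, using plain correlation between flatmapped gene images as the gene-to-gene similarity (gradient similarity could be substituted) and random placeholder data:

```python
# Sketch: cluster genes by the similarity of their spatial expression patterns,
# average each gene cluster into a prototype image, and threshold the prototype
# to propose a candidate region.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
gene_images = rng.random((100, 20, 20))              # 100 genes on a 20x20 flatmap (placeholder)
flat = gene_images.reshape(len(gene_images), -1)     # one row per gene

tree = linkage(flat, method="average", metric="correlation")
gene_cluster = fcluster(tree, t=10, criterion="maxclust")

for c in np.unique(gene_cluster):
    prototype = gene_images[gene_cluster == c].mean(axis=0)          # average expression pattern
    candidate_region = prototype > prototype.mean() + prototype.std()
    print("cluster", c, "candidate region size:", candidate_region.sum())
```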
Figure 5: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), and COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2, Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, and Ets1.

Related work
Some researchers have attempted to parcellate cortex on the basis of non-gene-expression data. For example, [18], [2], [19], and [1] associate spots on the cortex with the radial profile^5 of response to some stain ([12] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.

^5 A radial profile is a profile along a line perpendicular to the cortical surface.

[23] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified non-negative matrix factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, proving the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset (see Preliminary Studies, Figure 6).
AGEA[15] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE[26] allows the user to select a dataset, either from among a large number of alternatives or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering.

[6] clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

[10] applies their technique for finding combinations of marker genes for the purpose of clustering genes around a “seed gene”.
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3: Apply the methods developed to the cerebral cortex

Background

The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake^6.

^6 Outside of isocortex, the number of layers varies.

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
Figure 6: First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found, but only 6 are displayed. In the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions were used; for landmark Isomap, 7 dimensions were used.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[22] on the one hand, and Paxinos and Franklin[17] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
The Allen Mouse Brain Atlas dataset

The Allen Mouse Brain Atlas (ABA) data were produced by doing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.

An automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain[15].

Mus musculus is thought to contain about 22,000 protein-coding genes[28]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA^7.

^7 The sagittal data do not cover the entire cortex, and also have greater registration error[15]. Genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern”[15].

The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.
Related work

[15] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes nor suggests a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas^8.

^8 In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.

In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.

Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
Significance

Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.
The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will support the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.

While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain.
The approach: Preliminary Studies

Format conversion between SEV, MATLAB, NIFTI

We have created software to (politely) download all of the SEV files^9 from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.

^9 SEV is a sparse format for spatial data. It is the format in which the ABA data are made available.
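The SEV-specific parsing code is not reproduced here; as a hedged illustration of the kind of conversion involved, the sketch below writes an already-parsed expression volume (using the ABA grid dimensions and 200-micron voxel size described under Aim 3) out to NIFTI with nibabel and to a MATLAB .mat file with SciPy. The file names and array name are hypothetical:

```python
# Sketch of format conversion: save a 3-D gene expression volume (assumed to be
# already parsed from SEV into a NumPy array) as NIFTI and as a MATLAB file.
import numpy as np
import nibabel as nib
from scipy.io import savemat

volume = np.zeros((67, 41, 58), dtype=np.float32)   # placeholder volume on the ABA voxel grid
affine = np.diag([0.2, 0.2, 0.2, 1.0])              # 200-micron (0.2 mm) isotropic voxels

nib.save(nib.Nifti1Image(volume, affine), "gene_expression.nii.gz")
savemat("gene_expression.mat", {"expression": volume})
```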
Flatmap of cortex

We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret[7], we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel. And for each gene, there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
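A minimal sketch of the per-gene normalization described above, assuming a hypothetical stack of flatmapped expression images, one per gene:

```python
# Sketch: z-score each gene's flatmap image over all surface pixels
# (subtract the gene's mean expression, divide by its standard deviation).
import numpy as np

rng = np.random.default_rng(0)
expression = rng.random((400, 80, 120))    # genes x surface-pixel grid (reduced placeholder sizes)

mean = expression.mean(axis=(1, 2), keepdims=True)
std = expression.std(axis=(1, 2), keepdims=True)
normalized = (expression - mean) / std
```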
To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
Feature selection and scoring methods

Underexpression of a gene can serve as a marker Underexpression of a gene can sometimes serve as a marker. See, for example, Figure 1.

Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.

We calculated the correlation between each gene and each cortical area. The top row of Figure 2 shows the genes most correlated with area SS.
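A sketch of this correlation score on placeholder arrays (each gene's flatmap image is flattened to a vector and correlated with the boolean area mask):

```python
# Sketch: score each gene by the correlation between its expression image and
# the target area's boolean mask, then rank the genes.
import numpy as np

rng = np.random.default_rng(0)
genes = rng.random((400, 80 * 120))           # one flattened flatmap image per gene (placeholder)
area_mask = rng.random(80 * 120) > 0.9        # boolean mask of the target area (placeholder)

mask = area_mask.astype(float)
corr = np.array([np.corrcoef(g, mask)[0, 1] for g in genes])
top = np.argsort(corr)[::-1][:2]
print("most correlated genes:", top, corr[top])
```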
Conditional entropy

For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.

This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, “Is this surface pixel a member of the target area?”. Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
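A sketch of the score being minimized, H(area | gene1, gene2), estimated from pixel counts over boolean masks; the surrounding forward stepwise search simply evaluates this for candidate pairs and keeps the minimum (placeholder data):

```python
# Sketch: conditional entropy of the target-area mask given a pair of
# thresholded (boolean) gene expression masks, estimated from pixel counts.
import numpy as np

def conditional_entropy(target, g1, g2):
    """target, g1, g2: boolean arrays over the surface pixels."""
    h, n = 0.0, target.size
    for a in (False, True):
        for b in (False, True):
            cell = (g1 == a) & (g2 == b)
            p_cell = cell.sum() / n
            if p_cell == 0:
                continue
            p_in = target[cell].mean()            # P(pixel in area | this gene pattern)
            for p in (p_in, 1.0 - p_in):
                if p > 0:
                    h -= p_cell * p * np.log2(p)
    return h

rng = np.random.default_rng(0)
target = rng.random(9600) > 0.9
g1, g2 = rng.random(9600) > 0.5, rng.random(9600) > 0.5
print(conditional_entropy(target, g1, g2))
```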
Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method “gradient similarity”. The formula is:
\[
\sum_{\mathrm{pixel} \in \mathrm{pixels}} \cos\bigl(\lvert \angle\nabla_1 - \angle\nabla_2 \rvert\bigr) \cdot \frac{\lvert\nabla_1\rvert + \lvert\nabla_2\rvert}{2} \cdot \frac{\mathrm{pixel\_value}_1 + \mathrm{pixel\_value}_2}{2}
\]
where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_value_i is the value of the current pixel in image i.

The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
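A direct NumPy transcription of the formula above (a sketch; NumPy's finite-difference gradients stand in for whatever gradient estimator is actually used on the flatmap images):

```python
# Sketch implementation of gradient similarity between two flatmap images
# (e.g. a gene's expression pattern and a target region image).
import numpy as np

def gradient_similarity(img1, img2):
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)   # gradient angles
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)           # gradient magnitudes
    per_pixel = (np.cos(np.abs(angle1 - angle2))
                 * (mag1 + mag2) / 2
                 * (img1 + img2) / 2)
    return per_pixel.sum()

rng = np.random.default_rng(0)
a, b = rng.random((80, 120)), rng.random((80, 120))
print(gradient_similarity(a, b), gradient_similarity(a, a))   # an image matches itself best
```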
Gradient similarity provides information complementary to correlation

To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. 3. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression does not have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area.
Areas which can be identified by single genes Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet cross-verified these genes in other atlases.

In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), and AUD (auditory).

These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevance of our new scoring method, gradient similarity.
Combinations of multiple genes are useful and necessary for some areas

In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best-fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex; MO is only found on the dorsal surface. Gene mtif2 is shown in the upper right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.

This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
Multivariate supervised learning

Forward stepwise logistic regression Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found are shown in various figures throughout this document, and Figure 4 shows a combination of genes which was found.
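A sketch of this stepwise wrapper for a single target area, on placeholder data (with thousands of genes the real runs are more expensive, but the structure is the same):

```python
# Sketch: forward stepwise logistic regression -- pick the best single gene,
# then the best pair containing it, then the best triplet.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stepwise_logistic(X, y, n_steps=3):
    selected = []
    for _ in range(n_steps):
        def fit_score(g):
            cols = selected + [g]
            model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            return model.score(X[:, cols], y)
        candidates = [g for g in range(X.shape[1]) if g not in selected]
        selected.append(max(candidates, key=fit_score))
    return selected

rng = np.random.default_rng(0)
X = rng.random((2000, 100))     # surface pixels x genes (placeholder)
y = rng.random(2000) > 0.9      # membership in the target area (placeholder)
print(stepwise_logistic(X, y))
```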
SVM on all genes at once

In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%^10. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.

^10 5-fold cross-validation.
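A sketch of this check (placeholder data with reduced sizes; the reported 81% figure comes from the real flatmapped dataset, not from this example):

```python
# Sketch: classify surface pixels into cortical areas from their full gene
# expression profiles with a support vector machine, using 5-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 200))      # pixels x genes (much smaller than the real ~4000 genes)
y = rng.integers(0, 49, 1000)    # one label per cortical area

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```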
Data-driven redrawing of the cortical map

We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, and Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure 6.
After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure 6. To compare, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
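A sketch of the reduce-then-cluster pipeline on placeholder data; PCA and NNMF are shown with the same numbers of reduced dimensions as in Figure 6 (50 and 6), and other reducers such as Isomap can be swapped in:

```python
# Sketch: reduce each pixel's gene expression profile, then run k-means on the
# reduced features to propose 7 clusters (candidate cortical subdivisions).
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((2000, 400))      # surface pixels x genes (placeholder sizes)

for reducer in (PCA(n_components=50), NMF(n_components=6, max_iter=500)):
    reduced = reducer.fit_transform(X)
    labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(reduced)
    print(type(reducer).__name__, np.bincount(labels))
```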
Many areas are captured by clusters of genes We also clustered the genes using gradient similarity, to see if the spatial regions defined by any clusters matched known anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.
The approach: what we plan to do

Flatmap cortex and segment cortical layers

There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret[7]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.

We have not yet made use of radial profiles. While the radial profiles may be used “raw”, for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
Develop algorithms that find genetic markers for anatomical regions

Scoring measures and feature selection We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
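Two of the overlap-based measures named above, sketched for boolean masks (a thresholded gene image and the target area):

```python
# Sketch: Jaccard and Dice similarity between a thresholded gene expression
# image and the target area mask.
import numpy as np

def jaccard(gene_mask, area_mask):
    intersection = np.logical_and(gene_mask, area_mask).sum()
    union = np.logical_or(gene_mask, area_mask).sum()
    return intersection / union if union else 0.0

def dice(gene_mask, area_mask):
    intersection = np.logical_and(gene_mask, area_mask).sum()
    total = gene_mask.sum() + area_mask.sum()
    return 2 * intersection / total if total else 0.0

rng = np.random.default_rng(0)
gene_mask = rng.random((80, 120)) > 0.7
area_mask = rng.random((80, 120)) > 0.9
print(jaccard(gene_mask, area_mask), dice(gene_mask, area_mask))
```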
Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by the previous methods mentioned in Aim 1 Related Work.
Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
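As a sketch of what such a multivariate extension looks like, here is a two-sample Hotelling's T-squared statistic computed with NumPy, scoring how well a small set of candidate genes jointly separates pixels inside the target area from pixels outside it (placeholder data):

```python
# Sketch: two-sample Hotelling's T-squared, the multivariate analog of Student's t.
import numpy as np

def hotelling_t2(inside, outside):
    """inside, outside: (pixels x selected genes) expression matrices."""
    n1, n2 = len(inside), len(outside)
    diff = inside.mean(axis=0) - outside.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(inside, rowvar=False)
              + (n2 - 1) * np.cov(outside, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(pooled, diff)

rng = np.random.default_rng(0)
inside = rng.normal(1.0, 1.0, (300, 3))     # pixels inside the area, 3 candidate genes
outside = rng.normal(0.0, 1.0, (1500, 3))   # pixels outside the area
print(hotelling_t2(inside, outside))
```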
We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over “vanilla” classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs).
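As a sketch of option (c), an L1-penalized linear SVM whose regularization drives most gene coefficients to exactly zero, leaving a small candidate marker panel (placeholder data; the penalty strength C controls how many genes survive):

```python
# Sketch: sparse (L1-penalized) linear SVM as an embedded feature selector.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((2000, 400))     # surface pixels x genes (placeholder)
y = rng.random(2000) > 0.9      # membership in the target area (placeholder)

clf = LinearSVC(penalty="l1", dual=False, C=0.05, max_iter=5000).fit(X, y)
selected_genes = np.flatnonzero(clf.coef_)
print(len(selected_genes), "genes with non-zero weight:", selected_genes[:10])
```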
Since errors of displacement and of shape may cause genes and target areas to match less well than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be robust in the presence of error, but many are not. We will consider extensions to the scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time.
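A minimal sketch of such a wrapper, assuming 2-D flatmapped expression images: the image is re-scored under a grid of small shifts and the least favorable value is reported. The score_fn argument and the shift range are placeholders for whichever measure is being tested.

    # Sketch: evaluate any (expression image, area mask) score under small
    # displacements and return the worst case.
    import itertools
    from scipy.ndimage import shift

    def displacement_robust_score(score_fn, expr_img, area_mask, max_shift=2):
        offsets = range(-max_shift, max_shift + 1)
        scores = []
        for dy, dx in itertools.product(offsets, offsets):
            shifted = shift(expr_img, (dy, dx), order=1, mode="nearest")
            scores.append(score_fn(shifted, area_mask))
        return min(scores)  # pessimistic: the score under the least favorable shift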
An area may be difficult to identify because its boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area differs from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly^11, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance, in order to provide a foundation for future research on marker gene finding methods. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
Classifiers We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models[16]), decision trees^12, sparse SVMs, generative mixture models (including naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks.
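The comparison itself is routine; a sketch of the protocol using off-the-shelf scikit-learn implementations and cross-validated accuracy is below. The classifier settings are placeholders rather than the configurations we will ultimately report.

    # Sketch: compare several classifiers on the same (pixels x genes) data.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    def compare_classifiers(X, y):
        candidates = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "decision tree": DecisionTreeClassifier(max_depth=4),
            "naive Bayes": GaussianNB(),
            "k-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
        }
        return {name: cross_val_score(clf, X, y, cv=5).mean()
                for name, clf in candidates.items()}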
Develop algorithms to suggest a division of a structure into anatomical parts
Dimensionality reduction on gene expression profiles We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent component analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries.
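As one concrete instance, a PCA or ICA reduction of the per-pixel profiles can be computed as below; the component count and the matrix shape are placeholders, and these are only two of the methods we will compare.

    # Sketch: reduce ~4000-dimensional expression profiles to a few features.
    from sklearn.decomposition import PCA, FastICA

    def reduce_profiles(X, n_components=20):
        # X: (n_pixels, n_genes) matrix of expression levels.
        pca_features = PCA(n_components=n_components).fit_transform(X)
        ica_features = FastICA(n_components=n_components, max_iter=1000).fit_transform(X)
        return pca_features, ica_features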
Dimensionality reduction on pixels Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.
Clustering and segmentation on pixels We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving[9], recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction.
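For example, the sketch below clusters pixels with k-means after a PCA reduction and reshapes the labels back into a 2-D map; the flatmap dimensions, component count, and number of regions are hypothetical.

    # Sketch: segment flatmap pixels into regions by clustering reduced profiles.
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def cluster_pixels(X, map_shape, n_components=20, n_regions=30):
        # X: (n_pixels, n_genes); map_shape: (height, width) of the flatmap,
        # with height * width == n_pixels.
        features = PCA(n_components=n_components).fit_transform(X)
        labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)
        return labels.reshape(map_shape)  # a candidate parcellation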
Clustering on genes We have already shown that clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes.
In addition to using the cluster expression prototypes directly to identify spatial regions, gene clustering might be useful as a component of dimensionality reduction. For example, one could cluster similar genes and then replace their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether this removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions.
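A minimal sketch of this redundancy-reduction step, using hierarchical clustering with a correlation distance as a stand-in for our gradient-similarity clustering; the number of gene clusters is a placeholder.

    # Sketch: cluster similar genes and replace each cluster by its averaged
    # expression prototype, giving a smaller (n_pixels x n_clusters) matrix.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def gene_cluster_prototypes(X, n_gene_clusters=100):
        # X: (n_pixels, n_genes); cluster the columns (genes).
        Z = linkage(X.T, method="average", metric="correlation")
        gene_labels = fcluster(Z, t=n_gene_clusters, criterion="maxclust")
        prototypes = np.column_stack(
            [X[:, gene_labels == c].mean(axis=1) for c in np.unique(gene_labels)])
        return prototypes, gene_labels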
Co-clustering There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example IRM[11]. These are called co-clustering or biclustering algorithms.
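IRM itself is not available in standard machine learning toolkits, but as an illustration of the biclustering family, the sketch below uses scikit-learn's spectral co-clustering, which partitions pixels and genes simultaneously; the cluster count is hypothetical.

    # Sketch: jointly group pixels and genes with spectral co-clustering
    # (a stand-in for biclustering methods such as IRM).
    from sklearn.cluster import SpectralCoclustering

    def cocluster(X, n_clusters=20):
        # X: (n_pixels, n_genes), non-negative, with no all-zero rows or columns.
        model = SpectralCoclustering(n_clusters=n_clusters, random_state=0)
        model.fit(X)
        return model.row_labels_, model.column_labels_  # pixel and gene clusters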
_________________________________________
^11 Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1 as well, particularly discriminative dimensionality reduction.
^12 Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.
Radial profiles We will explore the use of the radial profile of gene expression under each pixel.
Compare different methods In order to tell which method is best for genomic anatomy, for each method that we try we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as the Jaccard index, the Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others.
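Two of these metrics are simple enough to state exactly; the sketch below computes the adjusted Rand index and the variation of information between a learned parcellation and the reference parcellation, each given as a flat array of region labels per pixel (the names are illustrative).

    # Sketch: compare two parcellations with the adjusted Rand index and the
    # variation of information, VI = H(A) + H(B) - 2*I(A;B) (in nats).
    import numpy as np
    from scipy.stats import entropy
    from sklearn.metrics import adjusted_rand_score, mutual_info_score

    def label_entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        return entropy(counts)  # counts are normalized to probabilities internally

    def compare_parcellations(labels_learned, labels_atlas):
        ari = adjusted_rand_score(labels_atlas, labels_learned)
        mi = mutual_info_score(labels_atlas, labels_learned)
        vi = label_entropy(labels_learned) + label_entropy(labels_atlas) - 2.0 * mi
        return {"adjusted_rand": ari, "variation_of_information": vi}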
Discriminative dimensionality reduction In addition to using a purely data-driven approach to identify spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number of features, even if those features are chosen by using knowledge of the regions. For example, linear discriminant analysis could be used as a dimensionality reduction technique in order to identify a few features which are the best linear summary of gene expression profiles for the purpose of discriminating between regions. This reduced feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the reference atlas, yet more faithful to natural spatial domains of gene expression than the reference atlas is.
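A minimal sketch of this two-step idea with scikit-learn: supervised linear discriminant analysis compresses the profiles into a few discriminative features, and an unsupervised clustering is then run on those features. The component and region counts are placeholders (LDA allows at most one fewer component than there are atlas regions).

    # Sketch: discriminative dimensionality reduction (LDA) followed by clustering.
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def lda_then_cluster(X, atlas_labels, n_components=10, n_regions=30):
        # X: (n_pixels, n_genes); atlas_labels: reference-atlas region per pixel,
        # used only to choose the discriminative features.
        lda = LinearDiscriminantAnalysis(n_components=n_components)
        features = lda.fit_transform(X, atlas_labels)
        return KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)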
Apply the new methods to the cortex
Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify that area; we will also present lists of "panels" of genes that can be used to delineate many areas at once. Because in most cases the ABA coronal dataset contains only one ISH per gene, it is possible for an unrelated combination of genes to seem to identify an area when in fact the match is only a coincidence. We will validate our marker genes in two ways to guard against this. First, we will confirm that putative combinations of marker genes express the same pattern in both hemispheres. Second, we will manually validate our final results on other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT[8].
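The first check is mechanical enough to sketch: each putative panel is scored separately on the left- and right-hemisphere portions of the flatmap, and panels whose scores disagree by more than a tolerance are flagged. The scoring function, data arrays, and tolerance below are placeholders.

    # Sketch: hemisphere-consistency check for a putative marker gene panel.
    # X: (n_pixels, n_genes) NumPy array; area_mask, left_mask, right_mask:
    # boolean masks over pixels; panel: list of candidate gene column indices.
    def hemisphere_consistent(score_fn, X, area_mask, left_mask, right_mask,
                              panel, tolerance=0.2):
        left = score_fn(X[left_mask][:, panel], area_mask[left_mask])
        right = score_fn(X[right_mask][:, panel], area_mask[right_mask])
        return abs(left - right) <= tolerance, (left, right)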
Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
Timeline and milestones
Finding marker genes
September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers.
November 2009 (milestone): Complete construction of a flatmapped cortical dataset with information for each layer.
October 2009-April 2010: Develop scoring and supervised learning methods.
January 2010 (milestone): Submit a publication on single marker genes for cortical areas.
February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Extend techniques for robustness. Compare the performance of techniques. Validate marker genes. Prepare the software toolbox for Aim 1.
June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release the toolbox.
July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a small number of marker genes that can, in combination, define most of the areas at once.
Revealing new ways to parcellate a structure into regions
June 2010-March 2011: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt clustering algorithms to use radial profile information. Compare the performance of techniques.
March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release the toolbox.
February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex and interpret the results. Prepare the software toolbox for Aim 2.
May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in Aim 2.
May-August 2011: Revisit Aim 1 to see whether what was learned during Aim 2 can improve the methods for Aim 1. Possibly submit another paper.
Bibliography & References Cited
[1] Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking Approach to Parcellation of the Cerebral Cortex, volume 3749 of Lecture Notes in Computer Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
[2] J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification of cortical areas. NeuroImage, 21(1):15–26, 2004.
[3] Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
[4] George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
[5] James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41, 2005.
[6] Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August 2007.
[7] D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
[8] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
[9] Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan, David Botstein, and Patrick Brown. 'Gene shaving' as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
[10] Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg, 2008.
[11] C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[12] F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
[13] Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A high-resolution anatomical framework of the neonatal mouse brain for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
[14] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
[15] Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
[16] Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Computational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
[17] George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[18] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
[19] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
[20] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
[21] Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The Zebrafish Information Network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
[22] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[23] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[24] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002. PMC151190.
[25] Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
[26] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
[27] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
[28] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa Agarwala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood, Robert Baertsch, Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby, Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler, Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chinwalla, Deanna M Church, Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes, Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green, Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki, LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard, Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd, Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer, Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery, Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson, Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O'Connor, Yasushi Okazaki, Karen Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner, Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos, Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer, Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp, Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie Wall, Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.
|