
annotate grant.html @ 59:c46f8f975f7c

author bshanks@bshanks-salk.dyndns.org
date Sun Apr 19 14:19:52 2009 -0700 (16 years ago)
parents 074e2be60b38
children 9381e0c1827f

Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions

(2) develop an algorithm to suggest new ways of carving up a structure into anatomical regions, based on spatial patterns in gene expression

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).

In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
Background and significance

Aim 1

Machine learning terminology: supervised learning
The task of looking for marker genes for anatomical regions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the regions, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its region).

Therefore, an understanding of the relationship between the genes' combined expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of machine learning procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used in the construction of the classifier is called training data.

In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
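To make the terminology concrete, the setup above can be sketched in a few lines of Python. The data is invented and a nearest-centroid rule stands in for whatever classifier a real learning method would produce; only the instance/label framing is taken from the text.

```python
def train_nearest_centroid(instances, labels):
    """Training: compute one mean expression vector (centroid) per region."""
    sums, counts = {}, {}
    for x, y in zip(instances, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Classifier: assign a voxel (instance) to the region (label) with the nearest centroid."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

# Toy training data: two genes (features), two regions (labels).
train_x = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_y = ["areaA", "areaA", "areaB", "areaB"]
centroids = train_nearest_centroid(train_x, train_y)
prediction = classify(centroids, [0.85, 0.15])
```

Here `prediction` comes out as "areaA": the new voxel's expression profile is closest to that region's training centroid.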
Each gene expression level is called a feature, and the selection of which genes1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
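A minimal sketch of such a greedy ("stepwise") forward selection procedure, under invented assumptions: the set-scoring function `contrast` and the toy expression values are stand-ins, not anything from the proposal.

```python
def greedy_select(genes, score, k):
    """Add one gene at a time, always the one that most improves the set score."""
    selected = []
    for _ in range(k):
        best = max((g for g in genes if g not in selected),
                   key=lambda g: score(selected + [g]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining gene raises the score
        selected.append(best)
    return selected

# Toy data: mean expression of 4 genes inside / outside a target region.
inside  = {"g1": 0.9, "g2": 0.8, "g3": 0.5, "g4": 0.1}
outside = {"g1": 0.2, "g2": 0.7, "g3": 0.5, "g4": 0.9}

def contrast(gene_set):
    """Illustrative set score: how strongly the set separates the region."""
    return sum(inside[g] - outside[g] for g in gene_set)

chosen = greedy_select(list(inside), contrast, k=2)
```

With this score, `chosen` is `["g1", "g2"]`: the gene with the largest single-gene contrast is added first, then the gene that most improves the pair.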
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
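The pointwise/local distinction can be sketched on a one-dimensional row of voxels. Both scoring functions below are illustrative stand-ins (they are not the proposal's actual measures); what matters is that the pointwise sub-score reads only the voxel itself, while the local sub-score also reads its neighbor.

```python
expression = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]   # one gene along a row of voxels
in_region  = [0,   0,   0,   1,   1,   1]      # target region mask

def pointwise_score(expr, mask):
    """Sub-score per voxel uses only that voxel: reward expression inside the region."""
    return sum(e if m else -e for e, m in zip(expr, mask))

def local_score(expr, mask):
    """Sub-score per voxel uses a neighbor: reward expression edges that coincide
    with region borders (a crude geometric cue)."""
    total = 0.0
    for i in range(1, len(expr)):
        expr_edge = abs(expr[i] - expr[i - 1]) > 0.5
        mask_edge = mask[i] != mask[i - 1]
        total += 1.0 if expr_edge == mask_edge else -1.0
    return total

p = pointwise_score(expression, in_region)
l = local_score(expression, in_region)
```

On this toy gene, both scores are high: the gene is expressed inside the region (`p` = 2.4) and its single expression edge falls exactly on the region border (`l` = 5.0).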
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.

Principle 1: Combinatorial gene expression. It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results). Therefore, each instance should contain multiple features (genes).

Principle 2: Only look at combinations of small numbers of genes. When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
Principle 3: Use geometry in feature selection.
_________________________________________
1Strictly speaking, the features are gene expression levels, but we’ll call them genes.
When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.

Principle 4: Work in 2-D whenever possible. There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work

There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which is not fundamentally spatial2.

As noted above, there has been much work on both supervised and unsupervised learning, and there are many available algorithms for each. However, the algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may be necessary in order to achieve the best results in this application.
We are aware of six existing efforts to find marker genes in spatial gene expression data using automated methods.

[8] mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene’s spatial region.

GeneAtlas[3] and EMAGE[18] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel3 whose expression falls within each of four discretization levels. EMAGE uses Jaccard similarity, which is equal to the number of true pixels in the intersection of the two images, divided by the number of pixels in their union. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
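The Jaccard match score just described is simple to state in code; the two flattened binary images below are invented for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity of two binary images (flattened to 0/1 lists):
    |intersection of true pixels| / |union of true pixels|."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

query = [1, 1, 0, 0, 1, 0]   # flattened binary query image
gene  = [1, 0, 0, 0, 1, 1]   # flattened thresholded expression image
sim = jaccard(query, gene)
```

Here two pixels are true in both images and four are true in at least one, so `sim` is 0.5; identical images score 1.0 and disjoint images score 0.0.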
[10] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components:

* Gene Finder: The user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. (Note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures.)

* Correlation: The user selects a seed voxel and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel.

* Clusters: will be described later.

Gene Finder differs from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score4, whereas we will also use geometric scores such as gradient similarity. The Preliminary Data section contains evidence that each of our three choices is the right one.
[4] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA, this is a pointwise measure (only the mean expression level per pixel is analyzed); it is not used to look for underexpression, and it does not look for combinations of genes.
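For readers unfamiliar with the cited approach, here is a sketch of its two ingredients: a pooled-variance two-sample t statistic ("is mean expression higher inside the region?") and the Bonferroni-corrected per-test threshold. The expression values are invented, and converting t to a p-value (which needs a t-distribution table or a stats library) is omitted.

```python
from math import sqrt

def t_statistic(inside, outside):
    """Pooled-variance two-sample Student's t statistic."""
    n1, n2 = len(inside), len(outside)
    m1 = sum(inside) / n1
    m2 = sum(outside) / n2
    v1 = sum((x - m1) ** 2 for x in inside) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in outside) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))

# Bonferroni correction: testing ~20,000 genes at overall alpha = 0.05
# means each individual test must reach p < 0.05 / 20,000.
alpha, n_genes = 0.05, 20000
per_test_alpha = alpha / n_genes

t = t_statistic([0.9, 0.8, 1.0, 0.85], [0.2, 0.3, 0.25, 0.2])
```

On this toy gene the statistic is large (t about 13), so even the very strict Bonferroni-corrected threshold would likely be met.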
[7] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their match score is Jaccard similarity.

In summary, there has been fruitful work on finding marker genes; however, only one of the previous projects explores combinations of marker genes, and none of these publications compares the results obtained by using different algorithms or scoring methods.
___________________________
2By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which has only a few different locations or which is indexed by anatomical label.
3Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.
4“Expression energy ratio”, which captures overexpression.
Aim 2

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
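Agglomerative hierarchical clustering can be sketched in a few lines: repeatedly merge the two closest clusters and record the merge tree. The four toy "expression profiles" are made up, and single linkage with Euclidean distance is one choice among many; a real analysis would use a library on the full voxel-by-gene matrix.

```python
def dist(a, b):
    """Euclidean distance between two expression profiles."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def hierarchical(points):
    """Single-linkage agglomerative clustering; returns the list of merges."""
    clusters = [(i,) for i in range(len(points))]
    tree = []
    while len(clusters) > 1:
        # Find the pair of clusters whose closest members are closest.
        best = min(
            ((a, b) for i, a in enumerate(clusters) for b in clusters[i + 1:]),
            key=lambda pair: min(dist(points[i], points[j])
                                 for i in pair[0] for j in pair[1]))
        clusters = [c for c in clusters if c not in best] + [best[0] + best[1]]
        tree.append(best[0] + best[1])
    return tree

# Two tight groups of profiles -> they should merge first, then join at the top.
profiles = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]]
merges = hierarchical(profiles)
```

The merge tree comes out as `[(0, 1), (2, 3), (0, 1, 2, 3)]`: fine-scale clusters first, then one coarse cluster, exactly the tree structure described above.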
Similarity scores

A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
bshanks@30 151 Spatially contiguous clusters; image segmentation
bshanks@33 152 We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have
bshanks@33 153 an additional constraint on clusters; voxels grouped together into a cluster must be spatially contiguous. In Preliminary
bshanks@33 154 Results, we show that one can get reasonable results without enforcing this constraint, however, we plan to compare these
bshanks@33 155 results against other methods which guarantee contiguous clusters.
bshanks@30 156 Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a
bshanks@33 157 variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into
bshanks@30 158 clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in
bshanks@30 159 our task, there are thousands of color channels (one for each gene), rather than just three. There are imaging tasks which
bshanks@33 160 use more than three colors, however, for example multispectral imaging and hyperspectral imaging, which are often used
bshanks@33 161 to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting
bshanks@33 162 sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene
bshanks@33 163 expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of
bshanks@33 164 spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction. In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.

Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced data set is less than in the original data set, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.

Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, with the property that regions with similar gene expression profiles are nearby on the plot (that is, the property that distance between pairs of points in the plot is proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is applied after clustering, whereas in the previous paragraph we were talking about using dimensionality reduction before clustering.
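One classical dimensionality reduction technique is projection onto top principal components; a minimal sketch follows, finding the top component by power iteration on the covariance matrix. The four three-feature "profiles" are invented, and a real analysis would use a library and keep two components for plotting.

```python
def top_component(data, steps=50):
    """Return the top principal direction (unit vector) and the centered data."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix of the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    # Power iteration converges to the dominant eigenvector.
    v = [1.0] * d
    for _ in range(steps):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v, centered

# Four profiles whose three features vary together: one reduced feature suffices.
data = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]
v, centered = top_component(data)
reduced = [sum(r[j] * v[j] for j in range(3)) for r in centered]
```

Because the three features are perfectly correlated, the top direction is (1, 1, 1)/√3 and the single reduced feature preserves the ordering and spacing of the profiles; note the reduced feature is a function of all three gene levels, not any one gene.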
Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out5. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters. In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
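The genes-to-regions procedure above can be sketched as follows. Everything here is an illustrative assumption: thresholded expression masks over six pixels, a Jaccard cutoff of 0.5 for grouping genes, and a majority vote within each gene group as the candidate region.

```python
def jaccard(a, b):
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

def consensus_regions(masks, threshold=0.5):
    """Group genes with overlapping masks; each group votes out one region."""
    groups = []
    for m in masks:
        for g in groups:
            if jaccard(m, g[0]) > threshold:
                g.append(m)
                break
        else:
            groups.append([m])
    # Majority vote across each group's masks yields one candidate region.
    return [[int(sum(col) * 2 > len(g)) for col in zip(*g)] for g in groups]

masks = [
    [1, 1, 1, 0, 0, 0],   # gene A picks out the left pixels
    [1, 1, 0, 0, 0, 0],   # gene B, similar to A
    [0, 0, 0, 1, 1, 1],   # gene C picks out a different region
]
regions = consensus_regions(masks)
```

Genes A and B overlap strongly and form one group whose consensus is the left region; gene C stands alone and yields the right region, giving two candidate regions from three genes.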
The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
Related work

We are aware of five existing efforts to cluster spatial gene expression data.

[15] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset6, and while the results are promising (see Preliminary Data), we think that it will be possible to find an even better method.
AGEA[10] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE[18] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete-linkage clustering with uncentred correlation as the similarity score.
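Uncentred correlation, the similarity score just mentioned, is a Pearson-style correlation computed without subtracting the means, i.e. the cosine of the angle between the raw expression vectors. A minimal sketch on made-up vectors:

```python
def uncentred_correlation(a, b):
    """Cosine of the angle between raw vectors (no mean-centering)."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

# Two expression profiles that are scaled copies of each other.
r = uncentred_correlation([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Because the second profile is exactly twice the first, `r` is 1.0; unlike centred (Pearson) correlation, this score also rewards agreement in overall expression level, not just in shape.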
[4] clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: “the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric”. The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
In an interesting twist, [7] applies their technique for finding combinations of marker genes to the purpose of clustering genes around a “seed gene”. The way they do this is by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Those other genes which are found are considered to be related to the seed. The same team also describes a method[17] for finding “association rules” such as, “if a gene is expressed in this voxel, then the same gene is probably also expressed in that voxel”. This could be useful as part of a procedure for clustering voxels.

In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3

Background

The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a many-layered cake.

Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas,
_________________________________________
5This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
6We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[14] on the one hand, and Paxinos and Franklin[11] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
The Allen Mouse Brain Atlas dataset

The Allen Mouse Brain Atlas (ABA) data was produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed in order to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.

Next, an automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain[10].
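As a quick sanity check of the grid figures quoted above, the voxel count and the physical extent of the 200-micron grid follow directly:

```python
# ABA reference-space grid quoted in the text: 67 x 41 x 58 voxels,
# each a 200-micron cube.
dims = (67, 41, 58)
total_voxels = dims[0] * dims[1] * dims[2]          # 159,326 voxels
extent_mm = tuple(d * 0.2 for d in dims)            # grid extent in millimetres
```

The product is indeed 159,326, matching the text, and the grid spans roughly 13.4 x 8.2 x 11.6 mm, a plausible bounding box for an adult mouse brain.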
Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes[20]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA, because the sagittal data does not cover the entire cortex, and also has greater registration error[10]. Genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern”[10].

The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT[6], GenePaint[19], its sister project GeneAtlas[3], BGEM[9], EMAGE[18], EurExpress7, EADHB8, MAMEP9, Xenbase10, ZFIN[13], Aniseed11, VisiGene12, GEISHA[2], Fruitfly.org[16], COMPARE13, GXD[12], and GEO[1]14. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only the ABA and EMAGE make this form of data available for public download from their websites15. Many of these resources focus on developmental gene expression.
bshanks@46 270 Significance
bshanks@43 271 The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the
bshanks@42 272 combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for
bshanks@30 273 drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively
bshanks@30 274 target individual cortical areas.
bshanks@30 275 The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical
bshanks@33 276 methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can
bshanks@33 277 find many of the areal boundaries at once. This panel of marker genes will support an ISH protocol that lets
bshanks@30 278 experimenters more easily identify which anatomical areas are present in small samples of cortex.
bshanks@53 279 The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of
bshanks@33 280 a better map. The development of present-day cortical maps was driven by the application of histological stains. It is
bshanks@33 281 conceivable that if a different set of stains had been available which identified a different set of features, then today’s
bshanks@33 282 cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of
bshanks@33 283 genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been
bshanks@33 284 captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking
bshanks@33 285 at the patterns of gene expression.
bshanks@30 286 While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to
bshanks@30 287 develop could be used to suggest modifications to the human cortical map as well.
bshanks@30 288 Related work
bshanks@53 289 [10] describes the application of AGEA to the cortex, reporting interesting results on the structure of correlations
bshanks@46 290 between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either
bshanks@53 291 _________________________________________
bshanks@53 292 7http://www.eurexpress.org/ee/; EurExpress data is also entered into EMAGE
bshanks@53 293 8http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html
bshanks@53 294 9http://mamep.molgen.mpg.de/index.php
bshanks@53 295 10http://xenbase.org/
bshanks@53 296 11http://aniseed-ibdm.univ-mrs.fr/
bshanks@53 297 12http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources
bshanks@53 298 13http://compare.ibdml.univ-mrs.fr/
bshanks@53 299 14GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.
bshanks@53 300 15without prior offline registration
bshanks@46 301 of our aims, as it neither finds marker genes nor suggests a cortical map based on gene expression data. Neither of
bshanks@46 302 the other components of AGEA can be applied to cortical areas: AGEA’s Gene Finder cannot be used to find marker genes
bshanks@53 303 for the cortical areas, and AGEA’s hierarchical clustering does not produce clusters corresponding to the cortical areas16.
bshanks@46 304 In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has
bshanks@43 305 been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally
bshanks@43 306 finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo
bshanks@43 307 from gene expression data.
bshanks@53 308 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker
bshanks@53 309 genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
bshanks@53 310 _________________________________________
bshanks@53 311 16In both cases, the root cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are
bshanks@44 312 often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel
bshanks@46 313 correlation clustering algorithm will tend to create clusters representing cortical layers, not areas. This is why the hierarchical clustering does not
bshanks@44 314 find most cortical areas (there are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have
bshanks@44 315 many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find marker genes for
bshanks@44 316 most cortical areas is that in Gene Finder, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found,
bshanks@44 317 and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
bshanks@30 318 Preliminary work
bshanks@30 319 Format conversion between SEV, MATLAB, NIFTI
bshanks@35 320 We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created
bshanks@38 321 software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s file formats.
bshanks@30 322 Flatmap of cortex
bshanks@36 323 We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided
bshanks@36 324 the cortex into hemispheres.
bshanks@53 325 Using Caret[5], we created a mesh representation of the surface of the selected voxels. For each gene, for each node of
bshanks@42 326 the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened
bshanks@42 327 the cortex, creating a two-dimensional mesh.
bshanks@36 328 We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid
bshanks@36 329 into a MATLAB matrix.
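The resampling step just described (irregular flat-mesh nodes to a regular pixel grid) can be sketched in Python; our actual pipeline uses Caret and MATLAB, so this NumPy/SciPy version with toy node coordinates and values is only an illustration of the idea:

```python
import numpy as np
from scipy.interpolate import griddata

# Toy stand-ins for the flattened Caret mesh: 2-D node coordinates and one
# expression value per node (here simply x + y so the result is checkable).
rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 1.0, size=(500, 2))   # flat-mesh node positions
values = nodes[:, 0] + nodes[:, 1]             # per-node expression value

# Regular grid of pixel centers over the interior of the flat map.
gx, gy = np.mgrid[0.1:0.9:32j, 0.1:0.9:32j]

# Linear (barycentric) interpolation from the irregular nodes onto the grid;
# pixels outside the mesh's convex hull come back as NaN.
grid = griddata(nodes, values, (gx, gy), method="linear")
```

The resulting `grid` is the regular matrix of pixel values that the rest of the analysis operates on.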
bshanks@36 330 We manually traced the boundaries of each cortical area from the ABA coronal reference atlas slides. We then converted
bshanks@42 331 these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-d
bshanks@42 332 mesh, and then onto the grid, and then we converted the region data into MATLAB format.
bshanks@37 333 At this point, the data is in the form of a number of 2-D matrices, all in registration, with the matrix entries representing
bshanks@37 334 a grid of points (pixels) over the cortical surface:
bshanks@36 335 ∙A 2-D matrix whose entries represent the regional label associated with each surface pixel
bshanks@36 336 ∙For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
bshanks@38 337 We created a normalized version of the gene expression data by subtracting each gene’s mean expression level (over all
bshanks@38 338 surface pixels) and dividing each gene by its standard deviation.
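In code, this per-gene normalization is a z-score over surface pixels (a NumPy sketch; the pipeline itself is in MATLAB):

```python
import numpy as np

def normalize_genes(expr):
    """Z-score each gene over the surface pixels.

    expr: array of shape (n_genes, n_pixels). Returns an array of the same
    shape in which each gene has zero mean and unit standard deviation.
    """
    mean = expr.mean(axis=1, keepdims=True)
    std = expr.std(axis=1, keepdims=True)
    return (expr - mean) / std
```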
bshanks@40 339 The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over
bshanks@40 340 the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
bshanks@37 341 To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each
bshanks@37 342 cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in
bshanks@37 343 different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines
bshanks@37 344 that allow the depth of the ROI for volume-to-surface projection to vary.
bshanks@36 345 In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually
bshanks@36 346 demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
bshanks@38 347 Feature selection and scoring methods
bshanks@38 348 Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance
bshanks@46 349 as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the
bshanks@38 350 surface pixels.
bshanks@40 351 One class of feature selection scoring methods comprises those which calculate some sort of “match” between each gene image
bshanks@40 352 and the target image. Those genes which match the best are good candidates for features.
bshanks@38 353 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between
bshanks@38 354 each gene and each cortical area.
bshanks@39 355 todo: fig
bshanks@38 356 Conditional entropy An information-theoretic scoring method is to find features such that, if the features (gene
bshanks@38 357 expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty,
bshanks@38 358 so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution
bshanks@38 359 to which we are referring is the probability distribution over the population of surface pixels.
bshanks@38 360 The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating,
bshanks@46 361 for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression
bshanks@40 362 levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two
bshanks@40 363 standard deviations, the mean plus one standard deviation, the mean plus two standard deviations.
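The discretization step above can be sketched as follows (NumPy; one gene at a time):

```python
import numpy as np

def threshold_masks(gene):
    """Five boolean masks per gene, thresholded at the gene's mean plus
    -2, -1, 0, +1, and +2 standard deviations."""
    m, s = gene.mean(), gene.std()
    return [gene > m + k * s for k in (-2, -1, 0, 1, 2)]
```

Raising the threshold can only shrink the mask, so the five masks are nested.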
bshanks@39 364 Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression
bshanks@46 365 boolean masks such that the conditional entropy of the target area’s boolean mask, conditioned upon the pair of gene
bshanks@46 366 expression boolean masks, is minimized.
bshanks@39 367 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question,
bshanks@39 368 “Is this surface pixel a member of the target area?”.
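A sketch of the conditional-entropy score and the greedy pair search (empirical probabilities over the population of surface pixels; illustrative, not the production code):

```python
import numpy as np

def conditional_entropy(target, masks):
    """Empirical H(target | masks) in bits; target and each mask are
    boolean arrays over the surface pixels."""
    n = target.size
    h = 0.0
    for bits in np.ndindex(*(2,) * len(masks)):
        # select the pixels with this joint setting of the conditioning masks
        sel = np.ones(n, dtype=bool)
        for m, b in zip(masks, bits):
            sel &= (m == bool(b))
        p_sel = sel.sum() / n
        if p_sel == 0:
            continue
        p = target[sel].mean()          # P(target = 1 | this setting)
        for q in (p, 1.0 - p):
            if q > 0:
                h -= p_sel * q * np.log2(q)
    return h

def best_pair(target, gene_masks):
    """Forward stepwise step: best single mask, then its best partner."""
    first = min(range(len(gene_masks)),
                key=lambda i: conditional_entropy(target, [gene_masks[i]]))
    rest = [j for j in range(len(gene_masks)) if j != first]
    second = min(rest, key=lambda j: conditional_entropy(
        target, [gene_masks[first], gene_masks[j]]))
    return first, second
```

A mask identical to the target drives the conditional entropy to zero, so the greedy search picks it first.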
bshanks@38 369
bshanks@41 370
bshanks@41 371
bshanks@41 372 Figure 1: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression.
bshanks@41 373 The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From
bshanks@41 374 left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr
bshanks@39 375 todo: fig
bshanks@39 376 Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose
bshanks@39 377 pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise, local
bshanks@39 378 scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar
bshanks@40 379 to the shape of the target region. We call this scoring method “gradient similarity”.
bshanks@40 380 One might say that gradient similarity attempts to measure how much the border of the area of gene expression and
bshanks@40 381 the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its
bshanks@40 382 maximum value to zero, the spatial pattern of a gene’s expression often does not have a discrete border. Therefore, instead
bshanks@40 383 of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images
bshanks@40 384 (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have
bshanks@40 385 gradients which are oriented in a similar direction. The formula is:
bshanks@41 386 ∑_{pixel ∈ pixels} cos(|∠∇1 − ∠∇2|) ⋅ ((|∇1| + |∇2|) / 2) ⋅ ((pixel_value1 + pixel_value2) / 2)
bshanks@40 390 where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of
bshanks@41 391 image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the
bshanks@40 392 value of the current pixel in image i.
bshanks@40 393 The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar,
bshanks@40 394 then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a
bshanks@40 395 similar direction (because the borders are similar).
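The formula translates directly into NumPy (a sketch: `np.gradient` stands in for whatever discrete gradient estimator is actually used on the flatmapped images):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two 2-D scalar fields: at each pixel,
    the cosine of the angle between the two gradients, weighted by the
    mean gradient magnitude and the mean pixel value, summed over pixels."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    return np.sum(np.cos(np.abs(angle1 - angle2))
                  * (mag1 + mag2) / 2.0
                  * (img1 + img2) / 2.0)
```

The function is symmetric in its two arguments, as the formula requires.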
bshanks@43 396 Gradient similarity provides information complementary to correlation
bshanks@41 397 To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider
bshanks@53 398 Fig. 1. The top row of Fig. 1 displays the 3 genes which most match area AUD, according to a pointwise method17. The
bshanks@53 399 bottom row displays the 3 genes which most match AUD according to a method which considers local geometry18. The
bshanks@46 400 pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is
bshanks@46 401 that this includes many areas which don’t have a salient border matching the areal border. The geometric method identifies
bshanks@46 402 genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes
bshanks@46 403 genes which don’t express over the entire area. Genes which have high rankings using both pointwise and border criteria,
bshanks@46 404 such as Aph1a in the example, may be particularly good markers. None of these genes is, individually, a perfect marker
bshanks@46 405 for AUD; we deliberately chose a “difficult” area in order to better contrast pointwise with geometric methods.
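The pointwise ranking described in footnote 17 can be sketched as follows; using classification accuracy as the per-gene score is our assumption here, since the text does not fix the exact score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_genes_by_logistic_fit(expr, target_mask):
    """Rank genes by how well a one-gene logistic regression predicts
    membership in the target area (toy inputs, not the real ABA data).

    expr: (n_genes, n_pixels); target_mask: (n_pixels,) booleans.
    Returns gene indices, best-fitting gene first.
    """
    scores = []
    for gene in expr:                                   # one fit per gene
        x = gene.reshape(-1, 1)
        model = LogisticRegression().fit(x, target_mask)
        scores.append(model.score(x, target_mask))      # training accuracy
    return np.argsort(scores)[::-1]
```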
bshanks@43 406 Combinations of multiple genes are useful
bshanks@30 407 Here we give an example of a cortical area which is not marked by any single gene, but which can be identified
bshanks@53 408 combinatorially. According to logistic regression, gene wwc119 is the best-fitting single gene for predicting whether or not a pixel on
bshanks@48 409 _________________________________________
bshanks@53 410 17For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor
bshanks@41 411 variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well
bshanks@41 412 they predict area AUD.
bshanks@53 413 18For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD,
bshanks@53 414 was calculated, and this was used to rank the genes.
bshanks@53 415 19“WW, C2 and coiled-coil domain containing 1”; EntrezGene ID 211652
bshanks@41 416
bshanks@41 417
bshanks@41 418
bshanks@41 419 Figure 2: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel’s value on the lower left is the sum
bshanks@41 420 of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the
bshanks@41 421 top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right.
bshanks@41 422 The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells
bshanks@41 423 underneath each pixel, with red meaning a lot of expression and blue meaning little.
bshanks@30 424 the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 2 shows wwc1’s spatial expression
bshanks@30 425 pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene
bshanks@33 426 overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the
bshanks@30 427 overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).
bshanks@53 428 Gene mtif220 is shown in the upper right of Fig. 2. Mtif2 captures MO’s upper-left boundary, but not its lower-right
bshanks@33 429 boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these
bshanks@33 430 two figures, we get the lower left of Figure 2. This combination captures area MO much better than any single gene.
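A toy numeric analogue of this combination effect (a hypothetical four-pixel strip, not real expression data): each gene overshoots the target band on one side, but the pixelwise sum matches it exactly.

```python
import numpy as np

target = np.array([0., 1., 1., 0.])   # the area to be marked
g1 = np.array([0., 1., 1., 1.])       # overshoots the right boundary
g2 = np.array([1., 1., 1., 0.])       # overshoots the left boundary

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Each gene alone correlates about 0.58 with the target; their sum,
# [1, 2, 2, 1], is proportional to the target's deviations and correlates 1.0.
single = (corr(g1, target), corr(g2, target))
combined = corr(g1 + g2, target)
```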
bshanks@38 431 Areas which can be identified by single genes
bshanks@39 432 todo
bshanks@43 433 Underexpression of a gene can serve as a marker
bshanks@39 434 todo
bshanks@39 435 Specific to Aim 1 (and Aim 3)
bshanks@39 436 Forward stepwise logistic regression todo
bshanks@30 437 SVM on all genes at once
bshanks@30 438 In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical
bshanks@53 439 surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%21. As noted above,
bshanks@30 440 however, a classifier that looks at all the genes at once isn’t practically useful.
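This all-genes baseline can be sketched with scikit-learn (synthetic stand-in data; the real input is the pixels-by-genes expression matrix and the per-pixel area labels):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in: each "area" gets one signature gene shifted upward.
rng = np.random.default_rng(0)
n_pixels, n_genes, n_areas = 300, 50, 4
labels = rng.integers(0, n_areas, n_pixels)          # area label per pixel
expr = rng.normal(size=(n_pixels, n_genes))
expr[:, :n_areas] += np.eye(n_areas)[labels] * 2.0   # inject a signature gene per area

# 5-fold cross-validated accuracy of a linear SVM on all genes at once.
scores = cross_val_score(SVC(kernel="linear"), expr, labels, cv=5)
```

On the real data this kind of classifier sets an accuracy ceiling but does not yield a small, usable gene panel.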
bshanks@30 441 The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many
bshanks@33 442 of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task
bshanks@30 443 combines feature selection with supervised learning.
bshanks@30 444 Decision trees
bshanks@30 445 todo
bshanks@30 446 Specific to Aim 2 (and Aim 3)
bshanks@30 447 Raw dimensionality reduction results
bshanks@30 448 todo
bshanks@30 449 (might want to incld nnMF since mentioned above)
bshanks@41 450 _________________________________________
bshanks@53 451 20“mitochondrial translational initiation factor 2”; EntrezGene ID 76784
bshanks@53 452 21 5-fold cross-validation.
bshanks@30 453 Dimensionality reduction plus K-means or spectral clustering
bshanks@30 454 Many areas are captured by clusters of genes
bshanks@40 455 todo
bshanks@40 456 todo
bshanks@30 457 Research plan
bshanks@42 458 Further work on flatmapping
bshanks@42 459 In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo),
bshanks@42 460 or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but
bshanks@42 461 in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
bshanks@42 462 In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal
bshanks@53 463 for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret[5]) with
bshanks@42 464 mappings which preserve angle (conformal maps).
bshanks@42 465 Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional.
bshanks@42 466 If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D
bshanks@42 467 structure seems to be wrong.
bshanks@30 468 todo amongst other things:
bshanks@30 469 Develop algorithms that find genetic markers for anatomical regions
bshanks@30 470 1.Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise,
bshanks@30 471 geometric, and information-theoretic measures.
bshanks@30 472 2.Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining
bshanks@30 473 the scoring measures developed, we will rank the genes by their ability to delineate each area.
bshanks@30 474 3.Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any
bshanks@30 475 single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily
bshanks@30 476 combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised
bshanks@33 477 learning techniques which use soft constraints to minimize the number of features, such as sparse support vector
bshanks@30 478 machines.
bshanks@33 479 4.Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult
bshanks@33 480 to identify because the boundaries are misdrawn, or because it does not “really” exist as a single area, at least on the
bshanks@30 481 genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its
bshanks@30 482 boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create
bshanks@30 483 a larger area which can be fit.
bshanks@51 484 # Linear discriminant analysis
bshanks@30 485 Apply these algorithms to the cortex
bshanks@30 486 1.Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert
bshanks@30 487 between SEV, NIFTI and MATLAB formats.
bshanks@30 488 2.Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
bshanks@30 489 3.Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
bshanks@30 490 4.Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify
bshanks@30 491 that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
bshanks@30 492 Develop algorithms to suggest a division of a structure into anatomical parts
bshanks@30 493 1.Explore dimensionality reduction algorithms applied to pixels: including TODO
bshanks@30 494 2.Explore dimensionality reduction algorithms applied to genes: including TODO
bshanks@30 495 3.Explore clustering algorithms applied to pixels: including TODO
bshanks@30 496 4.Explore clustering algorithms applied to genes: including gene shaving, TODO
bshanks@30 497 5.Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
bshanks@30 498 6.Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
bshanks@51 499 # Linear discriminant analysis
bshanks@51 500 # jbt, coclustering
bshanks@51 501 # self-organizing map
bshanks@53 502 # confirm with EMAGE, GeneAtlas, GENSAT, etc, to fight overfitting
bshanks@53 503 # compare using clustering scores
bshanks@33 504 Bibliography & References Cited
bshanks@53 505 [1]Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F.
bshanks@53 506 Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression
bshanks@53 507 profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
bshanks@53 508 [2]George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene
bshanks@53 509 expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
bshanks@53 510 [3]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah
bshanks@53 511 Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41,
bshanks@53 512 2005.
bshanks@53 513 [4]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W.
bshanks@53 514 Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse
bshanks@53 515 brain section obtained using voxelation. Physiol. Genomics, 30(3):313&#8211;321, August 2007.
bshanks@53 516 [5]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for
bshanks@33 517 surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001.
bshanks@33 518 PMID: 11522765.
bshanks@53 519 [6]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J.
bshanks@44 520 Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the
bshanks@44 521 central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
bshanks@53 522 [7]Jano Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression
bshanks@46 523 Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg,
bshanks@46 524 2008.
bshanks@53 525 [8]Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A High-Resolution anatomical framework of the neonatal mouse brain
bshanks@53 526 for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
bshanks@53 527 [9]Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung,
bshanks@44 528 Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep
bshanks@44 529 Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic
bshanks@44 530 and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
bshanks@53 531 [10]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M
bshanks@44 532 Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson,
bshanks@44 533 Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat
bshanks@44 534 Neurosci, 12(3):356–362, March 2009.
bshanks@53 535 [11]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2 edition, July
bshanks@36 536 2001.
bshanks@53 537 [12]Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A.
bshanks@53 538 Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl.
bshanks@53 539 Acids Res., 35(suppl_1):D618–623, 2007.
bshanks@53 540 [13]Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Dou-
bshanks@53 541 glas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra
bshanks@53 542 Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information network: the zebrafish model organism
bshanks@53 543 database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
bshanks@53 544 [14]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
bshanks@53 545 [15]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud,
bshanks@33 546 Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones,
bshanks@33 547 Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–
bshanks@33 548 1021, December 2008.
bshanks@53 549 [16]Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen
bshanks@53 550 Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination
bshanks@53 551 of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002.
bshanks@53 552 PMC151190.
bshanks@53 553 [17]Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414/2007
bshanks@53 554 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
bshanks@53 555 [18]Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry,
bshanks@44 556 Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas
bshanks@44 557 of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
bshanks@53 558 [19]Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse
bshanks@44 559 embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
bshanks@53 560 [20]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa Agar-
bshanks@44 561 wala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood, Robert Baertsch,
bshanks@44 562 Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby,
bshanks@44 563 Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler,
bshanks@44 564 Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chinwalla, Deanna M Church,
bshanks@44 565 Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R Copley, Alan Coulson, Olivier Couronne,
bshanks@44 566 James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri,
bshanks@44 567 Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M
bshanks@44 568 Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes, Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A
bshanks@44 569 Fewell, Paul Flicek, Karen Foley, Wayne N Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage,
bshanks@44 570 Richard A Gibbs, Gustavo Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves,
bshanks@44 571 Eric D Green, Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki,
bshanks@44 572 LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard, Adrienne
bshanks@44 573 Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal,
bshanks@44 574 Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent,
bshanks@44 575 Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David Kulp, Tom Landers, J P
bshanks@44 576 Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd, Susan Lucas, Bin Ma, Donna R
bshanks@44 577 Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer, Megan McCarthy, W Richard McCombie,
bshanks@44 578 Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller,
bshanks@44 579 Tracie L Miner, Emmanuel Mongin, Kate T Montgomery, Michael Morgan, Richard Mott, James C Mullikin, Donna M
bshanks@44 580 Muzny, William E Nash, Joanne O Nelson, Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J
bshanks@44 581 O'Connor, Yasushi Okazaki, Karen Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane
bshanks@44 582 Peterson, Pavel Pevzner, Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter,
bshanks@44 583 Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos,
bshanks@44 584 Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven Seaman,
bshanks@44 585 Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer, Guy Slater, Arian
bshanks@44 586 Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles Sugnet, Mikita Suyama,
bshanks@44 587 Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp, Catherine Ucla, Abel Ureta-Vidal,
bshanks@44 588 Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie Wall, Ryan J Weber, Robert B Weiss, Michael C
bshanks@44 589 Wendl, Anthony P West, Kris Wetterstrand, Raymond Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey,
bshanks@44 590 Sophie Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang,
bshanks@44 591 Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and comparative analysis of the mouse
bshanks@44 592 genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.
bshanks@33 593
bshanks@33 594 _______________________________________________________________________________________________________
bshanks@30 595 Material not yet placed (more is scattered through grant-oldtext):
bshanks@16 596 Principle 4: Work in 2-D whenever possible
bshanks@33 597 —
bshanks@33 598 note:
bshanks@33 599 do we need citations for the claims "no known markers" and "impressive results"?
bshanks@36 600 two hemis
bshanks@33 601
bshanks@33 602