1 Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, or in situ transgenic reporters allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:
(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions

(2) develop an algorithm to suggest new ways of carving up a structure into anatomical regions, based on spatial patterns in gene expression

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in an open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
Background and significance

Aim 1

Machine learning terminology: supervised learning
The task of looking for marker genes for anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the regions, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its region).

Therefore, an understanding of the relationship between combinations of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).

The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of machine learning procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used in the construction of the classifier is called training data.

In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
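To make the terminology concrete, the following minimal sketch trains a nearest-centroid classifier on labeled voxels and applies it to an unseen voxel. All data, gene counts, and region names here are synthetic and purely illustrative; nothing about the eventual choice of learning algorithm is implied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data (illustrative only): 200 voxels ("instances"),
# each carrying the expression levels of 3 hypothetical genes ("features").
expression = rng.random((200, 3))

# Suppose region "A" is wherever gene 0 exceeds gene 1; each voxel's
# region is its class label.
labels = np.where(expression[:, 0] > expression[:, 1], "A", "B")

# Training: construct a minimal classifier that memorizes the mean
# expression profile (centroid) of each region.
centroids = {r: expression[labels == r].mean(axis=0) for r in ("A", "B")}

def classify(voxel):
    """The classifier: map an instance (a voxel's expression profile)
    to a label (the region whose centroid is nearest)."""
    return min(centroids, key=lambda r: np.linalg.norm(voxel - centroids[r]))

print(classify(np.array([0.9, 0.1, 0.5])))  # a gene-0-high, gene-1-low voxel
```

Any supervised learning algorithm could stand in for the nearest-centroid rule; the point is only the instance/feature/label framing.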
Each gene expression level is called a feature, and the selection of which genes^1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
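As an illustration, a greedy forward-selection loop driven by a set-valued score might look like the sketch below. The toy data, the binarization, and the union-based matching score are all invented for the example; our actual scoring measures are discussed elsewhere. Note that neither gene 0 nor gene 1 matches the target region alone, but the pair together does.

```python
import numpy as np

# Toy data (illustrative): a binary target region over 6 voxels, and
# binarized expression of 4 hypothetical genes over the same voxels.
target = np.array([1, 1, 1, 0, 0, 0])
genes = np.array([
    [1, 1, 0, 0, 0, 0],   # gene 0: covers part of the region
    [0, 0, 1, 0, 0, 0],   # gene 1: covers the rest of the region
    [1, 1, 1, 1, 1, 1],   # gene 2: expressed everywhere (useless alone)
    [0, 0, 0, 1, 1, 0],   # gene 3: mostly outside the region
])

def score(selected):
    """Set-valued score: how well the union of the selected genes'
    expression matches the target region (fraction of voxels agreeing)."""
    if not selected:
        return 0.0
    union = genes[selected].max(axis=0)
    return float((union == target).mean())

def greedy_select(k):
    """Stepwise ("greedy") forward selection: at each step, add the gene
    that most raises the set score; stop early if no gene helps."""
    selected = []
    for _ in range(k):
        candidates = [g for g in range(len(genes)) if g not in selected]
        best = max(candidates, key=lambda g: score(selected + [g]))
        if score(selected + [best]) <= score(selected):
            break
        selected.append(best)
    return selected

print(greedy_select(3))  # genes 0 and 1 jointly mark the region exactly
```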
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
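The distinction can be made concrete with a small sketch. The one-dimensional “image”, the particular sub-scores, and the sum aggregation below are illustrative inventions, not our proposed measures.

```python
import numpy as np

# A one-dimensional "image" (illustrative): a gene's expression along a
# row of six voxels, and a target region occupying the middle of the row.
expr   = np.array([0.1, 0.2, 0.9, 1.0, 0.8, 0.1])
region = np.array([0, 0, 1, 1, 1, 0])

def pointwise_subscore(i):
    """Uses only voxel i itself: high expression inside the region and
    low expression outside both score well."""
    return expr[i] if region[i] else 1.0 - expr[i]

def local_subscore(i):
    """Uses voxel i's neighbors too: rewards a change in expression that
    lines up with a change in region membership (a crude gradient cue)."""
    lo, hi = max(i - 1, 0), min(i + 1, len(expr) - 1)
    return abs(expr[hi] - expr[lo]) * abs(int(region[hi]) - int(region[lo]))

# In both cases, the per-voxel sub-scores are aggregated into one score.
pointwise_score = sum(pointwise_subscore(i) for i in range(len(expr)))
local_score = sum(local_subscore(i) for i in range(len(expr)))
print(round(pointwise_score, 2), round(local_score, 2))
```

The pointwise sub-score consults only voxel i, while the local sub-score also consults i's neighbors; the aggregation step is identical.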
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression. It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results). Therefore, each instance should contain multiple features (genes).
Principle 2: Only look at combinations of small numbers of genes. When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
Principle 3: Use geometry in feature selection. When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.

^1 Strictly speaking, the features are gene expression levels, but we’ll call them genes.
Principle 4: Work in 2-D whenever possible. There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work

There is a substantial body of work on the analysis of gene expression data; however, most of this work concerns gene expression data which is not fundamentally spatial.

As noted above, there has been much work on both supervised and unsupervised learning, and there are many available algorithms for each. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may be necessary in order to achieve the best results in this application.
We are aware of three existing efforts to find marker genes in spatial gene expression data using automated methods.

[?] describes GeneAtlas. GeneAtlas allows the user to construct a search query by freely demarcating one or two 2-D regions on sagittal slices, and then specifying either the strength of expression or the name of another gene whose expression pattern is to be matched. GeneAtlas differs from our Aim 1 in at least two ways. First, GeneAtlas finds only single genes, whereas we will also look for combinations of genes^2. Second, at least for the custom spatial search, GeneAtlas appears to use a simple pointwise scoring method (strength of expression), whereas we will also use geometric metrics such as gradient similarity.
[2] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components:

* Gene Finder: The user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. (Note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures.)

* Correlation: The user selects a seed voxel and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel.

* Clusters: AGEA includes a precomputed hierarchical clustering of voxels, based on a recursive bifurcation algorithm with correlation as the similarity metric.

Gene Finder differs from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score^3, whereas we will also use geometric scores such as gradient similarity. The Preliminary Data section contains evidence that each of our three choices is the right one.

In summary, none of the previous projects explores combinations of marker genes, and none of their publications compare the results obtained by using different algorithms or scoring methods.
Aim 2

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same region have similar gene expression profiles, at least compared to voxels from other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

^2 See Preliminary Data for an example of an area which cannot be marked by any single gene in the dataset, but which can be marked by a combination.

^3 “Expression energy ratio”, which captures overexpression.
It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
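As a sketch of what hierarchical clustering of voxels might look like, the example below runs SciPy's agglomerative clustering on synthetic data; the data are invented, and this particular algorithm merely stands in for whichever clustering scheme we ultimately adopt.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Synthetic instances (illustrative): 30 voxels drawn from three regions,
# each region with its own mean expression profile over 5 hypothetical genes.
means = np.array([[1., 0., 0., 1., 0.],
                  [0., 1., 0., 0., 1.],
                  [0., 0., 1., 1., 1.]])
voxels = np.vstack([m + 0.1 * rng.standard_normal((10, 5)) for m in means])

# Agglomerative hierarchical clustering with correlation as the
# (dis)similarity measure. The result is a tree, not a single partition:
# cutting the tree at different levels gives coarser or finer region sets.
tree = linkage(voxels, method="average", metric="correlation")
coarse = fcluster(tree, t=2, criterion="maxclust")  # two large "superregions"
fine = fcluster(tree, t=3, criterion="maxclust")    # three finer regions
print(len(set(coarse)), len(set(fine)))
```

The same tree yields both the coarse and the fine partition, which is exactly the sense in which the outcome is a hierarchy rather than a single set of clusters.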
Similarity scores

A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Spatially contiguous clusters; image segmentation

We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three; there are, however, imaging tasks which use more than three colors, for example multispectral and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction

Unlike in aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
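As one illustrative example of feature extraction, the sketch below reduces synthetic voxel instances with principal component analysis (PCA); the data are invented, and PCA stands in here for whatever dimensionality reduction technique proves best.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic instances (illustrative): 50 voxels with 1,000 gene-expression
# features that actually vary along only 3 underlying directions, plus noise.
latent = rng.standard_normal((50, 3))
mixing = rng.standard_normal((3, 1000))
X = latent @ mixing + 0.01 * rng.standard_normal((50, 1000))

# PCA via the SVD: project the instances onto the top principal components.
# Each reduced feature is a linear combination of all gene expression
# levels, so the reduced features need not correspond to individual genes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[:3].T   # reduced instances: 50 voxels x 3 features

# Fraction of total variance captured by the 3 reduced features.
explained = float((s[:3] ** 2).sum() / (s ** 2).sum())
print(reduced.shape)
```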
Another use for dimensionality reduction is to visualize the relationships between regions. For example, one might want to make a 2-D plot upon which each region is represented by a single point, with the property that regions with similar gene expression profiles are nearby on the plot (that is, the property that the distance between pairs of points on the plot is proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is being applied after clustering, whereas in the previous paragraph we were talking about using dimensionality reduction before clustering.
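For instance, classical multidimensional scaling (MDS) finds such an approximate 2-D arrangement directly from a dissimilarity matrix. In the sketch below, the four "regions" and their dissimilarities are made up for illustration.

```python
import numpy as np

# Illustrative dissimilarities in gene expression between 4 hypothetical
# regions (symmetric; larger = more dissimilar). Regions 0 and 1 are
# similar to each other, as are regions 2 and 3.
D = np.array([[0.0, 1.0, 4.0, 4.1],
              [1.0, 0.0, 4.2, 4.0],
              [4.0, 4.2, 0.0, 1.1],
              [4.1, 4.0, 1.1, 0.0]])

# Classical MDS: find 2-D coordinates whose pairwise distances
# approximate the given dissimilarities.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]        # top two eigenpairs
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# The embedding should place each similar pair of regions close together.
d01 = np.linalg.norm(coords[0] - coords[1])
d02 = np.linalg.norm(coords[0] - coords[2])
print(d01 < d02)
```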
Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out^4. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common of these regions as the final clusters. In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
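The following sketch illustrates this procedure on deterministic toy data; the genes, the one-bit "noise", the simple agreement threshold, and the majority vote are all invented for illustration and are not our proposed method.

```python
import numpy as np

# Toy data (illustrative): binarized expression of 9 hypothetical genes
# over 12 voxels. Genes 0-5 each pick out voxels 0-5 (with one noisy
# voxel apiece); genes 6-8 pick out voxels 6-11.
region_a = np.array([1] * 6 + [0] * 6)
genes = []
for i in range(6):
    g = region_a.copy()
    g[(i + 3) % 12] ^= 1          # one "measurement error" per gene
    genes.append(g)
for i in range(3):
    g = 1 - region_a
    g[i] ^= 1
    genes.append(g)

# Step 1: greedily cluster together genes whose spatial patterns mostly
# agree (each cluster's first member serves as its exemplar).
clusters = []
for g in genes:
    for c in clusters:
        if (g == c[0]).mean() >= 0.75:
            c.append(g)
            break
    else:
        clusters.append([g])

# Step 2: the more genes a cluster has, the more support its common
# region has; take the largest cluster's voxel-wise majority vote as a
# candidate region.
largest = max(clusters, key=len)
candidate = (np.mean(largest, axis=0) > 0.5).astype(int)
print(len(clusters), candidate.tolist())
```

Despite each gene being individually noisy, the majority vote over the largest gene cluster recovers the underlying region exactly.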
Related work

We are aware of three existing efforts to cluster spatial gene expression data.

^4 This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes.
[5] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of such research. We have run NNMF on the cortical dataset^5, and while the results are promising (see Preliminary Data), we think that it will be possible to find an even better method. In addition, this paper described a visual screening of the data, specifically a visual analysis of 6000 genes with the primary purpose of observing how the spatial patterns of their expression coincided with the regions that had been identified by NNMF. We propose to do this sort of screening automatically, which would yield an objective, quantifiable result, rather than qualitative observations.

AGEA’s[2] hierarchical clustering differs from our Aim 2 in at least two ways. First, AGEA uses perhaps the simplest possible similarity score (correlation), and does no dimensionality reduction before calculating similarity. While it is possible that a more complex system will not do any better than this, we believe further exploration of alternative methods of scoring and dimensionality reduction is warranted. Second, AGEA did not look at clusters of genes; in Preliminary Data we have shown that clusters of genes may identify interesting spatial regions such as cortical areas.

[?] todo

In summary, although these projects obtained hierarchical clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found.
Aim 3

Background

The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a many-layered cake.

Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[4] on the one hand, and Paxinos and Franklin[3] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
The Allen Mouse Brain Atlas dataset

The Allen Mouse Brain Atlas (ABA) data was produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed in order to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.

Next, an automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain[2].

Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes[6]. The ABA contains data on about 20,000 genes in sagittal sections, of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA, because the sagittal data does not cover the entire cortex and has greater registration error[2]. Genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern”[2].
The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT[?], GenePaint[?], its sister project GeneAtlas[?], BGEM[?], EMAGE[?], and EurExpress (http://www.eurexpress.org/ee/; EurExpress data is also entered into EMAGE), todo. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and only the ABA and EMAGE make this form of data available for public download from their websites. Many of these resources focus on developmental gene expression.
Significance

^5 We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.

The method developed in aim (3) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then today’s cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.

While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.
|
bshanks@30 | 259 Related work
|
[2] describes the application of AGEA to the cortex. The paper reports interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes nor suggests a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to cortical areas.
In both cases, the root cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore a pairwise voxel correlation clustering algorithm will always create clusters representing cortical layers, not areas. This is why the hierarchical clustering does not find cortical areas.^6 The reason that Gene Finder cannot find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder itself chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
|
In summary, for all three aims: (a) none of the previous projects explores combinations of marker genes; (b) there has been almost no comparison of different algorithms or scoring methods; and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
|
bshanks@43 | 275 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker
|
bshanks@43 | 276 genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
|
bshanks@42 | 277 _________________________________________
|
^6 There are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these.
|
bshanks@30 | 280 Preliminary work
|
bshanks@30 | 281 Format conversion between SEV, MATLAB, NIFTI
|
bshanks@35 | 282 We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created
|
bshanks@38 | 283 software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s file formats.
|
bshanks@30 | 284 Flatmap of cortex
|
bshanks@36 | 285 We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided
|
bshanks@36 | 286 the cortex into hemispheres.
|
bshanks@42 | 287 Using Caret[1], we created a mesh representation of the surface of the selected voxels. For each gene, for each node of
|
bshanks@42 | 288 the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened
|
bshanks@42 | 289 the cortex, creating a two-dimensional mesh.
|
bshanks@36 | 290 We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid
|
bshanks@36 | 291 into a MATLAB matrix.
|
bshanks@36 | 292 We manually traced the boundaries of each cortical area from the ABA coronal reference atlas slides. We then converted
|
bshanks@42 | 293 these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-d
|
bshanks@42 | 294 mesh, and then onto the grid, and then we converted the region data into MATLAB format.
|
bshanks@37 | 295 At this point, the data is in the form of a number of 2-D matrices, all in registration, with the matrix entries representing
|
bshanks@37 | 296 a grid of points (pixels) over the cortical surface:
|
bshanks@36 | 297 ∙A 2-D matrix whose entries represent the regional label associated with each surface pixel
|
bshanks@36 | 298 ∙For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
|
bshanks@38 | 299 We created a normalized version of the gene expression data by subtracting each gene’s mean expression level (over all
|
bshanks@38 | 300 surface pixels) and dividing each gene by its standard deviation.
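The normalization described above is a per-gene z-score over the surface pixels. As a minimal illustrative sketch in Python/NumPy (the array names and toy data are ours, not from the toolkit):

```python
import numpy as np

# expr: one 2-D flatmap per gene, stacked as (n_genes, n_rows, n_cols).
# Entries are average expression under each surface pixel (toy data here).
rng = np.random.default_rng(0)
expr = rng.gamma(shape=2.0, scale=1.0, size=(3, 4, 5))

# Subtract each gene's mean (over all surface pixels) and divide by its
# standard deviation, i.e. z-score each gene independently.
means = expr.mean(axis=(1, 2), keepdims=True)
stds = expr.std(axis=(1, 2), keepdims=True)
expr_norm = (expr - means) / stds

# Each gene now has mean 0 and standard deviation 1 over the surface.
print(expr_norm.mean(axis=(1, 2)))
```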
|
The features and the target area are both functions on the surface pixels. They can be regarded as scalar fields over the space of surface pixels; alternatively, they can be thought of as images which can be displayed on the flatmapped surface.
|
bshanks@37 | 303 To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each
|
bshanks@37 | 304 cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in
|
bshanks@37 | 305 different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines
|
bshanks@37 | 306 that allow the depth of the ROI for volume-to-surface projection to vary.
|
bshanks@36 | 307 In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually
|
bshanks@36 | 308 demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
|
bshanks@38 | 309 Feature selection and scoring methods
|
bshanks@38 | 310 Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance
|
bshanks@38 | 311 as either a member of a particular anatomical area, or not. The target area can be represented as a binary mask over the
|
bshanks@38 | 312 surface pixels.
|
One class of feature-selection scoring methods consists of those which calculate some sort of "match" between each gene image and the target image. The genes which match best are good candidates for features.
|
bshanks@38 | 315 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between
|
bshanks@38 | 316 each gene and each cortical area.
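As an illustrative sketch of this score (synthetic data; in the real pipeline the gene images are the flatmapped ABA maps), the Pearson correlation between each gene image and the binary area mask can be computed over pixels:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, h, w = 5, 8, 10
genes = rng.normal(size=(n_genes, h, w))   # flatmapped expression images (toy)
mask = np.zeros((h, w))                    # binary image of the target area
mask[2:6, 3:8] = 1.0

# Make gene 0 resemble the mask so that it should score highest.
genes[0] += 3.0 * mask

# Pearson correlation between each gene image and the mask, over pixels.
g = genes.reshape(n_genes, -1)
m = mask.ravel()
g = g - g.mean(axis=1, keepdims=True)
m = m - m.mean()
scores = (g @ m) / (np.linalg.norm(g, axis=1) * np.linalg.norm(m))

print(scores.argmax())  # gene 0, the planted match
```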
|
bshanks@39 | 317 todo: fig
|
bshanks@38 | 318 Conditional entropy An information-theoretic scoring method is to find features such that, if the features (gene
|
bshanks@38 | 319 expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty,
|
bshanks@38 | 320 so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution
|
bshanks@38 | 321 to which we are referring is the probability distribution over the population of surface pixels.
|
The simplest way to use information theory is on discrete data, so we discretized the gene expression data: for each gene, we created five thresholded binary masks of its expression levels, one for each of these thresholds: the mean minus two standard deviations, the mean minus one standard deviation, the mean, the mean plus one standard deviation, and the mean plus two standard deviations.
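This discretization step can be sketched as follows (toy gene image; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
gene = rng.normal(loc=5.0, scale=2.0, size=(8, 10))  # one flatmapped gene image

mu, sd = gene.mean(), gene.std()
thresholds = [mu - 2 * sd, mu - sd, mu, mu + sd, mu + 2 * sd]

# One binary mask per threshold: a pixel is "on" where expression exceeds it.
masks = np.stack([gene > t for t in thresholds])

print(masks.shape)  # (5, 8, 10)
# Raising the threshold can only turn pixels off, so mask sizes shrink.
print(masks.sum(axis=(1, 2)))
```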
|
bshanks@39 | 326 Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression
|
bshanks@39 | 327 binary masks such that the conditional entropy of the target area’s binary mask, conditioned upon the pair of gene expression
|
bshanks@39 | 328 binary masks, is minimized.
|
bshanks@39 | 329 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question,
|
bshanks@39 | 330 “Is this surface pixel a member of the target area?”.
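To illustrate the objective being minimized, the following sketch scores every pair of binary gene masks by the conditional entropy of a target mask given that pair (an exhaustive pair search stands in here for the forward stepwise procedure, and all masks are synthetic: two of them jointly recover the target up to 2% noise):

```python
import numpy as np
from itertools import combinations

def cond_entropy(target, features):
    """H(target | features) in bits; target: (n,) bool, features: (k, n) bool."""
    n = target.size
    # Encode each pixel's combination of feature values as an integer code.
    codes = np.zeros(n, dtype=int)
    for f in features:
        codes = codes * 2 + f.astype(int)
    h = 0.0
    for c in np.unique(codes):
        sel = target[codes == c]
        p_c = sel.size / n
        p1 = sel.mean()
        for p in (p1, 1.0 - p1):
            if p > 0:
                h -= p_c * p * np.log2(p)
    return h

rng = np.random.default_rng(3)
n = 400
target = rng.random(n) < 0.3
# Toy construction: masks 1 and 2 together recover the target (their OR equals
# the target up to 2% flipped pixels); the other masks are unrelated coin flips.
g0 = target ^ (rng.random(n) < 0.02)
g1 = rng.random(n) < 0.5
masks = np.stack([g1, g0 & g1, g0 & ~g1,
                  rng.random(n) < 0.5, rng.random(n) < 0.5])

best = min(combinations(range(len(masks)), 2),
           key=lambda pair: cond_entropy(target, masks[list(pair)]))
print(best)
```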
|
bshanks@38 | 331
|
bshanks@41 | 332
|
bshanks@41 | 333
|
bshanks@41 | 334 Figure 1: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression.
|
bshanks@41 | 335 The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From
|
left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr.
|
bshanks@39 | 337 todo: fig
|
Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise, local scoring method to detect when a gene has a pattern of expression with a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity".
|
bshanks@40 | 342 One might say that gradient similarity attempts to measure how much the border of the area of gene expression and
|
bshanks@40 | 343 the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its
|
bshanks@40 | 344 maximum value to zero, the spatial pattern of a gene’s expression often does not have a discrete border. Therefore, instead
|
bshanks@40 | 345 of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images
|
(i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction. The formula is:
|
∑_{pixel ∈ pixels} cos(abs(∠∇1 − ∠∇2)) ⋅ (|∇1| + |∇2|)/2 ⋅ (pixel_value1 + pixel_value2)/2
|
bshanks@40 | 352 where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of
|
bshanks@41 | 353 image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the
|
bshanks@40 | 354 value of the current pixel in image i.
|
bshanks@40 | 355 The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar,
|
bshanks@40 | 356 then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a
|
bshanks@40 | 357 similar direction (because the borders are similar).
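Our reading of this formula can be sketched directly in Python/NumPy (gradients via np.gradient; the function name and the toy test pattern are illustrative, not from our toolkit):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two 2-D images (scalar fields): at each
    pixel, agreement of gradient directions, weighted by the mean gradient
    magnitude and the mean pixel value, summed over pixels."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    ang1, ang2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    term = (np.cos(np.abs(ang1 - ang2))
            * (mag1 + mag2) / 2
            * (img1 + img2) / 2)
    return term.sum()

# A pattern's border matches itself better than a shifted copy of itself.
base = np.zeros((20, 20))
base[5:12, 5:12] = 1.0
shifted = np.roll(base, 6, axis=1)
print(gradient_similarity(base, base) > gradient_similarity(base, shifted))  # True
```

Note that the score is symmetric in the two images, as stated above.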
|
bshanks@43 | 358 Gradient similarity provides information complementary to correlation
|
To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. 1. The top row of Fig. 1 displays the 3 genes which best match area AUD according to a pointwise method;^7 the bottom row displays the 3 genes which best match AUD according to a method which considers local geometry.^8 The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression extends over many areas without a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which rank highly under both the pointwise and the border criteria, such as Aph1a in this example, may be particularly good markers. None of these genes is, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
|
bshanks@43 | 368 Combinations of multiple genes are useful
|
Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1^9 is the best-fitting single gene for predicting whether or not a pixel on
|
bshanks@41 | 371 _________________________________________
|
^7 For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.
^8 For each gene, the gradient similarity (see the "Gradient similarity" section above) between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.
^9 "WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652
|
bshanks@41 | 378
|
bshanks@41 | 379
|
bshanks@41 | 380
|
bshanks@41 | 381 Figure 2: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel’s value on the lower left is the sum
|
bshanks@41 | 382 of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the
|
bshanks@41 | 383 top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right.
|
bshanks@41 | 384 The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells
|
bshanks@41 | 385 underneath each pixel, with red meaning a lot of expression and blue meaning little.
|
the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 2 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).
Gene mtif2^10 is shown in the upper right of Figure 2. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower left of Figure 2. This combination captures area MO much better than any single gene.
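The effect can be sketched with toy stand-ins for the two genes (correlation with the area mask serving as the match score; the geometry below is invented for illustration, not taken from the real wwc1/mtif2 maps):

```python
import numpy as np

h, w = 10, 12
area = np.zeros((h, w))
area[3:8, 4:10] = 1.0                  # stand-in for the target area mask

# Two toy genes, each covering only part of the area (cf. wwc1 / mtif2):
gene_a = np.zeros((h, w))
gene_a[3:8, 4:7] = 1.0                 # covers one half of the area
gene_b = np.zeros((h, w))
gene_b[3:8, 7:10] = 1.0                # covers the other half

def corr(img, target):
    a, b = img.ravel(), target.ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

combo = gene_a + gene_b                # pixelwise sum, as in Figure 2
print(corr(gene_a, area), corr(gene_b, area), corr(combo, area))
```

Here the summed image matches the area mask better than either gene alone.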
|
bshanks@38 | 393 Areas which can be identified by single genes
|
bshanks@39 | 394 todo
|
bshanks@43 | 395 Underexpression of a gene can serve as a marker
|
bshanks@39 | 396 todo
|
bshanks@39 | 397 Specific to Aim 1 (and Aim 3)
|
bshanks@39 | 398 Forward stepwise logistic regression todo
|
bshanks@30 | 399 SVM on all genes at once
|
bshanks@30 | 400 In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical
|
surface pixels based on their gene expression profiles. We achieved a classification accuracy of about 81%.^11 As noted above,
|
bshanks@30 | 402 however, a classifier that looks at all the genes at once isn’t practically useful.
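The evaluation protocol (5-fold cross-validation over surface pixels) can be sketched as follows; a trivial nearest-centroid classifier on synthetic profiles stands in for the SVM, since the point here is the cross-validation scheme, not the classifier:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 30
y = rng.random(n) < 0.5                          # pixel in target area or not
X = rng.normal(size=(n, d)) + 2.0 * y[:, None]   # toy, well-separated profiles

def nearest_centroid_predict(X_tr, y_tr, X_te):
    # Classify each test pixel by its nearer class centroid.
    c0 = X_tr[~y_tr].mean(axis=0)
    c1 = X_tr[y_tr].mean(axis=0)
    d0 = np.linalg.norm(X_te - c0, axis=1)
    d1 = np.linalg.norm(X_te - c1, axis=1)
    return d1 < d0

# 5-fold cross-validation: every pixel is scored by a model that never saw it.
folds = np.array_split(rng.permutation(n), 5)
accs = []
for k in range(5):
    te = folds[k]
    tr = np.concatenate([folds[j] for j in range(5) if j != k])
    pred = nearest_centroid_predict(X[tr], y[tr], X[te])
    accs.append((pred == y[te]).mean())
print(np.mean(accs))
```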
|
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
|
bshanks@30 | 406 Decision trees
|
bshanks@30 | 407 todo
|
bshanks@30 | 408 Specific to Aim 2 (and Aim 3)
|
bshanks@30 | 409 Raw dimensionality reduction results
|
bshanks@30 | 410 todo
|
(might want to include NMF, since mentioned above)
|
bshanks@41 | 412 _________________________________________
|
^10 "mitochondrial translational initiation factor 2"; EntrezGene ID 76784
^11 5-fold cross-validation.
|
bshanks@30 | 415 Dimensionality reduction plus K-means or spectral clustering
|
bshanks@30 | 416 Many areas are captured by clusters of genes
|
bshanks@40 | 417 todo
|
bshanks@40 | 418 todo
|
bshanks@30 | 419 Research plan
|
bshanks@42 | 420 Further work on flatmapping
|
bshanks@42 | 421 In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo),
|
bshanks@42 | 422 or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but
|
bshanks@42 | 423 in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
|
bshanks@42 | 424 In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal
|
bshanks@42 | 425 for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret[1]) with
|
bshanks@42 | 426 mappings which preserve angle (conformal maps).
|
bshanks@42 | 427 Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional.
|
bshanks@42 | 428 If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D
|
bshanks@42 | 429 structure seems to be wrong.
|
bshanks@30 | 430 todo amongst other things:
|
bshanks@30 | 431 Develop algorithms that find genetic markers for anatomical regions
|
bshanks@30 | 432 1.Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise,
|
bshanks@30 | 433 geometric, and information-theoretic measures.
|
bshanks@30 | 434 2.Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining
|
bshanks@30 | 435 the scoring measures developed, we will rank the genes by their ability to delineate each area.
|
bshanks@30 | 436 3.Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any
|
bshanks@30 | 437 single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily
|
bshanks@30 | 438 combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised
|
bshanks@33 | 439 learning techniques which use soft constraints to minimize the number of features, such as sparse support vector
|
bshanks@30 | 440 machines.
|
bshanks@33 | 441 4.Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult
|
bshanks@33 | 442 to identify because the boundaries are misdrawn, or because it does not “really” exist as a single area, at least on the
|
bshanks@30 | 443 genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its
|
bshanks@30 | 444 boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create
|
bshanks@30 | 445 a larger area which can be fit.
|
bshanks@30 | 446 Apply these algorithms to the cortex
|
bshanks@30 | 447 1.Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert
|
bshanks@30 | 448 between SEV, NIFTI and MATLAB formats.
|
bshanks@30 | 449 2.Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
|
bshanks@30 | 450 3.Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
|
bshanks@30 | 451 4.Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify
|
bshanks@30 | 452 that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
|
bshanks@30 | 453 Develop algorithms to suggest a division of a structure into anatomical parts
|
bshanks@30 | 454 1.Explore dimensionality reduction algorithms applied to pixels: including TODO
|
bshanks@30 | 455 2.Explore dimensionality reduction algorithms applied to genes: including TODO
|
bshanks@30 | 456 3.Explore clustering algorithms applied to pixels: including TODO
|
bshanks@30 | 457 4.Explore clustering algorithms applied to genes: including gene shaving, TODO
|
5.Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
6.Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
|
bshanks@33 | 460 Bibliography & References Cited
|
bshanks@33 | 461 [1]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-
|
bshanks@33 | 462 based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001.
|
bshanks@33 | 463 PMID: 11522765.
|
bshanks@33 | 464 [2]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M Sunkin,
|
bshanks@33 | 465 Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S
|
bshanks@33 | 466 Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci,
|
bshanks@33 | 467 12(3):356–362, March 2009.
|
bshanks@36 | 468 [3]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2 edition, July
|
bshanks@36 | 469 2001.
|
bshanks@36 | 470 [4]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
|
bshanks@36 | 471 [5]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud,
|
bshanks@33 | 472 Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones,
|
bshanks@33 | 473 Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–
|
bshanks@33 | 474 1021, December 2008.
|
bshanks@36 | 475 [6]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa Agarwala,
|
bshanks@36 | 476 Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood, Robert Baertsch, Jonathon
|
bshanks@36 | 477 Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby, Nicolas Bray,
|
bshanks@36 | 478 Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler, Robert D Campbell,
|
bshanks@36 | 479 Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chinwalla, Deanna M Church, Michele Clamp, Christopher
|
bshanks@36 | 480 Clee, Francis S Collins, Lisa L Cook, Richard R Copley, Alan Coulson, Olivier Couronne, James Cuff, Val Curwen, Tim
|
bshanks@36 | 481 Cutts, Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri, Emmanouil T Dermitzakis, Colin
|
bshanks@36 | 482 Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M Dunn, Sean R Eddy, Laura Elnitski,
|
bshanks@36 | 483 Richard D Emes, Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A Fewell, Paul Flicek, Karen Foley, Wayne N
|
bshanks@36 | 484 Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage, Richard A Gibbs, Gustavo Glusman, Sante
|
bshanks@36 | 485 Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves, Eric D Green, Simon Gregory, Roderic Guig,
|
bshanks@36 | 486 Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki, LaDeana W Hillier, Angela Hinrichs, Wratko
|
bshanks@36 | 487 Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard, Adrienne Hunt, Ian Jackson, David B Jaffe, L Steven
|
bshanks@36 | 488 Johnson, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal, Elinor K Karlsson, Donna Karolchik, Arkadiusz
|
bshanks@36 | 489 Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent, Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S
|
bshanks@36 | 490 Kucherlapati, Edward J Kulbokas, David Kulp, Tom Landers, J P Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia
|
bshanks@36 | 491 Li, Ming Li, Christine Lloyd, Susan Lucas, Bin Ma, Donna R Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli,
|
bshanks@36 | 492 John H Mayer, Megan McCarthy, W Richard McCombie, Stuart McLaren, Kirsten McLay, John D McPherson, Jim
|
bshanks@36 | 493 Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller, Tracie L Miner, Emmanuel Mongin, Kate T Montgomery,
|
bshanks@36 | 494 Michael Morgan, Richard Mott, James C Mullikin, Donna M Muzny, William E Nash, Joanne O Nelson, Michael N
|
bshanks@36 | 495 Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J O’Connor, Yasushi Okazaki, Karen Oliver, Emma Overton-
|
bshanks@36 | 496 Larty, Lior Pachter, Gens Parra, Kymberlie H Pepin, Jane Peterson, Pavel Pevzner, Robert Plumb, Craig S Pohl, Alex
|
bshanks@36 | 497 Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter, Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M
|
bshanks@36 | 498 Roskin, Edward M Rubin, Alistair G Rust, Ralph Santos, Victor Sapojnikov, Brian Schultz, Jrg Schultz, Matthias S
|
bshanks@36 | 499 Schwartz, Scott Schwartz, Carol Scott, Steven Seaman, Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen,
|
bshanks@36 | 500 Sarah Sims, Jonathan B Singer, Guy Slater, Arian Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-
|
bshanks@36 | 501 Thomann, Charles Sugnet, Mikita Suyama, Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John
|
bshanks@36 | 502 Tromp, Catherine Ucla, Abel Ureta-Vidal, Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie Wall,
|
bshanks@36 | 503 Ryan J Weber, Robert B Weiss, Michael C Wendl, Anthony P West, Kris Wetterstrand, Raymond Wheeler, Simon
|
bshanks@36 | 504 Whelan, Jamey Wierzbowski, David Willey, Sophie Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley
|
bshanks@36 | 505 Wyman, Shan Yang, Shiaw-Pyng Yang, Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and
|
bshanks@36 | 506 comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.
|
bshanks@33 | 507
|
bshanks@33 | 508 _______________________________________________________________________________________________________
|
bshanks@30 | 509 stuff i dunno where to put yet (there is more scattered through grant-oldtext):
|
bshanks@16 | 510 Principle 4: Work in 2-D whenever possible
|
bshanks@33 | 511 —
|
bshanks@33 | 512 note:
|
bshanks@33 | 513 do we need to cite: no known markers, impressive results?
|
bshanks@36 | 514 two hemis
|
bshanks@42 | 515 “genomic anatomy” is a name found in the titles of one of the cited papers which seems good
|
bshanks@33 | 516
|
bshanks@33 | 517
|