Specific aims
Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, or in situ transgenic reporters allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:
(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions
(2) develop an algorithm to suggest new ways of carving up a structure into anatomical regions, based on spatial patterns in gene expression
(3) create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.
All algorithms that we develop will be implemented in an open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
Background and significance
Aim 1
Machine learning terminology: supervised learning
The task of looking for marker genes for anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.
If we define the regions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the regions, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its region).
Therefore, the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
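To make this terminology concrete, here is a minimal sketch in Python with scikit-learn (the arrays are hypothetical placeholders, not our actual data, and logistic regression stands in for whatever classifier one chooses):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row (instance) per voxel, one column
# (feature) per gene; y gives each voxel's region (its class label).
rng = np.random.default_rng(0)
X = rng.random((1000, 50))            # expression levels: 1000 voxels x 50 genes
y = rng.integers(0, 4, size=1000)     # region label (0..3) for each voxel

clf = LogisticRegression(max_iter=1000).fit(X, y)  # "training" / "learning"
predicted_regions = clf.predict(X)    # the classifier maps instances to labels
```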
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of machine learning procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used in the construction of the classifier is called training data.
In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
Each gene expression level is called a feature, and the selection of which genes¹ to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a pointwise scoring method.
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Work). Therefore, each instance should contain multiple features (genes); a sketch of a greedy procedure for assembling such combinations follows.
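The following sketch illustrates the stepwise/greedy, score-based selection described above. The score function is an assumed placeholder: any measure that assigns a score to a set of genes (for example, cross-validated classification accuracy) could be plugged in.

```python
def greedy_select(genes, score, max_genes=3):
    """Forward stepwise selection: repeatedly add the gene that most
    improves the score of the selected set. `score` takes a list of
    genes and must return a baseline value (e.g. -inf) for the empty set."""
    selected = []
    for _ in range(max_genes):
        remaining = [g for g in genes if g not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda g: score(selected + [g]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining gene raises the score; stop early
        selected.append(best)
    return selected
```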
Principle 2: Only look at combinations of small numbers of genes When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
Principle 3: Use geometry in feature selection When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Work for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
_________________________________________
¹Strictly speaking, the features are gene expression levels, but we'll call them genes.
Related work
There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which is not fundamentally spatial².
As noted above, there has been much work on both supervised learning and clustering, and there are many available algorithms for each. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may be necessary in order to achieve the best results in this application.
We are aware of five existing efforts to find marker genes from spatial gene expression data using automated methods.
GeneAtlas[1] and EMAGE[11] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similarity score (match score), GeneAtlas appears to use strength of expression, and EMAGE uses Jaccard similarity, which is equal to the number of true pixels in the intersection of the two images, divided by the number of pixels in their union. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that together match a region.
[6] describes AGEA, the "Anatomic Gene Expression Atlas". AGEA has three components:
* Gene Finder: The user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. (Note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures.)
* Correlation: The user selects a seed voxel and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel.
* Clusters: AGEA includes a precomputed hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric.
Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score³, whereas we will also use geometric scores such as gradient similarity. The Preliminary Work section contains evidence that each of our three choices is the right one.
[?] looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA's, this is a pointwise measure (only the mean expression level per pixel is being analyzed); it is not used to look for underexpression, and it does not look for combinations of genes.
[4] describes a technique to find combinations of marker genes to pick out an anatomical region. The authors use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their match score is Jaccard similarity.
In summary, there has been fruitful work on finding marker genes; however, only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
_________________________________________
²By "fundamentally spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which has only a few different locations or which is indexed by anatomical label.
³"Expression energy ratio", which captures overexpression.
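For reference, the Jaccard match score used by EMAGE and by [4] is straightforward to state in code. A minimal sketch (the thresholded images in the usage comment are assumptions for illustration):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two boolean images: |A and B| / |A or B|."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

# e.g., rank genes by how well their thresholded image matches a region mask:
# scores = {g: jaccard(expr_img[g] > threshold, region_mask) for g in genes}
```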
Aim 2
Machine learning terminology: clustering
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
Similarity scores
A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Spatially contiguous clusters; image segmentation
We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Work, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three. (There are, however, imaging tasks which use more than three colors: for example, multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.)
The second, more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.
Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, with the property that regions with similar gene expression profiles are nearby on the plot (that is, the property that the distance between pairs of points in the plot is proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is applied after clustering, whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
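A sketch illustrating both uses, with scikit-learn (the matrices are hypothetical placeholders; the component counts are arbitrary illustrative choices, not tuned values):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

X = np.random.rand(500, 2000)               # hypothetical: 500 pixels x 2000 genes

# Use 1: reduce the per-pixel feature vector before clustering.
X_reduced = PCA(n_components=20).fit_transform(X)   # 500 pixels x 20 features

# Use 2 (after clustering): place each region's mean expression profile on a
# 2-D plane so that distances roughly reflect dissimilarity in expression.
region_profiles = np.random.rand(10, 2000)  # hypothetical: 10 regions x 2000 genes
coords_2d = MDS(n_components=2).fit_transform(region_profiles)
```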
Clustering genes rather than voxels
Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out⁴. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most popular common regions as the final clusters. In Preliminary Work we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.
The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
Related work
We are aware of five existing efforts to cluster spatial gene expression data.
[9] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset⁵, and while the results are promising (see Preliminary Work), we think that it will be possible to find an even better method.
AGEA's[6] hierarchical clustering was described above. EMAGE[11] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. Clustering is hierarchical complete linkage clustering with un-centred correlation as the similarity score.
todo [?]
In an interesting twist, [4] applies their technique for finding combinations of marker genes to the purpose of clustering genes around a "seed gene". They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Those other genes which are found are considered to be related to the seed. The same team also describes a method[10] for finding "association rules" such as, "if a gene is expressed in this voxel, then it is probably also expressed in that voxel". This could be useful as part of a procedure for clustering voxels.
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, or tried to cluster genes first in order to guide the clustering of pixels into spatial regions, or used co-clustering algorithms.
_________________________________________
⁴This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
⁵We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
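To make the "vanilla" NNMF run mentioned in footnote 5 concrete, here is a minimal sketch using scikit-learn's NMF as a stand-in (the nonnegative expression matrix and component count are hypothetical illustrative choices):

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.rand(500, 2000))  # hypothetical nonnegative pixels-x-genes matrix

nmf = NMF(n_components=12, init='nndsvd', max_iter=500)
W = nmf.fit_transform(X)               # (pixels x 12): per-pixel component loadings
H = nmf.components_                    # (12 x genes): each row is a gene signature

# A simple way to read off a clustering on pixels: assign each pixel to its
# dominant component.
labels = W.argmax(axis=1)
```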
Aim 3
Background
The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of many-layered cake.
Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[8] on the one hand, and Paxinos and Franklin[7] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
The Allen Mouse Brain Atlas dataset
The Allen Mouse Brain Atlas (ABA) data was produced by doing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed in order to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
Next, an automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system. In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain[6].
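A sketch of the bookkeeping implied by these numbers (the file names and arrays are hypothetical placeholders; only the grid dimensions come from [6]):

```python
import numpy as np

vol_shape = (67, 41, 58)                 # ABA 200-micron grid: 159,326 voxels
brain_mask = np.load('brain_mask.npy')   # hypothetical boolean mask, shape (67, 41, 58)
expression = np.load('expression.npy')   # hypothetical (n_genes, 67, 41, 58) array

# Instance matrix for learning: one row per in-brain voxel (~51,533 rows),
# one column per gene.
X = expression[:, brain_mask].T
```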
Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes[13]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA, because the sagittal data does not cover the entire cortex, and also has greater registration error[6]. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"[6].
The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT[3], GenePaint[12], its sister project GeneAtlas[1], BGEM[5], EMAGE[11], EurExpress⁶, EADHB⁷, MAMEP⁸, Xenbase⁹, ZFIN[?], Aniseed¹⁰, VisiGene¹¹, GEISHA[?], Fruitfly.org[?], COMPARE[?] todo. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and only the ABA and EMAGE make this form of data available for public download from the website¹². Many of these resources focus on developmental gene expression.
Significance
The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.
The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then today's cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.
While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.
Related work
[6] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas¹³.
In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
___________________
⁶http://www.eurexpress.org/ee/; EurExpress data is also entered into EMAGE
⁷http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html
⁸http://mamep.molgen.mpg.de/index.php
⁹http://xenbase.org/
¹⁰http://aniseed-ibdm.univ-mrs.fr/
¹¹http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources
¹²without prior offline registration
¹³In both cases, the root cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas. This is why the hierarchical clustering does not find most cortical areas (there are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find marker genes for most cortical areas is that in Gene Finder, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
Preliminary work
Format conversion between SEV, MATLAB, NIFTI
We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
Flatmap of cortex
We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres.
Using Caret[2], we created a mesh representation of the surface of the selected voxels. For each gene, for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh.
We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix.
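A sketch of this resampling step — interpolating per-node values from the irregular flattened mesh onto a regular pixel grid (our pipeline does this in MATLAB; scipy stands in here, and the mesh coordinates and values are hypothetical placeholders):

```python
import numpy as np
from scipy.interpolate import griddata

nodes_2d = np.random.rand(5000, 2)     # hypothetical flattened-mesh node coordinates
node_vals = np.random.rand(5000)       # per-node average expression for one gene

# Regular grid of pixel centers covering the flat map.
gx, gy = np.mgrid[0:1:100j, 0:1:100j]
img = griddata(nodes_2d, node_vals, (gx, gy), method='linear')  # 100x100 pixel image
```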
We manually traced the boundaries of each cortical area from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
At this point, the data is in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface:
∙ A 2-D matrix whose entries represent the regional label associated with each surface pixel
∙ For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing each gene by its standard deviation.
The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary.
In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
Feature selection and scoring methods
Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
One class of feature selection scoring methods are those which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.
One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area.
todo: fig
Conditional entropy An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
The simplest way to use information theory is on discrete data, so we discretized the expression of each gene into five thresholded boolean masks: one at the gene's mean expression level, and one each at the mean plus or minus one or two standard deviations.
Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
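A minimal sketch of this search (the masks and target are hypothetical boolean arrays over surface pixels; for clarity, the sketch scores pairs exhaustively rather than reproducing our forward stepwise procedure):

```python
import numpy as np
from itertools import combinations

def conditional_entropy(target, masks):
    """H(target | masks), in bits, over the population of surface pixels.
    target: boolean array; masks: list of boolean arrays of the same size."""
    t = target.ravel()
    keys = np.zeros(t.size, dtype=int)
    for m in masks:                    # encode each pixel's combination of masks
        keys = keys * 2 + m.ravel().astype(int)
    h = 0.0
    for k in np.unique(keys):
        sel = t[keys == k]
        p_k = sel.size / t.size
        for p in (sel.mean(), 1 - sel.mean()):
            if p > 0:
                h -= p_k * p * np.log2(p)
    return h

def best_pair(target, gene_masks):     # gene_masks: {gene_name: boolean mask}
    return min(combinations(gene_masks, 2),
               key=lambda pair: conditional_entropy(
                   target, [gene_masks[g] for g in pair]))
```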
The distribution bshanks@38: to which we are referring is the probability distribution over the population of surface pixels. bshanks@38: The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, bshanks@46: for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression bshanks@40: levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two bshanks@40: standard deviations, the mean plus one standard deviation, the mean plus two standard deviations. bshanks@39: Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression bshanks@46: boolean masks such that the conditional entropy of the target area’s boolean mask, conditioned upon the pair of gene bshanks@46: expression boolean masks, is minimized. bshanks@39: This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, bshanks@39: “Is this surface pixel a member of the target area?”. bshanks@38: bshanks@41: bshanks@41: bshanks@41: Figure 1: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. bshanks@41: The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From bshanks@41: left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr bshanks@39: todo: fig bshanks@39: Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose bshanks@39: pattern of expression did not look similar in shape to the target region. Fort his reason we designed a non-pointwise local bshanks@39: scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar bshanks@40: to the shape of the target region. We call this scoring method “gradient similarity”. bshanks@40: One might say that gradient similarity attempts to measure how much the border of the area of gene expression and bshanks@40: the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its bshanks@40: maximum value to zero, the spatial pattern of a gene’s expression often does not have a discrete border. Therefore, instead bshanks@40: of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images bshanks@40: (i.e. two scalar fields). It is is high to the extent that matching pixels which have large values and large gradients also have bshanks@40: gradients which are oriented in a similar direction. The formula is: bshanks@41: ∑ bshanks@41: pixel∈pixels cos(abs(∠∇1 -∠∇2)) ⋅|∇1| + |∇2| bshanks@41: 2 ⋅ pixel_value1 + pixel_value2 bshanks@41: 2 bshanks@40: where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of bshanks@41: image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the bshanks@40: value of the current pixel in image i. 
Gradient similarity provides information complementary to correlation
To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. 1. The top row of Fig. 1 displays the 3 genes which most match area AUD, according to a pointwise method¹⁴. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry¹⁵. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as Aph1a in the example, may be particularly good markers. None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
Combinations of multiple genes are useful
Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1¹⁶ is the best-fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 2 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).
Gene mtif2¹⁷ is shown in the upper right of Figure 2. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower left of Figure 2. This combination captures area MO much better than any single gene.
Figure 2: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells underneath each pixel, with red meaning a lot of expression and blue meaning little.
_________________________________________
¹⁴For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.
¹⁵For each gene, the gradient similarity (see above) between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.
¹⁶"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652
¹⁷"mitochondrial translational initiation factor 2"; EntrezGene ID 76784
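A sketch of the combination shown in Figure 2, reusing the gradient_similarity function sketched above (the image arrays here are hypothetical placeholders for the flatmapped expression of the two genes and for the MO mask):

```python
import numpy as np

img_wwc1 = np.random.rand(100, 100)    # hypothetical flatmapped expression images
img_mtif2 = np.random.rand(100, 100)
mo_mask = np.zeros((100, 100))
mo_mask[30:60, 20:50] = 1.0            # hypothetical boolean-valued area MO image

combo = img_wwc1 + img_mtif2           # pixelwise sum, as in Figure 2 (lower left)
print(gradient_similarity(combo, mo_mask))  # score the combined image against MO
```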
Areas which can be identified by single genes
todo
Underexpression of a gene can serve as a marker
todo
Specific to Aim 1 (and Aim 3)
Forward stepwise logistic regression todo
SVM on all genes at once
In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved a classification accuracy of about 81%¹⁸. As noted above, however, a classifier that looks at all the genes at once isn't practically useful.
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
_________________________________________
¹⁸5-fold cross-validation.
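A sketch of this kind of experiment with scikit-learn (the data arrays are hypothetical placeholders, and a linear kernel is an assumption; cross_val_score handles the 5 folds):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X = np.random.rand(2000, 4000)           # hypothetical: surface pixels x all genes
y = np.random.randint(0, 20, size=2000)  # hypothetical areal label per pixel

scores = cross_val_score(LinearSVC(), X, y, cv=5)  # 5-fold cross-validation
print(scores.mean())                     # mean held-out classification accuracy
```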
Decision trees
todo
Specific to Aim 2 (and Aim 3)
Raw dimensionality reduction results
todo
(might want to include NNMF since mentioned above)
Dimensionality reduction plus K-means or spectral clustering
Many areas are captured by clusters of genes
todo
todo
Research plan
Further work on flatmapping
In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret[2]) with mappings which preserve angle (conformal maps).
Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional. If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
todo amongst other things:
Develop algorithms that find genetic markers for anatomical regions
1. Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise, geometric, and information-theoretic measures.
2. Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining the scoring measures developed, we will rank the genes by their ability to delineate each area.
3. Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised learning techniques which use soft constraints to minimize the number of features, such as sparse support vector machines.
4. Extend the procedure to handle difficult areas by combining or redrawing the boundaries: an area may be difficult to identify because the boundaries are misdrawn, or because it does not "really" exist as a single area, at least on the genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
# Linear discriminant analysis
Apply these algorithms to the cortex
1. Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert between SEV, NIFTI and MATLAB formats.
2. Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
3. Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
4. Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
Develop algorithms to suggest a division of a structure into anatomical parts
1. Explore dimensionality reduction algorithms applied to pixels: including TODO
2. Explore dimensionality reduction algorithms applied to genes: including TODO
3. Explore clustering algorithms applied to pixels: including TODO
4. Explore clustering algorithms applied to genes: including gene shaving, TODO
5. Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
6. Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
# Linear discriminant analysis
# jbt, coclustering
# self-organizing map
Bibliography & References Cited
[1] J. Carson, T. Ju, C. Thaller, M. Bello, I. Kakadiaris, J. Warren, G. Eichele, and W. Chiu. Data mining in situ gene expression patterns at cellular resolution. In Computational Systems Bioinformatics Conference Workshops and Poster Abstracts, IEEE, page 358, 2005.
[2] D. C. Van Essen, H. A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C. H. Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
[3] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
[4] Jano van Hemert and Richard Baldock. Matching spatial regions with combinations of interacting gene expression patterns. Volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg, 2008.
[5] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
[6] Lydia Ng, Amy Bernard, Chris Lau, Caroline C. Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M. Sunkin, Chinh Dang, Jason W. Bohland, Hemant Bokil, Partha P. Mitra, Luis Puelles, John Hohmann, David J. Anderson, Ed S. Lein, Allan R. Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nature Neuroscience, 12(3):356–362, March 2009.
[7] George Paxinos and Keith B. J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[8] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[9] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[10] Jano van Hemert and Richard Baldock. Mining spatial gene expression data for association rules. Pages 66–76, 2007.
[11] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucleic Acids Research, 36(suppl_1):D860–865, 2008.
[12] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucleic Acids Research, 32(suppl_1):D552–556, 2004.
[13] Robert H. Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F. Abril, et al. (Mouse Genome Sequencing Consortium). Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.
_______________________________________________________________________________________________________
stuff i dunno where to put yet (there is more scattered through grant-oldtext):
Principle 4: Work in 2-D whenever possible
—
note:
do we need to cite: no known markers, impressive results?
two hemis