Specific aims
Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:
(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions
(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression
(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space.
In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profiles define the cortical areas.
In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.
All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
The challenge topic
This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.
The Challenge and Potential impact
Now we will discuss each of our three aims in turn. For each aim, we will develop a conceptual framework for thinking about the task, and we will present our strategy for solving it. Next we will discuss related work. At the conclusion of each section, we will summarize why our strategy is different from what has been done before. At the end of this section, we will describe the potential impact.
Aim 1: Given a map of regions, find genes that mark the regions
Machine learning terminology The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.
If we define the regions so that they cover the entire anatomical structure to be divided, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data.
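As an illustration of this setup, the following sketch trains a toy classifier on synthetic voxel data. The two-region layout, the synthetic expression values, and the nearest-centroid rule are all illustrative assumptions, not the method we propose to develop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 200 voxels x 3 genes, two regions whose
# mean expression profiles differ (illustrative only).
region_means = np.array([[1.0, 0.0, 0.5],
                         [0.0, 1.0, 0.5]])
labels = rng.integers(0, 2, size=200)          # region of each voxel
expression = region_means[labels] + 0.1 * rng.standard_normal((200, 3))

# "Training": estimate a centroid (mean expression profile) per region.
centroids = np.array([expression[labels == r].mean(axis=0) for r in range(2)])

def classify(voxel_expression):
    """Classifier: map a voxel's expression vector to a region label."""
    distances = np.linalg.norm(centroids - voxel_expression, axis=1)
    return int(np.argmin(distances))

predicted = np.array([classify(v) for v in expression])
accuracy = (predicted == labels).mean()
```

Here the instances are voxels, the features are the three gene expression levels, and the labels are the two regions; the accuracy on the training data measures how well the learned function recovers regional identity.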
In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consist of a set of instances (voxels) for which the labels (regions) are known.
Each gene expression level is called a feature, and the selection of which genes1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method.
If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
Our strategy for Aim 1
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression
It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).
_______
1Strictly speaking, the features are gene expression levels, but we’ll call them genes.
Principle 2: Only look at combinations of small numbers of genes
When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene.
For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
Principle 3: Use geometry in feature selection
When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3 for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible
There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge.
In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
Related work
There is a substantial body of work on the analysis of gene expression data; most of it concerns gene expression data which are not fundamentally spatial2.
As noted above, there has been much work on both supervised learning and feature selection, and there are many available algorithms for each. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.
We are aware of six existing efforts to find marker genes from spatial gene expression data using automated methods.
[11] mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene’s spatial region.
GeneAtlas[5] and EMAGE[23] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched.
For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel3 whose expression is within four discretization levels. EMAGE uses Jaccard similarity4. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
[13] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components. Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster (note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures). Correlation: the user selects a seed voxel and the system then shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. Clusters: described later.
Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score5, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.
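The Jaccard match score used by EMAGE is simple to state in code. The following is a minimal sketch on two toy boolean (thresholded) images; the arrays are invented for illustration.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two boolean images:
    |intersection| / |union| of the 'true' pixels."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty images are identical by convention
    return np.logical_and(a, b).sum() / union

# Two toy 3x3 thresholded expression images (illustrative only).
query = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
gene  = [[1, 1, 0],
         [0, 1, 0],
         [0, 0, 1]]
score = jaccard(query, gene)  # intersection 3 pixels, union 5 -> 0.6
```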
_________________________________________
2By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not just data which have only a few different locations or which are indexed by anatomical label.
3Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.
4The number of true pixels in the intersection of the two images, divided by the number of pixels in their union.
5“Expression energy ratio”, which captures overexpression.
[6] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA’s, this is a pointwise measure (only the mean expression level per pixel is analyzed); it is not used to look for underexpression, and it does not look for combinations of genes.
[9] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their match score is Jaccard similarity.
In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.
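To make the stepwise, score-based strategy of Principles 2 and 3 concrete, the following sketch greedily adds whichever gene most raises a simple pointwise score (here, the training accuracy of a nearest-centroid classifier restricted to the selected genes). The synthetic data and the particular score are illustrative assumptions, not the scoring measures we propose to develop.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_genes = 300, 10
labels = rng.integers(0, 2, size=n_voxels)

# Synthetic expression: genes 0 and 1 each carry regional signal;
# the remaining genes are pure noise (illustrative only).
expression = rng.standard_normal((n_voxels, n_genes))
expression[:, 0] += 1.5 * labels
expression[:, 1] += 1.5 * labels

def score(selected):
    """Pointwise score: accuracy of a nearest-centroid classifier
    that sees only the selected genes."""
    X = expression[:, selected]
    centroids = np.array([X[labels == r].mean(axis=0) for r in (0, 1)])
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return (d.argmin(axis=1) == labels).mean()

def greedy_select(max_genes):
    """Greedily add the gene that most raises the score."""
    selected = []
    for _ in range(max_genes):
        candidates = [g for g in range(n_genes) if g not in selected]
        best = max(candidates, key=lambda g: score(selected + [g]))
        selected.append(best)
    return selected

panel = greedy_select(2)  # a small marker panel, per Principle 2
```

Replacing `score` with a local or gradient-based measure, rather than this pointwise one, is the kind of substitution Principle 3 argues for.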
Aim 2: From gene expression data, discover a map of regions
Machine learning terminology: clustering
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
Similarity scores A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
Spatially contiguous clusters; image segmentation We have shown that aim 2 is a type of clustering task.
In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task, there are thousands of color channels (one for each gene), rather than just three6. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.
Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features7. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set.
Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
Clustering genes rather than voxels Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
__
6There are imaging tasks which use more than three colors; for example, multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.
7First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically interesting region will have multiple genes which each individually pick it out8. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the most common regions as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.
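A minimal sketch of the first use of gene clusters (one reduced feature per cluster) follows. The toy expression matrix and the crude correlation-threshold grouping of genes are illustrative assumptions; a real analysis would use a proper gene-clustering algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100

# Toy data: genes 0-2 share one spatial pattern, genes 3-4 share another.
pattern_a = rng.standard_normal(n_voxels)
pattern_b = rng.standard_normal(n_voxels)
expression = np.column_stack(
    [pattern_a + 0.1 * rng.standard_normal(n_voxels) for _ in range(3)]
    + [pattern_b + 0.1 * rng.standard_normal(n_voxels) for _ in range(2)]
)

# Group genes whose expression patterns are highly correlated
# (a crude stand-in for gene clustering).
corr = np.corrcoef(expression.T)
clusters = []
unassigned = set(range(expression.shape[1]))
while unassigned:
    g = min(unassigned)
    members = [h for h in unassigned if corr[g, h] > 0.8]
    clusters.append(members)
    unassigned -= set(members)

# Dimensionality reduction: one reduced feature per gene cluster,
# the mean expression of the cluster's genes in each voxel.
reduced = np.column_stack(
    [expression[:, members].mean(axis=1) for members in clusters]
)
```

The voxels-by-genes matrix (100 x 5) is thus summarized by a voxels-by-clusters matrix (100 x 2), which could then be handed to a voxel-clustering algorithm.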
The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
Related work
Some researchers have attempted to parcellate cortex on the basis of non-gene-expression data. For example, [15], [2], [16], and [1] associate spots on the cortex with the radial profile9 of response to some stain ([10] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster. Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
[20] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset10, and while the results are promising, they also demonstrate that NNMF is not necessarily the best dimensionality reduction method for this application (see Preliminary Studies, Figure 6).
AGEA[13] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric.
EMAGE[23] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete-linkage clustering with un-centred correlation as the similarity score.
[6] clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: “the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric”. The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
[9] applies their technique for finding combinations of marker genes to the purpose of clustering genes around a “seed gene”. They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method[22] for finding “association rules” such as: if a gene is expressed in this voxel, then the same gene is probably also expressed in that voxel. This could be useful as part of a procedure for clustering voxels.
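The NNMF dimensionality reduction discussed above factorizes the nonnegative voxels-by-genes matrix into two nonnegative factors. The following is a minimal sketch using the standard Lee-Seung multiplicative updates on random data; the data and the rank are illustrative assumptions, and the modified NNMF of [20] adds a spatial contiguity term not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy nonnegative voxels-by-genes matrix (illustrative only).
V = rng.random((60, 30))
rank = 4                      # number of reduced features

# Multiplicative updates for V ~= W @ H, where each row of
# W (voxels x rank) is a voxel's reduced feature vector and each
# row of H (rank x genes) is a prototypical expression pattern.
W = rng.random((60, rank))
H = rng.random((rank, 30))
eps = 1e-9                    # guard against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

reconstruction_error = np.linalg.norm(V - W @ H)
```

The columns of W could then be clustered (or thresholded) to propose spatial regions; nonnegativity is what makes the factors interpretable as additive expression patterns.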
In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3: apply the methods developed to the cerebral cortex
Background
The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the
_________________________________________
8This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
9A radial profile is a profile along a line perpendicular to the cortical surface.
10We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint.
However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake11.
It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[19] on the one hand, and Paxinos and Franklin[14] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
The Allen Mouse Brain Atlas dataset
The Allen Mouse Brain Atlas (ABA) data were produced by doing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slices, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved.
Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain[13].
Mus musculus is thought to contain about 22,000 protein-coding genes[25]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA12.
The ABA is not the only large public spatial gene expression dataset13. With the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only the ABA and EMAGE make this form of data available for public download from their websites14. Many of these resources focus on developmental gene expression.
Related work
[13] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data.
Neither of the other components of AGEA can be applied to cortical areas: AGEA’s Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA’s hierarchical clustering does not produce clusters corresponding to the cortical areas15.
In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.
Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
_________________________________________
11Outside of isocortex, the number of layers varies.
12The sagittal data do not cover the entire cortex, and also have greater registration error[13]. Genes were selected by the Allen Institute for coronal sectioning based on “classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern”[13].
13Other such resources include GENSAT[8], GenePaint[24], its sister project GeneAtlas[5], BGEM[12], EMAGE[23], EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN[18], Aniseed (http://aniseed-ibdm.
bshanks@85: univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA[4], bshanks@85: Fruitfly.org[21], COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD[17], GEO[3] (GXD and GEO contain spatial data but also non-spatial bshanks@85: data. All GXD spatial data are also in EMAGE.) bshanks@85: 14without prior offline registration bshanks@85: 15In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger bshanks@85: than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation bshanks@85: clustering algorithm will tend to create clusters representing cortical layers, not areas (there may be clusters which presumably correspond to the bshanks@85: intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of bshanks@85: these). The reason that Gene Finder cannot the find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder bshanks@85: chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed. bshanks@87: Significance bshanks@87: The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial bshanks@87: expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery bshanks@87: as well as for experimentation because marker genes can be used to design interventions which selectively target individual bshanks@87: cortical areas. bshanks@87: The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatom- bshanks@87: ical methods. 
In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will support an ISH protocol that allows experimenters to more easily identify which anatomical areas are present in small samples of cortex.
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain. We will provide an open-source toolbox to allow other researchers to easily use our methods. With these methods, researchers with gene expression data for any area of the body will be able to efficiently find marker genes for anatomical regions, or to use gene expression to discover new anatomical patterning. As described above, marker genes have a variety of uses in the development of drugs and experimental manipulations, and in the anatomical characterization of tissue samples.
The discovery of new ways to carve up anatomical structures into regions may lead to the discovery of new anatomical subregions in various structures, which would have a broad impact across biology.
Although our particular application involves the 3D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will not be limited to gene expression data, but rather will generalize to any sort of high-dimensional data over points located in a low-dimensional space.
The approach: Preliminary Studies
Format conversion between SEV, MATLAB, NIFTI
We have created software to (politely) download all of the SEV files16 from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
Flatmap of cortex
We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres.
Using Caret[7], we created a mesh representation of the surface of the selected voxels. For each gene, for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh.
We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix.
We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface:
∙A 2-D matrix whose entries represent the regional label associated with each surface pixel
∙For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation.
The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
_________________________________________
16SEV is a sparse format for spatial data. It is the format in which the ABA data are made available.

Figure 1: Top row: Genes Nfic and A930001M12Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik and Cacna1i are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.

Figure 2: Gene Pitx2 is selectively underexpressed in area SS.
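The data layout and the normalization step just described can be sketched as follows. This is a minimal numpy sketch over hypothetical miniature data; the dictionary-of-images layout, dimensions, and gene names are illustrative stand-ins for the real MATLAB matrices.

```python
import numpy as np

# Hypothetical miniature stand-ins for the real flatmapped data:
# one 2-D regional-label matrix and one 2-D expression image per gene.
rng = np.random.default_rng(0)
H, W = 20, 30
region_labels = rng.integers(0, 5, size=(H, W))    # regional label per surface pixel
expression = {g: rng.gamma(2.0, 1.0, size=(H, W))  # expression image per gene
              for g in ["geneA", "geneB", "geneC"]}

def zscore_genes(expression):
    """Normalize each gene image: subtract its mean over all surface
    pixels and divide by its standard deviation."""
    return {g: (img - img.mean()) / img.std() for g, img in expression.items()}

normalized = zscore_genes(expression)
```

After normalization, every gene image has zero mean and unit standard deviation over the surface pixels, so genes with very different absolute expression levels become comparable.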
To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary.
In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
Feature selection and scoring methods
Underexpression of a gene can serve as a marker Underexpression of a gene can sometimes serve as a marker; see, for example, Figure 2.
Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.
One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features.
One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure 1 shows the genes most correlated with area SS.
Figure 3: The top row shows the two genes which (individually) best predict area AUD, according to logistic regression.
The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Ptk7, and Aph1a.
Conditional entropy An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, and the mean plus two standard deviations.
Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized.
This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?".
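The discretization and pair search just described can be sketched as follows. This is a minimal numpy sketch on hypothetical data; for brevity it scores all candidate pairs exhaustively rather than via the forward stepwise procedure.

```python
import numpy as np
from itertools import combinations

def threshold_masks(img):
    """Five boolean masks per gene: thresholds at the mean and at the
    mean plus or minus one and two standard deviations."""
    m, s = img.mean(), img.std()
    return [img > m + k * s for k in (-2, -1, 0, 1, 2)]

def conditional_entropy(target, features):
    """H(target | features), in bits, estimated over the surface pixels;
    target and features are boolean arrays of the same shape."""
    key = np.zeros(target.size, dtype=int)
    for f in features:                      # encode each joint feature config
        key = key * 2 + f.ravel().astype(int)
    h = 0.0
    for k in np.unique(key):
        sel = target.ravel()[key == k]
        p_k = sel.size / target.size        # P(features = this config)
        p1 = sel.mean()                     # P(target | this config)
        for p in (p1, 1.0 - p1):
            if p > 0:
                h -= p_k * p * np.log2(p)
    return h

def best_mask_pair(target, genes):
    """Return the pair of (gene, threshold index, mask) entries whose
    joint knowledge minimizes the entropy of the target mask."""
    masks = [(g, i, mk) for g, img in genes.items()
             for i, mk in enumerate(threshold_masks(img))]
    return min(combinations(masks, 2),
               key=lambda p: conditional_entropy(target, [p[0][2], p[1][2]]))
```

As a sanity check, if the target mask is the XOR of two gene masks, conditioning on both masks drives the conditional entropy to zero even though neither mask alone is informative, which is exactly the nonlinear case discussed next.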
Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
Figure 4: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).
Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise local scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity".
One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction.
The formula is:

    ∑_{pixel ∈ pixels} cos(|∠∇1 − ∠∇2|) ⋅ (|∇1| + |∇2|)/2 ⋅ (pixel_value1 + pixel_value2)/2

where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the value of the current pixel in image i.
The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
Most of the genes in Figure 5 were identified via gradient similarity.
Gradient similarity provides information complementary to correlation
To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. 3. The top row of Fig. 3 displays the genes which most match area AUD, according to a pointwise method17. The bottom row displays the genes which most match AUD according to a method which considers local geometry18. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as Aph1a in the example, may be particularly good markers. None of these genes is, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
_________________________________________
17For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.
18For each gene, the gradient similarity between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.

Figure 5: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), and COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2, Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, Ets1.
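The gradient similarity score defined above can be computed as in the following sketch; numpy's finite-difference `np.gradient` stands in for whichever gradient estimator is used on the real flatmaps.

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two images (scalar fields): the sum
    over pixels of cos(|angle1 - angle2|) times the mean of the two
    gradient magnitudes times the mean of the two pixel values."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    ang1, ang2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    return float(np.sum(np.cos(np.abs(ang1 - ang2))
                        * (mag1 + mag2) / 2
                        * (img1 + img2) / 2))
```

Pixels where both images are flat contribute nothing (their gradient magnitudes are zero), so the score is dominated by shared, similarly oriented, high-valued borders, as intended.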
Areas which can be identified by single genes Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet cross-verified these genes in other atlases.
In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), and AUD (auditory).
These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity.
Combinations of multiple genes are useful and necessary for some areas
In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best-fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex; MO is found only on the dorsal surface. Gene mtif2 is shown in the upper right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary.
Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.
This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.
Feature selection integrated with prediction As noted earlier, in general, any predictive method can be used for feature selection by running it inside a stepwise wrapper. Also, some predictive methods integrate soft constraints on the number of features used. Examples of both of these will be seen in the section "Multivariate Predictive methods".
Multivariate Predictive methods
Forward stepwise logistic regression Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure 4 shows a combination of genes which was found.
We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
SVM on all genes at once
In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%19.
This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
_________________________________________
19 5-fold cross-validation.

Figure 6: First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found, but only 6 are displayed. In the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions were used; for landmark Isomap, 7 dimensions were used.

Data-driven redrawing of the cortical map
We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each voxel: Principal Components Analysis (PCA), Simple PCA (SPCA), Multi-Dimensional Scaling (MDS), Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment (LTSA), Hessian locally linear embedding, Diffusion maps, Stochastic Neighbor Embedding (SNE), Stochastic Proximity Embedding (SPE), Fast Maximum Variance Unfolding (FastMVU), and Non-negative Matrix Factorization (NNMF).
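As an indication of how the reduce-then-cluster pipeline fits together, here is a minimal numpy sketch; random data stands in for the real voxel-by-gene expression matrix, and plain Lloyd's k-means stands in for the clustering step.

```python
import numpy as np

# Hypothetical stand-in data: each row of X is one voxel's gene expression profile.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))

def pca_reduce(X, n_components):
    """Project the centered profiles onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on the reduced data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each voxel to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(pca_reduce(X, 7), k=7)
```

Swapping `pca_reduce` for any of the other reduction algorithms listed above changes which structure in the profiles survives into the clustering, which is exactly why the comparison matters.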
Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure 6.
After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure 6. To compare, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.

Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.

Many areas are captured by clusters of genes We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster voxels.
The approach: what we plan to do
Flatmap cortex and segment cortical layers
There are multiple ways to flatten 3-D data into 2-D.
We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret[7]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
Develop algorithms that find genetic markers for anatomical regions
We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any predictive procedure induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.
Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area.
Some cortical areas have no single marker genes but can be identified by combinatorial coding.
This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.
We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" predictive methods such as logistic regression, (b) predictive methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) predictive methods which use soft constraints to minimize the number of features used, such as sparse support vector machines.
Some of these methods, such as the Hough transform, are designed to be resistant to registration error and error in the anatomical map. We will also consider extensions to scoring measures that may improve their robustness to registration error and to error in the anatomical map; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time. It is possible that some areas in the anatomical map do not correspond to natural domains of gene expression.
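Among the set-overlap measures listed above, Jaccard and Dice similarity are straightforward to state. The sketch below scores a gene against a target area after thresholding the gene's expression at its mean; the mean threshold is our illustrative choice here, not a detail fixed by the proposal.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """|A ∩ B| / |A ∪ B| for boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def dice(mask_a, mask_b):
    """2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2 * inter / total if total else 0.0

def score_gene(gene_img, area_mask):
    """Score one gene against a target area: threshold the gene's
    expression at its mean, then measure set overlap with the area."""
    gene_mask = gene_img > gene_img.mean()
    return jaccard(gene_mask, area_mask), dice(gene_mask, area_mask)
```

Both measures reward genes whose thresholded expression covers the target area without spilling far beyond it, which is closer to the "matches the area's shape" criterion than raw correlation.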
# Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult to identify because the boundaries are misdrawn, or because it does not "really" exist as a single area, at least on the genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance, in order to provide a foundation for future research on methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.
Decision trees todo20
# confirm with EMAGE, GeneAtlas, GENSAT, etc, to fight overfitting, two hemis
# mixture models, etc
Develop algorithms to suggest a division of a structure into anatomical parts
1. Explore dimensionality reduction algorithms applied to pixels: including TODO
2. Explore dimensionality reduction algorithms applied to genes: including TODO
3. Explore clustering algorithms applied to pixels: including TODO
4. Explore clustering algorithms applied to genes: including gene shaving, TODO
5. Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
6. Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
_________________________________________
20Already, for each cortical area, we have used the C4.5 algorithm to find a decision tree for that area.
We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.
# Linear discriminant analysis
# jbt, coclustering
# self-organizing map
# compare using clustering scores
# multivariate gradient similarity
# deep belief nets
Apply these algorithms to the cortex
Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once. Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, which are discovered.
Timeline and milestones
Finding marker genes
∙September-November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers
∙November 2009 (milestone): Have completed construction of a flatmapped, cortical dataset with information for each layer
∙October 2009-April 2010: Develop scoring methods and test them in various supervised learning frameworks. Also test out various dimensionality reduction schemes in combination with supervised learning. Create or extend supervised learning frameworks which use multivariate versions of the best scoring methods.
∙ January 2010 (milestone): Submit a publication on single marker genes for cortical areas
∙ February-July 2010: Continue to develop scoring methods and supervised learning frameworks. Explore the best way to integrate radial profiles with supervised learning. Explore the best way to make supervised learning techniques robust against incorrect labels (i.e. when the areas drawn on the input cortical map are slightly off). Quantitatively compare the performance of different supervised learning techniques. Validate marker genes found in the ABA dataset by checking against other gene expression datasets. Create documentation and unit tests for the Aim 1 software toolbox. Respond to user bug reports for the Aim 1 software toolbox.
∙ June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release toolbox.
∙ July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a small number of marker genes that can, in combination, define most of the areas at once
Revealing new ways to parcellate a structure into regions
∙ June 2010-March 2011: Explore dimensionality reduction algorithms for Aim 2. Explore standard hierarchical clustering algorithms, used in combination with dimensionality reduction, for Aim 2. Explore co-clustering algorithms. Think about how radial profile information can be used for Aim 2. Adapt clustering algorithms to use radial profile information. Quantitatively compare the performance of different dimensionality reduction and clustering techniques. Quantitatively compare the value of different flatmapping methods and ways of representing radial profiles.
∙ March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release toolbox.
∙ February-May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex. If new ways of organizing the cortex into areas are discovered, read the literature and talk to people to learn about research related to interpreting our results. Create documentation and unit tests for the Aim 2 software toolbox. Respond to user bug reports for the Aim 2 software toolbox.
∙ May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in Aim 2
∙ May-August 2011: Revisit Aim 1 to see if what was learned during Aim 2 can improve the methods for Aim 1. Follow up on responses to our papers. Possibly submit another paper.
Bibliography & References Cited
[1] Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking Approach to Parcellation of the Cerebral Cortex, volume 3749 of Lecture Notes in Computer Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
[2] J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification of cortical areas. NeuroImage, 21(1):15–26, 2004.
[3] Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
[4] George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
[5] James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41, 2005.
[6] Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August 2007.
[7] D. C. Van Essen, H. A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C. H. Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
[8] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
[9] Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg, 2008.
[10] F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
[11] Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A high-resolution anatomical framework of the neonatal mouse brain for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
[12] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
[13] Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
[14] George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[15] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
[16] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
[17] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
[18] Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The Zebrafish Information Network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
[19] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[20] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[21] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002. PMC151190.
[22] Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
[23] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
[24] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
[25] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, et al. Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.