Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporters, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions;

(2) develop an algorithm to suggest new ways of carving up a structure into anatomically distinct regions, based on spatial patterns in gene expression;

(3) create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).

Although our particular application involves the 3-D spatial distribution of gene expression, we anticipate that the methods developed in aims (1) and (2) will generalize to any sort of high-dimensional data over points located in a low-dimensional space. In particular, our methods could be applied to genome-wide sequencing data derived from sets of tissues and disease states.

In terms of the application of the methods to cerebral cortex, aim (1) is to go from cortical areas to marker genes, and aim (2) is to let the gene expression profiles define the cortical areas. In addition to validating the usefulness of the algorithms, the application of these methods to cortex will produce immediate benefits, because there are currently no known genetic markers for most cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and the development of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.

The challenge topic

This proposal addresses challenge topic 06-HG-101. Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic reporters, microarray voxelation, and others allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns.

The Challenge and Potential impact

Each of our three aims will be discussed in turn.
For each aim, we will develop a conceptual framework for thinking about the task, and we will present our strategy for solving it. Next we will discuss related work. At the conclusion of each section, we will summarize why our strategy is different from what has been done before. At the end of this section, we will describe the potential impact.

Aim 1: Given a map of regions, find genes that mark the regions

Machine learning terminology: classifiers

The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the regions can be inferred.

If we define the regions so that they cover the entire anatomical structure to be subdivided, we may say that we are using gene expression in each voxel to assign that voxel to the proper area. We call this a classification task, because each voxel is being assigned to a class (namely, its region). An understanding of the relationship between the combination of gene expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).

The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called training data. In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consist of a set of instances (voxels) for which the labels (regions) are known.

Each gene expression level is called a feature, and the selection of which genes to include (strictly speaking, the features are gene expression levels, but we will call them genes) is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
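To make the stepwise idea concrete, here is a minimal sketch in Python of greedy forward feature selection driven by a generic scoring function. The function name score_gene_set and the data layout (a pixels-by-genes expression matrix and a boolean area mask) are illustrative assumptions, not part of any existing toolkit, and the sketch only adds features (a backward removal step could be added analogously).

    import numpy as np

    def greedy_forward_selection(expr, area_mask, score_gene_set, max_genes=3):
        """Greedily build a small set of genes that scores well for one area.

        expr           : (n_pixels, n_genes) array of expression levels
        area_mask      : (n_pixels,) boolean array, True inside the target area
        score_gene_set : callable(expr_subset, area_mask) -> float, higher is better
        """
        selected = []
        remaining = list(range(expr.shape[1]))
        while remaining and len(selected) < max_genes:
            # score each candidate gene when added to the current set
            scores = [score_gene_set(expr[:, selected + [g]], area_mask)
                      for g in remaining]
            best = int(np.argmax(scores))
            selected.append(remaining.pop(best))
        return selected

    # Example (hypothetical) scoring function: correlation of the summed
    # expression of the selected genes with the area mask.
    def score_gene_set(expr_subset, area_mask):
        combined = expr_subset.sum(axis=1)
        return np.corrcoef(combined, area_mask.astype(float))[0, 1]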
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods according to how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum, a sum of squares, or an average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a pointwise scoring method.

Both gene expression data and anatomical atlases have errors, due to a variety of factors. Individual subjects have idiosyncratic anatomy. Subjects may be improperly registered to the atlas. The method used to measure gene expression may be noisy. The atlas may have errors. It is even possible that some areas in the anatomical atlas are "wrong" in that they do not have the same shape as the natural domains of gene expression to which they correspond. These sources of error can affect the displacement and the shape of both the gene expression data and the anatomical target areas. Therefore, it is important to use feature selection methods which are robust to these kinds of errors.

Our strategy for Aim 1

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.

Principle 1: Combinatorial gene expression

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Studies, Figure 4). Therefore, each instance should contain multiple features (genes).

Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that are available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers.
It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.

Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Studies, Figure 3, for evidence of the complementary nature of pointwise and local scoring methods.

Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.

Related work

There is a substantial body of work on the analysis of gene expression data, but most of it concerns gene expression data which are not fundamentally spatial (by "fundamentally spatial" we mean that there is information from a large number of spatial locations indexed by spatial coordinates, not just data which have only a few different locations or which are indexed by anatomical label).

As noted above, there has been much work on both supervised and unsupervised learning, and there are many available algorithms for each. However, these algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Studies) may be necessary in order to achieve the best results in this application.

We now turn to efforts to find marker genes in spatial gene expression data using automated methods. GeneAtlas[5] and EMAGE[26] allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched.
Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.

[15] describes AGEA, the "Anatomic Gene Expression Atlas". AGEA has three components. Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster. Correlation: the user selects a seed voxel and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and every other voxel. Clusters: described later, under Aim 2 Related Work. [6] looks at the mean expression level of genes within anatomical regions, and applies a Student's t-test with Bonferroni correction to determine whether the mean expression level of a gene is significantly higher in the target region. [15] and [6] differ from our Aim 1 in at least three ways. First, [15] and [6] find only single genes, whereas we will also look for combinations of genes. Second, [15] and [6] can only use overexpression as a marker, whereas we will also search for underexpression. Third, [15] and [6] use scores based on pointwise expression levels, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures 4, 2, and 3 in the Preliminary Studies section contain evidence that each of our three choices is the right one.

[10] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image.

In summary, there has been fruitful work on finding marker genes, but only one of the previous projects explores combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or scoring methods.

Aim 2: From gene expression data, discover a map of regions

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.

The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of regions, but also how these regions relate to each other.
The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.

Similarity scores

A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.

Spatially contiguous clusters; image segmentation

We have shown that Aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three (there are imaging tasks which use more than three colors, for example multispectral and hyperspectral imaging, which are often used to process satellite imagery). A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

Dimensionality reduction

In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.

Unlike Aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features (first, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less; second, it is thought that some clustering algorithms may give better results on reduced data). There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances.
This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous region. This suggests the following procedure: cluster together genes which pick out similar regions, and then use the regions that are most commonly picked out as the final clusters. In Preliminary Studies, Figure 7, we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.

Related work

Some researchers have attempted to parcellate cortex on the basis of data other than gene expression. For example, [18], [2], [19], and [1] associate spots on the cortex with the radial profile (a profile along a line perpendicular to the cortical surface) of response to some stain ([12] uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster.

[23] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset ourselves (see Preliminary Studies).

AGEA[15] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE[26] allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering.

[6] clusters genes. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.

[10] applies their technique for finding combinations of marker genes to the task of clustering genes around a "seed gene".

In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects performed a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
Aim 3: apply the methods developed to the cerebral cortex

Figure 1: Top row: genes Nfic and A930001M12Rik are the most correlated with area SS (somatosensory cortex). Bottom row: genes C130038G02Rik and Cacna1i are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.

Background

The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake (outside of isocortex, the number of layers varies).

It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson[22] on the one hand, and Paxinos and Franklin[17] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.

The Allen Mouse Brain Atlas dataset

The Allen Mouse Brain Atlas (ABA) data were produced by doing in situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. With this method, a single physical slice can only be used to measure one gene; many different mouse brains were needed in order to measure the expression of many genes.

An automated nonlinear alignment procedure located the 2-D data from the various slices in a single 3-D coordinate system.
In the final 3-D coordinate system, voxels are cubes 200 microns on a side. There are 67 x 41 x 58 = 159,326 voxels in the 3-D coordinate system, of which 51,533 are in the brain[15].

Mus musculus is thought to contain about 22,000 protein-coding genes[28]. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA (the sagittal data do not cover the entire cortex, and also have greater registration error[15]; genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"[15]).

The ABA is not the only large public spatial gene expression dataset. However, with the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space.

Related work

[15] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes nor suggests a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas: AGEA's Gene Finder cannot be used to find marker genes for the cortical areas, and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas. (In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.)

In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo from gene expression data.

Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker genes for, and reproduce the layout of, cortical areas), which will provide a solid basis for comparing different methods.

Significance

Figure 2: Gene Pitx2 is selectively underexpressed in area SS.

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods.
In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will allow the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.

The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps might have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.

While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well. In fact, the methods we will develop will be applicable to other datasets beyond the brain.

The approach: Preliminary Studies

Format conversion between SEV, MATLAB, NIFTI

We have created software to (politely) download all of the SEV files from the Allen Institute website (SEV is a sparse format for spatial data; it is the format in which the ABA data are made available). We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.

Flatmap of cortex

We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres. Using Caret[7], we created a mesh representation of the surface of the selected voxels. For each gene, and for each node of the mesh, we calculated an average of the gene expression of the voxels "underneath" that mesh node. We then flattened the cortex, creating a two-dimensional mesh. We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix. We manually traced the boundaries of each of 49 cortical areas from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D mesh, and then onto the grid, and then we converted the region data into MATLAB format.
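As an illustration of the final resampling step (going from values on irregular flattened mesh nodes to a regular pixel grid), here is a minimal sketch in Python; the array names and the choice of scipy.interpolate.griddata are our own illustrative assumptions, not a description of Caret's internals.

    import numpy as np
    from scipy.interpolate import griddata

    def mesh_to_grid(node_xy, node_values, grid_shape=(128, 128)):
        """Resample per-node values on a flattened mesh onto a regular pixel grid.

        node_xy     : (n_nodes, 2) flattened 2-D coordinates of mesh nodes
        node_values : (n_nodes,) value at each node (e.g. one gene's expression)
        """
        xs = np.linspace(node_xy[:, 0].min(), node_xy[:, 0].max(), grid_shape[1])
        ys = np.linspace(node_xy[:, 1].min(), node_xy[:, 1].max(), grid_shape[0])
        gx, gy = np.meshgrid(xs, ys)
        # linear interpolation inside the mesh hull; NaN outside the cortex
        return griddata(node_xy, node_values, (gx, gy), method="linear")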
At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface. There is one 2-D matrix whose entries represent the regional label associated with each surface pixel, and for each gene there is a 2-D matrix whose entries represent the average expression level underneath each surface pixel. We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing the expression level of each gene by its standard deviation. The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.

To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary. In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.

Feature selection and scoring methods

Figure 3: The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Ptk7, and Aph1a.

Underexpression of a gene can serve as a marker

Underexpression of a gene can sometimes serve as a marker; see, for example, Figure 2.

Correlation

Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels.

We calculated the correlation between each gene and each cortical area. The top row of Figure 1 shows the genes most correlated with area SS.

Conditional entropy

For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
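The following is a minimal sketch in Python of the kind of computation described above: estimating the conditional entropy of the area mask given a pair of thresholded gene images, and searching over pairs. The thresholding rule and variable names are illustrative assumptions, and for simplicity the sketch searches pairs exhaustively rather than stepwise.

    import itertools
    import numpy as np

    def conditional_entropy(area, g1, g2):
        """H(area | g1, g2) in bits, for three boolean arrays of equal shape."""
        h = 0.0
        n = area.size
        for v1, v2 in itertools.product([False, True], repeat=2):
            cell = (g1 == v1) & (g2 == v2)
            p_cell = cell.sum() / n
            if p_cell == 0:
                continue
            p_in = area[cell].mean()  # P(area = True | this cell)
            for p in (p_in, 1.0 - p_in):
                if p > 0:
                    h -= p_cell * p * np.log2(p)
        return h

    def best_pair(expr_images, area_mask, threshold=0.0):
        """Search for the pair of genes minimizing H(area | g1, g2).

        expr_images : (n_genes, h, w) array of normalized expression images
        area_mask   : (h, w) boolean mask of the target area
        """
        masks = expr_images > threshold          # boolean expression masks
        best = None
        for i, j in itertools.combinations(range(len(masks)), 2):
            h = conditional_entropy(area_mask, masks[i], masks[j])
            if best is None or h < best[0]:
                best = (h, i, j)
        return best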
Gradient similarity

We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene has a pattern of expression with a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is:

∑_{pixel ∈ pixels} cos(|∠∇1 − ∠∇2|) ⋅ (|∇1| + |∇2|)/2 ⋅ (pixel_value1 + pixel_value2)/2

where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the value of the current pixel in image i.

The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar). (A short code sketch of this computation appears below.)

Gradient similarity provides information complementary to correlation

Figure 4: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).

To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Figure 3. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression pattern has no salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area.

Areas which can be identified by single genes

Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure 5. We have not yet cross-verified these genes in other atlases.

In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of the cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), and AUD (auditory).

These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity.
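Here is a minimal sketch in Python of the gradient similarity score defined above; the use of numpy.gradient for the image gradients and the array names are our own assumptions.

    import numpy as np

    def gradient_similarity(img1, img2):
        """Gradient similarity between two 2-D images of equal shape.

        Sums, over pixels, cos(|angle1 - angle2|) times the mean gradient
        magnitude times the mean pixel value, as in the formula above.
        """
        gy1, gx1 = np.gradient(img1)
        gy2, gx2 = np.gradient(img2)
        angle1 = np.arctan2(gy1, gx1)
        angle2 = np.arctan2(gy2, gx2)
        mag1 = np.hypot(gx1, gy1)
        mag2 = np.hypot(gx2, gy2)
        term = (np.cos(np.abs(angle1 - angle2))
                * (mag1 + mag2) / 2.0
                * (img1 + img2) / 2.0)
        return term.sum()

To score a gene against a target area with this sketch, the boolean area mask can be cast to float and passed as the second image.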
Combinations of multiple genes are useful and necessary for some areas

In Figure 4, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best-fitting single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 4 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex; MO is only found on the dorsal surface. Gene mtif2 is shown in the upper right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary, and mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene.

This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary.

Multivariate supervised learning

Forward stepwise logistic regression

Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found are shown in various figures throughout this document, and Figure 4 shows a combination of genes which was found.

SVM on all genes at once

In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81% under 5-fold cross-validation. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
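A minimal sketch in Python of this kind of all-genes baseline, using scikit-learn; the linear kernel, the variable names, and the data layout are illustrative assumptions rather than a record of the exact pilot run.

    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def svm_baseline(expr, labels):
        """5-fold cross-validated accuracy of an SVM trained on all genes at once.

        expr   : (n_pixels, n_genes) normalized expression matrix
        labels : (n_pixels,) integer area label for each surface pixel
        """
        clf = SVC(kernel="linear", C=1.0)
        scores = cross_val_score(clf, expr, labels, cv=5)
        return scores.mean()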
Data-driven redrawing of the cortical map

Figure 5: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary + supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), and COApm (cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor) and posterior and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2, Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, and Ets1.

We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, and Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure 6.

After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure 6. To compare, the leftmost picture on the bottom row of Figure 6 shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.

Many areas are captured by clusters of genes

We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure 7 shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels.

The approach: what we plan to do

Flatmap cortex and segment cortical layers

There are multiple ways to flatten 3-D data into 2-D. We will compare mappings from manifolds to planes which attempt to preserve size (such as the one used by Caret[7]) with mappings which preserve angle (conformal maps). Our method will include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.

We have not yet made use of radial profiles. While the radial profiles may be used "raw", for laminar structures like the cortex another strategy is to group together voxels in the same cortical layer; each surface pixel would then be associated with one expression level per gene per layer. We will develop a segmentation algorithm to automatically identify the layer boundaries.
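As one simple baseline for such a segmentation (not a commitment to a particular algorithm), the following Python sketch flags candidate layer boundaries along a single radial profile as the depths at which the expression profile changes most rapidly; the profile layout and peak-detection parameters are illustrative assumptions.

    import numpy as np
    from scipy.signal import find_peaks

    def candidate_layer_boundaries(profile, n_boundaries=5):
        """Suggest layer boundaries along one radial profile.

        profile : (n_depths, n_genes) expression sampled at increasing depth
                  beneath one surface pixel.
        Returns indices of the depths with the largest local change in the
        expression vector, as candidate layer boundaries.
        """
        # magnitude of change of the expression vector between adjacent depths
        change = np.linalg.norm(np.diff(profile, axis=0), axis=1)
        peaks, _ = find_peaks(change)
        # keep the n_boundaries strongest peaks, returned in depth order
        strongest = peaks[np.argsort(change[peaks])[::-1][:n_boundaries]]
        return np.sort(strongest)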
Develop algorithms that find genetic markers for anatomical regions

Scoring measures and feature selection

We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We have already developed one entirely new scoring method (gradient similarity), and we may develop more. Scoring measures that we will explore include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, the Hough transform, and statistical tests such as Student's t-test and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target.

Figure 6: First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. Bottom row: examples of k-means clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: landmark Isomap. Additional details: in the third and fourth rows, 7 dimensions were found, but only 6 are displayed; in the last row, 50 dimensions were used for PCA, 6 for NNMF, and 7 for landmark Isomap.

Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate that area. We will quantitatively compare the list of single genes generated by our method to the lists generated by the previous methods mentioned in Aim 1 Related Work.

Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U, are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures; for example, Hotelling's T-square is a multivariate analog of Student's t.

We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area.
In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods, such as decision trees, which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize the number of features used, such as sparse support vector machines (SVMs).

Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant in the presence of error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time (a sketch of such a wrapper appears at the end of this subsection).

An area may be difficult to identify because its boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit. (Not just any redrawing is acceptable; only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1 as well, particularly discriminative dimensionality reduction.)

A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance, in order to provide a foundation for future research on methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset.

Classifiers

We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models[16]), decision trees, sparse SVMs, generative mixture models (including naive Bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks. (We have already begun to explore decision trees: for each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.)
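The displacement-robustness wrapper mentioned above could look roughly like the following Python sketch, which re-scores a gene under small shifts of the target mask and keeps the best score; the shift range and the generic score_fn argument are illustrative assumptions.

    import numpy as np

    def robust_score(gene_img, area_mask, score_fn, max_shift=2):
        """Score a gene against an area mask under small displacements.

        Evaluates score_fn(gene_img, shifted_mask) for every shift of the mask
        by up to max_shift pixels in each direction, and returns the best score,
        so that small registration errors do not unfairly penalize a gene.
        Note that np.roll wraps around the image edges, which is acceptable
        for small shifts in a sketch like this.
        """
        best = -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(area_mask, shift=(dy, dx), axis=(0, 1))
                best = max(best, score_fn(gene_img, shifted))
        return best

For example, score_fn could be the gradient similarity function sketched in Preliminary Studies.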
Develop algorithms to suggest a division of a structure into anatomical parts

Figure 7: Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.

Dimensionality reduction on gene expression profiles

We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4,000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries.

Dimensionality reduction on pixels

Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions.

Clustering and segmentation on pixels

We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving[9], recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction (a minimal sketch of such a reduce-then-cluster pipeline appears below, after the discussion of co-clustering).

Clustering on genes

We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes.

In addition to using the cluster expression prototypes directly to identify spatial regions, gene clustering might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions.

Co-clustering

There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, pixels and genes), for example IRM[11]. These are called co-clustering or biclustering algorithms.
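A minimal sketch in Python of the reduce-then-cluster pipeline referenced above, using scikit-learn's PCA and k-means; the numbers of components and clusters are illustrative choices, not the values we will necessarily use.

    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def reduce_then_cluster(expr, n_components=20, n_clusters=7):
        """Cluster surface pixels after reducing their gene expression profiles.

        expr : (n_pixels, n_genes) normalized expression matrix.
        Returns an integer cluster label for each pixel.
        """
        reduced = PCA(n_components=n_components).fit_transform(expr)
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)

The resulting pixel labels can then be reshaped back onto the flatmap grid and compared to the reference areas.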
Radial profiles We will explore the use of the radial profile of gene expression under each pixel.

Compare different methods In order to tell which method is best for genomic anatomy, we will compare, for each method, the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as the Jaccard index, the Rand index, the Fowlkes-Mallows index, variation of information, and the Larsen and Van Dongen measures, among others (a brief sketch of such a comparison appears below).

Discriminative dimensionality reduction In addition to using a purely data-driven approach to identify spatial regions, it might be useful to see how well the known regions can be reconstructed from a small number of features, even if those features are chosen using knowledge of the regions. For example, linear discriminant analysis could be used as a dimensionality reduction technique to identify a few features which are the best linear summary of the gene expression profiles for the purpose of discriminating between regions. This reduced feature set could then be used to cluster pixels into regions. Perhaps the resulting clusters will be similar to the reference atlas, yet more faithful to the natural spatial domains of gene expression than the reference atlas is.

Apply the new methods to the cortex
Using the methods developed in Aim 1, we will present, for each cortical area, a short list of markers to identify that area; we will also present lists of “panels” of genes that can be used to delineate many areas at once. Because in most cases the ABA coronal dataset contains only one ISH experiment per gene, it is possible for an unrelated combination of genes to appear to identify an area when the match is in fact coincidental. We will validate our marker genes in two ways to guard against this. First, we will confirm that putative combinations of marker genes express the same pattern in both hemispheres. Second, we will manually validate our final results against other gene expression datasets such as EMAGE, GeneAtlas, and GENSAT[8].

Using the methods developed in Aim 2, we will present one or more hierarchical cortical maps. We will identify and explain how the statistical structure in the gene expression data led to any unexpected or interesting features of these maps, and we will provide biological hypotheses to interpret any new cortical areas, or groupings of areas, that are discovered.
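Returning to the comparison of unsupervised cortical maps with the reference atlas discussed above, the sketch below scores the agreement between two per-pixel labelings using two of the named metrics that scikit-learn already implements (the adjusted Rand index and the Fowlkes-Mallows index). The label arrays here are random placeholders standing in for the reference-atlas areas and the learned clusters.

    # Sketch: score the agreement between a learned parcellation and the atlas.
    import numpy as np
    from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score

    rng = np.random.default_rng(0)
    atlas_labels = rng.integers(0, 20, size=5000)  # reference-atlas area of each pixel
    found_labels = rng.integers(0, 15, size=5000)  # cluster found by unsupervised learning

    print("adjusted Rand index:", adjusted_rand_score(atlas_labels, found_labels))
    print("Fowlkes-Mallows:", fowlkes_mallows_score(atlas_labels, found_labels))

Metrics such as variation of information or the Larsen and Van Dongen measures would be added as small functions of the contingency table of the two labelings.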
____________________________________________________________________________
Timeline and milestones
Finding marker genes
September–November 2009: Develop an automated mechanism for segmenting the cortical voxels into layers.
November 2009 (milestone): Complete construction of a flatmapped cortical dataset with information for each layer.
October 2009–April 2010: Develop scoring and supervised learning methods.
January 2010 (milestone): Submit a publication on single marker genes for cortical areas.
February–July 2010: Continue to develop scoring methods and supervised learning frameworks. Extend techniques for robustness. Compare the performance of techniques. Validate marker genes. Prepare the software toolbox for Aim 1.
June 2010 (milestone): Submit a paper describing a method fulfilling Aim 1. Release the toolbox.
July 2010 (milestone): Submit a paper describing combinations of marker genes for each cortical area, and a small number of marker genes that can, in combination, define most of the areas at once.
Revealing new ways to parcellate a structure into regions
June 2010–March 2011: Explore dimensionality reduction algorithms. Explore clustering algorithms. Adapt clustering algorithms to use radial profile information. Compare the performance of techniques.
March 2011 (milestone): Submit a paper describing a method fulfilling Aim 2. Release the toolbox.
February–May 2011: Using the methods developed for Aim 2, explore the genomic anatomy of the cortex and interpret the results. Prepare the software toolbox for Aim 2.
May 2011 (milestone): Submit a paper on the genomic anatomy of the cortex, using the methods developed in Aim 2.
May–August 2011: Revisit Aim 1 to see whether what was learned during Aim 2 can improve the methods for Aim 1. Possibly submit another paper.
Bibliography & References Cited
[1] Chris Adamson, Leigh Johnston, Terrie Inder, Sandra Rees, Iven Mareels, and Gary Egan. A Tracking Approach to Parcellation of the Cerebral Cortex, volume 3749 of Lecture Notes in Computer Science, pages 294–301. Springer Berlin / Heidelberg, 2005.
[2] J. Annese, A. Pitiot, I. D. Dinov, and A. W. Toga. A myelo-architectonic method for the structural classification of cortical areas. NeuroImage, 21(1):15–26, 2004.
[3] Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F. Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
[4] George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
[5] James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41, 2005.
[6] Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W. Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August 2007.
[7] D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001. PMID: 11522765.
[8] Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J. Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
[9] Trevor Hastie, Robert Tibshirani, Michael Eisen, Ash Alizadeh, Ronald Levy, Louis Staudt, Wing Chan, David Botstein, and Patrick Brown. ’Gene shaving’ as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology, 1(2):research0003.1–research0003.21, 2000.
[10] Jano van Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Patterns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg, 2008.
[11] C Kemp, JB Tenenbaum, TL Griffiths, T Yamada, and N Ueda. Learning systems of concepts with an infinite relational model. In AAAI, 2006.
[12] F. Kruggel, M. K. Brückner, Th. Arendt, C. J. Wiggins, and D. Y. von Cramon. Analyzing the neocortical fine-structure. Medical Image Analysis, 7(3):251–264, September 2003.
[13] Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A high-resolution anatomical framework of the neonatal mouse brain for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
[14] Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung, Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
[15] Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson, Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.
[16] Christopher J. Paciorek. Computational techniques for spatial logistic regression with large data sets. Computational Statistics & Data Analysis, 51(8):3631–3653, May 2007.
[17] George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2nd edition, July 2001.
[18] A. Schleicher, N. Palomero-Gallagher, P. Morosan, S. Eickhoff, T. Kowalski, K. Vos, K. Amunts, and K. Zilles. Quantitative architectural analysis: a new approach to cortical mapping. Anatomy and Embryology, 210(5):373–386, December 2005.
[19] Oliver Schmitt, Lars Hömke, and Lutz Dümbgen. Detection of cortical transition regions utilizing statistical analyses of excess masses. NeuroImage, 19(1):42–63, May 2003.
[20] Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A. Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl. Acids Res., 35(suppl_1):D618–623, 2007.
[21] Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Douglas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information network: the zebrafish model organism database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
[22] Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3rd edition, November 2003.
[23] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
[24] Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002. PMC151190.
[25] Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
[26] Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry, Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh Mouse Atlas of Gene Expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
[27] Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
[28] Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, et al. (Mouse Genome Sequencing Consortium). Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915):520–562, December 2002. PMID: 12466850.