Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH) and BAC-transgenics allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions;

(2) develop an algorithm to suggest new ways of carving up a structure into anatomical subregions, based on spatial patterns in gene expression;

(3) create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in aims (1) and (2).

In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, as well as the development of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in an open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and made freely available for others to use.

Background and significance

Aim 1

Machine learning terminology: supervised learning

The task of looking for marker genes for anatomical subregions means that one is looking for a set of genes such that, if the expression levels of those genes are known, then the locations of the subregions can be inferred.

If we define the subregions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the subregions, we may say that we are using gene expression to determine to which subregion each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its subregion).

An understanding of the relationship between combinations of gene expression levels and the locations of the subregions can therefore be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the subregional identity of that voxel, that is, the subregion to which it belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
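As a concrete illustration of this terminology, the following minimal sketch trains a classifier whose instances are voxels (represented by their gene expression levels) and whose labels are subregion identities. The data are synthetic, and the choice of logistic regression is purely illustrative; it is not the method proposed here.

```python
# Minimal illustrative sketch: voxels as instances, gene expression levels as
# features, subregion identities as class labels. Synthetic data; logistic
# regression is an arbitrary example of a classifier, not our proposed method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_genes = 1000, 50
expression = rng.random((n_voxels, n_genes))    # one row per voxel (instance), one column per gene (feature)
subregion = rng.integers(0, 4, size=n_voxels)   # label: subregion identity of each voxel

# "Training" (learning) constructs the classifier from the training data.
classifier = LogisticRegression(max_iter=1000).fit(expression, subregion)

# The trained classifier assigns a subregion label to a voxel
# given only that voxel's gene expression profile.
new_voxel = rng.random((1, n_genes))
print(classifier.predict(new_voxel))
```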
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of machine learning procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used in the construction of the classifier is called training data.

In the machine learning literature, this sort of procedure may be thought of as a supervised learning task: a task in which the goal is to learn a mapping from instances to labels, and the training data consist of a set of instances (voxels) for which the labels (subregions) are known.

Each gene expression level is called a feature, and the selection of which genes^1 to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.

One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, an iterative procedure may be used in which features are added to and removed from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy" (a minimal sketch of a greedy procedure is given below, after Principle 1).

Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods according to how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a pointwise scoring method.

Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.

Principle 1: Combinatorial gene expression

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results). Therefore, each instance should contain multiple features (genes).
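The following is a minimal sketch of the kind of greedy ("stepwise") feature selection described above, selecting a small combination of genes rather than a single gene. The data are synthetic, and scoring a candidate gene set by cross-validated classification accuracy is an illustrative assumption; the scoring measures we actually intend to compare are discussed under Principle 3 and in Preliminary Results.

```python
# Illustrative sketch of greedy ("stepwise") forward feature selection.
# Synthetic data; cross-validated accuracy is a placeholder for whichever
# scoring measure is ultimately used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_select(expression, labels, n_markers=3):
    selected, remaining = [], list(range(expression.shape[1]))
    for _ in range(n_markers):
        def score(gene):
            cols = selected + [gene]
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, expression[:, cols], labels, cv=3).mean()
        best = max(remaining, key=score)   # gene that most improves the score
        selected.append(best)
        remaining.remove(best)
    return selected                        # indices of the chosen marker genes

rng = np.random.default_rng(0)
expression = rng.random((500, 40))         # 500 voxels, 40 candidate genes
labels = rng.integers(0, 2, size=500)      # 1 if the voxel is in the target region
print(greedy_select(expression, labels))
```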
Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data available to a classifier, the better it can do; for example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the levels of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.

_________________________________________
^1 Strictly speaking, the features are gene expression levels, but we will refer to them as genes.

Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest approach is to compute each voxel's sub-score using only that voxel's own data and then combine these sub-scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.

Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.

Aim 2

Machine learning terminology: clustering

If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.

The task of deciding how to carve up a structure into anatomical subregions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We assume that voxels from the same subregion have similar gene expression profiles, at least compared to voxels from other subregions (a minimal clustering sketch under this assumption is given below).
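As a concrete illustration of clustering in this setting, the sketch below groups voxels into clusters based only on their gene expression profiles, with no class labels. The data are synthetic, and k-means with a fixed number of clusters is an arbitrary example; it is not necessarily one of the clustering methods we will ultimately use.

```python
# Illustrative sketch of unsupervised clustering of voxels by gene expression.
# Synthetic data; k-means and the choice of 4 clusters are arbitrary.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
expression = rng.random((1000, 50))      # one row per voxel, one column per gene

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(expression)
candidate_subregion = kmeans.labels_     # cluster (candidate subregion) assigned to each voxel
print(np.bincount(candidate_subregion))  # number of voxels in each cluster
```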
Under this assumption, clustering voxels is the same as finding potential subregions: we seek a partitioning of the voxels into subregions, that is, into clusters of voxels with similar gene expression.

It is desirable to determine not just one set of subregions, but also how these subregions relate to each other, if at all; perhaps some of the subregions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, at a coarser spatial scale they could be grouped together into one large subregion. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partitions the voxels. This is called hierarchical clustering.

Similarity scores

A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.

Spatially contiguous clusters; image segmentation

We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous ones. Aim 2 is similar to an image segmentation task, with two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three; there are, however, imaging tasks which use more than three channels, for example multispectral and hyperspectral imaging, which are often used to process satellite imagery. A second, more crucial difference is that various cues which are appropriate for detecting sharp object boundaries in a visual scene are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.

Dimensionality reduction

Unlike in aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set.
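The sketch below illustrates one standard dimensionality reduction technique, principal component analysis (PCA), producing a reduced feature set from a voxel-by-gene expression matrix. The data, and the choices of PCA and of ten reduced features, are illustrative assumptions, not techniques we commit to here.

```python
# Illustrative sketch of dimensionality reduction (feature extraction).
# Synthetic data; PCA and the number of reduced features are arbitrary choices.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expression = rng.random((1000, 2000))    # one row per voxel, one column per gene

pca = PCA(n_components=10)
reduced = pca.fit_transform(expression)  # the reduced feature set: each reduced feature
                                         # is a linear combination of gene expression
                                         # levels, not a single gene
print(reduced.shape)                     # (1000, 10): one reduced instance per voxel
```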
After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.

Another use for dimensionality reduction is to visualize the relationships between subregions. For example, one might want to make a 2-D plot upon which each subregion is represented by a single point, with the property that subregions with similar gene expression profiles are nearby on the plot (that is, the property that the distance between pairs of points in the plot is proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is applied after clustering, whereas in the previous paragraph we were talking about using dimensionality reduction before clustering.

Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than having one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous subregion. Therefore, it seems likely that an anatomically interesting subregion will have multiple genes which each individually pick it out^2. This suggests the following procedure: cluster together genes which pick out similar subregions, and then use the most popular common subregions as the final clusters. In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.

Aim 3

Background

The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface, while the layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a many-layered cake.

Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for many cortical areas.
When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and to interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in the cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson [?] on the one hand, and Paxinos and Franklin [?] on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.

Significance

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can identify many of the areal boundaries at once. This panel of marker genes will support the development of an ISH protocol that allows experimenters to more easily identify which anatomical areas are present in small samples of cortex.

The method developed in aim (2), applied to the cortex in aim (3), will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then today's cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.

While we do not propose here to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.

_________________________________________
^2 This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into subregions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each subregion can be identified by single genes.

Related work

There does not appear to be much work on the automated analysis of spatial gene expression data.
There is a substantial body of work on the analysis of gene expression data in general; however, most of it concerns gene expression data which is not fundamentally spatial.

As noted above, there has been much work on both supervised learning and clustering, and there are many available algorithms for each. However, the completion of Aims 1 and 2 involves more than just choosing between a set of existing algorithms, and will constitute a substantial contribution to biology. The algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may be necessary in order to achieve the best results in this application.

We are aware of two existing efforts to relate spatial gene expression data to anatomy through computational methods.

[3] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis, two clustering methods were employed: a modified non-negative matrix factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme using correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of this type of analysis. We have run NNMF on the cortical dataset^3, and while the results are promising (see Preliminary Data), we think that it will be possible to find a better method (we also think that more of the steps which this paper's authors performed manually can be automated).

[2] describes AGEA, the "Anatomic Gene Expression Atlas". AGEA is an analysis tool for the ABA dataset with three components:

* Gene Finder: the user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, and (2) yields a list of genes which are overexpressed in that cluster.

* Correlation: the user selects a seed voxel and the system shows the user the correlation between the gene expression profile of the seed voxel and that of every other voxel.

* Clusters: AGEA includes a precomputed hierarchical clustering of voxels, based on a recursive bifurcation algorithm with correlation as the similarity metric.

Gene Finder is different from our Aim 1 in at least four ways. First, although the user chooses a seed voxel, Gene Finder, not the user, chooses the cluster for which genes will be found, and in our experience it never chooses cortical areas, instead preferring cortical layers^4. Therefore, Gene Finder cannot be used to find marker genes for cortical areas. Second, Gene Finder finds only single genes, whereas we will also look for combinations of genes^5. Third, Gene Finder can only use overexpression as a marker, whereas in the Preliminary Data we show that underexpression can also be used. Fourth, Gene Finder uses a simple pointwise score^6, whereas we will also use geometric metrics such as gradient similarity.
AGEA's hierarchical clustering is different from our Aim 2 in at least three ways. First, the clustering finds clusters corresponding to layers, but no clusters corresponding to areas^7,^8. Our Aim 2 will not be accomplished until a clustering is produced which yields areas. Second, AGEA uses perhaps the simplest possible similarity score (correlation), and does no dimensionality reduction before calculating similarity. While it is possible that a more complex system will not do any better than this, we believe further exploration of alternative methods of scoring and dimensionality reduction is warranted. Third, AGEA did not look at clusters of genes; in Preliminary Data we show that clusters of genes may identify interesting spatial subregions such as cortical areas.

_________________________________________
^3 We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion mentions that they also tried a hierarchical variant of NNMF, but since they did not report its results, we assume that those results were not any more impressive than the results of the non-hierarchical variant.
^4 Because of the way in which Gene Finder chooses a cluster, layers will always be preferred to areas if pairwise correlations between the gene expression of voxels in different areas but the same layer are stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. This appears to be the case.
^5 See Preliminary Data for an example of an area which cannot be marked by any single gene in the dataset, but which can be marked by a combination.
^6 "Expression energy ratio", which captures overexpression.
^7 This is for the same reason as in footnote 4.
^8 There are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these.

Figure 1: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel's value in the lower left is the sum of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells underneath each pixel, with red meaning much expression and blue meaning little.

Preliminary work

Format conversion between SEV, MATLAB, NIFTI

We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's formats.

Flatmap of cortex

We created a mask which selects only those voxels within the ABA atlas space which belong to cerebral cortex.
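For concreteness, the following is a minimal sketch of the kind of mask construction just described, using the NIFTI format mentioned above. The file names and the set of cortical structure IDs are hypothetical placeholders, and the use of nibabel is an illustrative choice; it is not necessarily the code path (e.g., MATLAB) that we actually used.

```python
# Illustrative sketch: build a binary mask of cortical voxels from an atlas
# annotation volume stored as NIFTI. File names, and the structure IDs treated
# as "cortex", are hypothetical placeholders.
import numpy as np
import nibabel as nib

annotation_img = nib.load("aba_annotation.nii.gz")        # hypothetical path
annotation = annotation_img.get_fdata()                    # structure ID per voxel

cortical_ids = [315, 320, 335]                             # hypothetical cortical structure IDs
mask = np.isin(annotation, cortical_ids).astype(np.uint8)  # 1 where the voxel is cortical

nib.save(nib.Nifti1Image(mask, annotation_img.affine), "cortex_mask.nii.gz")
```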
todo

Using Caret [1], we manually entered the boundaries of each cortical area.

Cortical layers are found at different depths in different parts of the cortex. We have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.

In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary.

Using combinations of multiple genes is necessary and sufficient to delineate some cortical areas

Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1^9 is the best-fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper left of Figure 1 shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the region corresponding to the overshoot is the medial surface of the cortex, and MO is only found on the lateral surface (todo).

Gene mtif2^10 is shown in the upper right of Figure 1. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary, and it does not express very much on the medial surface. By adding together the values at each pixel in these two maps, we get the lower left of Figure 1. This combination captures area MO much better than any single gene.

Correlation

todo

Conditional entropy

todo

Gradient similarity

todo

Geometric and pointwise scoring methods provide complementary information

_________________________________________
^9 "WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652.
^10 "Mitochondrial translational initiation factor 2"; EntrezGene ID 76784.

Figure 2: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr.

To show that local geometry can provide useful information that cannot be detected via pointwise analyses, consider Figure 2. The top row of Figure 2 displays the three genes which best match area AUD according to a pointwise method^11. The bottom row displays the three genes which best match AUD according to a method which considers local geometry^12. The pointwise method in the top row identifies genes which express more strongly within AUD than outside of it; its weakness is that it also selects genes whose expression pattern has no salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that it also selects genes which do not express over the entire area (a schematic contrast between the two kinds of score is sketched below).
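For illustration only, the sketch below contrasts a pointwise score for a single gene with a simple local, border-sensitive score computed from image gradients. Both the synthetic data and the particular gradient-based formula are assumptions made for the sake of the example; in particular, the local score shown here is not the gradient similarity measure referred to elsewhere in this proposal.

```python
# Illustrative contrast between a pointwise score and a local (geometric) score
# for one gene's flattened expression map versus a binary map of a target area.
# Synthetic data; the gradient-based score is a generic example of a local
# score, not the gradient similarity measure used in this proposal.
import numpy as np

def pointwise_score(expr, area):
    # Each pixel contributes using only its own values:
    # correlation between expression level and area membership.
    return float(np.corrcoef(expr.ravel(), area.ravel())[0, 1])

def local_border_score(expr, area):
    # Each pixel's contribution uses its neighbors (via finite differences):
    # do the expression map and the area map change in the same direction
    # at the same places?
    gy_e, gx_e = np.gradient(expr.astype(float))
    gy_a, gx_a = np.gradient(area.astype(float))
    return float(np.sum(gx_e * gx_a + gy_e * gy_a))

rng = np.random.default_rng(0)
expr = rng.random((64, 64))                 # flattened expression map of one gene
area = np.zeros((64, 64))
area[16:48, 16:48] = 1                      # binary map of the target area
print(pointwise_score(expr, area), local_border_score(expr, area))
```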
Genes which rank highly under both the pointwise and the border criteria, such as Aph1a in this example, may be particularly good markers. None of these genes is, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.

Areas which can be identified by single genes

todo

Areas can sometimes be marked by underexpression

todo

Specific to Aim 1 (and Aim 3)

Forward stepwise logistic regression

todo

SVM on all genes at once

In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved a classification accuracy of about 81%^13 (a minimal sketch of this kind of evaluation is given at the end of this section). As noted above, however, a classifier that looks at all the genes at once is not practically useful.

The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.

Decision trees

todo

Specific to Aim 2 (and Aim 3)

Raw dimensionality reduction results

todo

_________________________________________
^11 For each gene, we fit a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the expression of that gene underneath the pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.
^12 For each gene, the gradient similarity (see the Gradient similarity section above) between (a) a map of that gene's expression on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.
^13 Using 5-fold cross-validation.

(might want to include NNMF here, since it is mentioned above)

Dimensionality reduction plus K-means or spectral clustering

todo

Many areas are captured by clusters of genes

todo
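As referenced in the "SVM on all genes at once" paragraph above, the following is a minimal sketch of that kind of evaluation: a support vector machine trained on all genes, scored by 5-fold cross-validation. The data shapes, the linear kernel, and all settings are illustrative assumptions, not a record of the actual experiment.

```python
# Illustrative sketch of evaluating an "all genes at once" classifier with
# 5-fold cross-validation. Synthetic data; kernel and sizes are arbitrary.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
expression = rng.random((2000, 200))           # one row per cortical surface pixel, one column per gene
area_labels = rng.integers(0, 10, size=2000)   # cortical area ID for each pixel

svm = SVC(kernel="linear")
accuracy = cross_val_score(svm, expression, area_labels, cv=5).mean()
print(f"5-fold cross-validated accuracy: {accuracy:.2f}")
```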
Research plan

todo, amongst other things:

Develop algorithms that find genetic markers for anatomical regions

1. Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise, geometric, and information-theoretic measures.
2. Develop a procedure to find single marker genes for anatomical regions: for each cortical area, using or combining the scoring measures developed, we will rank the genes by their ability to delineate that area.
3. Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily combine single-gene markers into sets, such as forward stepwise regression and decision trees, and (b) supervised learning techniques which use soft constraints to minimize the number of features, such as sparse support vector machines.
4. Extend the procedure to handle difficult areas by combining or redrawing the boundaries: an area may be difficult to identify because its boundaries are misdrawn, or because it does not "really" exist as a single area, at least at the genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.

Apply these algorithms to the cortex

1. Create open-source format conversion tools: we will create tools to bulk download the ABA dataset and to convert between the SEV, NIFTI, and MATLAB formats.
2. Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
3. Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
4. Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify that area, and we will also present "panels" of genes that can be used to delineate many areas at once.

Develop algorithms to suggest a division of a structure into anatomical parts

1. Explore dimensionality reduction algorithms applied to pixels: including TODO
2. Explore dimensionality reduction algorithms applied to genes: including TODO
3. Explore clustering algorithms applied to pixels: including TODO
4. Explore clustering algorithms applied to genes: including gene shaving, TODO
5. Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps.
6. Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex.

Bibliography & References Cited

[1] D. C. Van Essen, H. A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C. H. Anderson. An integrated software suite for surface-based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–459, 2001. PMID: 11522765.

[2] Lydia Ng, Amy Bernard, Chris Lau, Caroline C. Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M. Sunkin, Chinh Dang, Jason W. Bohland, Hemant Bokil, Partha P. Mitra, Luis Puelles, John Hohmann, David J. Anderson, Ed S. Lein, Allan R. Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat Neurosci, 12(3):356–362, March 2009.

[3] Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud, Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones, Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–1021, December 2008.
_______________________________________________________________________________________________________

stuff i dunno where to put yet (there is more scattered through grant-oldtext):

Principle 4: Work in 2-D whenever possible

In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.

The method that we will develop will begin by mapping the data into a 2-D plane. Although the manifold that characterizes cortical areas is known to be the cortical surface, it remains to be seen which method of mapping that manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret [1]) with mappings which preserve angle (conformal maps).

Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional. If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.

note: do we need to cite: no known markers, impressive results?