bshanks@33 1 \documentclass{nih-blank}
bshanks@33 2 %%\piname{Stevens, Charles F.}
bshanks@30 3
bshanks@0 4 == Specific aims ==
bshanks@0 5
bshanks@17 6 Massive new datasets obtained with techniques such as in situ hybridization (ISH) and BAC-transgenics allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:\\
bshanks@17 7
bshanks@17 8 (1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions\\
bshanks@17 9
bshanks@17 10 (2) develop an algorithm to suggest new ways of carving up a structure into anatomical subregions, based on spatial patterns in gene expression\\
bshanks@17 11
bshanks@35 12 (3) create a 2-D "flat map" dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of Caret, an existing open-source scientific imaging program. Use this dataset to validate the methods developed in (1) and (2).\\
bshanks@0 13
In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.
bshanks@0 15
bshanks@0 16 All algorithms that we develop will be implemented in an open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
bshanks@0 17
bshanks@0 18
bshanks@26 19 \newpage
bshanks@0 20
bshanks@0 21 == Background and significance ==
bshanks@0 22
bshanks@0 23 === Aim 1 ===
bshanks@16 24
bshanks@27 25 \vspace{0.3cm}**Machine learning terminology: supervised learning**
bshanks@16 26
bshanks@0 27 The task of looking for marker genes for anatomical subregions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the subregions can be inferred.
bshanks@0 28
bshanks@0 29 If we define the subregions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the subregions, we may say that we are using gene expression to determine to which subregion each voxel within the structure belongs. We call this a __classification task__, because each voxel is being assigned to a class (namely, its subregion).
bshanks@0 30
Therefore, an understanding of the relationship between the combination of gene expression levels and the locations of the subregions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the subregional identity of the target voxel, that is, the subregion to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
bshanks@0 32
bshanks@0 33 The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of a machine learning procedure. The construction of the classifier is called __training__ (also __learning__), and the initial gene expression dataset used in the construction of the classifier is called __training data__.
bshanks@0 34
bshanks@28 35 In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (subregions) are known.
bshanks@0 36
bshanks@29 37 Each gene expression level is called a __feature__, and the selection of which genes\footnote{Strictly speaking, the features are gene expression levels, but we'll call them genes.} to include is called __feature selection__. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
bshanks@0 38
bshanks@0 39 One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
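To make the notion of a stepwise procedure concrete, here is a minimal sketch (in Python; the array names, the scoring function, and the limit of three genes are illustrative assumptions, not part of our implementation) of a greedy forward selection loop that repeatedly adds whichever gene most raises the score of the selected set:

\begin{verbatim}
# Minimal sketch of greedy forward feature selection.
# Assumptions: X is a (num_voxels, num_genes) expression array, y is a
# binary vector marking the target subregion, and score_fn(X_sub, y)
# returns a higher value for better gene sets.
import numpy as np

def greedy_forward_selection(X, y, score_fn, max_genes=3):
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_genes:
        # Tentatively add each remaining gene and keep the best one.
        scores = [score_fn(X[:, selected + [g]], y) for g in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
\end{verbatim}

A full stepwise procedure would also consider removing previously added genes at each step; this sketch only adds them.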
bshanks@0 40
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
bshanks@0 42
bshanks@0 43 Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
bshanks@0 44
bshanks@16 45
bshanks@27 46 \vspace{0.3cm}**Principle 1: Combinatorial gene expression**
bshanks@29 47 It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results). Therefore, each instance should contain multiple features (genes).
bshanks@0 48
bshanks@16 49
bshanks@27 50 \vspace{0.3cm}**Principle 2: Only look at combinations of small numbers of genes**
bshanks@0 51 When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better that it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
bshanks@0 52
bshanks@0 53
bshanks@16 54
bshanks@27 55 \vspace{0.3cm}**Principle 3: Use geometry in feature selection**
bshanks@16 56
bshanks@0 57 When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.
bshanks@0 58
bshanks@0 59
bshanks@16 60
bshanks@27 61 \vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
bshanks@16 62
bshanks@0 63
bshanks@0 64 There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data.
bshanks@0 65
bshanks@0 66 Therefore, when possible, the instances should represent pixels, not voxels.
bshanks@0 67
bshanks@0 68
bshanks@0 69 === Aim 2 ===
bshanks@16 70
bshanks@27 71 \vspace{0.3cm}**Machine learning terminology: clustering**
bshanks@16 72
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called clustering or cluster analysis.
bshanks@15 74
bshanks@15 75 The task of deciding how to carve up a structure into anatomical subregions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same subregion have similar gene expression profiles, at least compared to the other subregions. This means that clustering voxels is the same as finding potential subregions; we seek a partitioning of the voxels into subregions, that is, into clusters of voxels with similar gene expression.
bshanks@15 76
It is desirable to determine not just one set of subregions, but also how these subregions relate to each other, if at all; perhaps some of the subregions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large subregion. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
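As a minimal sketch of hierarchical clustering on this kind of data (Python with SciPy; the array name, the placeholder data, and the choice of Ward linkage are assumptions made only for illustration), one can build a tree over voxels from their expression profiles and then cut it at either a fine or a coarse scale:

\begin{verbatim}
# Minimal sketch: hierarchical clustering of voxels by expression profile.
# Assumption: X is a (num_voxels, num_genes) array of expression levels.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(500, 40)                    # placeholder data
tree = linkage(X, method='ward')               # agglomerative tree over voxels
fine_labels = fcluster(tree, t=10, criterion='maxclust')    # fine scale
coarse_labels = fcluster(tree, t=3, criterion='maxclust')   # coarse scale
\end{verbatim}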
bshanks@15 78
bshanks@16 79
bshanks@27 80 \vspace{0.3cm}**Similarity scores**
bshanks@16 81
bshanks@18 82 A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
bshanks@16 83
bshanks@16 84
bshanks@27 85 \vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
bshanks@16 86
bshanks@15 87
We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task, because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
bshanks@15 89
Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three; there are, however, imaging tasks which use more than three colors, for example multispectral and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
bshanks@15 91
bshanks@16 92
bshanks@27 93 \vspace{0.3cm}**Dimensionality reduction**
bshanks@16 94
bshanks@15 95
Unlike in aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
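For example, the following sketch (Python with scikit-learn; the array name, the placeholder data, and the choices of 20 components and 12 clusters are assumptions) replaces each instance's thousands of gene expression features with a small reduced feature set before clustering:

\begin{verbatim}
# Minimal sketch: dimensionality reduction before clustering.
# Assumption: X is a (num_pixels, num_genes) expression array.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.rand(1000, 4000)                       # placeholder data
X_reduced = PCA(n_components=20).fit_transform(X)    # reduced instances
labels = KMeans(n_clusters=12, n_init=10).fit_predict(X_reduced)
\end{verbatim}

Here each reduced feature is a linear combination of gene expression levels rather than a single gene.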
bshanks@15 97
Another use for dimensionality reduction is to visualize the relationships between subregions. For example, one might want to make a 2-D plot upon which each subregion is represented by a single point, and with the property that subregions with similar gene expression profiles should be nearby on the plot (that is, the property that the distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property -- however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering, whereas in the previous paragraph we were talking about using dimensionality reduction before clustering.
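A sketch of this second use (Python with scikit-learn; the dissimilarity measure, array names, and placeholder data are assumptions): given pairwise dissimilarities between the subregions' mean expression profiles, multidimensional scaling finds a 2-D arrangement of points whose mutual distances approximate those dissimilarities.

\begin{verbatim}
# Minimal sketch: 2-D layout of subregions by expression dissimilarity.
# Assumption: profiles is a (num_subregions, num_genes) array holding the
# mean expression profile of each subregion.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

profiles = np.random.rand(15, 4000)                    # placeholder data
D = squareform(pdist(profiles, metric='correlation'))  # dissimilarities
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)          # one point per subregion
\end{verbatim}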
bshanks@15 99
bshanks@16 100
bshanks@27 101 \vspace{0.3cm}**Clustering genes rather than voxels**
bshanks@16 102
bshanks@15 103
bshanks@15 104 Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
bshanks@15 105
bshanks@15 106 Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
bshanks@15 107
Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous subregion. Therefore, it seems likely that an anatomically interesting subregion will have multiple genes which each individually pick it out\footnote{This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into subregions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each subregion can be identified by single genes.}. This suggests the following procedure: cluster together genes which pick out similar subregions, and then use the most popular common subregions as the final clusters. In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some "superregions" formed by lumping together a few regions, are associated with gene clusters in this fashion.
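The following sketch illustrates that procedure (Python with SciPy; the array name, placeholder data, number of gene clusters, and the use of correlation distance are all assumptions): genes are clustered by the similarity of their spatial expression patterns, and the mean pattern of a well-populated gene cluster is taken as a candidate subregion.

\begin{verbatim}
# Minimal sketch: cluster genes by their spatial expression patterns.
# Assumption: X is a (num_pixels, num_genes) expression array.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(1000, 4000)                 # placeholder data
genes_as_rows = X.T                            # one row per gene image
tree = linkage(genes_as_rows, method='average', metric='correlation')
gene_labels = fcluster(tree, t=50, criterion='maxclust')

# The mean image of a well-populated gene cluster is a candidate subregion.
biggest = np.argmax(np.bincount(gene_labels))
candidate_region = X[:, gene_labels == biggest].mean(axis=1)
\end{verbatim}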
bshanks@15 109
bshanks@0 110
bshanks@0 111
bshanks@0 112
bshanks@0 113
bshanks@0 114 === Aim 3 ===
bshanks@16 115
bshanks@27 116 \vspace{0.3cm}**Background**
bshanks@16 117
bshanks@0 118 The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of many-layered cake.
bshanks@0 119
bshanks@0 120 Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
bshanks@0 121
bshanks@36 122 Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
bshanks@36 123
bshanks@36 124 \vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**
bshanks@36 125
The Allen Mouse Brain Atlas (ABA) data was produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed in order to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
bshanks@36 127
Next, an automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes 200 microns on a side. There are 67x41x58 = 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain\cite{ng_anatomic_2009}.
bshanks@36 129
Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA, because the sagittal data does not cover the entire cortex, and has greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.
bshanks@0 131
bshanks@0 132
bshanks@0 133
bshanks@16 134
bshanks@27 135 \vspace{0.3cm}**Significance**
bshanks@16 136
bshanks@0 137 The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
bshanks@0 138
The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will support the development of an ISH protocol that allows experimenters to more easily identify which anatomical areas are present in small samples of cortex.
bshanks@0 140
The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then today's cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.
bshanks@0 142
bshanks@0 143 While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.
bshanks@0 144
bshanks@0 145
bshanks@0 146 === Related work ===
bshanks@18 147 There does not appear to be much work on the automated analysis of spatial gene expression data.
bshanks@18 148
There is a substantial body of work on the analysis of gene expression data; however, most of this work concerns gene expression data which is not fundamentally spatial.
bshanks@18 150
bshanks@22 151 As noted above, there has been much work on both supervised learning and clustering, and there are many available algorithms for each. However, the completion of Aims 1 and 2 involves more than just choosing between a set of existing algorithms, and will constitute a substantial contribution to biology. The algorithms require the scientist to provide a framework for representing the problem domain, and the way that this framework is set up has a large impact on performance. Creating a good framework can require creatively reconceptualizing the problem domain, and is not merely a mechanical "fine-tuning" of numerical parameters. For example, we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may be necessary in order to achieve the best results in this application.
bshanks@18 152
bshanks@20 153 We are aware of two existing efforts to relate spatial gene expression data to anatomy through computational methods.
bshanks@20 154
bshanks@32 155 \cite{thompson_genomic_2008} describes an analysis of the anatomy of
bshanks@32 156 the hippocampus using the ABA dataset. In addition to manual analysis,
two clustering methods were employed: a modified Non-negative Matrix
Factorization (NNMF), and a hierarchical recursive bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating the usefulness of such research. We have run NNMF on the cortical dataset\footnote{We ran "vanilla" NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was needed. The paper under discussion mentions that they also tried a hierarchical variant of NNMF, but since they didn't report its results, we assume that those results were not any more impressive than the results of the non-hierarchical variant.} and while the results are promising (see Preliminary Data), we think that it will be possible to find a better method (we also think that more automation of the parts that this paper's authors did manually will be possible).
bshanks@32 159
bshanks@32 160
bshanks@32 161 \cite{ng_anatomic_2009} describes AGEA, "Anatomic Gene Expression
bshanks@32 162 Atlas". AGEA is an analysis tool for the ABA dataset. AGEA has three
bshanks@32 163 components:
bshanks@32 164
bshanks@32 165 * Gene Finder: The user selects a seed voxel and the system (1) chooses a
bshanks@32 166 cluster which includes the seed voxel, (2) yields a list of genes
bshanks@32 167 which are overexpressed in that cluster.
bshanks@32 168
* Correlation: The user selects a seed voxel, and the system shows the user how much correlation there is between the gene expression profile of the seed voxel and that of every other voxel.
bshanks@32 172
* Clusters: AGEA includes a precomputed hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric.
bshanks@32 174
Gene Finder is different from our Aim 1 in at least four ways. First, although the user chooses a seed voxel, Gene Finder, not the user, chooses the cluster for which genes will be found, and in our experience it never chooses cortical areas, instead preferring cortical layers\footnote{\label{layersNotAreas}Because of the way in which Gene Finder chooses a cluster, layers will always be preferred to areas if pairwise correlations between the gene expression of voxels in different areas but the same layer are stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. This appears to be the case.}. Therefore, Gene Finder cannot be used to find marker genes for cortical areas. Second, Gene Finder finds only single genes, whereas we will also look for combinations of genes\footnote{See Preliminary Data for an example of an area which cannot be marked by any single gene in the dataset, but which can be marked by a combination.}. Third, Gene Finder can only use overexpression as a marker, whereas in the Preliminary Data we show that underexpression can also be used. Fourth, Gene Finder uses a simple pointwise score\footnote{"Expression energy ratio", which captures overexpression.}, whereas we will also use geometric metrics such as gradient similarity.
bshanks@34 176
The hierarchical clustering is different from our Aim 2 in at least three ways. First, the clustering finds clusters corresponding to layers, but no clusters corresponding to areas\footnote{This is for the same reason as in footnote \ref{layersNotAreas}.}\footnote{There are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these.}. Our Aim 2 will not be accomplished until a clustering is produced which yields areas. Second, AGEA uses perhaps the simplest possible similarity score (correlation), and does no dimensionality reduction before calculating similarity. While it is possible that a more complex system will not do any better than this, we believe further exploration of alternative methods of scoring and dimensionality reduction is warranted. Third, AGEA did not look at clusters of genes; in Preliminary Data we have shown that clusters of genes may identify interesting spatial subregions such as cortical areas.
bshanks@18 178
bshanks@18 179
bshanks@0 180
bshanks@26 181 \newpage
bshanks@26 182
bshanks@0 183 == Preliminary work ==
bshanks@0 184
bshanks@15 185 === Format conversion between SEV, MATLAB, NIFTI ===
bshanks@38 186 We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats.
bshanks@35 187
bshanks@15 188
bshanks@15 189 === Flatmap of cortex ===
bshanks@36 190 We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided the cortex into hemispheres.
bshanks@36 191
bshanks@37 192 Using Caret\cite{van_essen_integrated_2001}, we created a mesh representation of the surface of the selected region. For each gene, for each node of the mesh, we used Caret to calculate an average of the gene expression of the voxels "underneath" that mesh node. We then used Caret to flatten the cortex, creating a two-dimensional mesh.
bshanks@36 193
bshanks@36 194 We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid into a MATLAB matrix.
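The resampling step can be sketched as follows (Python with SciPy rather than the MATLAB/Caret tools we actually use; the coordinate and value arrays here are placeholders): expression values at the scattered nodes of the flattened mesh are interpolated onto a regular pixel grid.

\begin{verbatim}
# Minimal sketch of resampling flat-mesh nodes onto a regular pixel grid.
# Assumptions: node_xy is a (num_nodes, 2) array of flattened-mesh node
# coordinates and node_vals holds one gene's expression value per node;
# both are placeholders here.
import numpy as np
from scipy.interpolate import griddata

node_xy = np.random.rand(5000, 2)        # placeholder node coordinates
node_vals = np.random.rand(5000)         # placeholder expression values

xs = np.linspace(node_xy[:, 0].min(), node_xy[:, 0].max(), 200)
ys = np.linspace(node_xy[:, 1].min(), node_xy[:, 1].max(), 200)
grid_x, grid_y = np.meshgrid(xs, ys)

# Pixels falling outside the flattened cortex come out as NaN.
pixels = griddata(node_xy, node_vals, (grid_x, grid_y), method='linear')
\end{verbatim}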
bshanks@36 195
bshanks@36 196 We manually traced the boundaries of each cortical area from the ABA coronal reference atlas slides. We then converted these manual traces into Caret-format regional boundary data on the mesh surface. Using Caret, we projected the regions onto the 2-d mesh, and then onto the grid, and then we converted the region data into MATLAB format.
bshanks@36 197
bshanks@37 198 At this point, the data is in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface:
bshanks@36 199
bshanks@36 200 * A 2-D matrix whose entries represent the regional label associated with each surface pixel
bshanks@36 201 * For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
bshanks@36 202
bshanks@38 203 We created a normalized version of the gene expression data by subtracting each gene's mean expression level (over all surface pixels) and dividing each gene by its standard deviation.
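In code, this normalization is simply a per-gene z-score over the surface pixels (a minimal sketch in Python/NumPy; the array name and placeholder data are assumptions):

\begin{verbatim}
# Minimal sketch: z-score each gene across all surface pixels.
# Assumption: X is a (num_pixels, num_genes) expression array.
import numpy as np

X = np.random.rand(1000, 4000)                     # placeholder data
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)      # per-gene mean 0, std 1
\end{verbatim}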
bshanks@38 204
bshanks@40 205 The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over the space of surface pixels; alternately, they can be thought of as images which can be displayed on the flatmapped surface.
bshanks@40 206
bshanks@37 207 To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines that allow the depth of the ROI for volume-to-surface projection to vary.
bshanks@36 208
bshanks@36 209 In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
bshanks@36 210
bshanks@36 211
bshanks@35 212
bshanks@35 213
bshanks@35 214
bshanks@35 215
bshanks@35 216
bshanks@38 217 === Feature selection and scoring methods ===
bshanks@38 218
bshanks@38 219
bshanks@38 220 \vspace{0.3cm}**Correlation**
bshanks@38 221 Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a binary mask over the surface pixels.
bshanks@38 222
One class of feature selection scoring methods comprises those which calculate some sort of "match" between each gene image and the target image. The genes which match best are good candidates for features.
bshanks@38 224
bshanks@38 225 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area.
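A minimal sketch of this scoring step (Python/NumPy; the array names and placeholder data are assumptions): each gene's vector of pixel values is correlated with the binary target mask, and the genes are then ranked by that correlation.

\begin{verbatim}
# Minimal sketch: rank genes by correlation with a target-area mask.
# Assumptions: X is a (num_pixels, num_genes) array and target is a binary
# vector marking which surface pixels belong to the area (placeholders here).
import numpy as np

X = np.random.rand(1000, 4000)                      # placeholder data
target = (np.random.rand(1000) > 0.9).astype(float)

Xc = X - X.mean(axis=0)
tc = target - target.mean()
corr = (Xc * tc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((tc ** 2).sum()))
ranked_genes = np.argsort(-corr)                    # best matches first
\end{verbatim}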
bshanks@38 226
bshanks@38 227 todo: fig
bshanks@38 228
bshanks@38 229 \vspace{0.3cm}**Conditional entropy**
bshanks@38 230 An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels.
bshanks@38 231
The simplest way to use information theory is on discrete data, so we discretized our gene expression data: for each gene, we created five thresholded binary masks of its expression levels, one at each of the following thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, and the mean plus two standard deviations.
bshanks@38 233
bshanks@39 234 Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression binary masks such that the conditional entropy of the target area's binary mask, conditioned upon the pair of gene expression binary masks, is minimized.
bshanks@39 235
bshanks@39 236 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?".
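The scoring function at the heart of this procedure can be sketched as follows (Python/NumPy; the function and variable names are assumptions, and an exhaustive search over pairs is shown in place of the stepwise search described above):

\begin{verbatim}
# Minimal sketch: conditional entropy of the target mask given a pair of
# thresholded gene masks. Assumptions: target and the rows of masks are
# binary integer vectors over surface pixels; an exhaustive pair search
# replaces the stepwise search described in the text.
import numpy as np
from itertools import combinations

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def conditional_entropy(target, mask_a, mask_b):
    # H(target | A, B) = H(target, A, B) - H(A, B)
    joint_tab = np.bincount(4 * target + 2 * mask_a + mask_b, minlength=8)
    joint_ab = np.bincount(2 * mask_a + mask_b, minlength=4)
    return entropy(joint_tab) - entropy(joint_ab)

def best_pair(target, masks):
    pairs = combinations(range(len(masks)), 2)
    return min(pairs, key=lambda p: conditional_entropy(target,
                                                        masks[p[0]],
                                                        masks[p[1]]))
\end{verbatim}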
bshanks@38 237
bshanks@38 238 todo: fig
bshanks@38 239
bshanks@38 240 \vspace{0.3cm}**Gradient similarity**
We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise, local scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity".
bshanks@40 242
One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction. The formula is:
bshanks@40 244
bshanks@41 245 \begin{align*}
\sum_{pixel \in pixels} \cos(\lvert \angle \nabla_1 - \angle \nabla_2 \rvert) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2}
bshanks@41 247 \end{align*}
bshanks@41 248
bshanks@41 249 where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$.
bshanks@40 250
bshanks@40 251 The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar).
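A direct transcription of this formula (a minimal sketch in Python/NumPy; the function name is ours, and np.gradient is used to estimate the image gradients):

\begin{verbatim}
# Minimal sketch of the gradient similarity score between two 2-D images.
# Assumption: img1 and img2 are aligned 2-D arrays (e.g. a gene expression
# image and an image of the target region).
import numpy as np

def gradient_similarity(img1, img2):
    gy1, gx1 = np.gradient(img1)          # gradient components of image 1
    gy2, gx2 = np.gradient(img2)          # gradient components of image 2
    angle1 = np.arctan2(gy1, gx1)         # gradient angles
    angle2 = np.arctan2(gy2, gx2)
    mag1 = np.hypot(gx1, gy1)             # gradient magnitudes
    mag2 = np.hypot(gx2, gy2)
    return np.sum(np.cos(np.abs(angle1 - angle2))
                  * (mag1 + mag2) / 2
                  * (img1 + img2) / 2)
\end{verbatim}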
bshanks@27 252
bshanks@27 253 \vspace{0.3cm}**Geometric and pointwise scoring methods provide complementary information**
bshanks@16 254
To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, we fit a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene, the gradient similarity (see section \ref{gradientSim}) between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes genes whose expression does not have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
bshanks@0 256
bshanks@0 257
bshanks@0 258 \begin{figure}\label{AUDgeometry}
bshanks@0 259 \includegraphics[scale=.31]{singlegene_AUD_lr_top_1_3386_jet.eps}
bshanks@0 260 \includegraphics[scale=.31]{singlegene_AUD_lr_top_2_1258_jet.eps}
bshanks@0 261 \includegraphics[scale=.31]{singlegene_AUD_lr_top_3_420_jet.eps}
bshanks@0 262
bshanks@0 263 \includegraphics[scale=.31]{singlegene_AUD_gr_top_1_2856_jet.eps}
bshanks@0 264 \includegraphics[scale=.31]{singlegene_AUD_gr_top_2_420_jet.eps}
bshanks@0 265 \includegraphics[scale=.31]{singlegene_AUD_gr_top_3_2072_jet.eps}
\caption{The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Aph1a$, $Ptk7$, $Aph1a$ again, and $Lepr$.}
bshanks@0 267 \end{figure}
bshanks@0 268
bshanks@16 269
bshanks@40 270 \vspace{0.3cm}**Using combinations of multiple genes is necessary and sufficient to delineate some cortical areas**
bshanks@40 271
Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best-fitting single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).
bshanks@40 273
Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in the upper right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene.
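The combination shown in the lower left of Fig. \ref{MOcombo} is simply a pixelwise sum (a minimal sketch in Python/NumPy; the image and mask arrays here are placeholders standing in for the normalized gene images and the MO mask, and correlation is used as one possible match score):

\begin{verbatim}
# Minimal sketch: combine two normalized gene images by pixelwise addition
# and score the combination against the area mask. The arrays below are
# placeholders standing in for the real Wwc1, Mtif2, and MO images.
import numpy as np

wwc1_img = np.random.rand(200, 200)      # placeholder normalized Wwc1 image
mtif2_img = np.random.rand(200, 200)     # placeholder normalized Mtif2 image
mo_mask = np.zeros((200, 200))
mo_mask[60:140, 40:120] = 1              # placeholder MO mask

combo = wwc1_img + mtif2_img             # the summed image in the figure
fit = np.corrcoef(combo.ravel(), mo_mask.ravel())[0, 1]
\end{verbatim}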
bshanks@40 275
bshanks@40 276 \begin{figure}\label{MOcombo}
bshanks@40 277 \includegraphics[scale=.36]{MO_vs_Wwc1_jet.eps}
bshanks@40 278 \includegraphics[scale=.36]{MO_vs_Mtif2_jet.eps}
bshanks@40 279
bshanks@40 280 \includegraphics[scale=.36]{MO_vs_Wwc1_plus_Mtif2_jet.eps}
bshanks@40 281 \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells underneath each pixel, with red meaning a lot of expression and blue meaning little.}
bshanks@40 282 \end{figure}
bshanks@40 283
bshanks@40 284
bshanks@40 285
bshanks@40 286
bshanks@40 287
bshanks@27 288 \vspace{0.3cm}**Areas which can be identified by single genes**
bshanks@16 289
bshanks@15 290 todo
bshanks@15 291
bshanks@35 292 \vspace{0.3cm}**Areas can sometimes be marked by underexpression**
bshanks@35 293
bshanks@35 294 todo
bshanks@15 295
bshanks@18 296 === Specific to Aim 1 (and Aim 3) ===
bshanks@27 297 \vspace{0.3cm}**Forward stepwise logistic regression**
bshanks@27 298 todo
bshanks@27 299
bshanks@27 300
bshanks@27 301 \vspace{0.3cm}**SVM on all genes at once**
bshanks@16 302
In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81\%\footnote{5-fold cross-validation.}. As noted above, however, a classifier that looks at all the genes at once isn't practically useful.
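For reference, the following sketch (Python with scikit-learn; this is an illustrative re-creation with placeholder data, not the code that produced the figure above) shows this kind of cross-validated evaluation of an SVM trained on all genes at once:

\begin{verbatim}
# Minimal sketch: 5-fold cross-validated SVM classification of surface
# pixels from their full gene expression profiles (placeholder data; an
# illustrative re-creation, not the code behind the reported accuracy).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 4000)               # placeholder expression data
y = np.random.randint(0, 40, size=1000)      # placeholder area labels

scores = cross_val_score(SVC(kernel='linear'), X, y, cv=5)
print(scores.mean())                         # mean accuracy across folds
\end{verbatim}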
bshanks@15 304
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
bshanks@15 306
bshanks@15 307
bshanks@16 308
bshanks@27 309 \vspace{0.3cm}**Decision trees**
bshanks@16 310
bshanks@15 311 todo
bshanks@15 312
bshanks@15 313
bshanks@18 314 === Specific to Aim 2 (and Aim 3) ===
bshanks@18 315
bshanks@27 316 \vspace{0.3cm}**Raw dimensionality reduction results**
bshanks@18 317
bshanks@20 318 todo
bshanks@20 319
bshanks@20 320 (might want to incld nnMF since mentioned above)
bshanks@18 321
bshanks@27 322 \vspace{0.3cm}**Dimensionality reduction plus K-means or spectral clustering**
bshanks@27 323
bshanks@27 324
bshanks@27 325
bshanks@27 326 \vspace{0.3cm}**Many areas are captured by clusters of genes**
bshanks@16 327
bshanks@15 328 todo
bshanks@15 329
bshanks@15 330
bshanks@15 331
bshanks@15 332
bshanks@15 333
bshanks@15 334
bshanks@15 335
bshanks@15 336
bshanks@15 337
bshanks@15 338
bshanks@15 339
bshanks@15 340 todo
bshanks@15 341
bshanks@26 342
bshanks@26 343
bshanks@26 344 \newpage
bshanks@15 345 == Research plan ==
bshanks@15 346
bshanks@18 347 todo amongst other things:
bshanks@0 348
bshanks@16 349
bshanks@27 350 \vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions**
bshanks@16 351
bshanks@0 352 # Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise, geometric, and information-theoretic measures.
bshanks@0 353 # Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining the scoring measures developed, we will rank the genes by their ability to delineate each area.
# Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised learning techniques which use soft constraints to minimize the number of features, such as sparse support vector machines (a sketch of such a sparse approach follows this list).
bshanks@0 355 # Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult to identify because the boundaries are misdrawn, or because it does not "really" exist as a single area, at least on the genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
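As an illustration of option (b) in the list above, the following sketch (Python with scikit-learn; the arrays, placeholder data, and the regularization constant are assumptions) fits an L1-regularized linear SVM whose penalty drives most gene coefficients to zero, so that the few genes with nonzero weights form the selected marker set:

\begin{verbatim}
# Minimal sketch: an L1-regularized ("sparse") linear SVM whose penalty
# drives most gene coefficients to zero; the genes with nonzero weights
# act as the selected marker set. Arrays and C are placeholders.
import numpy as np
from sklearn.svm import LinearSVC

X = np.random.rand(1000, 4000)                     # placeholder data
y = (np.random.rand(1000) > 0.9).astype(int)       # placeholder area mask

clf = LinearSVC(penalty='l1', dual=False, C=0.05, max_iter=5000).fit(X, y)
selected_genes = np.flatnonzero(clf.coef_[0])      # genes with nonzero weight
\end{verbatim}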
bshanks@0 356
bshanks@0 357
bshanks@16 358
bshanks@27 359 \vspace{0.3cm}**Apply these algorithms to the cortex**
bshanks@16 360
bshanks@0 361 # Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert between SEV, NIFTI and MATLAB formats.
bshanks@0 362 # Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
bshanks@0 363 # Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
bshanks@0 364 # Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
bshanks@0 365
bshanks@16 366
bshanks@27 367 \vspace{0.3cm}**Develop algorithms to suggest a division of a structure into anatomical parts**
bshanks@16 368
bshanks@0 369 # Explore dimensionality reduction algorithms applied to pixels: including TODO
bshanks@0 370 # Explore dimensionality reduction algorithms applied to genes: including TODO
bshanks@0 371 # Explore clustering algorithms applied to pixels: including TODO
bshanks@0 372 # Explore clustering algorithms applied to genes: including gene shaving, TODO
# Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
# Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
bshanks@0 375
bshanks@0 376
bshanks@0 377
bshanks@33 378 \newpage
bshanks@33 379
bshanks@33 380 \bibliographystyle{plain}
bshanks@33 381 \bibliography{grant}
bshanks@33 382
bshanks@33 383 \newpage
bshanks@0 384
bshanks@0 385 ----
bshanks@0 386
bshanks@15 387 stuff i dunno where to put yet (there is more scattered through grant-oldtext):
bshanks@15 388
bshanks@16 389
bshanks@27 390 \vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
bshanks@16 391
bshanks@15 392
bshanks@15 393 In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
bshanks@15 394
The method that we will develop will begin by mapping the data into a 2-D plane. Although the manifold that characterizes cortical areas is known to be the cortical surface, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps).
bshanks@15 396
bshanks@15 397 Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional. If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
bshanks@15 398
bshanks@17 399
bshanks@17 400
bshanks@31 401 %%if we need citations for aim 3 significance, http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WSS-4V70FHY-9&_user=4429&_coverDate=12%2F26%2F2008&_rdoc=1&_fmt=full&_orig=na&_cdi=7054&_docanchor=&_acct=C000059602&_version=1&_urlVersion=0&_userid=4429&md5=551eccc743a2bfe6e992eee0c3194203#app2 has examples of genetic targeting to specific anatomical regions
bshanks@20 402
bshanks@20 403 ---
bshanks@20 404
bshanks@20 405 note:
bshanks@28 406
bshanks@29 407 do we need to cite: no known markers, impressive results?
bshanks@33 408
bshanks@33 409
bshanks@33 410
bshanks@36 411 two hemis