diff grant.txt @ 85:da8f81785211
author | bshanks@bshanks.dyndns.org |
---|---|
date | Tue Apr 21 03:36:06 2009 -0700 (16 years ago) |
parents | d89a99c9ea9a |
children | aafe6f8c3593 |
line diff
1.1 --- a/grant.txt Tue Apr 21 00:54:22 2009 -0700
1.2 +++ b/grant.txt Tue Apr 21 03:36:06 2009 -0700
1.3 @@ -27,23 +27,27 @@
1.4
1.5 === Aim 1: Given a map of regions, find genes that mark the regions ===
1.6
1.7 -After defining terms, we will describe a set of principles which determine our strategy to completing this aim.
1.8 -
1.9 -\vspace{0.3cm}**Machine learning terminology: supervised learning** The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the regions can be inferred.
1.10 -
1.11 -If we define the regions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the regions, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region).
1.12 -
1.13 -Therefore, an understanding of the relationship between the combination of their expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
1.14 -
1.15 -The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of a machine learning procedure. The construction of the classifier is called __training__ (also __learning__), and the initial gene expression dataset used in the construction of the classifier is called __training data__.
1.16 -
1.17 -In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
1.18 +\vspace{0.3cm}**Machine learning terminology** The task of looking for marker genes for known anatomical regions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the regions can be inferred.
1.19 +
1.20 +%% then instead of saying that we are using gene expression to find the locations of the regions,
1.21 +
1.22 +%%If we define the regions so that they cover the entire anatomical structure to be divided, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region).
1.23 +
1.24 +%%Therefore, an understanding of the relationship between the combination of their expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
1.25 +
1.26 +If we define the regions so that they cover the entire anatomical structure to be divided, we may say that we are using gene expression to determine to which region each voxel within the structure belongs. We call this a __classification task__, because each voxel is being assigned to a class (namely, its region). The relationship between the genes' expression levels and the locations of the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs. We call this function a __classifier__. In general, the input to a classifier is called an __instance__, and the output is called a __label__ (or a __class label__).
1.27 +
1.28 +%% The construction of the classifier is called __training__ (also __learning__), and
1.29 +
1.30 +The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. The initial gene expression dataset used in the construction of the classifier is called __training data__. In the machine learning literature, this sort of procedure may be thought of as a __supervised learning task__, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (regions) are known.
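The instance/label framing above can be made concrete with a toy sketch (illustrative only, not the method proposed in this grant): instances are per-voxel expression vectors, labels are region identities, and training produces a classifier. The nearest-centroid rule and the toy data below are our own stand-ins.

```python
# Toy illustration of the supervised-learning framing in the text (not
# the grant's actual method): instances are per-voxel gene expression
# vectors, labels are region identities. A nearest-centroid classifier
# is "trained" by averaging the training instances of each region.

def train(instances, labels):
    """Return a classifier: expression vector -> region label."""
    sums, counts = {}, {}
    for x, y in zip(instances, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}

    def classify(x):
        # assign the voxel to the region whose mean expression is closest
        return min(centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(x, centroids[y])))
    return classify

# training data: two regions with distinct two-gene expression profiles
voxels = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
regions = ["A", "A", "B", "B"]
classifier = train(voxels, regions)
print(classifier([0.95, 0.15]))  # a new voxel resembling region A
```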
1.31
1.32 Each gene expression level is called a __feature__, and the selection of which genes\footnote{Strictly speaking, the features are gene expression levels, but we'll call them genes.} to include is called __feature selection__. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
1.33
1.34 One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added and subtracted from the selected set depending on how much they raise the score. Such procedures are called "stepwise" or "greedy".
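The stepwise/greedy procedure just described can be sketched as follows; the set-scoring function here is a hypothetical stand-in (it rewards gene sets whose summed expression contrasts an "inside" region against the rest), not one of the scores the grant proposes.

```python
# Sketch of greedy forward feature selection as described above.
# The scoring function is a hypothetical stand-in for illustration.

def greedy_select(genes, score, max_genes):
    """Repeatedly add the gene that most improves score(selected)."""
    selected = []
    while len(selected) < max_genes:
        best_gene, best_score = None, score(selected)
        for g in genes:
            if g in selected:
                continue
            s = score(selected + [g])
            if s > best_score:
                best_gene, best_score = g, s
        if best_gene is None:      # no gene improves the score: stop
            break
        selected.append(best_gene)
    return selected

# toy data: expression of 3 genes in 4 voxels; voxels 0,1 are "inside"
expr = {"g1": [1, 1, 0, 0],   # marks the region alone
        "g2": [1, 0, 0, 0],   # marks half the region
        "g3": [0, 1, 0, 0]}   # marks the other half
inside = {0, 1}

def score(gene_set):
    # inside-minus-outside contrast of the summed expression
    total = [sum(expr[g][v] for g in gene_set) for v in range(4)]
    return (sum(total[v] for v in inside)
            - sum(total[v] for v in range(4) if v not in inside))

print(greedy_select(list(expr), score, max_genes=2))
```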
1.35
1.36 -Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score of calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
1.37 +Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or average). If only information from nearby voxels is used to calculate a voxel's sub-score, then we say it is a __local scoring method__. If only information from the voxel itself is used to calculate a voxel's sub-score, then we say it is a __pointwise scoring method__.
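As a minimal sketch of a pointwise scoring method (the particular sub-score is a hypothetical choice, not one from the grant): each voxel's sub-score depends on that voxel alone, and the gene's score is the aggregate, here a sum.

```python
# Minimal illustration of a pointwise scoring method: each voxel gets a
# sub-score computed from that voxel alone, and the gene's score is the
# aggregate (here, a sum). The sub-score below is a simple contrast:
# +expression if the voxel lies inside the target region, -expression
# otherwise. (Hypothetical sub-score, for illustration only.)

def pointwise_score(expression, in_region):
    def sub_score(voxel):
        e = expression[voxel]
        return e if in_region[voxel] else -e
    return sum(sub_score(v) for v in range(len(expression)))

expression = [0.9, 0.8, 0.1, 0.2]        # one gene, four voxels
in_region  = [True, True, False, False]  # the region to be marked
print(pointwise_score(expression, in_region))
```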
1.38 +
1.39 +=== Our strategy for Aim 1 ===
1.40
1.41 Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
1.42
1.43 @@ -69,9 +73,7 @@
1.44 \vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
1.45
1.46
1.47 -There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data.
1.48 -
1.49 -Therefore, when possible, the instances should represent pixels, not voxels.
1.50 +There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data. Therefore, when possible, the instances should represent pixels, not voxels.
1.51
1.52
1.53 === Related work ===
1.54 @@ -85,23 +87,15 @@
1.55
1.56 \cite{lee_high-resolution_2007} mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of interest, computing what proportion of this structure is covered by the gene's spatial region.
1.57
1.58 -GeneAtlas\cite{carson_digital_2005} and EMAGE \cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifing either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similiarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel\footnote{Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.} whose expression is within four discretization levels. EMAGE uses Jaccard similarity, which is equal to the number of true pixels in the intersection of the two images, divided by the number of pixels in their union. Neither GeneAtlas nor EMAGE allow one to search for combinations of genes that define a region in concert but not separately.
1.59 +GeneAtlas\cite{carson_digital_2005} and EMAGE \cite{venkataraman_emage_2008} allow the user to construct a search query by demarcating regions and then specifying either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel\footnote{Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.} whose expression is within four discretization levels. EMAGE uses Jaccard similarity\footnote{the number of true pixels in the intersection of the two images, divided by the number of pixels in their union.}. Neither GeneAtlas nor EMAGE allows one to search for combinations of genes that define a region in concert but not separately.
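The Jaccard similarity defined in the footnote is simple to compute on binary expression masks; the two toy masks below are illustrative.

```python
# Jaccard similarity between two binary images, as defined in the text:
# |intersection| / |union| of the "true" pixels.

def jaccard(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

# two flattened 2x3 binary expression masks (toy data)
query = [1, 1, 0, 1, 0, 0]
gene  = [1, 1, 1, 0, 0, 0]
print(jaccard(query, gene))  # 2 shared true pixels, 4 in the union
```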
1.60
1.61 \cite{ng_anatomic_2009} describes AGEA, "Anatomic Gene Expression
1.62 Atlas". AGEA has three
1.63 -components:
1.64 -
1.65 -\begin{itemize}
1.66 -\item Gene Finder: The user selects a seed voxel and the system (1) chooses a
1.67 +components. **Gene Finder**: The user selects a seed voxel and the system (1) chooses a
1.68 cluster which includes the seed voxel, (2) yields a list of genes
1.69 -which are overexpressed in that cluster. (note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures)
1.70 -
1.71 -\item Correlation: The user selects a seed voxel and the system
1.72 +which are overexpressed in that cluster. (note: the ABA website also contains pre-prepared lists of overexpressed genes for selected structures). **Correlation**: The user selects a seed voxel and the system
1.73 then shows the user how much correlation there is between the gene
1.74 -expression profile of the seed voxel and every other voxel.
1.75 -
1.76 -\item Clusters: will be described later
1.77 -\end{itemize}
1.78 +expression profile of the seed voxel and every other voxel. **Clusters**: will be described later.
1.79
1.80 Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also search for underexpression. Third, Gene Finder uses a simple pointwise score\footnote{"Expression energy ratio", which captures overexpression.}, whereas we will also use geometric scores such as gradient similarity (described in Preliminary Studies). Figures \ref{MOcombo}, \ref{hole}, and \ref{AUDgeometry} in the Preliminary Studies section contain evidence that each of our three choices is the right one.
1.81
1.82 @@ -122,35 +116,36 @@
1.83
1.84 The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same anatomical region have similar gene expression profiles, at least compared to the other regions. This means that clustering voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels with similar gene expression.
1.85
1.86 -It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchial tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchial clustering.
1.87 +%%It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests the outcome of clustering may be a hierarchial tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchial clustering.
1.88 +
1.89 +It is desirable to determine not just one set of regions, but also how these regions relate to each other. The outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
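A minimal sketch of hierarchical (agglomerative) clustering, under assumptions of our own choosing (single-linkage Euclidean distance, toy two-gene data): the two closest clusters are merged repeatedly, and the merge history is the hierarchical tree described above.

```python
# Sketch of agglomerative hierarchical clustering: repeatedly merge the
# two closest clusters, recording each merge; the merge history is the
# hierarchical tree. Single-linkage Euclidean distance between
# per-voxel expression vectors is an illustrative choice.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hierarchical(vectors):
    clusters = [[i] for i in range(len(vectors))]
    tree = []                              # list of merges, small to large
    while len(clusters) > 1:
        best = None                        # closest pair (single linkage)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(vectors[a], vectors[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = clusters[i] + clusters[j]
        tree.append(merged)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return tree

# four voxels, two genes: voxels 0,1 are similar; voxels 2,3 are similar
voxel_vectors = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
print(hierarchical(voxel_vectors))
```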
1.90
1.91
1.92 \vspace{0.3cm}**Similarity scores**
1.93 -
1.94 A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
1.95
1.96
1.97 \vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
1.98 -
1.99 -
1.100 We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters; voxels grouped together into a cluster must be spatially contiguous. In Preliminary Studies, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
1.101
1.102 -Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
1.103 +%%Perhaps the biggest source of continguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences; in our task, there are thousands of color channels (one for each gene), rather than just three. However, there are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
1.104 +
1.105 +Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences: in our task, there are thousands of color channels (one for each gene), rather than just three\footnote{There are imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery.}. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
1.106
1.107
1.108 \vspace{0.3cm}**Dimensionality reduction**
1.109 In this section, we discuss reducing the length of the per-pixel gene expression feature vector. By "dimension", we mean the dimension of this vector, not the spatial dimension of the underlying data.
1.110
1.111 -Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
1.112 -
1.113 -Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.
1.114 -
1.115 -Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
1.116 +%% After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels.
1.117 +
1.118 +Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features\footnote{First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data.}. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
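As a sketch of producing a reduced feature set (illustrative only; in practice a library PCA/SVD routine would be used, and the grant does not commit to PCA): each instance's long expression vector is projected onto its first principal component, found here by power iteration, yielding a one-feature reduced instance that is a function of all gene expression levels.

```python
# Dimensionality reduction sketch: project each instance's gene
# expression vector onto its first principal component, yielding a
# one-feature reduced instance. Power iteration on the covariance
# matrix is used here for self-containedness (illustrative only).

def first_component(data, iters=200):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # covariance matrix of the centered data
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):                 # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # reduced instances: one feature each (projection onto v)
    return [sum(r[j] * v[j] for j in range(d)) for r in centered]

# three genes, but genes 0 and 1 vary together; gene 2 is constant
data = [[1.0, 1.1, 0.5], [2.0, 2.1, 0.5], [3.0, 3.1, 0.5]]
reduced = first_component(data)
print(reduced)  # one number per instance, ordered along the main axis
```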
1.119 +
1.120 +%%Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the reduced dataset is less than in the original dataset, the running time of clustering algorithms may be much less. Second, it is thought that some clustering algorithms may give better results on reduced data. Another use for dimensionality reduction is to visualize the relationships between regions after clustering.
1.121 +
1.122 +%%Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example, one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plan will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
1.123
1.124
1.125 \vspace{0.3cm}**Clustering genes rather than voxels**
1.126 -
1.127 -
1.128 Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
1.129
1.130 Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.
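This use of gene clusters can be sketched directly (the cluster memberships and expression values below are hypothetical; how the clusters are obtained is a separate question): each reduced feature is the mean expression, within a voxel, of one cluster of genes.

```python
# Sketch of gene clusters as dimensionality reduction, as suggested in
# the text: one reduced feature per gene cluster, computed as the mean
# expression of that cluster's genes within the voxel. Cluster
# memberships are assumed given (hypothetical here).

def reduce_by_gene_clusters(voxel_expression, gene_clusters):
    """voxel_expression: {gene: level}; gene_clusters: list of gene lists.
    Returns one averaged feature per gene cluster."""
    return [sum(voxel_expression[g] for g in cluster) / len(cluster)
            for cluster in gene_clusters]

expression = {"g1": 0.8, "g2": 0.6, "g3": 0.1, "g4": 0.3}
clusters = [["g1", "g2"], ["g3", "g4"]]
print(reduce_by_gene_clusters(expression, clusters))  # two features
```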
1.131 @@ -160,7 +155,7 @@
1.132 The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering algorithms.
1.133
1.134 === Related work ===
1.135 -We are aware of five existing efforts to cluster spatial gene expression data.
1.136 +Some researchers have attempted to parcellate cortex on the basis of data other than gene expression. For example, \cite{schleicher_quantitative_2005}, \cite{annese_myelo-architectonic_2004}, \cite{schmitt_detection_2003}, and \cite{adamson_tracking_2005} associate spots on the cortex with the radial profile\footnote{A radial profile is a profile along a line perpendicular to the cortical surface.} of response to some stain (\cite{kruggel_analyzingneocortical_2003} uses MRI), extract features from this profile, and then use similarity between surface pixels to cluster. Features used include statistical moments, wavelets, and the excess mass functional. Some of these features are motivated by the presence of tangential lines of stain intensity which correspond to laminar structure. Some methods use standard clustering procedures, whereas others make use of the spatial nature of the data to look for sudden transitions, which are identified as areal borders.
1.137
1.138 \cite{thompson_genomic_2008} describes an analysis of the anatomy of
1.139 the hippocampus using the ABA dataset. In addition to manual analysis,
1.140 @@ -174,33 +169,35 @@
1.141
1.142 AGEA\cite{ng_anatomic_2009} includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation as the similarity metric. EMAGE\cite{venkataraman_emage_2008} allows the user to select a dataset from among a large number of alternatives, or by running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage clustering with un-centred correlation as the similarity score.
1.143
1.144 -\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels
1.145 -
1.146 -In an interesting twist, \cite{hemert_matching_2008} applies their technique for finding combinations of marker genes for the purpose of clustering genes around a "seed gene". The way they do this is by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Those other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if this voxel is expressed in by any gene, then that voxel is probably also expressed in by the same gene". This could be useful as part of a procedure for clustering voxels.
1.147 -
1.148 -In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
1.149 -
1.150 -
1.151 -
1.152 -=== Aim 3 ===
1.153 +\cite{chin_genome-scale_2007} clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and ordered the rows of this matrix as follows: "the first row of the matrix was chosen to show the strongest contrast between the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing similarity using a least squares metric". The resulting matrix showed four clusters. For each cluster, prototypical spatial expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without clustering voxels.
1.154 +
1.155 +\cite{hemert_matching_2008} applies their technique for finding combinations of marker genes for the purpose of clustering genes around a "seed gene". They do this by using the pattern of expression of the seed gene as the target image, and then searching for other genes which can be combined to reproduce this pattern. Other genes which are found are considered to be related to the seed. The same team also describes a method\cite{van_hemert_mining_2007} for finding "association rules" such as, "if a gene is expressed in this voxel, then the same gene is probably also expressed in that voxel". This could be useful as part of a procedure for clustering voxels.
1.156 +
1.157 +In summary, although these projects obtained clusterings, there has not been much comparison between different algorithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. The projects using gene expression on cortex did not attempt to make use of the radial profile of gene expression. Also, none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
1.158 +
1.159 +
1.160 +
1.161 +=== Aim 3: apply the methods developed to the cerebral cortex ===
1.162
1.163 \vspace{0.3cm}**Background**
1.164
1.165 The cortex is divided into areas and layers. Because of the cortical columnar organization, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a six-layered cake\footnote{Outside of isocortex, the number of layers varies.}.
1.166
1.167 -Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes, there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
1.168 -
1.169 -Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
1.170 +It is known that different cortical areas have distinct roles in both normal functioning and in disease processes, yet there are no known marker genes for most cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.
1.171 +
1.172 +Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson\cite{swanson_brain_2003} on the one hand, and Paxinos and Franklin\cite{paxinos_mouse_2001} on the other. While the maps are certainly very similar in their general arrangement, significant differences remain.
1.173
1.174 \vspace{0.3cm}**The Allen Mouse Brain Atlas dataset**
1.175
1.176 -The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of the processed slice, and these pictures were semi-automatically analyzed in order to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one single gene; many different mouse brains were needed in order to measure the expression of many genes.
1.177 -
1.178 -Next, an automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes with 200 microns on a side. There are 67x41x58 \= 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain\cite{ng_anatomic_2009}.
1.179 -
1.180 -Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA, because the sagittal data do not cover the entire cortex, and also has greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on, "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.
1.181 -
1.182 -The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/} GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
1.183 +The Allen Mouse Brain Atlas (ABA) data were produced by doing in-situ hybridization on slices of male, 56-day-old C57BL/6J mouse brains. Pictures were taken of each processed slice, and these pictures were semi-automatically analyzed to create a digital measurement of gene expression levels at each location in each slice. Within each slice, cellular spatial resolution is achieved. Using this method, a single physical slice can only be used to measure one gene; many different mouse brains were needed in order to measure the expression of many genes.
1.184 +
1.185 +An automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate system. In the final 3D coordinate system, voxels are cubes with 200 microns on a side. There are 67x41x58 \= 159,326 voxels in the 3D coordinate system, of which 51,533 are in the brain\cite{ng_anatomic_2009}.
1.186 +
1.187 +Mus musculus is thought to contain about 22,000 protein-coding genes\cite{waterston_initial_2002}. The ABA contains data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our dataset is derived from only the coronal subset of the ABA\footnote{The sagittal data do not cover the entire cortex, and also have greater registration error\cite{ng_anatomic_2009}. Genes were selected by the Allen Institute for coronal sectioning based on "classes of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern"\cite{ng_anatomic_2009}.}.
1.188 +
1.189 +%%The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress\footnote{http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE}, EADHB\footnote{http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html}, MAMEP\footnote{http://mamep.molgen.mpg.de/index.php}, Xenbase\footnote{http://xenbase.org/}, ZFIN\cite{sprague_zebrafish_2006}, Aniseed\footnote{http://aniseed-ibdm.univ-mrs.fr/}, VisiGene\footnote{http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some the other listed data sources}, GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE\footnote{http://compare.ibdml.univ-mrs.fr/} GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007}\footnote{GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.}. With the exception of the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
1.190 +
1.191 +The ABA is not the only large public spatial gene expression dataset\footnote{Other such resources include GENSAT\cite{gong_gene_2003}, GenePaint\cite{visel_genepaint.org:atlas_2004}, its sister project GeneAtlas\cite{carson_digital_2005}, BGEM\cite{magdaleno_bgem:in_2006}, EMAGE\cite{venkataraman_emage_2008}, EurExpress (http://www.eurexpress.org/ee/; EurExpress data are also entered into EMAGE), EADHB (http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html), MAMEP (http://mamep.molgen.mpg.de/index.php), Xenbase (http://xenbase.org/), ZFIN\cite{sprague_zebrafish_2006}, Aniseed (http://aniseed-ibdm.univ-mrs.fr/), VisiGene (http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources), GEISHA\cite{bell_geishawhole-mount_2004}, Fruitfly.org\cite{tomancak_systematic_2002}, COMPARE (http://compare.ibdml.univ-mrs.fr/), GXD\cite{smith_mouse_2007}, GEO\cite{barrett_ncbi_2007} (GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.)}. With the exception of the ABA, GenePaint, and EMAGE, most of the other resources have not (yet) extracted the expression intensity from the ISH images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of data available for public download from the website\footnote{without prior offline registration}. Many of these resources focus on developmental gene expression.
1.192
1.193
1.194
1.195 @@ -210,7 +207,9 @@
1.196
1.197 The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will support an ISH protocol that allows experimenters to more easily identify which anatomical areas are present in small samples of cortex.
1.198
1.199 -The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then the today's cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.
1.200 +
1.201 +%% Since the number of classes of stains is small compared to the number of genes,
1.202 +The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. If a different set of stains had been available which identified a different set of features, then today's cortical maps may have come out differently. It is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, cortical anatomy needs to incorporate what we can learn from looking at the patterns of gene expression.
1.203
1.204
1.205 While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.
1.206 @@ -218,7 +217,7 @@
1.207
1.208 === Related work ===
1.209
1.210 -\cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchial clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the root cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas. This is why the hierarchial clustering does not find cortical areas (there are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot the find marker genes for cortical areas is that in Gene Finder, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.}.
1.211 +\cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed.}.
1.212
1.213
1.214 %% Most of the projects which have been discussed have been done by the same groups that develop the public datasets. Although these projects make their algorithms available for use on their own website, none of them have released an open-source software toolkit; instead, users are restricted to using the provided algorithms only on their own dataset.
1.215 @@ -232,15 +231,15 @@
1.216 \newpage
1.217
1.218 == Preliminary Studies ==
1.219 -\begin{wrapfigure}{L}{0.4\textwidth}\centering
1.220 -%%\includegraphics[scale=.31]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.31]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.31]{singlegene_SS_corr_top_3_654_jet.eps}
1.221 +\begin{wrapfigure}{L}{0.35\textwidth}\centering
1.222 +%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps}
1.223 %%\\
1.224 -%%\includegraphics[scale=.31]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.31]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.31]{singlegene_SS_lr_top_3_724_jet.eps}
1.225 +%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps}
1.226 %%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
1.227
1.228 -\includegraphics[scale=.31]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.31]{singlegene_SS_corr_top_2_242_jet.eps}
1.229 +\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}
1.230 \\
1.231 -\includegraphics[scale=.31]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.31]{singlegene_SS_lr_top_2_685_jet.eps}
1.232 +\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}
1.233
1.234 \caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.}
1.235 \label{SScorrLr}\end{wrapfigure}
1.236 @@ -265,13 +264,15 @@
1.237
1.238 At this point, the data are in the form of a number of 2-D matrices, all in registration, with the matrix entries representing a grid of points (pixels) over the cortical surface:
1.239
1.240 +\begin{wrapfigure}{L}{0.2\textwidth}\centering
1.241 +\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps}
1.242 +\caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
1.243 +\label{hole}\end{wrapfigure}
1.244 +
1.245 +
1.246 * A 2-D matrix whose entries represent the regional label associated with each surface pixel
1.247 * For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
1.248
1.249 -\begin{wrapfigure}{L}{0.2\textwidth}\centering
1.250 -\includegraphics[scale=.31]{holeExample_2682_SS_jet.eps}
1.251 -\caption{Gene $Pitx2$ is selectively underexpressed in area SS.}
1.252 -\label{hole}\end{wrapfigure}
1.253
1.254
1.255
1.256 @@ -306,14 +307,14 @@
1.257 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the two genes most correlated with area SS.
1.258
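As a concrete illustration of this pointwise score, the following sketch ranks genes by the Pearson correlation between each gene's surface expression map and a binary mask of the target area. This is an illustrative reimplementation on synthetic NumPy arrays, not the code used to produce the figures:

```python
import numpy as np

def area_correlations(expr, mask):
    """Rank genes by Pearson correlation with a binary area mask.

    expr: (n_genes, H, W) array of per-gene surface expression maps.
    mask: (H, W) binary array; 1 inside the target area.
    Returns (ranking, r): gene indices sorted best-first, and each
    gene's correlation with the mask.
    """
    flat = expr.reshape(expr.shape[0], -1).astype(float)
    m = mask.ravel().astype(float)
    fc = flat - flat.mean(axis=1, keepdims=True)   # center each gene map
    mc = m - m.mean()                              # center the mask
    r = (fc @ mc) / (np.linalg.norm(fc, axis=1) * np.linalg.norm(mc))
    return np.argsort(-r), r
```

A gene whose expression map coincides with the area scores r = 1; a gene expressed everywhere except the area scores r = -1 (a "hole", as in Figure \ref{hole}).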
1.259
1.260 -\begin{wrapfigure}{L}{0.4\textwidth}\centering
1.261 -%%\includegraphics[scale=.31]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_lr_top_3_420_jet.eps}
1.262 +\begin{wrapfigure}{L}{0.35\textwidth}\centering
1.263 +%%\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_3_420_jet.eps}
1.264 %%
1.265 -%%\includegraphics[scale=.31]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_gr_top_2_420_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_gr_top_3_2072_jet.eps}
1.266 +%%\includegraphics[scale=.27]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_2_420_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_3_2072_jet.eps}
1.267 %%\caption{The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Aph1a$, $Ptk7$, $Aph1a$ again, and $Lepr$}
1.268 -\includegraphics[scale=.31]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_lr_top_2_1258_jet.eps}
1.269 +\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}
1.270 \\
1.271 -\includegraphics[scale=.31]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.31]{singlegene_AUD_gr_top_2_420_jet.eps}
1.272 +\includegraphics[scale=.27]{singlegene_AUD_gr_top_1_2856_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_gr_top_2_420_jet.eps}
1.273 \caption{The top row shows the two genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the two genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are $Ssr1$, $Efcbp1$, $Ptk7$, and $Aph1a$.}
1.274 \label{AUDgeometry}\end{wrapfigure}
1.275
1.276 @@ -327,10 +328,10 @@
1.277 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not.
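A minimal sketch of this score, assuming binary discretization: compute the conditional entropy H(area | g1, g2) of area membership given a pair of discretized gene expression values (lower is more informative). Unlike a linear score, it drops to zero even for an XOR relationship:

```python
import math
from collections import Counter

def conditional_entropy(target, g1, g2):
    """H(target | g1, g2) in bits, over discretized sequences.

    Lower is better: less remaining uncertainty about whether each
    pixel is a member of the target area, given the two genes."""
    n = len(target)
    joint = Counter(zip(g1, g2, target))   # counts of (g1, g2, target)
    pair = Counter(zip(g1, g2))            # counts of (g1, g2)
    h = 0.0
    for (a, b, t), c in joint.items():
        h -= (c / n) * math.log2(c / pair[(a, b)])
    return h
```

If the target is the XOR of the two genes, the pair score is 0 bits while either gene alone leaves a full bit of uncertainty.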
1.278
1.279
1.280 -\begin{wrapfigure}{L}{0.4\textwidth}\centering
1.281 -\includegraphics[scale=.31]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.31]{MO_vs_Mtif2_jet.eps}
1.282 -
1.283 -\includegraphics[scale=.31]{MO_vs_Wwc1_plus_Mtif2_jet.eps}
1.284 +\begin{wrapfigure}{L}{0.35\textwidth}\centering
1.285 +\includegraphics[scale=.27]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.27]{MO_vs_Mtif2_jet.eps}
1.286 +
1.287 +\includegraphics[scale=.27]{MO_vs_Wwc1_plus_Mtif2_jet.eps}
1.288 \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: wwc1 + mtif2 (each pixel's value on the lower left is the sum of the corresponding pixels in the upper row).}
1.289 \label{MOcombo}\end{wrapfigure}
1.290
1.291 @@ -352,19 +353,20 @@
1.292
1.293 \vspace{0.3cm}**Gradient similarity provides information complementary to correlation**
1.294
1.295 -To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.} The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
1.296 -
1.297 -
1.298 -
1.299 -\begin{wrapfigure}{L}{0.4\textwidth}\centering
1.300 -\includegraphics[scale=.31]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.31]{singlegene_example_371_Aldh1a2_SSs_jet.eps}
1.301 -\includegraphics[scale=.31]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.31]{singlegene_example_3310_Slco1a5_FRP_jet.eps}
1.302 -\includegraphics[scale=.31]{singlegene_example_3709_Tshz2_RSP_jet.eps}\includegraphics[scale=.31]{singlegene_example_3674_Trhr_COApm_jet.eps}
1.303 -\includegraphics[scale=.31]{singlegene_example_925_Col12a1_ACA+PL+ILA+DP+ORB+MO_jet.eps}\includegraphics[scale=.31]{singlegene_example_1334_Ets1_post_lat_vis_jet.eps}
1.304 +
1.305 +\begin{wrapfigure}{L}{0.35\textwidth}\centering
1.306 +\includegraphics[scale=.27]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.27]{singlegene_example_371_Aldh1a2_SSs_jet.eps}
1.307 +\includegraphics[scale=.27]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.27]{singlegene_example_3310_Slco1a5_FRP_jet.eps}
1.308 +\includegraphics[scale=.27]{singlegene_example_3709_Tshz2_RSP_jet.eps}\includegraphics[scale=.27]{singlegene_example_3674_Trhr_COApm_jet.eps}
1.309 +\includegraphics[scale=.27]{singlegene_example_925_Col12a1_ACA+PL+ILA+DP+ORB+MO_jet.eps}\includegraphics[scale=.27]{singlegene_example_1334_Ets1_post_lat_vis_jet.eps}
1.310
1.311 \caption{From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary \begin{latex}+\end{latex} supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), COApm (Cortical amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor), posterior and lateral visual (VISpm, VISpl, VISI, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are $Pitx2$, $Aldh1a2$, $Ppfibp1$, $Slco1a5$, $Tshz2$, $Trhr$, $Col12a1$, $Ets1$.}
1.312 \label{singleSoFar}\end{wrapfigure}
1.313
1.314 +To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The top row of Fig. \ref{AUDgeometry} displays the two genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the two genes which most match AUD according to a method which considers local geometry\footnote{For each gene, the gradient similarity between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.} The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many genes whose expression doesn't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods.
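Purely for illustration, a score in this spirit can be sketched as follows: at each pixel, compare the orientations of the two maps' gradients, rewarding pixels where both maps have strong, similarly oriented gradients. This is a hypothetical form of such a score, not necessarily the one we use:

```python
import numpy as np

def gradient_similarity(a, b, eps=1e-12):
    """Illustrative gradient-similarity between two 2-D maps.

    Cosine of the angle between the local gradients, weighted by the
    weaker of the two gradient magnitudes, averaged over the image."""
    ay, ax = np.gradient(a.astype(float))   # axis 0, then axis 1
    by, bx = np.gradient(b.astype(float))
    dot = ax * bx + ay * by
    na = np.hypot(ax, ay)
    nb = np.hypot(bx, by)
    cos = dot / (na * nb + eps)             # alignment of gradient directions
    weight = np.minimum(na, nb)             # both borders must be salient
    return float((cos * weight).sum() / (weight.sum() + eps))
```

Two identical ramps score +1; a ramp against its negation scores -1, since every local border is reversed.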
1.315 +
1.316 +
1.317 +
1.318 \vspace{0.3cm}**Areas which can be identified by single genes**
1.319 Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases.
1.320
1.321 @@ -396,10 +398,6 @@
1.322
1.323 === Multivariate Predictive methods ===
1.324
1.325 -\vspace{0.3cm}**Forward stepwise logistic regression**
1.326 -Logistic regression is a popular method for predictive modeling of categorial data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identify. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
1.327 -
1.328 -We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
1.329
1.330 \begin{wrapfigure}{L}{0.6\textwidth}\centering
1.331 \includegraphics[scale=1]{merge3_norm_hv_PCA_ndims50_prototypes_collage_sm_border.eps}
1.332 @@ -411,6 +409,12 @@
1.333 \label{dimReduc}\end{wrapfigure}
1.334
1.335
1.336 +\vspace{0.3cm}**Forward stepwise logistic regression**
1.337 +Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity. This is an example of feature selection integrated with prediction, using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found.
1.338 +
1.339 +We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective impression of a "good gene".
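The stepwise wrapper can be sketched as follows, with a tiny NumPy logistic regression (gradient descent) as the inner model; the pilot run used standard logistic regression software, so this is only an illustrative reimplementation:

```python
import numpy as np

def fit_logreg(X, y, lr=0.3, steps=200):
    """Tiny logistic regression by gradient descent (bias included).
    Returns the weights and the mean log-likelihood of the fit."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return w, np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def forward_stepwise(X, y, k):
    """Greedily add, k times, the gene whose inclusion most improves
    the fit of a logistic regression on the already-chosen genes."""
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        scores = [fit_logreg(X[:, chosen + [j]], y)[1] for j in remaining]
        chosen.append(remaining[int(np.argmax(scores))])
    return chosen
```

Here X would be the pixels-by-genes expression matrix and y the binary vector of areal membership; chosen singletons, pairs, and triplets correspond to k = 1, 2, 3.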
1.340 +
1.341 +
1.342 \vspace{0.3cm}**SVM on all genes at once**
1.343
1.344 In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81\%\footnote{5-fold cross-validation.}. This shows that the genes included in the ABA dataset are sufficient to define much of cortical anatomy. However, as noted above, a classifier that looks at all the genes at once isn't as practically useful as a classifier that uses only a few genes.
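The evaluation protocol can be sketched with scikit-learn's SVC and 5-fold cross-validation; the data below are a synthetic stand-in for the real pixels-by-genes matrix, not the ABA pipeline itself:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy stand-in: 300 "pixels", 20 "genes", 3 "areas" whose expression
# profiles differ by a class-specific offset.
y = rng.integers(0, 3, size=300)
X = rng.normal(size=(300, 20)) + y[:, None] * 0.8

# Mean held-out accuracy over 5 stratified folds
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
accuracy = scores.mean()
```

Cross-validated accuracy, rather than training accuracy, is what justifies the claim that the genes jointly carry enough information to define much of the anatomy.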
1.345 @@ -421,13 +425,15 @@
1.346
1.347 === Data-driven redrawing of the cortical map ===
1.348
1.349 -We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each voxel: Principal Components Analysis (PCA), Simple PCA (SPCA), Multi-Dimensional Scaling (MDS), Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment (LTSA), Hessian locally linear embedding, Diffusion maps, Stochastic Neighbor Embedding (SNE), Stochastic Proximity Embedding (SPE), Fast Maximum Variance Unfolding (FastMVU), Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}.
1.350 -
1.351 -\begin{wrapfigure}{L}{0.6\textwidth}\centering
1.352 +\begin{wrapfigure}{L}{0.5\textwidth}\centering
1.353 \includegraphics[scale=.2]{cosine_similarity1_rearrange_colorize.eps}
1.354 \caption{Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.}
1.355 \label{geneClusters}\end{wrapfigure}
1.356
1.357 +
1.358 +
1.359 +We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each voxel: Principal Components Analysis (PCA), Simple PCA (SPCA), Multi-Dimensional Scaling (MDS), Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment (LTSA), Hessian locally linear embedding, Diffusion maps, Stochastic Neighbor Embedding (SNE), Stochastic Proximity Embedding (SPE), Fast Maximum Variance Unfolding (FastMVU), Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}.
1.360 +
1.361 After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. To compare, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy.
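The reduce-then-cluster pipeline can be sketched as below. This is a minimal illustration on synthetic data, assuming scikit-learn; it shows one of the combinations mentioned above (PCA followed by k-means), not the actual voxel data.

```python
# Minimal sketch of the reduce-then-cluster pipeline: PCA on the per-voxel
# expression profiles, then k-means on the reduced coordinates. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_voxels, n_genes = 500, 100
X = rng.normal(size=(n_voxels, n_genes))
X[:250, :10] += 3.0     # first half of the voxels share an expression signature

Z = PCA(n_components=10).fit_transform(X)     # 100 genes -> 10 components
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
# voxels sharing the signature should fall into one cluster
```

Swapping `PCA` for another reduction (Isomap, NMF, ...) or `KMeans` for spectral clustering changes only the two fitted objects, which is what makes a systematic comparison across combinations tractable.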
1.362
1.363
1.364 @@ -444,19 +450,19 @@
1.365
1.366 \vspace{0.3cm}**Further work on flatmapping**
1.367
1.368 -
1.369 -In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
1.370 -
1.371 -In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps).
1.372 -
1.373 -Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional. If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
1.374 -
1.375 -
1.376 -todo amongst other things:
1.377 -
1.378 -
1.379 -layerfinding
1.380 -
1.381 +%%In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
1.382 +
1.383 +%%In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps).
1.384 +
1.385 +
1.386 +Often the surface of a structure serves as a natural 2-D basis for anatomical organization. Even when the shape of the surface is known, there are multiple ways to map it into a plane. We will compare mappings which attempt to preserve size (such as the one used by Caret\cite{van_essen_integrated_2001}) with mappings which preserve angle (conformal maps). Although there is much 2-D organization in anatomy, there are also structures whose anatomy is fundamentally 3-dimensional. We plan to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.
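One possible form for the proposed 2-D warning test is sketched below: a hypothetical local-PCA check, written here in plain NumPy on synthetic point sets. The `out_of_plane_fraction` helper and its threshold are illustrative assumptions, not a committed design.

```python
# Hypothetical check on the 2-D assumption: fit a plane to each point's
# neighborhood and measure the fraction of local variance that escapes the
# plane. A genuinely sheet-like structure leaves little out-of-plane residual.
import numpy as np

def out_of_plane_fraction(points, k=20):
    """Mean fraction of local variance not captured by the best-fit 2-D plane."""
    fracs = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[1:k + 1]]      # k nearest neighbors, excluding p
        nbrs = nbrs - nbrs.mean(axis=0)
        s = np.linalg.svd(nbrs, compute_uv=False) ** 2
        fracs.append(s[2:].sum() / s.sum())
    return float(np.mean(fracs))

rng = np.random.default_rng(3)
sheet = rng.uniform(size=(300, 2)) @ rng.normal(size=(2, 3))   # plane embedded in 3-D
blob = rng.normal(size=(300, 3))                               # fully 3-D structure
print(out_of_plane_fraction(sheet), out_of_plane_fraction(blob))
```

For the sheet the residual is essentially zero; for the 3-D blob a substantial fraction of local variance lies off any plane, which is the signal a warning would key on.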
1.387 +
1.388 +\vspace{0.3cm}**Automatic segmentation of cortical layers**
1.389 +
1.390 +
1.391 +
1.392 +\vspace{0.3cm}**Extension to probabilistic maps**
1.393 +Presently, we do not have a probabilistic atlas which is registered to the ABA space. However, in anticipation of the availability of such maps, we would like to explore extensions to our Aim 1 techniques which can handle probabilistic maps.
1.394
1.395
1.396
1.397 @@ -473,7 +479,7 @@
1.398 \vspace{0.3cm}**Decision trees**
1.399 todo
1.400
1.401 -For each cortical area, we used the C4.5 algorithm to find a pruned decision tree and ruleset for that area. We achieved estimated classification accuracy of more than 99.6% on each cortical area (as evaluated on the __training data__ without cross-validation; so actual accuracy is expected to be lower). However, the resulting decision trees each made use of many genes.
1.402 +\footnote{Already, for each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}
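The planned pruning step can be sketched as below. This is an illustrative stand-in: scikit-learn's cost-complexity pruning is used in place of C4.5's own pruning (C4.5 itself is not available in scikit-learn), and the data are synthetic.

```python
# Sketch of tree pruning to reduce the number of genes a decision tree consults.
# Cost-complexity pruning (ccp_alpha) stands in for C4.5's pruning procedure;
# a larger alpha trades training accuracy for a smaller tree. Synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 40))                            # 400 voxels x 40 genes
y = (X[:, 5] + 0.5 * rng.normal(size=400) > 0).astype(int)  # noisy area label

full = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)

def genes_used(tree):
    """Count distinct genes (features) appearing in the tree's split nodes."""
    feats = tree.tree_.feature
    return int(np.unique(feats[feats >= 0]).size)

print(genes_used(full), genes_used(pruned))   # pruning should use fewer genes
```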
1.403
1.404
1.405 \vspace{0.3cm}**Apply these algorithms to the cortex**
1.406 @@ -502,7 +508,8 @@
1.407
1.408 # self-organizing map
1.409
1.410 -# confirm with EMAGE, GeneAtlas, GENSAT, etc, to fight overfitting
1.411 +# confirm with EMAGE, GeneAtlas, GENSAT, etc., to fight overfitting, two hemis
1.412 +
1.413
1.414 # compare using clustering scores
1.415
1.416 @@ -517,27 +524,11 @@
1.417 \bibliographystyle{plain}
1.418 \bibliography{grant}
1.419
1.420 -\newpage
1.421 -
1.422 -----
1.423 -
1.424 -stuff i dunno where to put yet (there is more scattered through grant-oldtext):
1.425 -
1.426 -
1.427 -\vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
1.428 -
1.429 -
1.430 -
1.431 +%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1.432
1.433 %%if we need citations for aim 3 significance, http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WSS-4V70FHY-9&_user=4429&_coverDate=12%2F26%2F2008&_rdoc=1&_fmt=full&_orig=na&_cdi=7054&_docanchor=&_acct=C000059602&_version=1&_urlVersion=0&_userid=4429&md5=551eccc743a2bfe6e992eee0c3194203#app2 has examples of genetic targeting to specific anatomical regions
1.434
1.435 ----
1.436 -
1.437 -note:
1.438 -
1.439 -
1.440 -
1.441 -
1.442 -two hemis
1.443 -
1.444 -
1.445 +
1.446 +
1.447 +
1.448 +