
diff grant.txt @ 104:d6ecbc494f0b

author bshanks@bshanks.dyndns.org
date Wed Apr 22 07:35:14 2009 -0700
parents 6ea7e2e5e6c3
children 6c48f37d0f0c
line diff
1.1 --- a/grant.txt Wed Apr 22 07:26:09 2009 -0700 1.2 +++ b/grant.txt Wed Apr 22 07:35:14 2009 -0700 1.3 @@ -226,6 +226,18 @@ 1.4 1.5 1.6 === Aim 3: apply the methods developed to the cerebral cortex === 1.7 +\begin{wrapfigure}{L}{0.35\textwidth}\centering 1.8 +%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps} 1.9 +%%\\ 1.10 +%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps} 1.11 +%%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.} 1.12 + 1.13 +\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps} 1.14 +\\ 1.15 +\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps} 1.16 + 1.17 +\caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.} 1.18 +\label{SScorrLr}\end{wrapfigure} 1.19 1.20 1.21 \vspace{0.3cm}**Background** 1.22 @@ -256,6 +268,8 @@ 1.23 1.24 === Related work === 1.25 1.26 + 1.27 + 1.28 \cite{ng_anatomic_2009} describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of the other components of AGEA can be applied to cortical areas; AGEA's Gene Finder cannot be used to find marker genes for the cortical areas; and AGEA's hierarchical clustering does not produce clusters corresponding to the cortical areas\footnote{In both cases, the cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel correlation clustering algorithm will tend to create clusters representing cortical layers, not areas.}. 1.29 1.30 %% (there may be clusters which presumably correspond to the intersection of a layer and an area, but since one area will have many layer-area intersection clusters, further work is needed to make sense of these). 
The reason that Gene Finder cannot the find marker genes for cortical areas is that, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found, and it creates that ROI by (pairwise voxel correlation) clustering around the seed. 1.31 @@ -268,6 +282,10 @@ 1.32 1.33 1.34 == Significance == 1.35 +\begin{wrapfigure}{L}{0.2\textwidth}\centering 1.36 +\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps} 1.37 +\caption{Gene $Pitx2$ is selectively underexpressed in area SS.} 1.38 +\label{hole}\end{wrapfigure} 1.39 1.40 1.41 1.42 @@ -293,20 +311,6 @@ 1.43 \vspace{0.3cm}\hrule 1.44 1.45 == The approach: Preliminary Studies == 1.46 -\begin{wrapfigure}{L}{0.35\textwidth}\centering 1.47 -%%\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_3_654_jet.eps} 1.48 -%%\\ 1.49 -%%\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_3_724_jet.eps} 1.50 -%%\caption{Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.} 1.51 - 1.52 -\includegraphics[scale=.27]{singlegene_SS_corr_top_1_2365_jet.eps}\includegraphics[scale=.27]{singlegene_SS_corr_top_2_242_jet.eps} 1.53 -\\ 1.54 -\includegraphics[scale=.27]{singlegene_SS_lr_top_1_654_jet.eps}\includegraphics[scale=.27]{singlegene_SS_lr_top_2_685_jet.eps} 1.55 - 1.56 -\caption{Top row: Genes $Nfic$ and $A930001M12Rik$ are the most correlated with area SS (somatosensory cortex). Bottom row: Genes $C130038G02Rik$ and $Cacna1i$ are those with the best fit using logistic regression. Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region SS. Pixels are colored according to correlation, with red meaning high correlation and blue meaning low.} 1.57 -\label{SScorrLr}\end{wrapfigure} 1.58 - 1.59 - 1.60 1.61 === Format conversion between SEV, MATLAB, NIFTI === 1.62 We have created software to (politely) download all of the SEV files\footnote{SEV is a sparse format for spatial data. It is the format in which the ABA data is made available.} from the Allen Institute website. We have also created software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret's file formats. 1.63 @@ -328,61 +332,6 @@ 1.64 1.65 1.66 === Feature selection and scoring methods === 1.67 - 1.68 -\begin{wrapfigure}{L}{0.2\textwidth}\centering 1.69 -\includegraphics[scale=.27]{holeExample_2682_SS_jet.eps} 1.70 -\caption{Gene $Pitx2$ is selectively underexpressed in area SS.} 1.71 -\label{hole}\end{wrapfigure} 1.72 - 1.73 - 1.74 - 1.75 -\vspace{0.3cm}**Underexpression of a gene can serve as a marker** 1.76 -Underexpression of a gene can sometimes serve as a marker. 
See, for example, Figure \ref{hole}. 1.77 - 1.78 - 1.79 - 1.80 - 1.81 -\vspace{0.3cm}**Correlation** 1.82 -Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels. 1.83 - 1.84 -We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 1.85 - 1.86 -%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features. 1.87 - 1.88 -%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 1.89 - 1.90 - 1.91 - 1.92 -\vspace{0.3cm}**Conditional entropy** 1.93 -%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels. 1.94 - 1.95 -%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations. 1.96 - 1.97 -%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 1.98 - 1.99 -For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 1.100 - 1.101 -This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not. 1.102 - 1.103 - 1.104 - 1.105 - 1.106 -\vspace{0.3cm}**Gradient similarity** 1.107 -We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". 
The formula is: 1.108 - 1.109 -%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction. 1.110 - 1.111 - 1.112 - 1.113 -\begin{align*} 1.114 -\sum_{pixel \in pixels} cos(abs(\angle \nabla_1 - \angle \nabla_2)) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2} 1.115 -\end{align*} 1.116 - 1.117 -where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$. 1.118 - 1.119 -The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar). 1.120 - 1.121 -\vspace{0.3cm}**Gradient similarity provides information complementary to correlation** 1.122 \begin{wrapfigure}{L}{0.35\textwidth}\centering 1.123 %%\includegraphics[scale=.27]{singlegene_AUD_lr_top_1_3386_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_2_1258_jet.eps}\includegraphics[scale=.27]{singlegene_AUD_lr_top_3_420_jet.eps} 1.124 %% 1.125 @@ -396,44 +345,54 @@ 1.126 1.127 1.128 1.129 -To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. 1.130 - 1.131 -%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods. 1.132 - 1.133 -%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. 
The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.} 1.134 - 1.135 -%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. 1.136 - 1.137 -\vspace{0.3cm}**Areas which can be identified by single genes** 1.138 -Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases. 1.139 - 1.140 -In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory). 1.141 - 1.142 -These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity. 1.143 - 1.144 - 1.145 - 1.146 -\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas** 1.147 - 1.148 -In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene. 1.149 - 1.150 -This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary. 1.151 - 1.152 -%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} 1.153 -%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} 1.154 - 1.155 -%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface. 
1.156 - 1.157 -%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in figure the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene. 1.158 - 1.159 - 1.160 - 1.161 - 1.162 -%%\vspace{0.3cm}**Feature selection integrated with prediction** 1.163 -%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning". 1.164 - 1.165 - 1.166 -=== Multivariate supervised learning === 1.167 + 1.168 +\vspace{0.3cm}**Underexpression of a gene can serve as a marker** 1.169 +Underexpression of a gene can sometimes serve as a marker. See, for example, Figure \ref{hole}. 1.170 + 1.171 + 1.172 + 1.173 + 1.174 +\vspace{0.3cm}**Correlation** 1.175 +Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the surface pixels. 1.176 + 1.177 +We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 1.178 + 1.179 +%%One class of feature selection scoring methods contains methods which calculate some sort of "match" between each gene image and the target image. Those genes which match the best are good candidates for features. 1.180 + 1.181 +%%One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between each gene and each cortical area. The top row of Figure \ref{SScorrLr} shows the three genes most correlated with area SS. 1.182 + 1.183 + 1.184 + 1.185 +\vspace{0.3cm}**Conditional entropy** 1.186 +%%An information-theoretic scoring method is to find features such that, if the features (gene expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty, so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution to which we are referring is the probability distribution over the population of surface pixels. 1.187 + 1.188 +%%The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating, for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two standard deviations, the mean plus one standard deviation, the mean plus two standard deviations. 1.189 + 1.190 +%%Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 
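To make the correlation score and the discretization-based conditional entropy score concrete, here is a minimal NumPy sketch (an illustrative reimplementation, not our pipeline code); the function names are ours, and it assumes the thresholded boolean gene-expression masks described above have already been computed.

```python
import numpy as np

def correlation_score(gene_map, area_mask):
    # Pointwise score: Pearson correlation between a gene's expression over
    # the surface pixels and the boolean mask of the target area.
    return np.corrcoef(gene_map.ravel(), area_mask.ravel().astype(float))[0, 1]

def conditional_entropy(target, feature_masks):
    # H(target | features), estimated over the population of surface pixels.
    # target: boolean array (is the pixel in the area?); feature_masks: list
    # of thresholded boolean gene-expression masks of the same shape.
    codes = np.zeros(target.size, dtype=int)
    for mask in feature_masks:
        codes = codes * 2 + mask.ravel().astype(int)
    t = target.ravel()
    h = 0.0
    for c in np.unique(codes):
        sel = codes == c
        p_c = sel.mean()                 # P(feature combination = c)
        p1 = t[sel].mean()               # P(target = 1 | combination c)
        for p in (p1, 1.0 - p1):
            if p > 0:
                h -= p_c * p * np.log2(p)
    return h

def best_pair(target, gene_masks):
    # One greedy forward step: pick the best single mask, then the partner
    # that minimizes the conditional entropy of the target given the pair.
    first = min(gene_masks, key=lambda m: conditional_entropy(target, [m]))
    second = min((m for m in gene_masks if m is not first),
                 key=lambda m: conditional_entropy(target, [first, m]))
    return first, second
```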
1.191 + 1.192 +For each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression boolean masks such that the conditional entropy of the target area's boolean mask, conditioned upon the pair of gene expression boolean masks, is minimized. 1.193 + 1.194 +This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question, "Is this surface pixel a member of the target area?". Its advantage over linear methods such as logistic regression is that it takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional entropy would notice, whereas linear methods would not. 1.195 + 1.196 + 1.197 + 1.198 + 1.199 +\vspace{0.3cm}**Gradient similarity** 1.200 +We noticed that the previous two scoring methods, which are pointwise, often found genes whose pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar to the shape of the target region. We call this scoring method "gradient similarity". The formula is: 1.201 + 1.202 +%%One might say that gradient similarity attempts to measure how much the border of the area of gene expression and the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its maximum value to zero, the spatial pattern of a gene's expression often does not have a discrete border. Therefore, instead of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images (i.e. two scalar fields). It is is high to the extent that matching pixels which have large values and large gradients also have gradients which are oriented in a similar direction. 1.203 + 1.204 + 1.205 + 1.206 +\begin{align*} 1.207 +\sum_{pixel \in pixels} cos(abs(\angle \nabla_1 - \angle \nabla_2)) \cdot \frac{\vert \nabla_1 \vert + \vert \nabla_2 \vert}{2} \cdot \frac{pixel\_value_1 + pixel\_value_2}{2} 1.208 +\end{align*} 1.209 + 1.210 +where $\nabla_1$ and $\nabla_2$ are the gradient vectors of the two images at the current pixel; $\angle \nabla_i$ is the angle of the gradient of image $i$ at the current pixel; $\vert \nabla_i \vert$ is the magnitude of the gradient of image $i$ at the current pixel; and $pixel\_value_i$ is the value of the current pixel in image $i$. 1.211 + 1.212 +The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar, then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a similar direction (because the borders are similar). 1.213 + 1.214 +\vspace{0.3cm}**Gradient similarity provides information complementary to correlation** 1.215 \begin{wrapfigure}{L}{0.35\textwidth}\centering 1.216 \includegraphics[scale=.27]{MO_vs_Wwc1_jet.eps}\includegraphics[scale=.27]{MO_vs_Mtif2_jet.eps} 1.217 1.218 @@ -443,6 +402,47 @@ 1.219 1.220 1.221 1.222 + 1.223 +To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider Fig. \ref{AUDgeometry}. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many areas which don't have a salient border matching the areal border. 
The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don't express over the entire area. 1.224 + 1.225 +%%None of these genes are, individually, a perfect marker for AUD; we deliberately chose a "difficult" area in order to better contrast pointwise with geometric methods. 1.226 + 1.227 +%% The top row of Fig. \ref{AUDgeometry} displays the 3 genes which most match area AUD, according to a pointwise method\footnote{For each gene, a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.}. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry\footnote{For each gene the gradient similarity between (a) a map of the expression of each gene on the cortical surface and (b) the shape of area AUD, was calculated, and this was used to rank the genes.} 1.228 + 1.229 +%% Genes which have high rankings using both pointwise and border criteria, such as $Aph1a$ in the example, may be particularly good markers. 1.230 + 1.231 +\vspace{0.3cm}**Areas which can be identified by single genes** 1.232 +Using gradient similarity, we have already found single genes which roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies it is shown in Figure \ref{singleSoFar}. We have not yet cross-verified these genes in other atlases. 1.233 + 1.234 +In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS (visual), AUD (auditory). 1.235 + 1.236 +These results validate our expectation that the ABA dataset can be exploited to find marker genes for many cortical areas, while also validating the relevancy of our new scoring method, gradient similarity. 1.237 + 1.238 + 1.239 + 1.240 +\vspace{0.3cm}**Combinations of multiple genes are useful and necessary for some areas** 1.241 + 1.242 +In Figure \ref{MOcombo}, we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the dorsal surface. Gene mtif2 is shown in the upper-right. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left image. This combination captures area MO much better than any single gene. 1.243 + 1.244 +This shows that our proposal to develop a method to find combinations of marker genes is both possible and necessary. 
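Returning to the gradient similarity formula given above, a minimal NumPy sketch of the score (an illustration from the formula, not our production code) might look as follows; it assumes both inputs are 2-D arrays over the flattened cortical surface.

```python
import numpy as np

def gradient_similarity(img1, img2):
    # Gradient similarity between two scalar fields (e.g. a gene's flattened
    # expression map and the target area's mask), per the formula above: at
    # each pixel, the cosine of the angle between the two gradients, weighted
    # by the mean gradient magnitude and the mean pixel value.
    v1, v2 = np.asarray(img1, dtype=float), np.asarray(img2, dtype=float)
    gy1, gx1 = np.gradient(v1)
    gy2, gx2 = np.gradient(v2)
    angle1, angle2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    per_pixel = (np.cos(np.abs(angle1 - angle2))
                 * (mag1 + mag2) / 2.0
                 * (v1 + v2) / 2.0)
    return per_pixel.sum()
```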
1.245 + 1.246 +%% wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} 1.247 +%% mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} 1.248 + 1.249 +%%According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene, but the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface. 1.250 + 1.251 +%%Gene mtif2\footnote{"mitochondrial translational initiation factor 2"; EntrezGene ID 76784} is shown in figure the upper-right of Fig. \ref{MOcombo}. Mtif2 captures MO's upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower-left of Figure \ref{MOcombo}. This combination captures area MO much better than any single gene. 1.252 + 1.253 + 1.254 + 1.255 + 1.256 +%%\vspace{0.3cm}**Feature selection integrated with prediction** 1.257 +%%As noted earlier, in general, any classifier can be used for feature selection by running it inside a stepwise wrapper. Also, some learning algorithms integrate soft constraints on number of features used. Examples of both of these will be seen in the section "Multivariate supervised learning". 1.258 + 1.259 + 1.260 +=== Multivariate supervised learning === 1.261 + 1.262 + 1.263 \vspace{0.3cm}**Forward stepwise logistic regression** 1.264 Logistic regression is a popular method for predictive modeling of categorical data. As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identify. This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found were shown in various figures throughout this document, and Figure \ref{MOcombo} shows a combination of genes which was found. 1.265 1.266 @@ -459,24 +459,6 @@ 1.267 1.268 === Data-driven redrawing of the cortical map === 1.269 1.270 - 1.271 - 1.272 - 1.273 - 1.274 -We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}. 1.275 - 1.276 - 1.277 - 1.278 -After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. 
To compare, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy. 1.279 - 1.280 - 1.281 - 1.282 - 1.283 -\vspace{0.3cm}**Many areas are captured by clusters of genes** 1.284 -We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels. 1.285 - 1.286 - 1.287 -== The approach: what we plan to do == 1.288 \begin{wrapfigure}{L}{0.35\textwidth}\centering 1.289 \includegraphics[scale=.27]{singlegene_example_2682_Pitx2_SS_jet.eps}\includegraphics[scale=.27]{singlegene_example_371_Aldh1a2_SSs_jet.eps} 1.290 \includegraphics[scale=.27]{singlegene_example_2759_Ppfibp1_PIR_jet.eps}\includegraphics[scale=.27]{singlegene_example_3310_Slco1a5_FRP_jet.eps} 1.291 @@ -489,6 +471,24 @@ 1.292 1.293 1.294 1.295 + 1.296 + 1.297 + 1.298 +We have applied the following dimensionality reduction algorithms to reduce the dimensionality of the gene expression profile associated with each pixel: Principal Components Analysis (PCA), Simple PCA, Multi-Dimensional Scaling, Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space Alignment, Stochastic Proximity Embedding, Fast Maximum Variance Unfolding, Non-negative Matrix Factorization (NNMF). Space constraints prevent us from showing many of the results, but as a sample, PCA, NNMF, and landmark Isomap are shown in the first, second, and third rows of Figure \ref{dimReduc}. 1.299 + 1.300 + 1.301 + 1.302 +After applying the dimensionality reduction, we ran clustering algorithms on the reduced data. To date we have tried k-means and spectral clustering. The results of k-means after PCA, NNMF, and landmark Isomap are shown in the last row of Figure \ref{dimReduc}. To compare, the leftmost picture on the bottom row of Figure \ref{dimReduc} shows some of the major subdivisions of cortex. These results clearly show that different dimensionality reduction techniques capture different aspects of the data and lead to different clusterings, indicating the utility of our proposal to produce a detailed comparison of these techniques as applied to the domain of genomic anatomy. 1.303 + 1.304 + 1.305 + 1.306 + 1.307 +\vspace{0.3cm}**Many areas are captured by clusters of genes** 1.308 +We also clustered the genes using gradient similarity to see if the spatial regions defined by any clusters matched known anatomical regions. Figure \ref{geneClusters} shows, for ten sample gene clusters, each cluster's average expression pattern, compared to a known anatomical boundary. This suggests that it is worth attempting to cluster genes, and then to use the results to cluster pixels. 
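As an illustration of the dimensionality reduction followed by pixel clustering described in the preceding paragraphs, a sketch using scikit-learn might look like the following; the file name is hypothetical, scikit-learn is used here purely for illustration (not our MATLAB tooling), and the dimension and cluster counts follow those reported in Figure \ref{dimReduc}.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF
from sklearn.cluster import KMeans

# X: (n_pixels, n_genes) expression matrix on the flattened cortical surface
X = np.load('expression_pixels_by_genes.npy')   # hypothetical file name

# Reduce each pixel's ~4000-gene profile to a small number of features ...
X_pca = PCA(n_components=50).fit_transform(X)
X_nmf = NMF(n_components=6, init='nndsvd', max_iter=500).fit_transform(np.clip(X, 0, None))

# ... then cluster the pixels in the reduced space (7 clusters, as in the figure)
labels_pca = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X_pca)
labels_nmf = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X_nmf)
```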
1.309 + 1.310 + 1.311 +== The approach: what we plan to do == 1.312 + 1.313 %%\vspace{0.3cm}**Flatmap cortex and segment cortical layers** 1.314 1.315 === Flatmap cortex and segment cortical layers === 1.316 @@ -509,30 +509,6 @@ 1.317 1.318 1.319 === Develop algorithms that find genetic markers for anatomical regions === 1.320 - 1.321 -\vspace{0.3cm}**Scoring measures and feature selection** 1.322 -%%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test). 1.323 -We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Student's t-test, and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target. 1.324 - 1.325 -Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate each area. We will quantitatively compare the list of single genes generated by our method to the lists generated by previous methods which are mentioned in Aim 1 Related Work. 1.326 - 1.327 - 1.328 -Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures, for example, Hotelling's T-square is a multivariate analog of Student's t. 1.329 - 1.330 -We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize number of features used, such as sparse support vector machines (SVMs). 
1.331 - 1.332 -Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant in the presence of error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time. 1.333 - 1.334 -An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit. 1.335 - 1.336 -A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset. 1.337 - 1.338 -\vspace{0.3cm}**Classifiers** 1.339 -We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models\cite{paciorek_computational_2007}), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks. 1.340 - 1.341 - 1.342 - 1.343 -=== Develop algorithms to suggest a division of a structure into anatomical parts === 1.344 \begin{wrapfigure}{L}{0.6\textwidth}\centering 1.345 \includegraphics[scale=1]{merge3_norm_hv_PCA_ndims50_prototypes_collage_sm_border.eps} 1.346 \includegraphics[scale=.98]{nnmf_ndims7_collage_border.eps} 1.347 @@ -542,33 +518,58 @@ 1.348 \caption{First row: the first 6 reduced dimensions, using PCA. Second row: the first 6 reduced dimensions, using NNMF. Third row: the first six reduced dimensions, using landmark Isomap. 
Bottom row: examples of kmeans clustering applied to reduced datasets to find 7 clusters. Left: 19 of the major subdivisions of the cortex. Second from left: PCA. Third from left: NNMF. Right: Landmark Isomap. Additional details: In the third and fourth rows, 7 dimensions were found, but only 6 displayed. In the last row: for PCA, 50 dimensions were used; for NNMF, 6 dimensions were used; for landmark Isomap, 7 dimensions were used.} 1.349 \label{dimReduc}\end{wrapfigure} 1.350 1.351 -\vspace{0.3cm}**Dimensionality reduction on gene expression profiles** 1.352 -We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries. 1.353 - 1.354 -\vspace{0.3cm}**Dimensionality reduction on pixels** 1.355 -Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied instead to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions. 1.356 - 1.357 -%% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.} 1.358 - 1.359 -\vspace{0.3cm}**Clustering and segmentation on pixels** 1.360 -We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction. 1.361 - 1.362 -\vspace{0.3cm}**Clustering on genes** 1.363 -We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes. 1.364 - 1.365 -In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. 
One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions. 1.366 - 1.367 -\vspace{0.3cm}**Co-clustering** 1.368 -There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms. 1.369 - 1.370 -\vspace{0.3cm}**Radial profiles** 1.371 -We wil explore the use of the radial profile of gene expression under each pixel. 1.372 +\vspace{0.3cm}**Scoring measures and feature selection** 1.373 +%%We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Hotelling's T-square test (a multivariate generalization of Student's t-test), ANOVA, and a multivariate version of the Mann-Whitney U test (a non-parametric test). 1.374 +We will develop scoring methods for evaluating how good individual genes are at marking areas. We will compare pointwise, geometric, and information-theoretic measures. We already developed one entirely new scoring method (gradient similarity), but we may develop more. Scoring measures that we will explore will include the L1 norm, correlation, expression energy ratio, conditional entropy, gradient similarity, Jaccard similarity, Dice similarity, Hough transform, and statistical tests such as Student's t-test, and the Mann-Whitney U test (a non-parametric test). In addition, any classifier induces a scoring measure on genes by taking the prediction error when using that gene to predict the target. 1.375 + 1.376 +Using some combination of these measures, we will develop a procedure to find single marker genes for anatomical regions: for each cortical area, we will rank the genes by their ability to delineate each area. We will quantitatively compare the list of single genes generated by our method to the lists generated by previous methods which are mentioned in Aim 1 Related Work. 1.377 + 1.378 + 1.379 +Some cortical areas have no single marker genes but can be identified by combinatorial coding. This requires multivariate scoring measures and feature selection procedures. Many of the measures, such as expression energy, gradient similarity, Jaccard, Dice, Hough, Student's t, and Mann-Whitney U are univariate. We will extend these scoring measures for use in multivariate feature selection, that is, for scoring how well combinations of genes, rather than individual genes, can distinguish a target area. There are existing multivariate forms of some of the univariate scoring measures, for example, Hotelling's T-square is a multivariate analog of Student's t. 1.380 + 1.381 +We will develop a feature selection procedure for choosing the best small set of marker genes for a given anatomical area. 
In addition to using the scoring measures that we develop, we will also explore (a) feature selection using a stepwise wrapper over "vanilla" classifiers such as logistic regression, (b) supervised learning methods such as decision trees which incrementally/greedily combine single gene markers into sets, and (c) supervised learning methods which use soft constraints to minimize number of features used, such as sparse support vector machines (SVMs). 1.382 + 1.383 +Since errors of displacement and of shape may cause genes and target areas to match less than they should, we will consider the robustness of feature selection methods in the presence of error. Some of these methods, such as the Hough transform, are designed to be resistant in the presence of error, but many are not. We will consider extensions to scoring measures that may improve their robustness; for example, a wrapper that runs a scoring method on small displacements and distortions of the data adds robustness to registration error at the expense of computation time. 1.384 + 1.385 +An area may be difficult to identify because the boundaries are misdrawn in the atlas, or because the shape of the natural domain of gene expression corresponding to the area is different from the shape of the area as recognized by anatomists. We will extend our procedure to handle difficult areas by combining areas or redrawing their boundaries. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly\footnote{Not just any redrawing is acceptable, only those which appear to be justified as a natural spatial domain of gene expression by multiple sources of evidence. Interestingly, the need to detect "natural spatial domains of gene expression" in a data-driven fashion means that the methods of Aim 2 might be useful in achieving Aim 1, as well -- particularly discriminative dimensionality reduction.}, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit. 1.386 + 1.387 +A future publication on the method that we develop in Aim 1 will review the scoring measures and quantitatively compare their performance in order to provide a foundation for future research of methods of marker gene finding. We will measure the robustness of the scoring measures as well as their absolute performance on our dataset. 1.388 + 1.389 +\vspace{0.3cm}**Classifiers** 1.390 +We will explore and compare different classifiers. As noted above, this activity is not separate from the previous one, because some supervised learning algorithms include feature selection, and any classifier can be combined with a stepwise wrapper for use as a feature selection method. We will explore logistic regression (including spatial models\cite{paciorek_computational_2007}), decision trees\footnote{Actually, we have already begun to explore decision trees. For each cortical area, we have used the C4.5 algorithm to find a decision tree for that area. We achieved good classification accuracy on our training set, but the number of genes that appeared in each tree was too large. We plan to implement a pruning procedure to generate trees that use fewer genes.}, sparse SVMs, generative mixture models (including naive bayes), kernel density estimation, instance-based learning methods (such as k-nearest neighbor), genetic algorithms, and artificial neural networks. 
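As a sketch of the stepwise-wrapper idea, (a) above, and of the forward stepwise logistic regression pilot described under Preliminary Studies, here is an illustrative greedy selector wrapped around a plain logistic regression; scikit-learn is used for illustration, the function name and array shapes are assumptions, and, as in the pilot run, it scores on the training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def forward_stepwise_genes(X, y, n_genes=3):
    # Greedy forward selection with logistic regression as the wrapped classifier.
    # X: (n_pixels, n_genes) expression matrix; y: boolean area membership per pixel.
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_genes:
        def training_loss(g):
            cols = selected + [g]
            clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            return log_loss(y, clf.predict_proba(X[:, cols])[:, 1])
        best = min(remaining, key=training_loss)
        selected.append(best)
        remaining.remove(best)
    return selected   # indices of the chosen single gene, pair, triplet, ...
```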
1.391 + 1.392 + 1.393 + 1.394 + 1.395 +=== Develop algorithms to suggest a division of a structure into anatomical parts === 1.396 1.397 \begin{wrapfigure}{L}{0.5\textwidth}\centering 1.398 \includegraphics[scale=.2]{cosine_similarity1_rearrange_colorize.eps} 1.399 \caption{Prototypes corresponding to sample gene clusters, clustered by gradient similarity. Region boundaries for the region that most matches each prototype are overlaid.} 1.400 \label{geneClusters}\end{wrapfigure} 1.401 1.402 +\vspace{0.3cm}**Dimensionality reduction on gene expression profiles** 1.403 +We have already described the application of ten dimensionality reduction algorithms for the purpose of replacing the gene expression profiles, which are vectors of about 4000 gene expression levels, with a smaller number of features. We plan to further explore and interpret these results, as well as to apply other unsupervised learning algorithms, including independent components analysis, self-organizing maps, and generative models such as deep Boltzmann machines. We will explore ways to quantitatively compare the relevance of the different dimensionality reduction methods for identifying cortical areal boundaries. 1.404 + 1.405 +\vspace{0.3cm}**Dimensionality reduction on pixels** 1.406 +Instead of applying dimensionality reduction to the gene expression profiles, the same techniques can be applied instead to the pixels. It is possible that the features generated in this way by some dimensionality reduction techniques will directly correspond to interesting spatial regions. 1.407 + 1.408 +%% \footnote{Consider a matrix whose rows represent pixel locations, and whose columns represent genes. An entry in this matrix represents the gene expression level at a given pixel. One can look at this matrix as a collection of pixels, each corresponding to a vector of many gene expression levels; or one can look at it as a collection of genes, each corresponding to a vector giving that gene's expression at each pixel. Similarly, dimensionality reduction can be used to replace a large number of genes with a small number of features, or it can be used to replace a large number of pixels with a small number of features.} 1.409 + 1.410 +\vspace{0.3cm}**Clustering and segmentation on pixels** 1.411 +We will explore clustering and segmentation algorithms in order to segment the pixels into regions. We will explore k-means, spectral clustering, gene shaving\cite{hastie_gene_2000}, recursive division clustering, multivariate generalizations of edge detectors, multivariate generalizations of watershed transformations, region growing, active contours, graph partitioning methods, and recursive agglomerative clustering with various linkage functions. These methods can be combined with dimensionality reduction. 1.412 + 1.413 +\vspace{0.3cm}**Clustering on genes** 1.414 +We have already shown that the procedure of clustering genes according to gradient similarity, and then creating an averaged prototype of each cluster's expression pattern, yields some spatial patterns which match cortical areas. We will further explore the clustering of genes. 1.415 + 1.416 +In addition to using the cluster expression prototypes directly to identify spatial regions, this might be useful as a component of dimensionality reduction. For example, one could imagine clustering similar genes and then replacing their expression levels with a single average expression level, thereby removing some redundancy from the gene expression profiles. 
One could then perform clustering on pixels (possibly after a second dimensionality reduction step) in order to identify spatial regions. It remains to be seen whether removal of redundancy would help or hurt the ultimate goal of identifying interesting spatial regions. 1.417 + 1.418 +\vspace{0.3cm}**Co-clustering** 1.419 +There are some algorithms which simultaneously incorporate clustering on instances and on features (in our case, genes and pixels), for example, IRM\cite{kemp_learning_2006}. These are called co-clustering or biclustering algorithms. 1.420 + 1.421 +\vspace{0.3cm}**Radial profiles** 1.422 +We will explore the use of the radial profile of gene expression under each pixel. 1.423 + 1.424 \vspace{0.3cm}**Compare different methods** 1.425 In order to tell which method is best for genomic anatomy, for each experimental method we will compare the cortical map found by unsupervised learning to a cortical map derived from the Allen Reference Atlas. We will explore various quantitative metrics that purport to measure how similar two clusterings are, such as Jaccard, Rand index, Fowlkes-Mallows, variation of information, Larsen, Van Dongen, and others. 1.426
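A couple of the clustering-comparison metrics just listed are easy to sketch; the following uses scikit-learn's adjusted Rand index together with a small variation-of-information helper (the per-pixel label arrays are assumed inputs, and the helper is ours for illustration).

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, mutual_info_score

def variation_of_information(labels_a, labels_b):
    # VI(A, B) = H(A) + H(B) - 2 * I(A; B), in nats; 0 means identical partitions.
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())
    return entropy(labels_a) + entropy(labels_b) - 2.0 * mutual_info_score(labels_a, labels_b)

# labels_atlas: area id per pixel, from the Allen Reference Atlas
# labels_found: cluster id per pixel, from one of the pipelines above
# ari = adjusted_rand_score(labels_atlas, labels_found)
# vi  = variation_of_information(labels_atlas, labels_found)
```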