diff grant.txt @ 27:5db0420abbb6
| author   | bshanks@bshanks.dyndns.org     |
|----------|--------------------------------|
| date     | Mon Apr 13 03:25:42 2009 -0700 |
| parents  | 9d0cc9c66ecd                   |
| children | 01c118d1074b                   |
line diff
1.1 --- a/grant.txt Mon Apr 13 03:22:01 2009 -0700
1.2 +++ b/grant.txt Mon Apr 13 03:25:42 2009 -0700
1.3 @@ -19,7 +19,7 @@
1.4
1.5 === Aim 1 ===
1.6
1.7 -**Machine learning terminology: supervised learning**
1.8 +\vspace{0.3cm}**Machine learning terminology: supervised learning**
1.9
1.10 The task of looking for marker genes for anatomical subregions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the subregions can be inferred.
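As a minimal sketch of this supervised framing (the arrays below are hypothetical stand-ins, not the actual ABA data, and logistic regression is just one possible classifier):

```python
# Sketch only: `expression` and `in_region` are hypothetical placeholders for
# per-voxel gene expression levels and per-voxel subregion labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
expression = rng.random((1000, 50))               # 1000 voxels x 50 genes
in_region = (expression[:, 0] > 0.5).astype(int)  # toy "is this voxel in the subregion?" labels

clf = LogisticRegression(max_iter=1000).fit(expression, in_region)
predicted = clf.predict(expression)               # inferred subregion membership per voxel
```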
1.11
1.12 @@ -40,26 +40,26 @@
1.13 Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
1.14
1.15
1.16 -**Principle 1: Combinatorial gene expression**
1.17 +\vspace{0.3cm}**Principle 1: Combinatorial gene expression**
1.18
1.19 Above, we defined an "instance" as the combination of a voxel with the "associated gene expression data". In our case this refers to the expression level of genes within the voxel, but should we include the expression levels of all genes, or only a few of them?
1.20
1.21 It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results).
1.22
1.23
1.24 -**Principle 2: Only look at combinations of small numbers of genes**
1.25 +\vspace{0.3cm}**Principle 2: Only look at combinations of small numbers of genes**
1.26
1.27    When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
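One way to honor this constraint is greedy forward selection capped at a handful of genes, in the spirit of the forward stepwise logistic regression mentioned under Preliminary Results. The sketch below assumes hypothetical `expression` and `labels` arrays and uses cross-validated accuracy as the (interchangeable) score:

```python
# Sketch: greedily add at most `max_genes` genes, each time choosing the gene
# that most improves 5-fold cross-validated accuracy. Inputs are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_select(expression, labels, max_genes=3):
    chosen, remaining = [], list(range(expression.shape[1]))
    for _ in range(max_genes):
        def score(g):
            cols = expression[:, chosen + [g]]
            return cross_val_score(LogisticRegression(max_iter=1000),
                                   cols, labels, cv=5).mean()
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen          # indices of the selected marker-gene candidates
```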
1.28
1.29
1.30
1.31 -**Principle 3: Use geometry in feature selection**
1.32 +\vspace{0.3cm}**Principle 3: Use geometry in feature selection**
1.33
1.34    When doing feature selection with score-based methods, the simplest approach is to score a candidate gene's performance at each voxel by itself and then combine these per-voxel scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.
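To make the distinction concrete, here is a toy contrast between a pointwise score and a local score; the gradient comparison is only a stand-in for the geometric scores named later (e.g. gradient similarity), and the 2-D arrays are hypothetical:

```python
# Sketch: pointwise vs. local (geometric) scoring of one gene against one region.
# `gene_img` and `region_mask` are hypothetical 2-D arrays, one value per pixel.
import numpy as np

def pointwise_score(gene_img, region_mask):
    # each pixel contributes independently: correlation of expression with membership
    return np.corrcoef(gene_img.ravel(), region_mask.ravel())[0, 1]

def local_score(gene_img, region_mask):
    # each pixel is scored together with its neighbors, via spatial gradients
    gx, gy = np.gradient(gene_img.astype(float))
    mx, my = np.gradient(region_mask.astype(float))
    return np.mean(gx * mx + gy * my)
```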
1.35
1.36
1.37
1.38 -**Principle 4: Work in 2-D whenever possible**
1.39 +\vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
1.40
1.41
1.42 There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data.
1.43 @@ -69,7 +69,7 @@
1.44
1.45 === Aim 2 ===
1.46
1.47 -**Machine learning terminology: clustering**
1.48 +\vspace{0.3cm}**Machine learning terminology: clustering**
1.49
1.50    If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as __unsupervised learning__ in the jargon of machine learning. One thing that can be done with such a dataset is to group instances together. A set of similar instances is called a __cluster__, and the activity of grouping the data into clusters is called clustering or cluster analysis.
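A minimal clustering sketch (hypothetical data; K-means is just one of many possible algorithms):

```python
# Sketch: group voxels by expression profile alone, with no class labels.
import numpy as np
from sklearn.cluster import KMeans

expression = np.random.default_rng(0).random((1000, 50))   # voxels x genes
cluster_of_voxel = KMeans(n_clusters=10, n_init=10).fit_predict(expression)
```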
1.51
1.52 @@ -78,12 +78,12 @@
1.53    It is desirable to determine not just one set of subregions, but also how these subregions relate to each other, if at all; perhaps some of the subregions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large subregion. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
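A sketch of hierarchical clustering on hypothetical data; cutting the resulting tree at different heights yields coarser or finer sets of candidate subregions:

```python
# Sketch: agglomerative hierarchical clustering of voxels by expression profile.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

expression = np.random.default_rng(0).random((500, 50))   # voxels x genes
tree = linkage(expression, method='average', metric='euclidean')
coarse = fcluster(tree, t=5, criterion='maxclust')    # 5 large subregions
fine   = fcluster(tree, t=20, criterion='maxclust')   # 20 finer subregions
```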
1.54
1.55
1.56 -**Similarity scores**
1.57 +\vspace{0.3cm}**Similarity scores**
1.58
1.59    A crucial choice when designing a clustering method is how to measure similarity, whether between pairs of instances, between clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and scoring methods for similarity.
1.60
1.61
1.62 -**Spatially contiguous clusters; image segmentation**
1.63 +\vspace{0.3cm}**Spatially contiguous clusters; image segmentation**
1.64
1.65
1.66    We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.
1.67 @@ -91,7 +91,7 @@
1.68    Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three; there are, however, imaging tasks which use more than three channels, for example multispectral and hyperspectral imaging, which are often used to process satellite imagery. The second, more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
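One family of methods that guarantees contiguous clusters (offered only as an illustration, not as the method we will necessarily adopt) is agglomerative clustering constrained by a spatial connectivity graph:

```python
# Sketch: clustering with a pixel-adjacency constraint, so every cluster is
# spatially contiguous on the 2-D grid. Arrays are hypothetical.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

n_x, n_y, n_genes = 40, 60, 50
expression = np.random.default_rng(0).random((n_x * n_y, n_genes))  # pixels x genes

connectivity = grid_to_graph(n_x, n_y)      # sparse graph of neighboring pixels
model = AgglomerativeClustering(n_clusters=12, connectivity=connectivity,
                                linkage='ward')
segment_of_pixel = model.fit_predict(expression).reshape(n_x, n_y)
```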
1.69
1.70
1.71 -**Dimensionality reduction**
1.72 +\vspace{0.3cm}**Dimensionality reduction**
1.73
1.74
1.75    Unlike in aim 1, there is no externally imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which "summarize" a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the __reduced feature set__. After the reduced feature set is created, the instances may be replaced by __reduced instances__, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
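For instance (a sketch on hypothetical data; PCA is only one choice of dimensionality reduction, and nnMF is another), the reduced instances could be built like this:

```python
# Sketch: replace each pixel's thousands of gene expression levels with a
# small reduced feature set. Data dimensions are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

expression = np.random.default_rng(0).random((2400, 4000))   # pixels x genes
reduced = PCA(n_components=20).fit_transform(expression)     # pixels x 20 features
# Each of the 20 reduced features is a linear combination of gene expression
# levels, not a single gene.
```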
1.76 @@ -99,7 +99,7 @@
1.77    Another use for dimensionality reduction is to visualize the relationships between subregions. For example, one might want to make a 2-D plot upon which each subregion is represented by a single point, and with the property that subregions with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is being applied after clustering, whereas in the previous paragraph, we were talking about using dimensionality reduction before clustering.
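Multidimensional scaling is one technique of this kind; the sketch below assumes a hypothetical matrix of per-subregion mean expression profiles:

```python
# Sketch: place each subregion at a 2-D point so that inter-point distances
# approximate dissimilarity in gene expression. Data are hypothetical.
import numpy as np
from sklearn.manifold import MDS

region_profiles = np.random.default_rng(0).random((30, 50))   # subregions x genes
dissimilarity = np.linalg.norm(
    region_profiles[:, None, :] - region_profiles[None, :, :], axis=-1)
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(dissimilarity)     # one (x, y) per subregion
```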
1.78
1.79
1.80 -**Clustering genes rather than voxels**
1.81 +\vspace{0.3cm}**Clustering genes rather than voxels**
1.82
1.83
1.84 Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.
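Operationally, clustering genes amounts to clustering the transpose of the voxel-by-gene matrix; a sketch on hypothetical data:

```python
# Sketch: cluster genes (features) rather than voxels (instances). Each gene's
# profile is its expression value across all voxels. Data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

expression = np.random.default_rng(0).random((1000, 200))       # voxels x genes
gene_profiles = expression.T                                     # genes x voxels
cluster_of_gene = KMeans(n_clusters=15, n_init=10).fit_predict(gene_profiles)
# The resulting gene clusters could then be put to use in either of the two
# ways described in the text.
```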
1.85 @@ -114,7 +114,7 @@
1.86
1.87 === Aim 3 ===
1.88
1.89 -**Background**
1.90 +\vspace{0.3cm}**Background**
1.91
1.92    The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a many-layered cake.
1.93
1.94 @@ -125,7 +125,7 @@
1.95
1.96
1.97
1.98 -**Significance**
1.99 +\vspace{0.3cm}**Significance**
1.100
1.101 The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively target individual cortical areas.
1.102
1.103 @@ -164,7 +164,7 @@
1.104
1.105
1.106
1.107 -**Using combinations of multiple genes is necessary and sufficient to delineate some cortical areas**
1.108 +\vspace{0.3cm}**Using combinations of multiple genes is necessary and sufficient to delineate some cortical areas**
1.109
1.110    Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1\footnote{"WW, C2 and coiled-coil domain containing 1"; EntrezGene ID 211652} is the best-fit single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure \ref{MOcombo} shows wwc1's spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).
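The kind of two-gene combination discussed here can be fit directly; the sketch below uses hypothetical per-pixel arrays in place of the real flattened cortical expression maps of wwc1 and mtif2:

```python
# Sketch: predict area-MO membership from two genes jointly. The arrays are
# hypothetical stand-ins, not the actual ABA-derived maps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
wwc1  = rng.random(5000)                       # per-pixel expression of wwc1
mtif2 = rng.random(5000)                       # per-pixel expression of mtif2
in_MO = ((wwc1 + mtif2) > 1.0).astype(int)     # toy labels for "pixel is in MO"

X = np.column_stack([wwc1, mtif2])
clf = LogisticRegression(max_iter=1000).fit(X, in_MO)
combined_prediction = clf.predict(X)           # combinatorial two-gene prediction
```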
1.111
1.112 @@ -178,16 +178,16 @@
1.113    \caption{Upper left: $wwc1$. Upper right: $mtif2$. Lower left: $wwc1 + mtif2$ (each pixel's value in the lower left is the sum of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells underneath each pixel, with red indicating high expression and blue indicating little expression.}
1.114 \end{figure}
1.115
1.116 -**Correlation**
1.117 -todo
1.118 -
1.119 -**Conditional entropy**
1.120 -todo
1.121 -
1.122 -**Gradient similarity**
1.123 -todo
1.124 -
1.125 -**Geometric and pointwise scoring methods provide complementary information**
1.126 +\vspace{0.3cm}**Correlation**
1.127 +todo
1.128 +
1.129 +\vspace{0.3cm}**Conditional entropy**
1.130 +todo
1.131 +
1.132 +\vspace{0.3cm}**Gradient similarity**
1.133 +todo
1.134 +
1.135 +\vspace{0.3cm}**Geometric and pointwise scoring methods provide complementary information**
1.136
1.137
1.138
1.139 @@ -206,17 +206,17 @@
1.140 \end{figure}
1.141
1.142
1.143 -**Areas which can be identified by single genes**
1.144 +\vspace{0.3cm}**Areas which can be identified by single genes**
1.145
1.146 todo
1.147
1.148
1.149 === Specific to Aim 1 (and Aim 3) ===
1.150 -**Forward stepwise logistic regression**
1.151 -todo
1.152 -
1.153 -
1.154 -**SVM on all genes at once**
1.155 +\vspace{0.3cm}**Forward stepwise logistic regression**
1.156 +todo
1.157 +
1.158 +
1.159 +\vspace{0.3cm}**SVM on all genes at once**
1.160
1.161    In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81\%\footnote{5-fold cross-validation.}. As noted above, however, a classifier that looks at all the genes at once is not practically useful.
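For reference, the benchmark described above is of the following general form (hypothetical arrays and random labels here, so the printed score will not reproduce the reported 81%):

```python
# Sketch: SVM over all genes at once, scored by 5-fold cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
expression = rng.random((2000, 4000))            # cortical surface pixels x genes
area_of_pixel = rng.integers(0, 40, 2000)        # toy cortical-area label per pixel

scores = cross_val_score(SVC(), expression, area_of_pixel, cv=5)
print(scores.mean())                             # cross-validated accuracy
```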
1.162
1.163 @@ -224,24 +224,24 @@
1.164
1.165
1.166
1.167 -**Decision trees**
1.168 +\vspace{0.3cm}**Decision trees**
1.169
1.170 todo
1.171
1.172
1.173 === Specific to Aim 2 (and Aim 3) ===
1.174
1.175 -**Raw dimensionality reduction results**
1.176 +\vspace{0.3cm}**Raw dimensionality reduction results**
1.177
1.178 todo
1.179
1.180    (might want to include nnMF since it is mentioned above)
1.181
1.182 -**Dimensionality reduction plus K-means or spectral clustering**
1.183 -
1.184 -
1.185 -
1.186 -**Many areas are captured by clusters of genes**
1.187 +\vspace{0.3cm}**Dimensionality reduction plus K-means or spectral clustering**
1.188 +
1.189 +
1.190 +
1.191 +\vspace{0.3cm}**Many areas are captured by clusters of genes**
1.192
1.193 todo
1.194
1.195 @@ -265,7 +265,7 @@
1.196 todo amongst other things:
1.197
1.198
1.199 -**Develop algorithms that find genetic markers for anatomical regions**
1.200 +\vspace{0.3cm}**Develop algorithms that find genetic markers for anatomical regions**
1.201
1.202 # Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise, geometric, and information-theoretic measures.
1.203 # Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining the scoring measures developed, we will rank the genes by their ability to delineate each area.
1.204 @@ -274,7 +274,7 @@
1.205
1.206
1.207
1.208 -**Apply these algorithms to the cortex**
1.209 +\vspace{0.3cm}**Apply these algorithms to the cortex**
1.210
1.211 # Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert between SEV, NIFTI and MATLAB formats.
1.212 # Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
1.213 @@ -282,7 +282,7 @@
1.214 # Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify that area; and we will also present lists of "panels" of genes that can be used to delineate many areas at once.
1.215
1.216
1.217 -**Develop algorithms to suggest a division of a structure into anatomical parts**
1.218 +\vspace{0.3cm}**Develop algorithms to suggest a division of a structure into anatomical parts**
1.219
1.220 # Explore dimensionality reduction algorithms applied to pixels: including TODO
1.221 # Explore dimensionality reduction algorithms applied to genes: including TODO
1.222 @@ -300,7 +300,7 @@
1.223 stuff i dunno where to put yet (there is more scattered through grant-oldtext):
1.224
1.225
1.226 -**Principle 4: Work in 2-D whenever possible**
1.227 +\vspace{0.3cm}**Principle 4: Work in 2-D whenever possible**
1.228
1.229
1.230 In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.