Specific aims

Massive new datasets obtained with techniques such as in situ hybridization (ISH) and BAC-transgenics allow the expression levels of many genes at many locations to be compared. Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have three specific aims:

(1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target anatomical regions

(2) develop an algorithm to suggest new ways of carving up a structure into anatomical subregions, based on spatial patterns in gene expression

(3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. Use this dataset to validate the methods developed in (1) and (2).

In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project will support the development of new ways to selectively target cortical areas, and of a method for identifying the cortical areal boundaries present in small tissue samples.

All algorithms that we develop will be implemented in an open-source software toolkit. The toolkit, as well as the machine-readable datasets developed in aim (3), will be published and freely available for others to use.
Background and significance

Aim 1

Machine learning terminology: supervised learning
The task of looking for marker genes for anatomical subregions means that one is looking for a set of genes such that, if the expression level of those genes is known, then the locations of the subregions can be inferred.

If we define the subregions so that they cover the entire anatomical structure to be divided, then instead of saying that we are using gene expression to find the locations of the subregions, we may say that we are using gene expression to determine to which subregion each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned to a class (namely, its subregion).
Therefore, an understanding of the relationship between the genes’ combined expression levels and the locations of the subregions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels within that voxel; the output is the subregional identity of the target voxel, that is, the subregion to which the target voxel belongs. We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label (or a class label).
The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of machine learning procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used in the construction of the classifier is called training data.
In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances (voxels) for which the labels (subregions) are known.
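As a concrete illustration of this setup, the following minimal sketch trains and evaluates a per-voxel classifier. The array names (expr, region) are hypothetical placeholders for the flattened expression data and atlas labels, and logistic regression stands in for whichever classifier is eventually chosen; this is only the supervised-learning skeleton that the proposed procedure will share, not the procedure itself.

    # Minimal sketch of the supervised-learning setup described above.
    # `expr` (n_voxels x n_genes) and `region` (n_voxels,) are hypothetical
    # placeholders; any classifier could replace LogisticRegression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    expr = rng.random((1000, 50))           # placeholder gene-expression features
    region = rng.integers(0, 4, size=1000)  # placeholder subregion labels

    train_X, test_X, train_y, test_y = train_test_split(expr, region, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)  # training
    print("held-out accuracy:", clf.score(test_X, test_y))         # evaluation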
Each gene expression level is called a feature, and the selection of which genes to include is called feature selection. Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic procedure may be used in which features are added to and subtracted from the selected set depending on how much they raise the score. Such procedures are called “stepwise” or “greedy”.
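The sketch below illustrates the greedy flavor of such a procedure in its simplest, forward-only form (features are only added, never removed). The function score_geneset is a hypothetical placeholder for any set-valued scoring measure; the real procedure would plug in one of the measures discussed in this proposal.

    # Greedy ("stepwise") forward feature selection against a set-valued score.
    # `score_geneset(genes)` is a placeholder for any scoring measure that
    # accepts a list of candidate gene indices and returns a number
    # (higher is better).
    def greedy_select(n_genes, score_geneset, max_features=4):
        selected = []
        for _ in range(max_features):
            best_gene, best_score = None, float("-inf")
            for g in range(n_genes):
                if g in selected:
                    continue
                s = score_geneset(selected + [g])
                if s > best_score:
                    best_gene, best_score = g, s
            selected.append(best_gene)
        return selected

A full stepwise variant would also try removing previously selected genes at each round; the forward-only form is shown for brevity.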
Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring method.
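For instance, a pointwise score can be written as a per-voxel sub-score that looks only at that voxel, aggregated over the dataset. The sketch below shows one simple example (per-voxel agreement between a gene's expression and region membership, aggregated by summation); it is meant only to make the definition concrete, not to be one of the scoring measures we will ultimately compare.

    import numpy as np

    def pointwise_score(gene_expr, in_region):
        """Pointwise score: each voxel's sub-score depends only on that voxel
        (here, agreement between expression, scaled to [0, 1], and 0/1 region
        membership); sub-scores are aggregated by a simple sum."""
        sub_scores = 1.0 - (gene_expr - in_region) ** 2
        return float(np.sum(sub_scores))

A local scoring method would differ only in that each voxel's sub-score could also consult neighboring voxels (for example, comparing expression gradients across the region boundary).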
Key questions when choosing a learning method are: What are the instances? What are the features? How are the features chosen? Here are four principles that outline our answers to these questions.
Principle 1: Combinatorial gene expression

Above, we defined an “instance” as the combination of a voxel with the “associated gene expression data”. In our case this refers to the expression level of genes within the voxel, but should we include the expression levels of all genes, or only a few of them?

It is too much to hope that every anatomical region of interest will be identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary Results).
Principle 2: Only look at combinations of small numbers of genes

When the classifier classifies a voxel, it is only allowed to look at the expression of the genes which have been selected as features. The more data that is available to a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as features.
Principle 3: Use geometry in feature selection

When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.
Principle 4: Work in 2-D whenever possible

There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D data.

Therefore, when possible, the instances should represent pixels, not voxels.
Aim 2

Machine learning terminology: clustering
If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called clustering or cluster analysis.
The task of deciding how to carve up a structure into anatomical subregions can be put into these terms. The instances are once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from the same subregion have similar gene expression profiles, at least compared to the other subregions. This means that clustering voxels is the same as finding potential subregions; we seek a partitioning of the voxels into subregions, that is, into clusters of voxels with similar gene expression.
It is desirable to determine not just one set of subregions, but also how these subregions relate to each other, if at all; perhaps some of the subregions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they could be considered separate, on a coarser spatial scale they could be grouped together into one large subregion. This suggests that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels. This is called hierarchical clustering.
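The sketch below shows what such a hierarchical clustering of voxels might look like in practice, using standard agglomerative clustering. The array expr is a hypothetical placeholder for the per-voxel expression profiles, and Ward linkage is only one of several linkage rules we would evaluate.

    # Hierarchical (agglomerative) clustering of voxels by expression profile.
    # `expr` is a hypothetical (n_voxels x n_genes) placeholder array; cutting
    # the resulting tree at different levels yields coarser or finer subregions.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    expr = rng.random((200, 30))                 # placeholder expression profiles

    tree = linkage(expr, method="ward")          # build the cluster hierarchy
    coarse = fcluster(tree, t=4, criterion="maxclust")    # 4 large subregions
    fine = fcluster(tree, t=12, criterion="maxclust")     # 12 finer subregions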
Similarity scores

todo
Spatially contiguous clusters; image segmentation

We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these results against other methods which guarantee contiguous clusters.

Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in our task there are thousands of color channels (one for each gene), rather than just three; there are, however, imaging tasks which use more than three colors, for example multispectral imaging and hyperspectral imaging, which are often used to process satellite imagery. A more crucial difference is that there are various cues which are appropriate for detecting sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of spatially arranged data, some of these algorithms are specialized for visual images.
Dimensionality reduction

Unlike in aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature set. After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene expression levels.
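As an illustration, the sketch below replaces each instance's full gene-expression feature set with a ten-dimensional reduced feature set using principal component analysis. PCA is used here only because it is the most familiar such technique, not because it is the one we will necessarily adopt; the array expr is a hypothetical placeholder.

    # Dimensionality reduction before clustering: replace each instance's
    # thousands of gene-expression features with a small reduced feature set.
    # `expr` is a hypothetical (n_pixels x n_genes) placeholder array.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    expr = rng.random((500, 2000))        # placeholder: 500 pixels, 2000 genes

    reduced = PCA(n_components=10).fit_transform(expr)   # reduced instances: 500 x 10
    # Each reduced feature is a linear combination of genes, not a single gene.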
Another use for dimensionality reduction is to visualize the relationships between subregions. For example, one might want to make a 2-D plot upon which each subregion is represented by a single point, with the property that subregions with similar gene expression profiles are nearby on the plot (that is, the property that the distance between pairs of points in the plot is proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find arrangements of points that approximately satisfy it. Note that in this application, dimensionality reduction is applied after clustering, whereas in the previous paragraph we were talking about using dimensionality reduction before clustering.
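The sketch below shows this visualization use, placing one point per subregion on a 2-D plane so that inter-point distances roughly track dissimilarity in mean expression. Multidimensional scaling (MDS) is one standard choice for this kind of approximate embedding; region_profiles is a hypothetical placeholder for per-subregion mean expression vectors.

    # Visualizing subregion relationships: one 2-D point per subregion, with
    # distances approximating dissimilarity in gene expression (MDS embedding).
    # `region_profiles` is a hypothetical placeholder array.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    region_profiles = rng.random((12, 2000))   # 12 subregions x 2000 genes

    coords = MDS(n_components=2, random_state=0).fit_transform(region_profiles)
    # coords[i] is the 2-D position of subregion i; plot with matplotlib if desired.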
Clustering genes rather than voxels

Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster the features (genes). There are two ways that clusters of genes could be used.

Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could have one reduced feature for each gene cluster.

Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression pattern which seems to pick out a single, spatially contiguous subregion. Therefore, it seems likely that an anatomically interesting subregion will have multiple genes which each individually pick it out [1]. This suggests the following procedure: cluster together genes which pick out similar subregions, and then use the more popular common subregions as the final clusters. In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some “superregions” formed by lumping together a few regions, are associated with gene clusters in this fashion.

[1] This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is possible that the currently accepted cortical maps divide the cortex into subregions which are unnatural from the point of view of gene expression; perhaps there is some other way to map the cortex for which each subregion can be identified by single genes.
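A minimal sketch of this gene-clustering route is shown below: genes are grouped by the similarity of their spatial expression maps, and each gene cluster's mean map, thresholded, becomes a candidate subregion. The array expr, the number of gene clusters, and the thresholding rule are all hypothetical placeholders; the actual choices will be part of the work in aim 2.

    # Gene-clustering route to candidate subregions: cluster genes whose spatial
    # expression maps are similar, then threshold each cluster's mean map.
    # `expr` is a hypothetical (n_pixels x n_genes) placeholder array.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    expr = rng.random((500, 200))                       # placeholder data

    gene_maps = expr.T                                  # one row per gene
    tree = linkage(gene_maps, method="average", metric="correlation")
    gene_cluster = fcluster(tree, t=20, criterion="maxclust")

    candidate_subregions = []
    for c in np.unique(gene_cluster):
        mean_map = gene_maps[gene_cluster == c].mean(axis=0)
        candidate_subregions.append(mean_map > mean_map.mean())  # binary mask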
Aim 3

Background

The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can picture an area of the cortex as a slice of a many-layered cake.

Although it is known that different cortical areas have distinct roles both in normal functioning and in disease processes, there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas, this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of their approximate location upon the cortical surface.

Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single agreed-upon map can be seen by contrasting the recent maps given by Swanson?? on the one hand, and Paxinos and Franklin?? on the other. While the maps are certainly very similar in their general arrangement, significant differences remain in the details.
Significance

The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for drug discovery as well as for experimentation, because marker genes can be used to design interventions which selectively target individual cortical areas.

The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatomical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can find many of the areal boundaries at once. This panel of marker genes will support the development of an ISH protocol that will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.

The method developed in aim (3) will provide a genoarchitectonic viewpoint that will contribute to the creation of a better map. The development of present-day cortical maps was driven by the application of histological stains. It is conceivable that if a different set of stains had been available which identified a different set of features, then today’s cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking at the patterns of gene expression.

While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to develop could be used to suggest modifications to the human cortical map as well.
Related work

todo

vs. AGEA – i wrote something on this but i’m going to rewrite it
Preliminary work

Format conversion between SEV, MATLAB, NIFTI

todo

Flatmap of cortex

todo
Using combinations of multiple genes is necessary and sufficient to delineate some cortical areas

Here we give an example of a cortical area which is not marked by any single gene, but which can be identified combinatorially. According to logistic regression, gene wwc1 [2] is the best-fitting single gene for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in Figure 1 shows wwc1’s spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably well by this gene; however, the gene overshoots the upper-left boundary. This flattened 2-D representation does not show it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface (todo).

Gene mtif2 [3] is shown in the upper right of Fig. 1. Mtif2 captures MO’s upper-left boundary, but not its lower-right boundary. Mtif2 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get the lower left of Figure 1. This combination captures area MO much better than any single gene.

[2] “WW, C2 and coiled-coil domain containing 1”; EntrezGene ID 211652
[3] “mitochondrial translational initiation factor 2”; EntrezGene ID 76784

Figure 1: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel’s value on the lower left is the sum of the corresponding pixels in the upper row). Within each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels are colored approximately according to the density of expressing cells underneath each pixel, with red meaning a lot of expression and blue meaning little.
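For readers who want the computational recipe behind this example, the sketch below ranks genes by how well a single-gene logistic regression predicts membership in area MO and then sums the two chosen expression maps pixel-wise, as in Figure 1. The arrays expr and in_mo are hypothetical placeholders, and the accuracy-based score is just one simple way to quantify the fit of each single-gene model.

    # Rank genes by single-gene logistic-regression fit to area MO, then combine
    # the top two genes by summing their expression maps pixel-wise.
    # `expr` (n_pixels x n_genes) and the 0/1 mask `in_mo` are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    expr = rng.random((500, 100))
    in_mo = (rng.random(500) > 0.7).astype(int)

    scores = []
    for g in range(expr.shape[1]):
        clf = LogisticRegression(max_iter=500).fit(expr[:, [g]], in_mo)
        scores.append(clf.score(expr[:, [g]], in_mo))    # fit of each single gene
    best_two = np.argsort(scores)[-2:]
    combined_map = expr[:, best_two].sum(axis=1)         # pixel-wise sum, as in Fig. 1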
Geometric and pointwise scoring methods provide complementary information

To show that local geometry can provide useful information that cannot be detected via pointwise analyses, consider Fig. 2. The top row of Fig. 2 displays the 3 genes which most match area AUD, according to a pointwise method [4]. The bottom row displays the 3 genes which most match AUD according to a method which considers local geometry [5]. The pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is that this includes many genes which don’t have a salient border matching the areal border. The geometric method identifies genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes genes which don’t express over the entire area. Genes which have high rankings using both pointwise and border criteria, such as Aph1a in the example, may be particularly good markers. None of these genes are, individually, a perfect marker for AUD; we deliberately chose a “difficult” area in order to better contrast pointwise with geometric methods.

[4] For each gene, we fit a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well they predict area AUD.

[5] For each gene, the gradient similarity (see section ??) between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD was calculated, and this was used to rank the genes.

Figure 2: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression. The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr.
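Since the gradient-similarity measure itself is defined elsewhere in this proposal (section ??), the sketch below shows only one plausible form such a local, geometry-aware score could take: it rewards pixels where the expression map and the region mask both have strong gradients pointing in similar directions. It is an illustrative assumption, not the measure used to produce Fig. 2.

    # One plausible (assumed) form of a local, geometry-aware score: compare the
    # gradient field of a gene's 2-D expression map with that of the region mask.
    import numpy as np

    def gradient_similarity_sketch(expr_map, region_mask):
        gx_e, gy_e = np.gradient(expr_map)
        gx_r, gy_r = np.gradient(region_mask.astype(float))
        dot = gx_e * gx_r + gy_e * gy_r                     # alignment of gradients
        mag = np.hypot(gx_e, gy_e) * np.hypot(gx_r, gy_r)   # joint edge strength
        return float(np.sum(dot) / (np.sum(mag) + 1e-12))   # roughly in [-1, 1]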
Areas which can be identified by single genes

todo
Aim 1 (and Aim 3)

SVM on all genes at once

In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81% [6]. As noted above, however, a classifier that looks at all the genes at once isn’t practically useful.

[6] Using the Shogun SVM package (todo:cite), with parameters type=GMNPSVM (multi-class b-SVM), kernel = gaussian with sigma = 0.1, c = 10, epsilon = 1e-1 – these are the first parameters we tried, so presumably performance would improve with different choices of parameters. 5-fold cross-validation.
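For orientation, the sketch below sets up the same kind of experiment (a Gaussian-kernel multiclass SVM over all genes, evaluated by 5-fold cross-validation) using scikit-learn rather than the Shogun package actually used above; the arrays and parameter values are hypothetical placeholders, so it illustrates the protocol, not the reported 81% result.

    # Illustration (scikit-learn, not the Shogun code used above) of classifying
    # surface pixels from full expression profiles with a Gaussian-kernel SVM and
    # 5-fold cross-validation. `expr` and `area` are hypothetical placeholders.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    expr = rng.random((1000, 2000))           # surface pixels x genes
    area = rng.integers(0, 10, size=1000)     # cortical-area label per pixel

    clf = SVC(kernel="rbf", gamma=0.1, C=10.0)
    print(cross_val_score(clf, expr, area, cv=5).mean())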
The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task combines feature selection with supervised learning.
Decision trees

todo
Aim 2 (and Aim 3)

Raw dimensionality reduction results

Dimensionality reduction plus K-means or spectral clustering

Many areas are captured by clusters of genes

todo

todo
Research plan

todo

amongst other things:
Develop algorithms that find genetic markers for anatomical regions

1. Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise, geometric, and information-theoretic measures.

2. Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining the scoring measures developed, we will rank the genes by their ability to delineate each area.

3. Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised learning techniques which use soft constraints to minimize the number of features, such as sparse support vector machines (see the sketch after this list).

4. Extend the procedure to handle difficult areas by combining or redrawing the boundaries: an area may be difficult to identify because the boundaries are misdrawn, or because it does not “really” exist as a single area, at least on the genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create a larger area which can be fit.
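As a concrete instance of the sparse, soft-constraint option in step 3, the sketch below fits an L1-penalized linear SVM, which drives most gene weights to zero so that the surviving genes form a small candidate marker set. The arrays and the regularization strength are hypothetical placeholders, not tuned choices.

    # Sparse feature selection via an L1-penalized linear SVM: most gene weights
    # are driven to zero, and the remaining genes form a small marker panel.
    # `expr` and `in_area` are hypothetical placeholder arrays.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    expr = rng.random((500, 2000))                   # pixels x genes
    in_area = (rng.random(500) > 0.8).astype(int)    # 1 inside the target area

    clf = LinearSVC(penalty="l1", dual=False, C=0.05, max_iter=5000).fit(expr, in_area)
    selected_genes = np.flatnonzero(clf.coef_[0])    # genes with nonzero weight
    print(len(selected_genes), "genes selected")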
Apply these algorithms to the cortex

1. Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert between SEV, NIFTI and MATLAB formats.

2. Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.

3. Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.

4. Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
Develop algorithms to suggest a division of a structure into anatomical parts

1. Explore dimensionality reduction algorithms applied to pixels: including TODO

2. Explore dimensionality reduction algorithms applied to genes: including TODO

3. Explore clustering algorithms applied to pixels: including TODO

4. Explore clustering algorithms applied to genes: including gene shaving, TODO

5. Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps

6. Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
______________________________________________

stuff i dunno where to put yet (there is more scattered through grant-oldtext):
Principle 4: Work in 2-D whenever possible

In anatomy, the manifold of interest is usually either defined by a combination of two relevant anatomical axes (todo), or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.

The method that we will develop will begin by mapping the data into a 2-D plane. Although the manifold that characterizes cortical areas is known to be the cortical surface, it remains to be seen which method of mapping the manifold into a plane is optimal for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret??) with mappings which preserve angle (conformal maps).

Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional. If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D structure seems to be wrong.