1 Specific aims
2 Massive new datasets obtained with techniques such as in situ hybridization (ISH), immunohistochemistry, in situ transgenic
3 reporter, microarray voxelation, and others, allow the expression levels of many genes at many locations to be compared.
4 Our goal is to develop automated methods to relate spatial variation in gene expression to anatomy. We want to find marker
5 genes for specific anatomical regions, and also to draw new anatomical maps based on gene expression patterns. We have
6 three specific aims:
7 (1) develop an algorithm to screen spatial gene expression data for combinations of marker genes which selectively target
8 anatomical regions
9 (2) develop an algorithm to suggest new ways of carving up a structure into anatomical regions, based on spatial patterns
10 in gene expression
11 (3) create a 2-D “flat map” dataset of the mouse cerebral cortex that contains a flattened version of the Allen Mouse
12 Brain Atlas ISH data, as well as the boundaries of cortical anatomical areas. This will involve extending the functionality of
13 Caret, an existing open-source scientific imaging program. We will use this dataset to validate the methods developed in (1) and (2).
14 In addition to validating the usefulness of the algorithms, the application of these methods to cerebral cortex will produce
15 immediate benefits, because there are currently no known genetic markers for many cortical areas. The results of the project
16 will support the development of new ways to selectively target cortical areas, and will support the development of a
17 method for identifying the cortical areal boundaries present in small tissue samples.
18 All algorithms that we develop will be implemented in a GPL open-source software toolkit. The toolkit, as well as the
19 machine-readable datasets developed in aim (3), will be published and freely available for others to use.
20 Background and significance
21 Aim 1
22 Machine learning terminology: supervised learning
23 The task of looking for marker genes for anatomical regions means that one is looking for a set of genes such that, if the
24 expression level of those genes is known, then the locations of the regions can be inferred.
25 If we define the regions so that they cover the entire anatomical structure to be divided, then instead of saying that we
26 are using gene expression to find the locations of the regions, we may say that we are using gene expression to determine to
27 which region each voxel within the structure belongs. We call this a classification task, because each voxel is being assigned
28 to a class (namely, its region).
29 Therefore, an understanding of the relationship between the combination of gene expression levels and the locations of
30 the regions may be expressed as a function. The input to this function is a voxel, along with the gene expression levels
31 within that voxel; the output is the regional identity of the target voxel, that is, the region to which the target voxel belongs.
32 We call this function a classifier. In general, the input to a classifier is called an instance, and the output is called a label
33 (or a class label).
34 The object of aim 1 is not to produce a single classifier, but rather to develop an automated method for determining a
35 classifier for any known anatomical structure. Therefore, we seek a procedure by which a gene expression dataset may be
36 analyzed in concert with an anatomical atlas in order to produce a classifier. Such a procedure is a type of a machine learning
37 procedure. The construction of the classifier is called training (also learning), and the initial gene expression dataset used
38 in the construction of the classifier is called training data.
39 In the machine learning literature, this sort of procedure may be thought of as a supervised learning task, defined as a
40 task in which the goal is to learn a mapping from instances to labels, and the training data consists of a set of instances
41 (voxels) for which the labels (regions) are known.
42 Each gene expression level is called a feature, and the selection of which genes1 to include is called feature selection.
43 Feature selection is one component of the task of learning a classifier. Some methods for learning classifiers start out with
44 a separate feature selection phase, whereas other methods combine feature selection with other aspects of training.
45 One class of feature selection methods assigns some sort of score to each candidate gene. The top-ranked genes are then
46 chosen. Some scoring measures can assign a score to a set of selected genes, not just to a single gene; in this case, a dynamic
47 procedure may be used in which features are added and subtracted from the selected set depending on how much they raise
48 the score. Such procedures are called “stepwise” or “greedy”.
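To make the stepwise idea concrete, here is a minimal sketch (in Python/NumPy; the data layout and the scoring callable are illustrative assumptions, not a specification of the final toolkit) of greedy forward selection against an arbitrary set-level scoring measure:

```python
import numpy as np

def greedy_forward_selection(expr, target, score, max_genes=3):
    """Greedily grow a small set of marker genes.

    expr   -- (n_voxels, n_genes) array of expression levels
    target -- (n_voxels,) boolean mask of the region of interest
    score  -- callable(expr_subset, target) -> float; higher is better
    """
    selected = []
    remaining = list(range(expr.shape[1]))
    best_so_far = -np.inf
    while remaining and len(selected) < max_genes:
        trial_scores = [(score(expr[:, selected + [g]], target), g) for g in remaining]
        new_score, best_gene = max(trial_scores)
        if new_score <= best_so_far:      # no remaining gene improves the score; stop
            break
        selected.append(best_gene)
        remaining.remove(best_gene)
        best_so_far = new_score
    return selected, best_so_far
```

A backward (subtractive) pass, or alternation between adding and removing genes, follows the same pattern with the loop direction reversed.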
49 Although the classifier itself may only look at the gene expression data within each voxel before classifying that voxel, the
50 learning algorithm which constructs the classifier may look over the entire dataset. We can categorize score-based feature
51 selection methods depending on how the score is calculated. Often the score calculation consists of assigning a sub-score to
52 each voxel, and then aggregating these sub-scores into a final score (the aggregation is often a sum or a sum of squares or
53 average). If only information from nearby voxels is used to calculate a voxel’s sub-score, then we say it is a local scoring
54 method. If only information from the voxel itself is used to calculate a voxel’s sub-score, then we say it is a pointwise scoring
55 method.
56 Key questions when choosing a learning method are: What are the instances? What are the features? How are the
57 features chosen? Here are four principles that outline our answers to these questions.
58 Principle 1: Combinatorial gene expression It is too much to hope that every anatomical region of interest will be
59 identified by a single gene. For example, in the cortex, there are some areas which are not clearly delineated by any gene
60 included in the Allen Brain Atlas (ABA) dataset. However, at least some of these areas can be delineated by looking at
61 combinations of genes (an example of an area for which multiple genes are necessary and sufficient is provided in Preliminary
62 Results). Therefore, each instance should contain multiple features (genes).
63 Principle 2: Only look at combinations of small numbers of genes When the classifier classifies a voxel, it is
64 only allowed to look at the expression of the genes which have been selected as features. The more data that is available to
65 a classifier, the better it can do. For example, perhaps there are weak correlations over many genes that add up to a
66 strong signal. So, why not include every gene as a feature? The reason is that we wish to employ the classifier in situations
67 in which it is not feasible to gather data about every gene. For example, if we want to use the expression of marker genes as
68 a trigger for some regionally-targeted intervention, then our intervention must contain a molecular mechanism to check the
69 expression level of each marker gene before it triggers. It is currently infeasible to design a molecular trigger that checks the
70 level of more than a handful of genes. Similarly, if the goal is to develop a procedure to do ISH on tissue samples in order
71 to label their anatomy, then it is infeasible to label more than a few genes. Therefore, we must select only a few genes as
72 features.
73 __________________________________
74 1Strictly speaking, the features are gene expression levels, but we’ll call them genes.
75 The requirement to find combinations of only a small number of genes prevents us from straightforwardly applying many
76 of the simplest techniques from the field of supervised machine learning. In the parlance of machine learning, our task
77 combines feature selection with supervised learning.
78 Principle 3: Use geometry in feature selection
79 When doing feature selection with score-based methods, the simplest thing to do would be to score the performance of
80 each voxel by itself and then combine these scores (pointwise scoring). A more powerful approach is to also use information
81 about the geometric relations between each voxel and its neighbors; this requires non-pointwise, local scoring methods. See
82 Preliminary Results for evidence of the complementary nature of pointwise and local scoring methods.
83 Principle 4: Work in 2-D whenever possible
84 There are many anatomical structures which are commonly characterized in terms of a two-dimensional manifold. When
85 it is known that the structure that one is looking for is two-dimensional, the results may be improved by allowing the analysis
86 algorithm to take advantage of this prior knowledge. In addition, it is easier for humans to visualize and work with 2-D
87 data.
88 Therefore, when possible, the instances should represent pixels, not voxels.
89 Related work
90 There is a substantial body of work on the analysis of gene expression data; however, most of this concerns gene expression data
91 which is not fundamentally spatial2.
92 As noted above, there has been much work on supervised learning, and many algorithms are available.
93 However, the algorithms require the scientist to provide a framework for representing the problem domain, and the
94 way that this framework is set up has a large impact on performance. Creating a good framework can require creatively
95 reconceptualizing the problem domain, and is not merely a mechanical “fine-tuning” of numerical parameters. For example,
96 we believe that domain-specific scoring measures (such as gradient similarity, which is discussed in Preliminary Work) may
97 be necessary in order to achieve the best results in this application.
98 We are aware of six existing efforts to find marker genes in spatial gene expression data using automated methods.
99 [8 ] mentions the possibility of constructing a spatial region for each gene, and then, for each anatomical structure of
100 interest, computing what proportion of this structure is covered by the gene’s spatial region.
101 GeneAtlas[3] and EMAGE [18] allow the user to construct a search query by demarcating regions and then specifying
102 either the strength of expression or the name of another gene or dataset whose expression pattern is to be matched. For the
103 similarity score (match score) between two images (in this case, the query and the gene expression images), GeneAtlas uses
104 the sum of a weighted L1-norm distance between vectors whose components represent the number of cells within a pixel3
105 whose expression is within four discretization levels. EMAGE uses Jaccard similarity, which is equal to the number of true
106 pixels in the intersection of the two images, divided by the number of pixels in their union. Neither GeneAtlas nor EMAGE
107 allow one to search for combinations of genes that define a region in concert but not separately.
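For reference, the Jaccard match score used by EMAGE reduces to a few lines; a minimal sketch (NumPy; the two boolean images are assumed to be already registered and of equal shape):

```python
import numpy as np

def jaccard_similarity(mask1, mask2):
    """True pixels in the intersection divided by true pixels in the union."""
    intersection = np.logical_and(mask1, mask2).sum()
    union = np.logical_or(mask1, mask2).sum()
    return intersection / union if union > 0 else 0.0
```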
108 [10] describes AGEA, the “Anatomic Gene Expression Atlas”. AGEA has three components:
109 ∙Gene Finder: The user selects a seed voxel and the system (1) chooses a cluster which includes the seed voxel, (2)
110 yields a list of genes which are overexpressed in that cluster. (note: the ABA website also contains pre-prepared lists
111 of overexpressed genes for selected structures)
112 ∙Correlation: The user selects a seed voxel and the system shows the user how much correlation there is between the gene
113 expression profile of the seed voxel and that of every other voxel.
114 ∙Clusters: will be described later
115 Gene Finder is different from our Aim 1 in at least three ways. First, Gene Finder finds only single genes, whereas we
116 will also look for combinations of genes. Second, Gene Finder can only use overexpression as a marker, whereas we will also
117 search for underexpression. Third, Gene Finder uses a simple pointwise score4, whereas we will also use geometric scores
118 such as gradient similarity. The Preliminary Data section contains evidence that each of our three choices is the right one.
119 [4 ] looks at the mean expression level of genes within anatomical regions, and applies a Student’s t-test with Bonferroni
120 correction to determine whether the mean expression level of a gene is significantly higher in the target region. Like AGEA,
121 this is a pointwise measure (only the mean expression level per pixel is being analyzed); it is not being used to look for
122 underexpression, and it does not look for combinations of genes.
123 _________________________________________
124 2By “fundamentally spatial” we mean that there is information from a large number of spatial locations indexed by spatial coordinates; not
125 just data which has only a few different locations or which is indexed by anatomical label.
126 3Actually, many of these projects use quadrilaterals instead of square pixels; but we will refer to them as pixels for simplicity.
127 4“Expression energy ratio”, which captures overexpression.
128 [7 ] describes a technique to find combinations of marker genes to pick out an anatomical region. They use an evolutionary
129 algorithm to evolve logical operators which combine boolean (thresholded) images in order to match a target image. Their
130 match score is Jaccard similarity.
131 In summary, there has been fruitful work on finding marker genes; however, only one of the previous projects explores
132 combinations of marker genes, and none of these publications compare the results obtained by using different algorithms or
133 scoring methods.
134 Aim 2
135 Machine learning terminology: clustering
136 If one is given a dataset consisting merely of instances, with no class labels, then analysis of the dataset is referred to as
137 unsupervised learning in the jargon of machine learning. One thing that you can do with such a dataset is to group instances
138 together. A set of similar instances is called a cluster, and the activity of grouping the data into clusters is called
139 clustering or cluster analysis.
140 The task of deciding how to carve up a structure into anatomical regions can be put into these terms. The instances are
141 once again voxels (or pixels) along with their associated gene expression profiles. We make the assumption that voxels from
142 the same region have similar gene expression profiles, at least compared to the other regions. This means that clustering
143 voxels is the same as finding potential regions; we seek a partitioning of the voxels into regions, that is, into clusters of voxels
144 with similar gene expression.
145 It is desirable to determine not just one set of regions, but also how these regions relate to each other, if at all; perhaps
146 some of the regions are more similar to each other than to the rest, suggesting that, although at a fine spatial scale they
147 could be considered separate, on a coarser spatial scale they could be grouped together into one large region. This suggests
148 that the outcome of clustering may be a hierarchical tree of clusters, rather than a single set of clusters which partition the voxels.
149 This is called hierarchical clustering.
150 Similarity scores
151 A crucial choice when designing a clustering method is how to measure similarity, across either pairs of instances, or
152 clusters, or both. There is much overlap between scoring methods for feature selection (discussed above under Aim 1) and
153 scoring methods for similarity.
154 Spatially contiguous clusters; image segmentation
155 We have shown that aim 2 is a type of clustering task. In fact, it is a special type of clustering task because we have
156 an additional constraint on clusters: voxels grouped together into a cluster must be spatially contiguous. In Preliminary
157 Results, we show that one can get reasonable results without enforcing this constraint; however, we plan to compare these
158 results against other methods which guarantee contiguous clusters.
159 Perhaps the biggest source of contiguous clustering algorithms is the field of computer vision, which has produced a
160 variety of image segmentation algorithms. Image segmentation is the task of partitioning the pixels in a digital image into
161 clusters, usually contiguous clusters. Aim 2 is similar to an image segmentation task. There are two main differences. First, in
162 our task there are thousands of color channels (one for each gene), rather than just three. (There are imaging tasks which
163 use more than three colors, however; for example, multispectral and hyperspectral imaging are often used
164 to process satellite imagery.) The second, more crucial difference is that there are various cues which are appropriate for detecting
165 sharp object boundaries in a visual scene but which are not appropriate for segmenting abstract spatial data such as gene
166 expression. Although many image segmentation algorithms can be expected to work well for segmenting other sorts of
167 spatially arranged data, some of these algorithms are specialized for visual images.
168 Dimensionality reduction In this section, we discuss reducing the length of the per-pixel gene expression feature
169 vector. By “dimension”, we mean the dimension of this vector, not the spatial dimension of the underlying data.
170 Unlike aim 1, there is no externally-imposed need to select only a handful of informative genes for inclusion in the
171 instances. However, some clustering algorithms perform better on small numbers of features. There are techniques which
172 “summarize” a larger number of features using a smaller number of features; these techniques go by the name of feature
173 extraction or dimensionality reduction. The small set of features that such a technique yields is called the reduced feature
174 set. After the reduced feature set is created, the instances may be replaced by reduced instances, which have as their features
175 the reduced feature set rather than the original feature set of all gene expression levels. Note that the features in the reduced
176 feature set do not necessarily correspond to genes; each feature in the reduced set may be any function of the set of gene
177 expression levels.
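As one illustration (a sketch using scikit-learn; the matrix `expr` is a hypothetical placeholder for the per-pixel expression data), principal components analysis replaces each pixel's full expression vector with a small number of linear combinations of genes:

```python
import numpy as np
from sklearn.decomposition import PCA

# expr: (n_pixels, n_genes) matrix of normalized expression levels (placeholder data)
expr = np.random.rand(1000, 4000)

pca = PCA(n_components=20)            # keep 20 reduced features
reduced = pca.fit_transform(expr)     # (n_pixels, 20) reduced instances

# each reduced feature is a weighted combination of genes, not a single gene
print(pca.explained_variance_ratio_.sum())
```

Other techniques listed under Preliminary Work (Isomap, Laplacian eigenmaps, NNMF, and so on) plug into the same fit/transform pattern.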
178 Dimensionality reduction before clustering is useful on large datasets. First, because the number of features in the
179 reduced data set is less than in the original data set, the running time of clustering algorithms may be much less. Second,
180 it is thought that some clustering algorithms may give better results on reduced data.
181 Another use for dimensionality reduction is to visualize the relationships between regions after clustering. For example,
182 one might want to make a 2-D plot upon which each region is represented by a single point, and with the property that regions
183 with similar gene expression profiles should be nearby on the plot (that is, the property that distance between pairs of points
184 in the plot should be proportional to some measure of dissimilarity in gene expression). It is likely that no arrangement of
185 the points on a 2-D plane will exactly satisfy this property; however, dimensionality reduction techniques allow one to find
186 arrangements of points that approximately satisfy that property. Note that in this application, dimensionality reduction
187 is being applied after clustering; whereas in the previous paragraph, we were talking about using dimensionality reduction
188 before clustering.
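A sketch of this visualization step (scikit-learn's MDS; `region_profiles`, one mean expression profile per region, is a hypothetical placeholder):

```python
import numpy as np
from sklearn.manifold import MDS

# one row per region: that region's mean gene expression profile (placeholder data)
region_profiles = np.random.rand(12, 4000)

# embed the regions in 2-D so that distances between plotted points approximate
# the dissimilarity between the regions' expression profiles
coords = MDS(n_components=2, dissimilarity='euclidean').fit_transform(region_profiles)
# coords[i] is the 2-D plot position of region i
```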
189 Clustering genes rather than voxels
190 Although the ultimate goal is to cluster the instances (voxels or pixels), one strategy to achieve this goal is to first cluster
191 the features (genes). There are two ways that clusters of genes could be used.
192 Gene clusters could be used as part of dimensionality reduction: rather than have one feature for each gene, we could
193 have one reduced feature for each gene cluster.
194 Gene clusters could also be used to directly yield a clustering on instances. This is because many genes have an expression
195 pattern which seems to pick out a single, spatially contiguous region. Therefore, it seems likely that an anatomically
196 interesting region will have multiple genes which each individually pick it out5. This suggests the following procedure:
197 cluster together genes which pick out similar regions, and then use the regions picked out by the most popular gene clusters as the final clusters.
198 In the Preliminary Data we show that a number of anatomically recognized cortical regions, as well as some “superregions”
199 formed by lumping together a few regions, are associated with gene clusters in this fashion.
200 The task of clustering both the instances and the features is called co-clustering, and there are a number of co-clustering
201 algorithms.
202 Related work
203 We are aware of five existing efforts to cluster spatial gene expression data.
204 [15 ] describes an analysis of the anatomy of the hippocampus using the ABA dataset. In addition to manual analysis,
205 two clustering methods were employed: a modified Non-negative Matrix Factorization (NNMF), and a hierarchical recursive
206 bifurcation clustering scheme based on correlation as the similarity score. The paper yielded impressive results, demonstrating
207 the usefulness of computational genomic anatomy. We have run NNMF on the cortical dataset6 and while the results are
208 promising (see Preliminary Data), we think that it will be possible to find an even better method.
209 AGEA[10] includes a preset hierarchical clustering of voxels based on a recursive bifurcation algorithm with correlation
210 as the similarity metric. EMAGE[18] allows the user to select a dataset from among a large number of alternatives, or by
211 running a search query, and then to cluster the genes within that dataset. EMAGE clusters via hierarchical complete linkage
212 clustering with un-centred correlation as the similarity score.
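To make the similarity-score choice concrete, here is a minimal sketch (SciPy; placeholder data) of complete-linkage hierarchical clustering of genes under a correlation-based distance, in the spirit of the EMAGE approach just described (note that SciPy's 'correlation' metric is the centred Pearson version rather than EMAGE's un-centred correlation):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# gene_images: (n_genes, n_pixels) flattened expression images (placeholder data)
gene_images = np.random.rand(50, 2000)

dist = pdist(gene_images, metric='correlation')     # 1 - Pearson correlation
tree = linkage(dist, method='complete')              # complete-linkage hierarchy
labels = fcluster(tree, t=4, criterion='maxclust')   # cut the tree into 4 gene clusters
```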
213 [4 ] clustered genes, starting out by selecting 135 genes out of 20,000 which had high variance over voxels and which were
214 highly correlated with many other genes. They computed the matrix of (rank) correlations between pairs of these genes, and
215 ordered the rows of this matrix as follows: “the first row of the matrix was chosen to show the strongest contrast between
216 the highest and lowest correlation coefficient for that row. The remaining rows were then arranged in order of decreasing
217 similarity using a least squares metric”. The resulting matrix showed four clusters. For each cluster, prototypical spatial
218 expression patterns were created by averaging the genes in the cluster. The prototypes were analyzed manually, without
219 clustering voxels.
220 In an interesting twist, [7] applies their technique for finding combinations of marker genes for the purpose of clustering
221 genes around a “seed gene”. The way they do this is by using the pattern of expression of the seed gene as the target image,
222 and then searching for other genes which can be combined to reproduce this pattern. Those other genes which are found
223 are considered to be related to the seed. The same team also describes a method[17] for finding “association rules” such as,
224 “if a gene is expressed in this voxel, then it is probably also expressed in that voxel”. This could be
225 useful as part of a procedure for clustering voxels.
226 In summary, although these projects obtained clusterings, there has not been much comparison between different algo-
227 rithms or scoring methods, so it is likely that the best clustering method for this application has not yet been found. Also,
228 none of these projects did a separate dimensionality reduction step before clustering pixels, none tried to cluster genes first
229 in order to guide automated clustering of pixels into spatial regions, and none used co-clustering algorithms.
230 _________________________________________
231 5This would seem to contradict our finding in aim 1 that some cortical areas are combinatorially coded by multiple genes. However, it is
232 possible that the currently accepted cortical maps divide the cortex into regions which are unnatural from the point of view of gene expression;
233 perhaps there is some other way to map the cortex for which each region can be identified by single genes. Another possibility is that, although
234 the cluster prototype fits an anatomical region, the individual genes are each somewhat different from the prototype.
235 6We ran “vanilla” NNMF, whereas the paper under discussion used a modified method. Their main modification consisted of adding a soft
236 spatial contiguity constraint. However, on our dataset, NNMF naturally produced spatially contiguous clusters, so no additional constraint was
237 needed. The paper under discussion also mentions that they tried a hierarchical variant of NNMF, which we have not yet tried.
238 Aim 3
239 Background
240 The cortex is divided into areas and layers. To a first approximation, the parcellation of the cortex into areas can
241 be drawn as a 2-D map on the surface of the cortex. In the third dimension, the boundaries between the areas continue
242 downwards into the cortical depth, perpendicular to the surface. The layer boundaries run parallel to the surface. One can
243 picture an area of the cortex as a slice of many-layered cake.
244 Although it is known that different cortical areas have distinct roles in both normal functioning and in disease processes,
245 there are no known marker genes for many cortical areas. When it is necessary to divide a tissue sample into cortical areas,
246 this is a manual process that requires a skilled human to combine multiple visual cues and interpret them in the context of
247 their approximate location upon the cortical surface.
248 Even the questions of how many areas should be recognized in cortex, and what their arrangement is, are still not
249 completely settled. A proposed division of the cortex into areas is called a cortical map. In the rodent, the lack of a single
250 agreed-upon map can be seen by contrasting the recent maps given by Swanson[14] on the one hand, and Paxinos and
251 Franklin[11] on the other. While the maps are certainly very similar in their general arrangement, significant differences
252 remain in the details.
253 The Allen Mouse Brain Atlas dataset
254 The Allen Mouse Brain Atlas (ABA) data was produced by doing in-situ hybridization on slices of male, 56-day-old
255 C57BL/6J mouse brains. Pictures were taken of the processed slice, and these pictures were semi-automatically analyzed
256 in order to create a digital measurement of gene expression levels at each location in each slice. Per slice, cellular spatial
257 resolution is achieved. Using this method, a single physical slice can only be used to measure one gene; many different
258 mouse brains were needed in order to measure the expression of many genes.
259 Next, an automated nonlinear alignment procedure located the 2D data from the various slices in a single 3D coordinate
260 system. In the final 3D coordinate system, voxels are cubes with 200 microns on a side. There are 67x41x58 = 159,326
261 voxels in the 3D coordinate system, of which 51,533 are in the brain[10].
262 Mus musculus, the common house mouse, is thought to contain about 22,000 protein-coding genes[20]. The ABA contains
263 data on about 20,000 genes in sagittal sections, out of which over 4,000 genes are also measured in coronal sections. Our
264 dataset is derived from only the coronal subset of the ABA, because the sagittal data does not cover the entire cortex, and
265 also has greater registration error[10]. Genes were selected by the Allen Institute for coronal sectioning based on “classes
266 of known neuroscientific interest... or through post hoc identification of a marked non-ubiquitous expression pattern”[10].
267 The ABA is not the only large public spatial gene expression dataset. Other such resources include GENSAT[6],
268 GenePaint[19], its sister project GeneAtlas[3], BGEM[9], EMAGE[18], EurExpress7, EADHB8, MAMEP9, Xenbase10,
269 ZFIN[13], Aniseed11, VisiGene12, GEISHA[2], Fruitfly.org[16], COMPARE13, GXD[12], GEO[1]14. With the exception of
270 the ABA, GenePaint, and EMAGE, most of these resources have not (yet) extracted the expression intensity from the ISH
271 images and registered the results into a single 3-D space, and to our knowledge only ABA and EMAGE make this form of
272 data available for public download from the website15. Many of these resources focus on developmental gene expression.
273 Significance
274 The method developed in aim (1) will be applied to each cortical area to find a set of marker genes such that the
275 combinatorial expression pattern of those genes uniquely picks out the target area. Finding marker genes will be useful for
276 drug discovery as well as for experimentation because marker genes can be used to design interventions which selectively
277 target individual cortical areas.
278 The application of the marker gene finding algorithm to the cortex will also support the development of new neuroanatom-
279 ical methods. In addition to finding markers for each individual cortical area, we will find a small panel of genes that can
280 find many of the areal boundaries at once. This panel of marker genes will support the development of an ISH protocol that
281 will allow experimenters to more easily identify which anatomical areas are present in small samples of cortex.
282 The method developed in aim (2) will provide a genoarchitectonic viewpoint that will contribute to the creation of
283 a better map. The development of present-day cortical maps was driven by the application of histological stains. It is
284 conceivable that if a different set of stains had been available which identified a different set of features, then today’s
285 cortical maps would have come out differently. Since the number of classes of stains is small compared to the number of
286 genes, it is likely that there are many repeated, salient spatial patterns in the gene expression which have not yet been
287 _________________________________________
288 7http://www.eurexpress.org/ee/; EurExpress data is also entered into EMAGE
289 8http://www.ncl.ac.uk/ihg/EADHB/database/EADHB_database.html
290 9http://mamep.molgen.mpg.de/index.php
291 10http://xenbase.org/
292 11http://aniseed-ibdm.univ-mrs.fr/
293 12http://genome.ucsc.edu/cgi-bin/hgVisiGene ; includes data from some of the other listed data sources
294 13http://compare.ibdml.univ-mrs.fr/
295 14GXD and GEO contain spatial data but also non-spatial data. All GXD spatial data are also in EMAGE.
296 15without prior offline registration
297 captured by any stain. Therefore, current ideas about cortical anatomy need to incorporate what we can learn from looking
298 at the patterns of gene expression.
299 While we do not here propose to analyze human gene expression data, it is conceivable that the methods we propose to
300 develop could be used to suggest modifications to the human cortical map as well.
301 Related work
302 [10 ] describes the application of AGEA to the cortex. The paper describes interesting results on the structure of correlations
303 between voxel gene expression profiles within a handful of cortical areas. However, this sort of analysis is not related to either
304 of our aims, as it neither finds marker genes, nor does it suggest a cortical map based on gene expression data. Neither of
305 the other components of AGEA can be applied to cortical areas; AGEA’s Gene Finder cannot be used to find marker genes
306 for the cortical areas; and AGEA’s hierarchical clustering does not produce clusters corresponding to the cortical areas16.
307 In summary, for all three aims, (a) only one of the previous projects explores combinations of marker genes, (b) there has
308 been almost no comparison of different algorithms or scoring methods, and (c) there has been no work on computationally
309 finding marker genes for cortical areas, or on finding a hierarchical clustering that will yield a map of cortical areas de novo
310 from gene expression data.
311 Our project is guided by a concrete application with a well-specified criterion of success (how well we can find marker
312 genes for / reproduce the layout of cortical areas), which will provide a solid basis for comparing different methods.
313 _________________________________________
314 16In both cases, the root cause is that pairwise correlations between the gene expression of voxels in different areas but the same layer are
315 often stronger than pairwise correlations between the gene expression of voxels in different layers but the same area. Therefore, a pairwise voxel
316 correlation clustering algorithm will tend to create clusters representing cortical layers, not areas. This is why the hierarchical clustering does not
317 find most cortical areas (there are clusters which presumably correspond to the intersection of a layer and an area, but since one area will have
318 many layer-area intersection clusters, further work is needed to make sense of these). The reason that Gene Finder cannot find marker genes for
319 most cortical areas is that in Gene Finder, although the user chooses a seed voxel, Gene Finder chooses the ROI for which genes will be found,
320 and it creates that ROI by (pairwise voxel correlation) clustering around the seed.
323 Figure 1: Gene Pitx2 is selectively underexpressed in area SS (somatosensory).
324 Preliminary work
325 Format conversion between SEV, MATLAB, NIFTI
326 We have created software to (politely) download all of the SEV files from the Allen Institute website. We have also created
327 software to convert between the SEV, MATLAB, and NIFTI file formats, as well as some of Caret’s file formats.
328 Flatmap of cortex
329 We downloaded the ABA data and applied a mask to select only those voxels which belong to cerebral cortex. We divided
330 the cortex into hemispheres.
331 Using Caret[5], we created a mesh representation of the surface of the selected voxels. For each gene, for each node of
332 the mesh, we calculated an average of the gene expression of the voxels “underneath” that mesh node. We then flattened
333 the cortex, creating a two-dimensional mesh.
334 We sampled the nodes of the irregular, flat mesh in order to create a regular grid of pixel values. We converted this grid
335 into a MATLAB matrix.
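The resampling step from the irregular flat mesh to a regular pixel grid is standard scattered-data interpolation; a sketch (SciPy; node coordinates and values are placeholders standing in for the flattened Caret mesh):

```python
import numpy as np
from scipy.interpolate import griddata

# node_xy: (n_nodes, 2) flattened-mesh coordinates; node_vals: expression at each node
node_xy = np.random.rand(5000, 2)
node_vals = np.random.rand(5000)

# regular grid of pixel centers covering the flat map
xi, yi = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

# interpolate node values onto the grid; pixels outside the mesh become NaN
pixel_grid = griddata(node_xy, node_vals, (xi, yi), method='linear')
```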
336 We manually traced the boundaries of each cortical area from the ABA coronal reference atlas slides. We then converted
337 these manual traces into Caret-format regional boundary data on the mesh surface. We projected the regions onto the 2-D
338 mesh, and then onto the grid, and then we converted the region data into MATLAB format.
339 At this point, the data is in the form of a number of 2-D matrices, all in registration, with the matrix entries representing
340 a grid of points (pixels) over the cortical surface:
341 ∙A 2-D matrix whose entries represent the regional label associated with each surface pixel
342 ∙For each gene, a 2-D matrix whose entries represent the average expression level underneath each surface pixel
343 We created a normalized version of the gene expression data by subtracting each gene’s mean expression level (over all
344 surface pixels) and dividing each gene by its standard deviation.
345 The features and the target area are both functions on the surface pixels. They can be referred to as scalar fields over
346 the space of surface pixels; alternatively, they can be thought of as images which can be displayed on the flatmapped surface.
347 To move beyond a single average expression level for each surface pixel, we plan to create a separate matrix for each
348 cortical layer to represent the average expression level within that layer. Cortical layers are found at different depths in
349 different parts of the cortex. In preparation for extracting the layer-specific datasets, we have extended Caret with routines
350 that allow the depth of the ROI for volume-to-surface projection to vary.
351 In the Research Plan, we describe how we will automatically locate the layer depths. For validation, we have manually
352 demarcated the depth of the outer boundary of cortical layer 5 throughout the cortex.
353 Feature selection and scoring methods
354 Underexpression of a gene can serve as a marker Underexpression of a gene can sometimes serve as a marker. See,
355 for example, Figure 1.
356 Correlation Recall that the instances are surface pixels, and consider the problem of attempting to classify each instance
357 as either a member of a particular anatomical area, or not. The target area can be represented as a boolean mask over the
358 surface pixels.
359 One class of feature selection scoring methods comprises those which calculate some sort of “match” between each gene image
360 and the target image. Those genes which match the best are good candidates for features.
361 One of the simplest methods in this class is to use correlation as the match score. We calculated the correlation between
362 each gene and each cortical area. The top row of Figure 2 shows the three genes most correlated with area SS.
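A sketch of this ranking (NumPy; array names are illustrative): each gene's flattened expression image is correlated with the boolean target mask, and genes are sorted by that score.

```python
import numpy as np

def rank_genes_by_correlation(gene_images, target_mask):
    """gene_images: (n_genes, n_pixels); target_mask: (n_pixels,) boolean."""
    target = target_mask.astype(float)
    scores = np.array([np.corrcoef(img, target)[0, 1] for img in gene_images])
    order = np.argsort(scores)[::-1]        # highest correlation first
    return order, scores[order]
```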
366 Figure 2: Top row: Genes Nfic, A930001M12Rik, C130038G02Rik are the most correlated with area SS (somatosensory
367 cortex). Bottom row: Genes C130038G02Rik, Cacna1i, Car10 are those with the best fit using logistic regression. Within
368 each picture, the vertical axis roughly corresponds to anterior at the top and posterior at the bottom, and the horizontal
369 axis roughly corresponds to medial at the left and lateral at the right. The red outline is the boundary of region MO. Pixels
370 are colored according to correlation, with red meaning high correlation and blue meaning low.
371 Conditional entropy An information-theoretic scoring method is to find features such that, if the features (gene
372 expression levels) are known, uncertainty about the target (the regional identity) is reduced. Entropy measures uncertainty,
373 so what we want is to find features such that the conditional distribution of the target has minimal entropy. The distribution
374 to which we are referring is the probability distribution over the population of surface pixels.
375 The simplest way to use information theory is on discrete data, so we discretized our gene expression data by creating,
376 for each gene, five thresholded boolean masks of the gene data. For each gene, we created a boolean mask of its expression
377 levels using each of these thresholds: the mean of that gene, the mean minus one standard deviation, the mean minus two
378 standard deviations, the mean plus one standard deviation, the mean plus two standard deviations.
379 Now, for each region, we created and ran a forward stepwise procedure which attempted to find pairs of gene expression
380 boolean masks such that the conditional entropy of the target area’s boolean mask, conditioned upon the pair of gene
381 expression boolean masks, is minimized.
382 This finds pairs of genes which are most informative (at least at these discretization thresholds) relative to the question,
383 “Is this surface pixel a member of the target area?”. Its advantage over linear methods such as logistic regression is that it
384 takes account of arbitrarily nonlinear relationships; for example, if the XOR of two variables predicts the target, conditional
385 entropy would notice, whereas linear methods would not.
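A minimal sketch of the entropy computation behind this procedure (NumPy; `gene_masks` is a hypothetical list of the thresholded boolean images described above, and the pair search shown is the simplest forward-stepwise variant):

```python
import numpy as np
from itertools import product

def conditional_entropy(target, masks):
    """H(target | masks), with target and each mask a boolean array over pixels."""
    n = len(target)
    h = 0.0
    for combo in product([False, True], repeat=len(masks)):
        sel = np.ones(n, dtype=bool)
        for m, v in zip(masks, combo):
            sel &= (m == v)
        p_combo = sel.sum() / n
        if p_combo == 0:
            continue
        p = target[sel].mean()              # P(in target area | this mask combination)
        for q in (p, 1.0 - p):
            if q > 0:
                h -= p_combo * q * np.log2(q)
    return h

def best_pair(target, gene_masks):
    """Pick the best single mask, then the best partner for it."""
    idx = range(len(gene_masks))
    first = min(idx, key=lambda i: conditional_entropy(target, [gene_masks[i]]))
    second = min((i for i in idx if i != first),
                 key=lambda i: conditional_entropy(target, [gene_masks[first], gene_masks[i]]))
    return first, second
```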
386 Gradient similarity We noticed that the previous two scoring methods, which are pointwise, often found genes whose
387 pattern of expression did not look similar in shape to the target region. For this reason we designed a non-pointwise local
388 scoring method to detect when a gene had a pattern of expression which looked like it had a boundary whose shape is similar
389 to the shape of the target region. We call this scoring method “gradient similarity”.
390 One might say that gradient similarity attempts to measure how much the border of the area of gene expression and
391 the border of the target region overlap. However, since gene expression falls off continuously rather than jumping from its
392 maximum value to zero, the spatial pattern of a gene’s expression often does not have a discrete border. Therefore, instead
393 of looking for a discrete border, we look for large gradients. Gradient similarity is a symmetric function over two images
394 (i.e. two scalar fields). It is high to the extent that matching pixels which have large values and large gradients also have
395 gradients which are oriented in a similar direction. The formula is:
\[
\sum_{\text{pixel} \in \text{pixels}} \cos\left(\left|\angle\nabla_1 - \angle\nabla_2\right|\right) \cdot \frac{|\nabla_1| + |\nabla_2|}{2} \cdot \frac{\text{pixel\_value}_1 + \text{pixel\_value}_2}{2}
\]
where ∇1 and ∇2 are the gradient vectors of the two images at the current pixel; ∠∇i is the angle of the gradient of
image i at the current pixel; |∇i| is the magnitude of the gradient of image i at the current pixel; and pixel_valuei is the
value of the current pixel in image i.
403 The intuition is that we want to see if the borders of the pattern in the two images are similar; if the borders are similar,
404 then both images will have corresponding pixels with large gradients (because this is a border) which are oriented in a
405 similar direction (because the borders are similar).
406 Most of the genes in Figure 4 were identified via gradient similarity.
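The formula above translates directly into code; a sketch (NumPy, with np.gradient supplying the discrete spatial gradients; the two float images are assumed to be registered and of equal shape):

```python
import numpy as np

def gradient_similarity(img1, img2):
    """Gradient similarity between two 2-D scalar fields of equal shape."""
    gy1, gx1 = np.gradient(img1)
    gy2, gx2 = np.gradient(img2)
    mag1, mag2 = np.hypot(gx1, gy1), np.hypot(gx2, gy2)
    ang1, ang2 = np.arctan2(gy1, gx1), np.arctan2(gy2, gx2)
    per_pixel = (np.cos(np.abs(ang1 - ang2))      # agreement of gradient direction
                 * (mag1 + mag2) / 2.0            # mean gradient magnitude
                 * (img1 + img2) / 2.0)           # mean pixel value
    return per_pixel.sum()
```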
407 Gradient similarity provides information complementary to correlation
411 Figure 3: The top row shows the three genes which (individually) best predict area AUD, according to logistic regression.
412 The bottom row shows the three genes which (individually) best match area AUD, according to gradient similarity. From
413 left to right and top to bottom, the genes are Ssr1, Efcbp1, Aph1a, Ptk7, Aph1a again, and Lepr.
414 To show that gradient similarity can provide useful information that cannot be detected via pointwise analyses, consider
415 Fig. 3. The top row of Fig. 3 displays the 3 genes which most match area AUD, according to a pointwise method17. The
416 bottom row displays the 3 genes which most match AUD according to a method which considers local geometry18. The
417 pointwise method in the top row identifies genes which express more strongly in AUD than outside of it; its weakness is
418 that this includes many areas which don’t have a salient border matching the areal border. The geometric method identifies
419 genes whose salient expression border seems to partially line up with the border of AUD; its weakness is that this includes
420 genes which don’t express over the entire area. Genes which have high rankings using both pointwise and border criteria,
421 such as Aph1a in the example, may be particularly good markers. None of these genes are, individually, a perfect marker
422 for AUD; we deliberately chose a “difficult” area in order to better contrast pointwise with geometric methods.
423 Areas which can be identified by single genes Using gradient similarity, we have already found single genes which
424 roughly identify some areas and groupings of areas. For each of these areas, an example of a gene which roughly identifies
425 it is shown in Figure 4. We have not yet cross-verified these genes in other atlases.
426 In addition, there are a number of areas which are almost identified by single genes: COAa+NLOT (anterior part of
427 cortical amygdalar area, nucleus of the lateral olfactory tract), ENT (entorhinal), ACAv (ventral anterior cingulate), VIS
428 (visual), AUD (auditory).
429 Combinations of multiple genes are useful and necessary for some areas
430 In Figure 5, we give an example of a cortical area which is not marked by any single gene, but which can be identified
431 combinatorially.
432 Feature selection integrated with prediction As noted earlier, in general, any predictive method can be used for
433 feature selection by running it inside a stepwise wrapper. Also, some predictive methods integrate soft constraints on number
434 of features used. Examples of both of these will be seen in the section “Multivariate Predictive methods”.
435 Multivariate Predictive methods
436 Forward stepwise logistic regression As a pilot run, for five cortical areas (SS, AUD, RSP, VIS, and MO), we performed
437 forward stepwise logistic regression to find single genes, pairs of genes, and triplets of genes which predict areal identity.
438 This is an example of feature selection integrated with prediction using a stepwise wrapper. Some of the single genes found
439 were shown in various figures throughout this document, and Figure 5 shows a combination of genes which was found.
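A sketch of a stepwise wrapper of this kind (scikit-learn; the use of training-set log-loss as the selection criterion is an illustrative assumption, not necessarily the criterion used in the pilot run):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def forward_stepwise_logistic(expr, target, max_genes=3):
    """expr: (n_pixels, n_genes); target: (n_pixels,) boolean area membership."""
    selected, remaining = [], list(range(expr.shape[1]))
    for _ in range(max_genes):
        def fit_loss(g):
            X = expr[:, selected + [g]]
            model = LogisticRegression(max_iter=1000).fit(X, target)
            return log_loss(target, model.predict_proba(X))
        best = min(remaining, key=fit_loss)        # gene whose addition fits best
        selected.append(best)
        remaining.remove(best)
    return selected
```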
440 We felt that, for single genes, gradient similarity did a better job than logistic regression at capturing our subjective
441 impression of a “good gene”.
442 SVM on all genes at once
443 In order to see how well one can do when looking at all genes at once, we ran a support vector machine to classify cortical
444 _________________________________________
445 17For each gene, we fit a logistic regression in which the response variable was whether or not a surface pixel was within area AUD, and the predictor
446 variable was the value of the expression of the gene underneath that pixel. The resulting scores were used to rank the genes in terms of how well
447 they predict area AUD.
448 18For each gene, the gradient similarity between (a) a map of the expression of that gene on the cortical surface and (b) the shape of area AUD,
449 was calculated, and this was used to rank the genes.
453 Figure 4: From left to right and top to bottom, single genes which roughly identify areas SS (somatosensory primary +
454 supplemental), SSs (supplemental somatosensory), PIR (piriform), FRP (frontal pole), RSP (retrosplenial), COApm (Corti-
455 cal amygdalar, posterior part, medial zone). Grouping some areas together, we have also found genes to identify the groups
456 ACA+PL+ILA+DP+ORB+MO (anterior cingulate, prelimbic, infralimbic, dorsal peduncular, orbital, motor), posterior
457 and lateral visual (VISpm, VISpl, VISl, VISp; posteromedial, posterolateral, lateral, and primary visual; the posterior and
458 lateral visual area is distinguished from its neighbors, but not from the entire rest of the cortex). The genes are Pitx2,
459 Aldh1a2, Ppfibp1, Slco1a5, Tshz2, Trhr, Col12a1, Ets1.
462 Figure 5: Upper left: wwc1. Upper right: mtif2. Lower left: wwc1 + mtif2 (each pixel’s value on the lower left is the
463 sum of the corresponding pixels in the upper row). According to logistic regression, gene wwc1 is the best-fit single gene
464 for predicting whether or not a pixel on the cortical surface belongs to the motor area (area MO). The upper-left picture in
465 Figure 5 shows wwc1’s spatial expression pattern over the cortex. The lower-right boundary of MO is represented reasonably
466 well by this gene, however the gene overshoots the upper-left boundary. This flattened 2-D representation does not show
467 it, but the area corresponding to the overshoot is the medial surface of the cortex. MO is only found on the lateral surface.
468 Gene mtif2 is shown in the upper-right. Mtif2 captures MO’s upper-left boundary, but not its lower-right boundary. Mtif2
469 does not express very much on the medial surface. By adding together the values at each pixel in these two figures, we get
470 the lower-left image. This combination captures area MO much better than any single gene.
471 surface pixels based on their gene expression profiles. We achieved classification accuracy of about 81%19. As noted above,
472 however, a classifier that looks at all the genes at once isn’t as practically useful as a classifier that uses only a few genes.
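For orientation, the whole-genome SVM run amounts to a few lines with standard tools; a sketch (scikit-learn, with placeholder data standing in for the flatmapped expression matrix and per-pixel area labels; the kernel choice shown is illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

expr = np.random.rand(500, 200)              # (n_pixels, n_genes) placeholder data
labels = np.random.randint(0, 5, size=500)   # placeholder areal identity per pixel

scores = cross_val_score(SVC(kernel='linear'), expr, labels, cv=5)
print(scores.mean())                          # 5-fold cross-validated accuracy
```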
473 Data-driven redrawing of the cortical map
474 Raw dimensionality reduction We have applied the following dimensionality reduction algorithms to reduce the di-
475 mensionality of the gene expression profile associated with each voxel: Principal Components Analysis (PCA), Simple
476 PCA (SPCA), Multi-Dimensional Scaling (MDS), Isomap, Landmark Isomap, Laplacian eigenmaps, Local Tangent Space
477 Alignment (LTSA), Hessian locally linear embedding, Diffusion maps, Stochastic Neighbor Embedding (SNE), Stochastic
478 Proximity Embedding (SPE), Fast Maximum Variance Unfolding (FastMVU), Non-negative Matrix Factorization (NNMF).
479 todo
480 (might want to include NNMF since mentioned above)
481 Dimensionality reduction plus K-means or spectral clustering
482 Many areas are captured by clusters of genes
483 todo
484 todo
485 _________________________________________
486 19Five-fold cross-validation.
487 Research plan
488 Further work on flatmapping
489 In anatomy, the manifold of interest is usually defined either by a combination of two relevant anatomical axes (todo),
490 or by the surface of the structure (as is the case with the cortex). In the former case, the manifold of interest is a plane, but
491 in the latter case it is curved. If the manifold is curved, there are various methods for mapping the manifold into a plane.
492 In the case of the cerebral cortex, it remains to be seen which method of mapping the manifold into a plane is optimal
493 for this application. We will compare mappings which attempt to preserve size (such as the one used by Caret[5]) with
494 mappings which preserve angle (conformal maps).
495 Although there is much 2-D organization in anatomy, there are also structures whose shape is fundamentally 3-dimensional.
496 If possible, we would like the method we develop to include a statistical test that warns the user if the assumption of 2-D
497 structure seems to be wrong.
498 todo amongst other things:
499 layerfinding
500 Develop algorithms that find genetic markers for anatomical regions
501 1.Develop scoring measures for evaluating how good individual genes are at marking areas: we will compare pointwise,
502 geometric, and information-theoretic measures.
503 2.Develop a procedure to find single marker genes for anatomical regions: for each cortical area, by using or combining
504 the scoring measures developed, we will rank the genes by their ability to delineate each area.
505 3.Extend the procedure to handle difficult areas by using combinatorial coding: for areas that cannot be identified by any
506 single gene, identify them with a handful of genes. We will consider both (a) algorithms that incrementally/greedily
507 combine single gene markers into sets, such as forward stepwise regression and decision trees, and also (b) supervised
508 learning techniques which use soft constraints to minimize the number of features, such as sparse support vector
509 machines.
510 4.Extend the procedure to handle difficult areas by combining or redrawing the boundaries: An area may be difficult
511 to identify because the boundaries are misdrawn, or because it does not “really” exist as a single area, at least on the
512 genetic level. We will develop extensions to our procedure which (a) detect when a difficult area could be fit if its
513 boundary were redrawn slightly, and (b) detect when a difficult area could be combined with adjacent areas to create
514 a larger area which can be fit.
515 # Linear discriminant analysis
516 Decision trees todo
517 For each cortical area, we used the C4.5 algorithm to find a pruned decision tree and ruleset for that area. We achieved
518 estimated classification accuracy of more than 99.6% on each cortical area (as evaluated on the training data without
519 cross-validation; so actual accuracy is expected to be lower). However, the resulting decision trees each made use of many
520 genes.
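Since C4.5 itself is not commonly packaged for Python, here is a rough stand-in sketch (scikit-learn's CART trees with an entropy split criterion and cost-complexity pruning; this approximates, rather than reproduces, the C4.5 run described above):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

expr = np.random.rand(500, 50)          # (n_pixels, n_genes) placeholder data
in_area = np.random.rand(500) > 0.8     # placeholder membership in one cortical area

tree = DecisionTreeClassifier(criterion='entropy', ccp_alpha=0.01).fit(expr, in_area)
print(tree.score(expr, in_area))   # training-set accuracy (optimistic, as noted above)
print(export_text(tree))           # the expression thresholds the tree actually uses
```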
521 Apply these algorithms to the cortex
522 1.Create open source format conversion tools: we will create tools to bulk download the ABA dataset and to convert
523 between SEV, NIFTI and MATLAB formats.
524 2.Flatmap the ABA cortex data: map the ABA data onto a plane and draw the cortical area boundaries onto it.
525 3.Find layer boundaries: cluster similar voxels together in order to automatically find the cortical layer boundaries.
526 4.Run the procedures that we developed on the cortex: we will present, for each area, a short list of markers to identify
527 that area; and we will also present lists of “panels” of genes that can be used to delineate many areas at once.
528 Develop algorithms to suggest a division of a structure into anatomical parts
529 # mixture models, etc
530 1.Explore dimensionality reduction algorithms applied to pixels: including TODO
531 2.Explore dimensionality reduction algorithms applied to genes: including TODO
532 3.Explore clustering algorithms applied to pixels: including TODO
533 4.Explore clustering algorithms applied to genes: including gene shaving, TODO
534 5.Develop an algorithm to use dimensionality reduction and/or hierarchical clustering to create anatomical maps
535 6.Run this algorithm on the cortex: present a hierarchical, genoarchitectonic map of the cortex
536 # Linear discriminant analysis
537 # jbt, coclustering
538 # self-organizing map
539 # confirm with EMAGE, GeneAtlas, GENSAT, etc, to fight overfitting
540 # compare using clustering scores
541 # multivariate gradient similarity
542 Bibliography & References Cited
543 [1]Tanya Barrett, Dennis B. Troup, Stephen E. Wilhite, Pierre Ledoux, Dmitry Rudnev, Carlos Evangelista, Irene F.
544 Kim, Alexandra Soboleva, Maxim Tomashevsky, and Ron Edgar. NCBI GEO: mining tens of millions of expression
545 profiles–database and tools update. Nucl. Acids Res., 35(suppl_1):D760–765, 2007.
546 [2]George W. Bell, Tatiana A. Yatskievych, and Parker B. Antin. GEISHA, a whole-mount in situ hybridization gene
547 expression screen in chicken embryos. Developmental Dynamics, 229(3):677–687, 2004.
548 [3]James P Carson, Tao Ju, Hui-Chen Lu, Christina Thaller, Mei Xu, Sarah L Pallas, Michael C Crair, Joe Warren, Wah
549 Chiu, and Gregor Eichele. A digital atlas to characterize the mouse brain transcriptome. PLoS Comput Biol, 1(4):e41,
550 2005.
551 [4]Mark H. Chin, Alex B. Geng, Arshad H. Khan, Wei-Jun Qian, Vladislav A. Petyuk, Jyl Boline, Shawn Levy, Arthur W.
552 Toga, Richard D. Smith, Richard M. Leahy, and Desmond J. Smith. A genome-scale map of expression for a mouse
553 brain section obtained using voxelation. Physiol. Genomics, 30(3):313–321, August 2007.
554 [5]D C Van Essen, H A Drury, J Dickson, J Harwell, D Hanlon, and C H Anderson. An integrated software suite for surface-
555 based analyses of cerebral cortex. Journal of the American Medical Informatics Association: JAMIA, 8(5):443–59, 2001.
556 PMID: 11522765.
557 [6]Shiaoching Gong, Chen Zheng, Martin L. Doughty, Kasia Losos, Nicholas Didkovsky, Uta B. Schambra, Norma J.
558 Nowak, Alexandra Joyner, Gabrielle Leblanc, Mary E. Hatten, and Nathaniel Heintz. A gene expression atlas of the
559 central nervous system based on bacterial artificial chromosomes. Nature, 425(6961):917–925, October 2003.
560 [7]Jano Hemert and Richard Baldock. Matching Spatial Regions with Combinations of Interacting Gene Expression Pat-
561 terns, volume 13 of Communications in Computer and Information Science, pages 347–361. Springer Berlin Heidelberg,
562 2008.
563 [8]Erh-Fang Lee, Jyl Boline, and Arthur W. Toga. A High-Resolution anatomical framework of the neonatal mouse brain
564 for managing gene expression data. Frontiers in Neuroinformatics, 1:6, 2007. PMC2525996.
565 [9]Susan Magdaleno, Patricia Jensen, Craig L. Brumwell, Anna Seal, Karen Lehman, Andrew Asbury, Tony Cheung,
566 Tommie Cornelius, Diana M. Batten, Christopher Eden, Shannon M. Norland, Dennis S. Rice, Nilesh Dosooye, Sundeep
567 Shakya, Perdeep Mehta, and Tom Curran. BGEM: an in situ hybridization database of gene expression in the embryonic
568 and adult mouse nervous system. PLoS Biology, 4(4):e86, April 2006.
569 [10]Lydia Ng, Amy Bernard, Chris Lau, Caroline C Overly, Hong-Wei Dong, Chihchau Kuan, Sayan Pathak, Susan M
570 Sunkin, Chinh Dang, Jason W Bohland, Hemant Bokil, Partha P Mitra, Luis Puelles, John Hohmann, David J Anderson,
571 Ed S Lein, Allan R Jones, and Michael Hawrylycz. An anatomic gene expression atlas of the adult mouse brain. Nat
572 Neurosci, 12(3):356–362, March 2009.
573 [11]George Paxinos and Keith B.J. Franklin. The Mouse Brain in Stereotaxic Coordinates. Academic Press, 2 edition, July
574 2001.
575 [12]Constance M. Smith, Jacqueline H. Finger, Terry F. Hayamizu, Ingeborg J. McCright, Janan T. Eppig, James A.
576 Kadin, Joel E. Richardson, and Martin Ringwald. The mouse gene expression database (GXD): 2007 update. Nucl.
577 Acids Res., 35(suppl_1):D618–623, 2007.
578 [13]Judy Sprague, Leyla Bayraktaroglu, Dave Clements, Tom Conlin, David Fashena, Ken Frazer, Melissa Haendel, Dou-
579 glas G Howe, Prita Mani, Sridhar Ramachandran, Kevin Schaper, Erik Segerdell, Peiran Song, Brock Sprunger, Sierra
580 Taylor, Ceri E Van Slyke, and Monte Westerfield. The zebrafish information network: the zebrafish model organism
581 database. Nucleic Acids Research, 34(Database issue):D581–5, 2006. PMID: 16381936.
582 [14]Larry Swanson. Brain Maps: Structure of the Rat Brain. Academic Press, 3 edition, November 2003.
583 [15]Carol L. Thompson, Sayan D. Pathak, Andreas Jeromin, Lydia L. Ng, Cameron R. MacPherson, Marty T. Mortrud,
584 Allison Cusick, Zackery L. Riley, Susan M. Sunkin, Amy Bernard, Ralph B. Puchalski, Fred H. Gage, Allan R. Jones,
585 Vladimir B. Bajic, Michael J. Hawrylycz, and Ed S. Lein. Genomic anatomy of the hippocampus. Neuron, 60(6):1010–
586 1021, December 2008.
587 [16]Pavel Tomancak, Amy Beaton, Richard Weiszmann, Elaine Kwan, ShengQiang Shu, Suzanna E Lewis, Stephen
588 Richards, Michael Ashburner, Volker Hartenstein, Susan E Celniker, and Gerald M Rubin. Systematic determina-
589 tion of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12):research0088.1–0088.14, 2002.
590 PMC151190.
591 [17]Jano van Hemert and Richard Baldock. Mining Spatial Gene Expression Data for Association Rules, volume 4414/2007
592 of Lecture Notes in Computer Science, pages 66–76. Springer Berlin / Heidelberg, 2007.
593 [18]Shanmugasundaram Venkataraman, Peter Stevenson, Yiya Yang, Lorna Richardson, Nicholas Burton, Thomas P. Perry,
594 Paul Smith, Richard A. Baldock, Duncan R. Davidson, and Jeffrey H. Christiansen. EMAGE Edinburgh mouse atlas
595 of gene expression: 2008 update. Nucl. Acids Res., 36(suppl_1):D860–865, 2008.
596 [19]Axel Visel, Christina Thaller, and Gregor Eichele. GenePaint.org: an atlas of gene expression patterns in the mouse
597 embryo. Nucl. Acids Res., 32(suppl_1):D552–556, 2004.
598 [20]Robert H Waterston, Kerstin Lindblad-Toh, Ewan Birney, Jane Rogers, Josep F Abril, Pankaj Agarwal, Richa Agar-
599 wala, Rachel Ainscough, Marina Alexandersson, Peter An, Stylianos E Antonarakis, John Attwood, Robert Baertsch,
600 Jonathon Bailey, Karen Barlow, Stephan Beck, Eric Berry, Bruce Birren, Toby Bloom, Peer Bork, Marc Botcherby,
601 Nicolas Bray, Michael R Brent, Daniel G Brown, Stephen D Brown, Carol Bult, John Burton, Jonathan Butler,
602 Robert D Campbell, Piero Carninci, Simon Cawley, Francesca Chiaromonte, Asif T Chinwalla, Deanna M Church,
603 Michele Clamp, Christopher Clee, Francis S Collins, Lisa L Cook, Richard R Copley, Alan Coulson, Olivier Couronne,
604 James Cuff, Val Curwen, Tim Cutts, Mark Daly, Robert David, Joy Davies, Kimberly D Delehaunty, Justin Deri,
605 Emmanouil T Dermitzakis, Colin Dewey, Nicholas J Dickens, Mark Diekhans, Sheila Dodge, Inna Dubchak, Diane M
606 Dunn, Sean R Eddy, Laura Elnitski, Richard D Emes, Pallavi Eswara, Eduardo Eyras, Adam Felsenfeld, Ginger A
607 Fewell, Paul Flicek, Karen Foley, Wayne N Frankel, Lucinda A Fulton, Robert S Fulton, Terrence S Furey, Diane Gage,
608 Richard A Gibbs, Gustavo Glusman, Sante Gnerre, Nick Goldman, Leo Goodstadt, Darren Grafham, Tina A Graves,
609 Eric D Green, Simon Gregory, Roderic Guigó, Mark Guyer, Ross C Hardison, David Haussler, Yoshihide Hayashizaki,
610 LaDeana W Hillier, Angela Hinrichs, Wratko Hlavina, Timothy Holzer, Fan Hsu, Axin Hua, Tim Hubbard, Adrienne
611 Hunt, Ian Jackson, David B Jaffe, L Steven Johnson, Matthew Jones, Thomas A Jones, Ann Joy, Michael Kamal,
612 Elinor K Karlsson, Donna Karolchik, Arkadiusz Kasprzyk, Jun Kawai, Evan Keibler, Cristyn Kells, W James Kent,
613 Andrew Kirby, Diana L Kolbe, Ian Korf, Raju S Kucherlapati, Edward J Kulbokas, David Kulp, Tom Landers, J P
614 Leger, Steven Leonard, Ivica Letunic, Rosie Levine, Jia Li, Ming Li, Christine Lloyd, Susan Lucas, Bin Ma, Donna R
615 Maglott, Elaine R Mardis, Lucy Matthews, Evan Mauceli, John H Mayer, Megan McCarthy, W Richard McCombie,
616 Stuart McLaren, Kirsten McLay, John D McPherson, Jim Meldrim, Beverley Meredith, Jill P Mesirov, Webb Miller,
617 Tracie L Miner, Emmanuel Mongin, Kate T Montgomery, Michael Morgan, Richard Mott, James C Mullikin, Donna M
618 Muzny, William E Nash, Joanne O Nelson, Michael N Nhan, Robert Nicol, Zemin Ning, Chad Nusbaum, Michael J
619 O’Connor, Yasushi Okazaki, Karen Oliver, Emma Overton-Larty, Lior Pachter, Genís Parra, Kymberlie H Pepin, Jane
620 Peterson, Pavel Pevzner, Robert Plumb, Craig S Pohl, Alex Poliakov, Tracy C Ponce, Chris P Ponting, Simon Potter,
621 Michael Quail, Alexandre Reymond, Bruce A Roe, Krishna M Roskin, Edward M Rubin, Alistair G Rust, Ralph San-
622 tos, Victor Sapojnikov, Brian Schultz, Jörg Schultz, Matthias S Schwartz, Scott Schwartz, Carol Scott, Steven Seaman,
623 Steve Searle, Ted Sharpe, Andrew Sheridan, Ratna Shownkeen, Sarah Sims, Jonathan B Singer, Guy Slater, Arian
624 Smit, Douglas R Smith, Brian Spencer, Arne Stabenau, Nicole Stange-Thomann, Charles Sugnet, Mikita Suyama,
625 Glenn Tesler, Johanna Thompson, David Torrents, Evanne Trevaskis, John Tromp, Catherine Ucla, Abel Ureta-Vidal,
626 Jade P Vinson, Andrew C Von Niederhausern, Claire M Wade, Melanie Wall, Ryan J Weber, Robert B Weiss, Michael C
627 Wendl, Anthony P West, Kris Wetterstrand, Raymond Wheeler, Simon Whelan, Jamey Wierzbowski, David Willey,
628 Sophie Williams, Richard K Wilson, Eitan Winter, Kim C Worley, Dudley Wyman, Shan Yang, Shiaw-Pyng Yang,
629 Evgeny M Zdobnov, Michael C Zody, and Eric S Lander. Initial sequencing and comparative analysis of the mouse
630 genome. Nature, 420(6915):520–62, December 2002. PMID: 12466850.
632 _______________________________________________________________________________________________________
633 stuff i dunno where to put yet (there is more scattered through grant-oldtext):
634 Principle 4: Work in 2-D whenever possible
635 —
636 note:
637 two hemis