Solving the Problem: Genome Annotation Standards before the Data Deluge

The promise of genome sequencing was that the vast undiscovered country would be mapped out by comparison of the multitude of sequences available and would aid researchers in deciphering the role of each gene in every organism. Researchers recognize the need for high quality data. However, different annotation procedures, numerous databases, and a diminishing percentage of experimentally determined gene functions have resulted in a spectrum of annotation quality. NCBI, in collaboration with sequencing centers, archival databases, and researchers, has developed the first international annotation standards, a fundamental step in ensuring that high quality complete prokaryotic genomes are available as gold standard references. Highlights include the development of annotation assessment tools, community acceptance of protein naming standards, comparison of annotation resources to provide consistent annotation, and improved tracking of the evidence used to generate a particular annotation. The development of a set of minimal standards, including the requirement that annotated complete prokaryotic genomes contain a full set of ribosomal RNAs, transfer RNAs, and proteins with core conserved functions, is a historic milestone. The use of these standards in existing genomes and future submissions will increase the quality of databases, enabling researchers to make accurate biological discoveries.


Introduction

Annotation Issues in Genome Records
Even before the first genome sequence for a cellular organism was completed in 1995, it was recognized that the functional content encoded by and annotated on nucleotide records represented both a blessing and a curse [1][2][3]. With the complete genome sequence obtained and annotated, a full understanding of the biology of an organism was thought to be within reach. However, deposition of an annotated record into the sequence archives, excepting the rare occasion when a record is updated, meant that the archival record represented a snapshot in time of both the sequence and annotation. Scientists have sought to address the annotation issue by creating curated databases, developing computational tools for the assessment of annotation, and publishing a variety of solutions in numerous papers [4,5].
Throughout the sequencing era, continuous reassessment of annotations based on new evidence led to improved annotations on a number of sequences, even though the process is recognized as being time-intensive [6,7]. With the exponential increase in sequence data, annotation updates have become increasingly unlikely events. Errors in annotation impact downstream analyses [8]. Errors that affect the location of annotated features or that result in a missed genomic feature greatly impact the evolutionary studies and biological understanding of an organism, whereas mistakes in functional annotation lead to subsequent problems in the analyses of pathways, systems, and metabolic processes. The presence of inaccurate annotation in biological databases introduces a hidden cost to researchers that is amplified by the amount of data being produced.
http://standardsingenomics.org 169

For prokaryotic organisms, as of August 10, 2010, there were 1,218 complete and more than 1,400 draft genomes that had been sequenced and released publicly. The Genome Project database and other online efforts to catalog genome sequencing initiatives list thousands of additional sequence projects that have been initiated but for which sequence data has not yet been released [9,10]. Investigators relying on the complete genome set, consisting of sequenced and closed replicon molecules and their annotations, as a gold standard are becoming increasingly affected by the size of the dataset even without having to take into account the presence of erroneous annotation [11]. As rapidly decreasing costs for next generation sequencing are producing unprecedented levels of data, and errors can easily inflate in size and propagate throughout many datasets, it is essential that steps be taken to address these issues [8,12].
A large body of literature devoted to describing annotation problems is available ([13,14] and references therein). Errors that plague genome annotations range from simple spelling mistakes that may affect a few records, to incorrectly tuned parameters in automatic annotation pipelines that can affect thousands of genes. Discrepancies can impact the genomic coordinates of a feature, the function ascribed to a feature such as the protein or gene name, or both [15]. The commonly used Gene Ontology annotations are also subject to errors [16]. As our understanding of genome biology and evolution has improved, a number of methods have been developed to assess annotation quality. Typically, several pieces of evidence are combined in order to assign confidence levels to a particular annotation or to predict new functions. In some cases these methods have led investigators to target a specific function for experimental validation after the prediction was made, a process that both validated the prediction method and provided improved, experimentally determined annotations, as in the detection of the GGDEF and EAL domains as a major part of prokaryotic regulation [17][18][19]. These methods include sequence similarity, phylogenomic or genomic context, metabolic reconstruction to determine pathway holes, comparative genomics, and in many cases a combination of all of the above (reviewed in [20]). A number of tools have been developed to predict annotations based on curated and experimental data. Curated model organism databases, and datasets for specific molecules such as transfer RNAs, ribosomal RNAs, or other noncoding RNAs, have been developed along with tools to predict their presence in a novel sequence [21][22][23][24].
Several large-scale curated databases have been created at large centers such as EBI and NCBI. NCBI initiated the Reference Sequence (RefSeq) database to create a curated non-redundant set of sequences derived from original submissions to INSDC [25]. The sequences include genomic DNA, transcripts, and proteins, and the annotations may consist of submitter-derived, curated, or computational predictions. One major resource for improving functional annotation is the NCBI Protein Clusters database (ProtClustDB), which consists of cliques of related proteins [26]. A subset of clusters is curated and utilized as a source of functional annotation in the annotation pipeline as well as to incrementally update RefSeq records (see below). RefSeq records are also updated from model organism databases such as those for E. coli K-12 or FlyBase. The UniProt Knowledgebase (UniProtKB) provided by the UniProt consortium is an expertly curated database, a central access point for integrated protein information with cross-references to multiple sources [27]. The Genome Reviews portal, which provided a comprehensively up-to-date set of genomes, has now been incorporated into Ensembl Genomes [28,29]. Ongoing collaboration between NCBI and EBI ensures that annotation will continue to be curated and improved in all databases.
RefSeq is committed to ensuring that all current and future RefSeq prokaryotic records meet the minimal standards presented in this article. However, high throughput next generation sequencing increasingly results in a large number of non-reference sequences populating the databases, with the expectation that there could eventually be tens of thousands of genomes available across all prokaryotes. Community acceptance of a set of minimal annotation standards puts the burden on all genome submitters to provide quality annotation, especially for those complete genomes that are often considered gold standard records for sequencing and annotation, such as Escherichia coli K-12 MG1655.

The Need for Standards
Standards and guidelines facilitate the submission, retrieval, exchange, and analysis of data. Both the format and the content of data can be standardized (syntactic and semantic standardization, respectively). Syntactic standardization is easier to implement and enforce; the format and representation of genomic records has long been established and is not discussed in this article. Semantic standardization is more difficult. Standardization of genomic content and annotation will facilitate analyses at the functional and systems levels; in other words, the biology will be easier to understand and to put into an evolutionary context, which will have a real impact on how researchers approach scientific studies. There has been an explosion of minimal-standards documents for a variety of genomics, bioinformatics, and transcriptomics studies. Examples include the MIAME standards established for microarray expression studies, and the MIGS standards that were created to establish the minimal metadata associated with genome sequencing projects [30,31].
There is now the Minimum Information for Biological and Biomedical Investigations (MIBBI) project, which aims to comprehensively organize and collate all of these projects, and BioDBcore, a community initiative for specifications of biological databases [32,33]. Although the rationale for standards is clear, the enforcement of standards is a complex issue that remains to be resolved [34]. Community standards that are adopted by the organizations producing, archiving, and distributing the data will facilitate their usage and enforcement. Recognizing these growing problems, the National Center for Biotechnology Information (NCBI) convened a series of three genome annotation workshops; Table 1 and the full set of links, updates, and contact information will be posted at the workshop site at NCBI [51]. Milestones from all three workshops include: 1) the E. coli CCDS project (ECCDS), 2) a publication detailing the differences between archival and curated databases, 3) a locus_tag registry, and 4) release of a set of annotation assessment tools. Specific proposals on problems of genome annotation were generated from a number of working groups and focused on the following issues: 1) standard operating procedures, 2) structured evidence, 3) structural annotation, 4) pseudogenes, 5) protein naming guidelines, 6) comparison of functional annotation, and 7) viral annotation. Several of these proposals were submitted as guidelines and standards to be approved by INSDC, while others have already been accepted. Some of the proposals include reports and data sources that are available online (Table 1). The outcomes of each are summarized below.

ECCDS
The human genome CCDS project, an active collaboration between EBI, NCBI, Sanger, and UCSC, was established to create a core set of consistently annotated protein coding genes [52]. This project has now grown to include the mouse genome, and there are considerations for expanding it to other eukaryotic organisms. Using this project as a model, the E. coli consensus CDS (ECCDS) project was established to reconcile the annotation differences for the model organism E. coli K-12 MG1655, which was first sequenced in 1997 (GenBank Accession Number U00096) [53]. An updated annotation snapshot was released in 2006, and numerous curated and archival databases contain annotation for this organism [43]. Of those, the ones actively contributing to the ECCDS project include GenBank, RefSeq, EcoGene, EcoCyc, and UniProt [25,27,54,55,56]. Consistent annotation has been established between EcoGene, GenBank, and RefSeq, with all three synchronizing the annotation several times a year. Reconciliation of this consistent annotation set with the EcoCyc and UniProtKB/Swiss-Prot databases is an ongoing process that has resulted in improved annotations in all five databases, benefiting not only E. coli researchers but also the entire field of prokaryotic genomics.

Differences between Archival and Curated Databases
Archival and curated databases serve different needs for the genomic and bioinformatics communities, but there is still confusion about the exact roles of these databases in the representation of genome sequencing data. A short article ("GenBank, RefSeq, TPA and UniProt: What's in a Name?") clarifying these issues was authored by NCBI, published in the ASM journal Microbe, and is also available online at NCBI (Table 1). The article discussed the differences between the archival databases (GenBank), curated databases such as RefSeq and UniProtKB/Swiss-Prot, and Third Party Annotation (TPA), and helped researchers to understand the exact role of each database and how sequences and annotations are handled in each. Archival databases such as GenBank contain primary submissions and redundant sequences, whereas the TPA database provides the ability for peer-reviewed and published information to be used to update the information in the primary archives. RefSeq and UniProt have been described above. These resources constitute a major part of the dataflow for the annotation, submission, retrieval, and analysis of genomic records.

Locus_tag registry
Locus_tags are systematic identifiers used for the enumeration of annotated genes, even in cases where the genes have no known function. ASM journal editors had noticed an increased use of locus_tags to refer to genes in the scientific literature, both in primary genome sequencing papers and in subsequent publications describing specific genes and functions. However, as these identifiers were assigned by individual investigators and research labs, there were increasing instances of the same locus_tag being used to describe different and unrelated genes in different organisms. Hence the utility of a unique identifier was being lost, and the use of locus_tags in a scientific article to identify particular genes was resulting in confusion. The solution was to create a locus_tag registry in conjunction with the Genome Project (soon to be BioProject [57]) database. Prefixes consisting of alphanumeric characters that meet the standards can be registered along with a genome project submission (Table 1). The assignment of a unique locus_tag prefix to each genome ensures that each gene feature in the dataset of all genome records can be correctly identified.
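As an illustration of the kind of formatting check such a registry implies, the sketch below validates prefixes against the commonly cited INSDC rules (a prefix of 3 to 12 alphanumeric characters beginning with a letter, joined to an alphanumeric suffix by an underscore); treat the exact character and length rules here as assumptions and consult the registry documentation for the authoritative requirements.

```python
import re

# Assumed rules (verify against the registry documentation): a prefix is
# 3-12 alphanumeric characters starting with a letter; a full locus_tag is
# prefix + "_" + alphanumeric suffix. Older tags that predate the registry
# (e.g., E. coli's b-numbers) may not follow this pattern.
PREFIX_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,11}$")
LOCUS_TAG_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,11}_[A-Za-z0-9]+$")

def is_valid_prefix(prefix: str) -> bool:
    """Return True if the string looks like a registrable locus_tag prefix."""
    return PREFIX_RE.match(prefix) is not None

def is_valid_locus_tag(tag: str) -> bool:
    """Return True if the string looks like a well-formed locus_tag."""
    return LOCUS_TAG_RE.match(tag) is not None
```

A registry built on such a check can guarantee prefix uniqueness simply by refusing to register a well-formed prefix twice.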

Annotation Assessment Tools
NCBI committed to producing additional annotation assessment tools to help submitters find problems with genome annotations (Table 1). These tools are used during the submission process to GenBank and in the Prokaryotic Genome Automatic Annotation Pipeline, and are also available separately. They include: 1) the Discrepancy Report, which performs internal consistency checks without the use of external databases and is available in Sequin, as part of the tbl2asn tool, or as a stand-alone command-line tool; and 2) the subcheck/frameshift tool, which incorporates sequence searches against external databases during annotation assessment in order to find potentially frameshifted genes and other annotation issues, and is available via the web or as a command-line tool. NCBI encourages submitters to utilize these tools prior to submission to aid in the identification and correction of annotation discrepancies. A new annotation report that lists quantitative annotation measures and provides comparisons with multiple organisms is also available and is detailed below.

Capturing Annotation Methods and Information Sources
The results of genome annotation processes are deposited along with sequence records in the archival databases. The combination of methods and information sources that were used in the creation of a particular genome annotation are usually detailed in a publication. With increasing numbers of genomes being deposited that do not have an associated scientific publication, it is of paramount importance that there is a process to capture the methods and databases used in creating a set of annotated features.

Standard Operating Procedures
Standard Operating Procedures (SOPs) in the context of genome annotation should: 1) document the specific processes used to generate annotations, 2) provide enough detail to replicate the process, 3) list the inputs and outputs, 4) reference any external tools, and 5) describe how the outputs of software packages are interpreted, filtered, or combined. The concept of SOPs, along with an example using the NCBI prokaryotic genome automatic annotation pipeline (PGAAP), has been detailed elsewhere [58]. The Genomic Standards Consortium (GSC), which has set forth a structured format to capture genome metadata, provides optional fields to link to an online accessible SOP via a digital object identifier (DOI) or other mechanism [31]. INSDC has agreed to adopt this structured format for genome metadata, thus providing the capability to document SOPs and link them to each genome record, with the metadata appearing in the COMMENT section. An example record with structured metadata can be found under GenBank Accession Number CP002903 (although the annotation SOP is not yet provided for this particular genome). All submitters are encouraged to use this structured format to capture genome metadata.

Structured evidence in annotation
SOPs describe the processes used to make an annotation decision, including a list of information sources, which may include sequence, structure, or domain databases, or protein family resources. Since many of these bioinformatics sources are large databases with many records, it is essential to note the exact record from which an annotation is derived, thus providing a one-to-one or many-to-one link from annotation sources to the novel predicted annotation in a new genome. The source becomes a vital reference that facilitates analysis and comparison, and the link to a particular record provides a trail through which annotation updates or problems can be addressed.
A variety of evidence or confidence-based systems are currently used. The Evidence Viewer at NCBI displays the sequences that provide evidence for the sequence of a particular gene model or mRNA [42]. The RefSeq status key provides varying levels of confidence to a particular annotation based on the level of manual review a particular annotation has received [25]. The curated Pseudomonas aeruginosa database incorporates evidence levels for functional assignments [59]. UniProt has developed an evidence attribution system which attaches an evidence tag to each data item in a UniProtKB entry identifying its source(s) and/or methods used to generate it. Users can easily identify information added during the manual curation process, imported from other databases or added by automatic annotation procedures. In addition, UniProt has developed the protein existence concept which provides the level of evidence available for the existence of a protein [27]. The Gene Ontology (GO) system provides evidence for function, component, and process and is one of the better known systems used in annotation today [60]. However, GO cannot be used for all features on a genome, nor are all genome sequencing centers and large-scale institutes routinely using GO or any of the other ontologies, and similar issues arise with all of the abovementioned evidence systems.
The INSDC flatfile is a commonly used format that provides the capability to annotate many features, such as genes, protein-binding sites, or ribosomal RNAs. For each feature there is a set of mandatory and optional qualifiers (Table 1) that provide detailed information in a structured format, for example the gene name, the protein that binds the DNA, or the ribosomal RNA product. The flatfile format is reviewed every year by the member databases, and proposed changes are discussed before acceptance.
The evidence used to annotate a particular feature can be encapsulated in two optional qualifiers, "/experiment" and "/inference". Whereas the "/experiment" qualifier provides information on the nature of the experiment used to derive the annotation of a particular feature (for example, N-terminal sequencing to determine the peptide sequence), the "/inference" qualifier provides information on the non-experimental evidence that supports the annotation. Three tokens have been proposed and accepted that further categorize the two annotation qualifiers: 1) existence, 2) coordinates, and 3) description; additionally, the experiment qualifier provides a field for a direct link to a PubMed identifier or DOI detailing the experiment where support for one of the three tokens can be found (Table 2). A combination of the three tokens can be applied to a set of qualifiers on a feature. For example, the evidence for the exact start and stop of a protein-coding region may be experimentally determined in one publication while the function is inferred from a related organism; all of the evidence and the sources used to derive each annotation can be captured with this set of qualifiers and tokens.
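As a hedged sketch of how these qualifiers and tokens might appear on a feature (the coordinates, locus_tag, product name, accession-style identifier, tool version, and PMID below are all invented placeholders, not real records), a CDS annotation could look like this in the flatfile format:

```
     CDS             1200..2150
                     /locus_tag="EXMP_00012"
                     /product="ABC transporter permease"
                     /experiment="EXISTENCE: N-terminal protein sequencing
                     [PMID: 12345678]"
                     /inference="COORDINATES: ab initio prediction:GeneMarkS:4.6"
                     /inference="DESCRIPTION: similar to AA
                     sequence:RefSeq:YP_000000.1"
```

Here the existence token is backed by an experiment with a literature link, while the coordinates and description tokens are inferred; a real record would cite the actual prediction tool and the exact database record from which the functional annotation was derived.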
This system of evidence linkage gives richer context to genome annotation, where the evidence and processes used to derive an annotation are completely traceable. RefSeq will begin implementing evidence assignments and encourages all genome researchers to do the same. Mechanisms for the search, retrieval, and subcategorization of genome records and features with different levels of evidence will be provided by the major databases.

Structural annotation and gene calling: standards and validation (reports and outcomes)
Structural annotation standards refer to the methods and parameters used to call and validate genes on a genome. Numerous research laboratories and sequencing centers utilize a variety of different annotation methods and sources, and those should be captured as noted above. Therefore, a specific set of software tools or databases was not chosen as a gold standard. Instead, a non-exhaustive set of software tools and resources that produce high quality annotations and that are publicly available is listed (Table 1) and will be available online [51]. Researchers interested in annotating genomes are encouraged to start with this list. Quantitative measures of annotation were implemented to institute a set of minimal standards. Irrespective of the methodology and datasets used to annotate a particular genome, there are certain aspects of genome biology that are expected to be present in all prokaryotes. Key functions that should be present in all genomes include a set of core genes/functions as well as the complete set of ribosomal RNAs and transfer RNAs required for protein translation [61,62]. These requirements are detailed in the minimal standards below and are expected to be found on all complete genomes. Simple statistical reporting of various genome annotation measures can also be used to assess annotation quality. For example, the distribution of protein lengths reflects evolutionary constraints, and an examination of length versus conservation showed that conserved genes tend to be longer than non-conserved genes [63]. Except for extreme cases, most prokaryotic genomes should exhibit similar genome characteristics and be within an expected distribution for each measure. Evolutionary forces that may drive a particular genome outside of an expected range of values include processes such as genome degradation in obligate intracellular endosymbionts or decreasing intergenic spacer size due to genome streamlining in ubiquitous ocean microbes [64,65].
NCBI now generates reports that allow comparison against publicly available genomes and will provide a similar report to all genome submitters in an effort to identify and correct annotation problems before a genome is publicly released (Table 1). Examples of these statistics are shown in Table 3. Two model organisms, E. coli and Bacillus subtilis, were chosen to represent well-annotated average genomes. All other genomes in the table exhibit extremes (minimum or maximum) for a particular category, and in some instances this reflects annotation that does not meet the minimum standards. In cases where a RefSeq copy of a genome was made, corrected annotations were added so that the minimum requirements were met. Comparison of selected annotation measures for all organisms is shown in Figure 1. A selected set was used in principal component analysis to find the measures that contribute the most to variation and to find clusters of annotation measures. The two physical measures are chromosome length and GC content; all other measures are annotation-derived. Length affects all annotation metrics and is one of the main drivers of annotation variance. For example, an assessment of protein and RNA counts for all genomes shows a linear increase in the number of proteins as genome size grows (Figure 1). Non-coding RNAs (ribosomal, transfer, and other non-coding RNAs such as antisense RNAs) exhibit less of a slope, and in several genomes in the INSDC archives no RNAs have been annotated at all (Figure 1A). In the complement of complete RefSeq genomes, the full set of ribosomal RNAs and tRNAs has been added, either as functional genes or as potential pseudogenes (Figure 1B). The only cases where this minimal standard could not be met were due either to issues with the sequence (sequencing or assembly) or to real biology, such as in the small compact genomes of endosymbionts.
For example, Candidatus Hodgkinia cicadicola Dsem is missing several key functional tRNAs due to codon recoding [66].
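The principal component analysis described above can be sketched as follows. The per-genome measures here are randomly generated stand-ins (not the Table 3 data), and the standardize-then-SVD approach is one common way to compute principal components, not necessarily the exact procedure used in the published analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genomes = 200

# Synthetic stand-ins for per-genome annotation measures (invented values):
# columns = [length (Mbp), GC %, proteins per Kbp, % short proteins,
#            % ATG starts, % hypothetical proteins]
measures = np.column_stack([
    rng.uniform(0.5, 10.0, n_genomes),   # chromosome length
    rng.uniform(25, 75, n_genomes),      # GC content
    rng.normal(0.9, 0.1, n_genomes),     # coding density
    rng.normal(10, 3, n_genomes),        # short-protein ratio
    rng.normal(80, 5, n_genomes),        # standard-start ratio
    rng.normal(30, 10, n_genomes),       # hypothetical-protein ratio
])

# Standardize each measure so scale differences (Mbp vs percent) do not
# dominate, then take principal components via SVD.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = z @ vt.T                 # genome coordinates in PC space

print(np.round(explained, 3))
```

On the real data, measures loading heavily on the leading components are the ones driving annotation variance, and plotting the first two columns of `scores` reveals clusters of genomes with similar annotation profiles.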
Further examination of the annotation measures across all genomes shows how other measures interact. For example, increasing coding density (more genes per Kbp) in genomes coincides with an increase in the ratio of short proteins (proteins of less than 150 amino acids/total proteins; Figure 2C). As the coding density and the ratio of short proteins increase, the average protein length decreases, a logical result, as the increased coding density is due to an increase in short overlapping predicted ORFs. A more subtle effect is that with increasing coding density the ratio of hypothetical to total proteins in the genome increases, whereas the utilization of the ATG start codon (the standard start) decreases (Figure 2D). Increasing GC content also coincides with the usage of alternative start codons such as GTG. However, increasing GC content and increasing genome length do not generally result in an increase in the hypothetical protein ratio (data not shown), suggesting that these trends are due to differences in annotation quality.
Although genome streamlining can impact these measures (for example, many genomes from the Prochlorococcus genus exhibit increased coding density), there are other factors at play [64,67,68]. This is more clearly seen when closely related genomes are compared, as in a heatmap [69]. Selected annotation measures for the gammaproteobacteria are compared in a heatmap in Figure 2. In several cases, increases or decreases in physical (length, GC content) or derived measures are due to biological causes. For example, gammaproteobacterial endosymbionts such as Buchnera spp. exhibit reduced genome size and decreased GC content [70,71]. In other cases, a particular strain or set of strains exhibits skewed annotation measures as compared to other genomes of the same species. For example, one particular Salmonella genome (Salmonella enterica subsp. enterica serovar Paratyphi B str. SPB7) exhibits an increased coding density, ratio of short proteins, and number of hypothetical proteins along with a decreased average protein length. In other cases, subclusters of a particular species are formed due to potentially erroneous annotations, such as the three Yersinia pestis genomes that cluster separately from other Y. pestis strains due to annotation skews derived from the same pipeline [72]. In still other cases, substrains do not cluster together because the annotations were derived from different annotation pipelines, as for E. coli BL21, where three isolates were sequenced and annotated by three different research groups [73]. Evolutionary events that result in altered annotations in a particular organism are significant and aid our understanding of the biology of not only that organism but also of related organisms. Annotation differences due to the utilization of different methods and sources skew these results and the conclusions drawn from them.

Figure 2 caption (panels C and D): C. Protein lengths with respect to coding density for INSDC annotations. As coding density increases (more proteins per Kbp), the average protein length decreases (blue trend line) and the ratio of short proteins increases (red trend line). D. Hypothetical proteins and start codon ratios versus coding density. The ratio of proteins named 'hypothetical protein' increases slightly as the coding density increases, whereas the standard start codon ratio decreases. Genomes where the 'hypothetical protein' ratio is at or near 1 (large blue ellipse; every protein in the genome is annotated as 'hypothetical protein') fall below the minimal annotation standards. For these cases, if a RefSeq version of the annotation existed, the functional assignment of a number of proteins was improved via curated clusters in the NCBI ProtClustDB (data not shown).

Table 3 footnotes (selected annotation measures): 3. Number of proteins annotated as 'hypothetical protein'. 4. Number of proteins per Kbp ((total number of proteins/genome length (bp)) * 1000). 5. Number of amino acids for which at least one tRNA is annotated in the genome (excluding predicted or annotated pseudo-tRNAs). 6. Percent of short proteins (number less than 150 amino acids in length/total number of proteins * 100). 7. Percent of standard starts for proteins (number of standard starts (ATG)/total starts * 100).

Researchers are encouraged to update their annotations on archival records to meet the minimal standards and to correct any annotation discrepancies. Systems are being developed at NCBI to check newly submitted genomes for compliance with the minimal standards, and reports will be provided to submitters for quality assurance. Genomic records where the minimal standards cannot be met for real biological reasons will have explanatory comments added to the record.
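The quantitative measures defined above translate directly into code. The following sketch (function names are my own, not from an NCBI tool) computes measures 4, 6, and 7 from basic per-genome counts:

```python
def proteins_per_kbp(n_proteins: int, genome_length_bp: int) -> float:
    """Measure 4: coding density as proteins per kilobase pair."""
    return n_proteins / genome_length_bp * 1000

def percent_short_proteins(lengths_aa: list[int]) -> float:
    """Measure 6: percent of proteins shorter than 150 amino acids."""
    short = sum(1 for n in lengths_aa if n < 150)
    return short / len(lengths_aa) * 100

def percent_standard_starts(start_codons: list[str]) -> float:
    """Measure 7: percent of protein-coding genes starting with ATG."""
    atg = sum(1 for c in start_codons if c.upper() == "ATG")
    return atg / len(start_codons) * 100
```

For a hypothetical 4.6 Mbp genome annotated with 4,000 proteins, `proteins_per_kbp(4000, 4_600_000)` is roughly 0.87, within the typical range for prokaryotes; values far outside the expected distribution for any measure flag a genome for closer review.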

Pseudogene Identification, Nomenclature, and Annotation
Pseudogene definitions take a variety of forms and the difficulties in properly defining and labeling pseudogenes stem from the same problem: a negative cannot be experimentally verified [74]. In eukaryotes, pseudogenes are defined as nonfunctional copies of gene fragments due to retrotransposition or genomic duplication, while in prokaryotes they result from degradation processes of either single copy or multiple copy genes either after duplication or failed horizontal transfer events [74,75]. A recent analysis of pseudogenes in Salmonella genomes suggests that they are cleared relatively rapidly from a genome indicating that their presence is a recent evolutionary event [76]. Although a clear definition of pseudogenes was not put forth, it was stressed that INSDC expects that all genome annotation should reflect the biology as determined by the underlying sequence. The INSDC feature table format provides several exceptions for cases of unusual biology but there are consequences for these unusual annotations that serve as flags in genome records (Table 3). A proposal was made to alter the pseudogene qualifier "/pseudo" to both"/pseudogene" and "/nonfunctional" as /pseudo is not considered to equate 100% to /pseudogene and that request is still being discussed by INSDC. The INSDC submission guidelines as they currently stand and the possible annotation strategies for pseudogenes, non-functional genes, and other cases are detailed in Table 4. It is essential for the research community to understand that in all cases, INSDC does not allow a translated product (protein or polypeptide chain) to be derived from a feature labeled as a pseudogene. More specifically, an instantiated peptide sequence, a product, and protein identifiers are not allowed for annotation purposes. Similarly, gene fragments (regions of similarity without valid start and stop) may not be annotated with translations. 
Exceptions to these rules require specific qualifiers that must fit specified formats and requirements.
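The rule that a feature flagged as a pseudogene may not carry a translated product lends itself to a simple pre-submission check. The sketch below operates on a generic qualifier dictionary; the qualifier names (/pseudo, /translation, /protein_id, /product) follow the INSDC feature table, but the data structure and function are hypothetical, not part of any INSDC tool.

```python
# Sketch: flag features that violate the INSDC rule that a feature
# marked with /pseudo may not have a translated product.
# Each feature is modeled as a dict of qualifier -> value; this
# structure is illustrative, not an actual feature table parser.

FORBIDDEN_WITH_PSEUDO = ("translation", "protein_id", "product")

def pseudogene_violations(features):
    problems = []
    for feat in features:
        if "pseudo" in feat:  # feature flagged as a pseudogene
            bad = [q for q in FORBIDDEN_WITH_PSEUDO if q in feat]
            if bad:
                problems.append((feat.get("locus_tag", "?"), bad))
    return problems
```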

Functional Annotation
The functional annotation effort produced guidelines on protein naming, as well as a project to compare different protein naming resources in an effort to converge on a consistent set of protein names built on common guidelines.

Functional Annotation -Protein Naming Guidelines
Establishing protein naming standards has been a keystone of various curation efforts. In particular, the protein name was recognized as the lowest common denominator of information exchange: it is what appears in BLAST definition lines, which many users treat as their sole source of information. Ontologies were discussed but were not considered a priority. Ensuring up-to-date and well-formatted protein names aids functional comparison, and reliable hypotheses can be generated from a set of consistent names, while the converse is true for badly formed names. UniProt had established publicly available naming guidelines that were modified during discussions, and a set of prokaryote-specific naming guidelines was adopted. The guidelines provide a basis for efficient and effective protein naming and are being used in the curation of both UniProt and RefSeq annotations. It is expected that all genomes submitted to INSDC will also follow these guidelines. A separate publication will detail the UniProt naming guidelines, which are currently available online (Table 1). In addition, there is a general functional naming guideline that is applicable to protein names for all organisms (Table 1).
One particular issue in protein naming is the choice of names for proteins with unknown or uncertain functional assignments. The accepted resolution is that only two synonymous names are acceptable: "hypothetical protein" or "uncharacterized protein". Names such as "conserved hypothetical protein", "novel protein", or "protein of unknown function" are no longer acceptable in genome submissions.
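In a submission pipeline, deprecated names for proteins of unknown function could be normalized to the accepted term before deposit. A minimal sketch; the set of deprecated names comes from the examples above, and the function name is hypothetical:

```python
# Sketch: map deprecated unknown-function names to the accepted term.
# Per the guidelines, only 'hypothetical protein' (or the synonymous
# 'uncharacterized protein') is acceptable; the variants below are not.
DEPRECATED_UNKNOWN_NAMES = {
    "conserved hypothetical protein",
    "novel protein",
    "protein of unknown function",
}

def normalize_protein_name(name):
    # Case-insensitive match against the deprecated variants.
    if name.strip().lower() in DEPRECATED_UNKNOWN_NAMES:
        return "hypothetical protein"
    return name
```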

Comparison of functional annotation sources
Numerous resources are used in the annotation of protein functions and names, and there are two established models for curation: either a model organism database is established for a particularly important or well-studied organism, or a set of protein families with similar function is curated. One of the earliest examples of the latter was the Clusters of Orthologous Groups (COGs) developed at NCBI, which is no longer actively curated [46]. Since that time, extensive work has been done by at least four separate groups: the TIGRFAM set of protein families produced by JCVI, with a subset identified as equivalogs sharing the same function; UniProt's High-quality Automated and Manual Annotation of microbial and chloroplast Proteomes (HAMAP); the Kyoto Encyclopedia of Genes and Genomes (KEGG) orthology groups (KO), which use NCBI Reference Sequences; and NCBI's Protein Clusters database (ProtClustDB), which includes prokaryote, viral, and selected eukaryotic organism groups [26], [46,47,49,77]. The TIGRFAMs and HAMAP projects contain only curated families, whereas KEGG and ProtClustDB contain both curated and uncurated clusters. In 2009, NCBI and JCVI collaborated on an initiative to compare the functional names derived from TIGRFAMs with NCBI's curated protein clusters; the comparison led to improvements in both databases (data not shown). A comparison of protein family annotation from all four databases is available online (Table 1). An immediate goal of this process was the establishment of a core functional set expected to be encoded in all genomes. A number of studies over the years have addressed the idea of a minimal set of essential functions for a prokaryotic organism; the exact number fluctuates depending on the set of organisms used, the criteria for determining orthology, and whether only complete proteins or domains are considered [61,62], [78].
The initial set of universal COGs, derived from proteins encoded in the 66 unicellular genomes available at that time, served as a starting point. Correspondence to the NCBI Protein Clusters database was checked, and a preliminary set of 61 functions corresponding to 191 clusters was created [26,46]. Next, all complete RefSeq genomes were checked to determine whether all core functions were encoded. For genomes where a protein could not be found, the nucleotide sequence and annotation were examined to assess whether a pseudogene or frameshifted gene corresponding to the missed function was already annotated. For cases without an annotated feature, a proper translation of the missed gene was sought, with the result that a number of core functions previously missing from the submitted genome annotation were added to the Reference Sequence record. A total of 42 protein-coding genes and translated features were added, covering 12 functional groups (Table 5). To determine whether the proteins were missed due to their smaller size, the average lengths of the proteins found in clusters corresponding to these 12 core functions were examined. Although most of these core cluster sets exhibit average lengths below the minimum of the range of average protein lengths found in all genomes (232 aa from Table 3), especially those most frequently missed such as ribosomal protein S14, most are above typical length cutoffs and should still be found by even the most rudimentary annotation pipelines. Therefore, high protein length thresholds during annotation pipeline runs cannot adequately explain all discrepancies and missed core functions. To help solve these problems, all new RefSeq genomes will be tested against the core set for missed functions, and this process will be made available both as a set of clusters and as part of existing genome analysis tools for submitters (Table 1).
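Checking a genome against the required core set reduces, at its simplest, to a set difference between the core functions and the annotated protein names. The sketch below is illustrative: the three example functions stand in for the 61-function set, and real pipelines match by cluster membership rather than exact name.

```python
# Sketch: report core functions with no corresponding annotated protein.
# CORE_FUNCTIONS is a tiny illustrative subset, not the actual
# 61-function set used in the RefSeq checks described above.
CORE_FUNCTIONS = {
    "30S ribosomal protein S14",
    "DNA-directed RNA polymerase subunit beta",
    "translation elongation factor Tu",
}

def missing_core_functions(annotated_names, core=CORE_FUNCTIONS):
    # Any core function absent from the genome's annotated names
    # is a candidate missed gene (or an unannotated pseudogene).
    return sorted(core - set(annotated_names))
```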
The core set will gradually be expanded to archaeal and bacterial sets, and then to more taxonomically restricted core functional sets such as species-level pangenomic families [79]. The core set establishes the initial basis for functional name comparison across the 61 core functions. Pairwise comparison of ProtClustDB clusters with the other protein family sources shows two things: 1) a number of protein family resources are missing curated core functions, or these families mapped below threshold levels; and 2) there are substantially higher numbers of identically curated protein names in two- and three-way comparisons. All four databases have agreed to resolve differences and to work to incorporate the UniProt guidelines into the curated functional names. As these resources are heavily used in genome annotation pipelines, improvements to these records will improve annotations in many genomes and set a standard for other resources. Additional protein family resources are encouraged to be included if they agree to the same goals and are welcome to contact us. InterPro, for example, is another database that integrates information from a variety of source databases, and its ongoing effort was acknowledged at the workshop [80].
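Pairwise name agreement between two resources can be estimated by joining their family-to-name mappings on a shared key. This is a sketch under simplifying assumptions: in practice, mapping protein families across databases requires threshold-based sequence matching, which is elided here.

```python
# Sketch: count identically curated names for families present in
# both resources, as in the pairwise comparisons described above.
# Inputs are dicts mapping a shared family key -> curated name.
def name_agreement(names_a, names_b):
    shared = names_a.keys() & names_b.keys()
    identical = sum(names_a[k] == names_b[k] for k in shared)
    return identical, len(shared)
```

Running this for each pair (or triple) of resources highlights families whose curated names still need to be reconciled.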

Viral/phage annotation standards
Viral annotation standards were discussed for the first time at the 2010 annotation workshop. A set of proposals was published separately and synthesizes many of the ideas presented above with respect to issues of annotation, capturing experimental data, meta-data, and genome classification, all in the context of viral genomes [81].

Conclusions
These guidelines provide mechanisms for individual researchers studying a single genome, as well as those doing high-throughput sequencing, to ensure that high quality annotation is produced, submitted to, and available from the sequence archives. Mechanisms are in place to capture annotation methodologies and evidence, and, in conjunction with standards developed by other international bodies where meta-data submission has been defined, they provide a rich and understandable way to determine exactly how annotation was produced. Standard protein naming guidelines, and projects to compare and update protein naming resources, will result in higher quality annotation resources and protein names in submitted genomes. A major goal of setting minimal standards for the annotation and submission of gold standard complete genomes was achieved; this will elevate that set of fundamentally important resources for all researchers, ensuring that those studying basic biological processes, epidemiological outbreaks, and large-scale metagenomic projects have a high quality resource to draw from when making hypotheses and drawing inferences (Table 6). Although not all issues were resolved, and many more remain to be addressed at future workshops, these initial guidelines provide a blueprint for a way forward to resolving these issues, and we recognize that many others are working towards similar or parallel goals. One such project is the COMBREX initiative to establish a gold standard set of functionally annotated proteins, as well as a source of predictions against which functions can be tested [82]. If complete genomes are to be efficiently utilized as reference genomes, it is essential that they represent the highest quality annotation possible. Although this document specifically listed efforts by NCBI to provide resources and tools to improve annotation, NCBI recognizes the ongoing work to improve annotation by all of the organizations that attended and contributed to all workshops.

Minimal standards:
a. a full set of ribosomal RNAs
b. a set of tRNAs (at least one for each amino acid)
c. protein-coding genes at expected density (not all named 'hypothetical protein' and all core genes annotated)

Annotations should follow INSDC submission guidelines:
Annotation standards should follow the feature table format and submission guidelines (GenBank/ENA/DDBJ; see Table 1):
a. prior to genome submission, a submitted BioProject record with a registered locus_tag prefix is required, and the genome record should contain the BioProject ID; all proper features should have genes and locus_tags
b. the genome submission should be valid according to the feature table documentation and follow the standards

Methodologies and SOPs (Standard Operating Procedures):
Information about SOPs and additional meta-data can be provided in a structured comment, with more specific information about experimental or inference support provided on annotated features (see Table 2).

Exceptions:
Exceptions (unusual annotations, or annotations not within expected ranges; see Table 1) should be documented on the genome record, and strong supporting evidence should be provided.

Pseudogenes:
Annotated pseudogenes should follow the accepted formats (see Table 4).

Additional/enriched annotations:
Additional (enriched) annotations should follow INSDC guidelines, and be documented as above (SOPs and evidence).

Catalog of reputable annotation guidelines, software, and pipelines:
This non-exhaustive list of reliable software, sources, and databases for the production of microbial genome annotation is a useful community resource that aids in producing high quality genome annotation (Table 1).

Validation checks and annotation measures:
Validation checks should be done prior to the submission of a new genome record. NCBI has already provided numerous tools to validate and ensure the correctness of annotation, and additional checks and reports will be put in place to ensure the minimal standards are met (see Table 1).
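A pre-submission check along these lines might combine the minimal standards into a single report. The sketch below is hypothetical: the interface and inputs are assumptions, though the criteria (a full rRNA set, at least one tRNA per amino acid, and not every protein named 'hypothetical protein') follow the standards summarized above, with 5S, 16S, and 23S as the prokaryotic rRNAs.

```python
# Sketch: check a genome summary against the minimal standards.
# Inputs: the set of annotated rRNA types, the set of amino acids
# covered by annotated tRNAs, and the list of protein names.
STANDARD_AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 one-letter codes

def check_minimal_standards(rRNAs, tRNA_amino_acids, protein_names):
    report = {}
    # a. a full set of ribosomal RNAs (5S, 16S, 23S for prokaryotes)
    report["rRNAs_complete"] = {"5S", "16S", "23S"}.issubset(rRNAs)
    # b. at least one tRNA for each of the 20 amino acids
    report["tRNAs_complete"] = STANDARD_AMINO_ACIDS.issubset(tRNA_amino_acids)
    # c. not every protein annotated as 'hypothetical protein'
    report["not_all_hypothetical"] = any(
        n != "hypothetical protein" for n in protein_names
    )
    report["passes"] = all(report.values())
    return report
```

Genomes failing any check would receive a report for correction, or an explanatory comment where the failure reflects real biology.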