Network Working Group                                          R. Sparks
Internet-Draft                                                    Oracle
Intended status: Informational                                T. Kivinen
Expires: July 30, 2015                                     INSIDE Secure
                                                        January 26, 2015
Tracking Reviews of Documents
draft-sparks-genarea-review-tracker-00
Several review teams ensure that specific types of review are performed on Internet-Drafts as they progress towards becoming RFCs. The tools used by these teams to assign and track reviews would benefit from tighter integration with the Datatracker. This document discusses requirements for improving those tools without disrupting current workflows.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on July 30, 2015.
Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
As Internet Drafts are processed, reviews are requested from several review teams. For example, the General Area Review Team (Gen-Art) and the Security Directorate (Secdir) perform reviews of documents that are in IETF Last Call. Gen-art performs a follow-up review when the document is scheduled for an IESG telechat. These teams also perform earlier reviews of documents on demand. There are several other teams that perform similar services, often focusing on specific areas of expertise.
The secretaries of these teams manage a pool of volunteer reviewers. Documents are assigned to reviewers, taking various factors into account. For instance, a reviewer will not be assigned a document of which they are an author or shepherd. Reviewers are given a deadline, usually driven by the end of last call or a telechat date. The reviewer sends each completed review to the team's mailing list and any other lists that are relevant for the document being reviewed. Often, a thread ensues on one or more of those lists to resolve any issues found in the review.
The secretaries and reviewers from several teams are using a tool developed and maintained by Tero Kivinen. Much of its design predates the modern Datatracker. The application currently keeps its own data store and learns about documents needing review by inspecting Datatracker and tools.ietf.org pages. Most of those pages are easy to parse, but the last-call pages in particular require some effort.

Tighter integration with the Datatracker would simplify the logic used to identify documents ready for review, make it simpler for the Datatracker to associate reviews with documents, and allow users to reuse their Datatracker credentials. It would also make it easier to detect other potential review-triggering events, such as a document entering working group last call, or an RFC's standards level being changed without revising the RFC.

Tero currently believes this integration is best achieved by a new implementation of the tool. This document captures requirements for that reimplementation, with a focus on the workflows that the new implementation must take care not to disrupt. It also discusses new features, including changes suggested for the existing tool in its issue tracker [art-trac].
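Tighter integration could, for instance, replace page scraping with queries against the Datatracker's JSON interface. A minimal sketch follows; the endpoint path, filter names, and response shape are assumptions modeled on the Datatracker's generic API conventions, not details given in this document:

```python
from urllib.parse import urlencode

BASE = "https://datatracker.ietf.org/api/v1"

def lastcall_query_url(limit=50):
    """Build a query URL for documents currently in IETF Last Call.
    The endpoint and filter names here are illustrative assumptions."""
    params = urlencode({"states__slug": "lc", "limit": limit, "format": "json"})
    return f"{BASE}/doc/document/?{params}"

def ready_for_review(payload):
    """Extract (name, rev) pairs from a decoded list-style API response."""
    return [(d["name"], d["rev"]) for d in payload.get("objects", [])]

# A payload shaped like a hypothetical Datatracker list response:
sample = {"objects": [{"name": "draft-example-foo", "rev": "03"}]}
```

A tool built this way would fetch `lastcall_query_url()` on each pass and feed `ready_for_review()` output into the assignment queue, rather than parsing HTML last-call pages.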
For more information about the various review teams, see the following references:

   Gen-Art    [Gen-Art] [RFC6385]
   Secdir     [Secdir]
This section gives a high-level overview of how the review team secretaries and reviewers use the existing tool. It is not intended to be comprehensive documentation of how review teams operate. Please see the references for those details.
A team's secretary periodically (typically once a week) checks the tool for documents it has identified as ready for review. The tool compiles this list from Last Call announcements and telechat agendas.

The secretary creates a set of assignments from this list into the reviewer pool, choosing reviewers in roughly round-robin order. That order can be perturbed by several factors. Reviewers have different levels of availability: some are willing to review multiple documents a month, while others may only be willing to review a document every other month. The assignment process takes exceptional conditions, such as reviewer vacations, into account. Furthermore, secretaries are careful not to assign a document to a reviewer who is an author, shepherd, or responsible WG chair, or who has some other existing association with the document; the preference is to get a reviewer with a fresh perspective.

The secretary may discover reasons to change assignments while going through the list of documents. To avoid causing a reviewer to make a false start on a review, the secretaries complete the full list of assignments before sending notifications to anyone. This assignment process can take several minutes, and it is possible for new last calls to be issued while the secretary is making assignments. The secretary typically checks whether new documents are ready for review just before issuing the assignments, and updates the assignments if necessary. The issued assignments are sent to the review team list and are reflected in the tool.

For those teams handling different types of reviews (Last Call vs. Telechat, for example), the secretary typically processes the documents for each type of review separately, potentially with different assignment criteria. In Gen-Art, for example, the Last Call reviewer for a document will almost always get the follow-up Telechat review assignment. Similarly, Secdir assigns any re-reviews of a document to the same reviewer. Other teams may choose to assign a different reviewer.
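The rotation described above can be sketched as a filter over the reviewer pool. The field names loosely mirror the Reviewer model in the appendix (skip counters, availability, conflicts of interest), but the selection logic itself is an illustrative assumption, not the existing tool's algorithm:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional, Set

@dataclass
class PoolReviewer:
    name: str
    skip_next: int = 0                    # skip the next N assignments
    unavailable_until: Optional[date] = None
    conflicts: Set[str] = field(default_factory=set)  # docs authored/shepherded

def assign(doc, pool, today):
    """Walk the pool in its round-robin order; return the first eligible
    reviewer, consuming skip counters as reviewers are passed over."""
    for reviewer in pool:
        if doc in reviewer.conflicts:
            continue                      # never assign authors/shepherds
        if reviewer.unavailable_until and reviewer.unavailable_until > today:
            continue                      # e.g. on vacation
        if reviewer.skip_next > 0:
            reviewer.skip_next -= 1       # consume one skip
            continue
        return reviewer
    return None                           # no eligible reviewer; secretary intervenes
```

A real implementation would also honor per-reviewer frequency limits and rotate the pool after each assignment; those details are omitted here for brevity.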
Reviewers discover their assignments through the announcement to the list or by looking at their queue in the tool. (Most reviewers only check the tool when they see they have an assignment via the list). A reviewer has the opportunity to reject the assignment for any reason. The secretary will find another volunteer for any rejected assignments. The reviewer can indicate that the assignment is accepted in the tool before starting the review.
The reviewer sends a completed review to the team's list and any other lists relevant to the review. For instance, many last call reviews are also sent to the IETF general list. The teams typically have a template format for the review. Those templates usually start with a summary describing the conclusion of the review; typical summaries are "Ready for publication" or "On the right track, but has open issues". The reviewer uses the tool to indicate that the review is complete, provides the summary, and has an opportunity to provide a link to the review in the archives. (Note, however, that having to wait for the document to appear in the archive to learn which link to paste into the tool is a significant enough impediment that the reviewer often does not provide the link. The Secdir secretary manually collects these links from the list and adds them to the tool.)
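A completed review following the summary-first template convention might be assembled as below. The exact wording and section names are assumptions for illustration; each team defines its own template:

```python
def format_review(doc, rev, reviewer, summary, issues=(), nits=()):
    """Render a review in the summary-first template style described above.
    The field layout is a hypothetical example, not any team's real template."""
    lines = [
        f"Document: {doc}-{rev}",
        f"Reviewer: {reviewer}",
        f"Summary: {summary}",
        "",
    ]
    if issues:
        lines.append("Major issues:")
        lines += [f"  - {i}" for i in issues]
    if nits:
        lines.append("Nits:")
        lines += [f"  - {n}" for n in nits]
    return "\n".join(lines)
```

The rendered text would be mailed to the team's list (and any other relevant lists), while the summary string alone is what gets recorded in the tool.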
Occasionally, a document is revised between when a review assignment is made and when the reviewer starts the review. Different teams can have different policies about whether the reviewer should review the assigned version or the current version.
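That per-team choice reduces to a one-line policy; a sketch, with the policy flag as an assumed configuration knob rather than an existing setting:

```python
def revision_to_review(assigned_rev, current_rev, review_current=True):
    """Return the revision the reviewer should read.  Teams whose policy
    pins reviews to the assigned revision set review_current=False."""
    return current_rev if review_current else assigned_rev
```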
This document discusses requirements for tools that assist review teams. These requirements do not affect the security of the Internet in any significant fashion. The tools themselves have authentication and authorization considerations (team secretaries will be able to do different things than reviewers). None of these have been identified as non-obvious.
This document has no actions for IANA.
Tero Kivinen and Henrik Levkowetz were instrumental in forming this set of requirements and in developing the initial Django models in the appendix.
[Gen-Art]   "General Area Review Team Guidelines", Work in Progress, January 2015.

[RFC6385]   Barnes, M., Doria, A., Alvestrand, H., and B. Carpenter, "General Area Review Team (Gen-ART) Experiences", RFC 6385, October 2011.

[Secdir]    "Security Directorate", Work in Progress, January 2015.

[art-trac]  "Area Review Team Tool - Active Tickets", Work in Progress, January 2015.
from django.db import models

from ietf.doc.models import Document
from ietf.person.models import Email
from ietf.group.models import Group, Role
from ietf.name.models import NameModel


class ReviewRequestStateName(NameModel):
    """Requested, Accepted, Rejected, Withdrawn, Overcome By Events,
    No Response, Completed"""

class ReviewTypeName(NameModel):
    """Early Review, Last Call, Telechat"""

class ReviewResultName(NameModel):
    """Almost ready, Has issues, Has nits, Not Ready, On the right track,
    Ready, Ready with issues, Ready with nits, Serious Issues"""

class Reviewer(models.Model):
    """These records associate reviewers with review teams and keep track
    of admin data associated with the reviewer in the particular team.
    There will be one record for each combination of reviewer and team."""
    role      = models.ForeignKey(Role)
    frequency = models.IntegerField(help_text="Can review every N days")
    available = models.DateTimeField(blank=True, null=True,
                    help_text="When will this reviewer be available again")
    filter_re = models.CharField(max_length=255, blank=True)
    skip_next = models.IntegerField(help_text="Skip the next N review assignments")

class ReviewResultSet(models.Model):
    """This table provides a way to point out a set of ReviewResultName
    entries which are valid for a given team, in order to be able to
    limit the result choices that can be set for a given review, as a
    function of which team it is related to."""
    team  = models.ForeignKey(Group)
    valid = models.ManyToManyField(ReviewResultName)

class ReviewRequest(models.Model):
    """There should be one ReviewRequest entered for each combination of
    document, rev, and reviewer."""
    # Fields filled in on the initial record creation:
    time          = models.DateTimeField(auto_now_add=True)
    type          = models.ForeignKey(ReviewTypeName)
    doc           = models.ForeignKey(Document, related_name='review_request_set')
    team          = models.ForeignKey(Group)
    deadline      = models.DateTimeField()
    requested_rev = models.CharField(verbose_name="requested_revision",
                        max_length=16, blank=True)
    state         = models.ForeignKey(ReviewRequestStateName)
    # Fields filled in as the reviewer is assigned and as the review
    # is uploaded:
    reviewer      = models.ForeignKey(Reviewer, null=True, blank=True)
    review        = models.OneToOneField(Document, null=True, blank=True)
    reviewed_rev  = models.CharField(verbose_name="reviewed_revision",
                        max_length=16, blank=True)
    result        = models.ForeignKey(ReviewResultName, null=True, blank=True)