Network Working Group                                         R. Gellens
Internet-Draft                                Core Technology Consulting
Intended status: Standards Track                           July 21, 2016
Expires: January 22, 2017
Negotiating Human Language in Real-Time Communications
draft-ietf-slim-negotiating-human-language-03
Users have various human (natural) language needs, abilities, and preferences regarding spoken, written, and signed languages. When establishing interactive communication ("calls"), there needs to be a way to negotiate (communicate and match) the caller's language and media needs with the capabilities of the called party. This is especially important for emergency calls, where the call can be handled by a call taker capable of communicating with the user, or a translator or relay operator can be bridged into the call during setup. It applies to non-emergency calls as well (for example, calls to a company call center).
This document describes the need and a solution using new SDP stream attributes.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 22, 2017.
Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
A mutually comprehensible language is helpful for human communication. This document addresses the real-time, interactive side of the issue. A companion document on language selection in email [I-D.ietf-slim-multilangcontent] addresses the non-real-time side.
When setting up interactive communication sessions (using SIP or other protocols), human (natural) language and media modality (voice, video, text) negotiation may be needed. Unless the caller and callee know each other, or there is contextual or out-of-band information from which the language(s) and media modalities can be determined, spoken, signed, or written languages need to be negotiated based on the caller's needs and the callee's capabilities. This need applies to both emergency and non-emergency calls. For various reasons, including the ability to establish multiple streams using different media (e.g., voice, text, video), it makes sense to use a per-stream negotiation mechanism, in this case, SDP.
This approach has a number of benefits, including that it is generic (it applies to all interactive communications negotiated using SDP) and not limited to emergency calls. In some cases such a facility isn't needed, because the language is known from the context (such as when a caller places a call to a sign language relay center, or to a friend or colleague). But it is clearly useful in many other cases. For example, someone calling a company call center or a Public Safety Answering Point (PSAP) should be able to indicate if one or more specific signed, written, and/or spoken languages are preferred, the callee should be able to indicate its capabilities in this area, and the call can then proceed using the language(s) and media forms in common.
Since this is a protocol mechanism, the user equipment (UE client) needs to know the user's preferred languages; a reasonable technique could include a configuration mechanism with a default of the language of the user interface. In some cases, a UE could tie language and media preferences, such as a preference for a video stream using a signed language and/or a text or audio stream using a written/spoken language.
Including the user's human (natural) language preferences in the session establishment negotiation is independent of the use of a relay service and is transparent to a voice service provider. For example, assume a user within the United States who speaks Spanish but not English places a voice call. The call could be an emergency call, or perhaps a call to an airline reservation desk. The language information is transparent to the voice service provider, but is part of the session negotiation between the UE and the terminating entity. In the case of a call to, e.g., an airline, the call could be automatically handled by a Spanish-speaking agent. In the case of an emergency call, the Emergency Services IP network (ESInet) and the PSAP may choose to take the language and media preferences into account when determining how to process the call.
By treating language as another attribute that is negotiated along with other aspects of a media stream, it becomes possible to accommodate a range of users' needs and called party facilities. For example, some users may be able to speak several languages, but have a preference. Some called parties may support some of those languages internally but require the use of a translation service for others, or may have a limited number of call takers able to use certain languages. Another example would be a user who is able to speak but is deaf or hard-of-hearing and requires a voice stream plus a text stream (known as voice carry over). Making language a media attribute allows the standard session negotiation mechanism to handle this by providing the information and mechanism for the endpoints to make appropriate decisions.
Regarding relay services, in the case of an emergency call requiring sign language such as ASL, there are two common approaches: the caller initiates the call to a relay center, or the caller places the call to emergency services (e.g., 911 in the U.S. or 112 in Europe). (In a variant of the second case, the voice service provider invokes a relay service as well as emergency services.) In the former case, the language need is ancillary and supplemental. In the non-variant second case, the ESInet and/or PSAP may take the need for sign language into account and bridge in a relay center. In this case, the ESInet and PSAP have all the standard information available (such as location) but are able to bridge the relay sooner in the call processing.
By making this facility part of the end-to-end negotiation, the question of which entity provides or engages the relay service becomes separate from the call processing mechanics; if the caller directs the call to a relay service then the human language negotiation facility provides extra information to the relay service but calls will still function without it; if the caller directs the call to emergency services, then the ESInet/PSAP are able to take the user's human language needs into account, e.g., by assigning to a specific queue or call taker or bridging in a relay service or translator.
The term "negotiation" is used here rather than "indication" because human language (spoken/written/signed) is something that can be negotiated in the same way as which forms of media (audio/text/video) or which codecs. For example, if we think of non-emergency calls, such as a user calling an airline reservation center, the user may have a set of languages he or she speaks, with perhaps preferences for one or a few, while the airline reservation center will support a fixed set of languages. Negotiation should select the user's most preferred language that is supported by the call center. Both sides should be aware of which language was negotiated. This is conceptually similar to the way other aspects of each media stream are negotiated using SDP (e.g., media type and codecs).
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
This facility may be used by NENA and 3GPP. NENA has already referenced it in NENA 08-01 (i3 Stage 3 version 2) in describing attributes of calls presented to an ESInet, and may add further details in that or other documents. 3GPP may reference this mechanism in general call handling and emergency call handling. Some change requests (CRs) introduced in 3GPP SA1 have anticipated this functionality being provided within SDP.
The desired solution is a media attribute (preferably per direction) that may be used within an offer to indicate the preferred language of each (direction of a) media stream, and within an answer to indicate the accepted language. The semantics of including multiple values for a media stream within an offer is that the languages are listed in order of preference.
(Negotiating multiple simultaneous languages within a media stream is out of scope, as the complexity of doing so outweighs the usefulness.)
RFC 4566 [RFC4566] specifies an attribute 'lang' which appears similar to what is needed here, but is not sufficiently detailed for use here. In addition, it does not seem to be in common use, which means there is low risk of conflict or confusion in defining new attributes. Further, there is value in being able to specify language per direction (sending and receiving). This document therefore defines two new attributes.
An SDP attribute (per direction) seems the natural choice for negotiating the human (natural) language of an interactive media stream. The attribute value should be a language tag per RFC 5646 [RFC5646].
The decision to base the proposal at the media negotiation level, and specifically to use SDP, came after significant debate and discussion. From an engineering standpoint, it is possible to meet the objectives using a variety of mechanisms, but none are perfect. None of the proposed alternatives was clearly better technically in enough ways to win over proponents of the others, and none were clearly so bad technically as to be easily rejected. As is often the case in engineering, choosing the solution is a matter of balancing trade-offs, and ultimately more a matter of taste than technical merit. The two main proposals were to use SDP and SIP. SDP has the advantage that the language is negotiated with the media to which it applies, while SIP has the issue that the languages expressed may not match the SDP media negotiated (for example, a session could negotiate video at the SIP level but fail to negotiate any video media stream at the SDP layer).
The mechanism described here for SDP can be adapted to media negotiation protocols other than SDP.
Rather than re-use 'lang' we define two new media-level attributes starting with 'humintlang' (short for "human interactive language") to negotiate which human language is used in each (interactive) media stream. There are two attributes, one ending in "-send" and the other in "-recv":
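   a=humintlang-send:<language tag>
   a=humintlang-recv:<language tag>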
Each can appear multiple times in an offer for a media stream.
In an offer, 'humintlang-send' indicates the language(s) the offerer is willing to use when sending using the media, and 'humintlang-recv' indicates the language(s) the offerer is willing to use when receiving using the media. The values constitute a list of languages in preference order (first is most preferred). When a media stream is intended for use in one direction only (such as a speech-impaired user sending using text and receiving using audio), either 'humintlang-send' or 'humintlang-recv' MAY be omitted. When a media stream is not primarily intended for language (for example, a video or audio stream intended for background only), both SHOULD be omitted. Otherwise, both SHOULD have the same values in the same order. The two SHOULD NOT be set to languages that are difficult to match together (e.g., specifying a desire to send audio in Hungarian and receive audio in Portuguese will make it difficult to successfully complete the call).
In an answer, 'humintlang-send' is the accepted language the answerer will send (which in most cases is one of the languages in the offer's 'humintlang-recv'), and 'humintlang-recv' is the accepted language the answerer expects to receive (which in most cases is one of the languages in the offer's 'humintlang-send').
Each value MUST be a language tag per RFC 5646 [RFC5646]. RFC 5646 describes mechanisms for matching language tags. While RFC 5646 provides mechanisms for increasingly fine-grained distinctions, in the interest of maximum interoperability for real-time interactive communications, each 'humintlang-send' and 'humintlang-recv' value SHOULD be restricted to the coarsest granularity of language tag; in other words, it is RECOMMENDED to specify only a primary language subtag and NOT to include additional subtags (e.g., for region or dialect) unless the languages might be mutually incomprehensible without them.
In an offer, each language tag value MAY have an asterisk appended as the last character (after the registry value). The asterisk indicates a request by the caller to not fail the call if there is no language in common. See Section 6.3 for more information and discussion.
When placing an emergency call, and in any other case where the language cannot be assumed from context, each media stream in an offer primarily intended for human language communication SHOULD specify both (or in some cases, one of) the 'humintlang-send' and 'humintlang-recv' attributes.
Note that while signed language tags are used with a video stream to indicate sign language, a spoken language tag for a video stream in parallel with an audio stream with the same spoken language tag indicates a request for a supplemental video stream to see the speaker.
Clients acting on behalf of end users are expected to set one or both of the 'humintlang-send' and 'humintlang-recv' attributes on each media stream primarily intended for human communication in an offer when initiating an outgoing session, and to either ignore or take into consideration the attributes when receiving incoming calls, based on local configuration and capabilities. Systems acting on behalf of call centers and PSAPs are expected to take the values into account when processing inbound calls.
Note that media and language negotiation might result in more media streams being accepted than are needed by the users (e.g., if more preferred and less preferred combinations of media and language are all accepted).
One important consideration with this mechanism is whether the call fails when the callee does not support any of the languages requested by the caller.
In order to provide for maximum likelihood of a successful communication session, especially in the case of emergency calling, the mechanism defined here provides a way for the caller to indicate a preference for the call failing or succeeding when there is no language in common. However, the callee is NOT REQUIRED to honor this preference. For example, a PSAP MAY choose to attempt the call even with no language in common, while a corporate call center MAY choose to fail the call.
The mechanism for indicating this preference is that, in an offer, if the last character of any of the 'humintlang-recv' or 'humintlang-send' values is an asterisk, this indicates a request to not fail the call (similar to SIP Accept-Language syntax). Either way, the called party MAY ignore this, e.g., for the emergency services use case, a PSAP will likely not fail the call.
It is possible to specify a "silly state" where the language specified does not make sense for the media type, such as specifying a signed language for an audio media stream.
An offer MUST NOT be created where the language does not make sense for the media type. If such an offer is received, the receiver MAY reject the media, ignore the language specified, or attempt to interpret the intent (e.g., if American Sign Language is specified for an audio media stream, this might be interpreted as a desire to use spoken English).
A spoken language tag for a video stream in conjunction with an audio stream with the same language might indicate a request for supplemental video to see the speaker.
Some examples are shown below. Only the most directly relevant portions of the SDP block are shown, for clarity.
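An offer from a caller who signs in American Sign Language (language code "ase") and reads and writes English, using a video stream and a text stream (addresses, ports, and payload types are illustrative):

   m=video 51372 RTP/AVP 31 32
   a=humintlang-send:ase
   a=humintlang-recv:ase

   m=text 45020 RTP/AVP 103 104
   a=humintlang-send:en
   a=humintlang-recv:en

An offer for an audio stream from a caller who prefers spoken Spanish but also speaks English, with an asterisk appended to each tag to request that the call not fail even if there is no language in common:

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:es*
   a=humintlang-send:en*
   a=humintlang-recv:es*
   a=humintlang-recv:en*

A corresponding answer accepting spoken Spanish:

   m=audio 49170 RTP/AVP 0
   a=humintlang-send:es
   a=humintlang-recv:es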
IANA is kindly requested to add two entries to the 'att-field (media level only)' table of the SDP parameters registry:
Type | Name | Reference |
---|---|---|
att-field (media level only) | humintlang-send | (this document) |
att-field (media level only) | humintlang-recv | (this document) |
The Security Considerations of RFC 5646 [RFC5646] apply here (as a use of that RFC). In addition, if the 'humintlang-send' or 'humintlang-recv' values are altered or deleted en route, the session could fail or languages incomprehensible to the caller could be selected; however, this is also a risk if any SDP parameters are modified en route.
Language and media information can suggest a user's nationality, background, abilities, disabilities, etc.; see [I-D.iab-privacy-considerations] for a general treatment of privacy in protocols.
Gunnar Hellstrom deserves special mention for his reviews, assistance, and especially for contributing the core text in Appendix A.
Many thanks to Bernard Aboba, Harald Alvestrand, Flemming Andreasen, Francois Audet, Eric Burger, Keith Drage, Doug Ewell, Christian Groves, Andrew Hutton, Hadriel Kaplan, Ari Keranen, John Klensin, Paul Kyzivat, John Levine, Alexey Melnikov, James Polk, Pete Resnick, Peter Saint-Andre, and Dale Worley for reviews, corrections, suggestions, and participating in in-person and email discussions.
[RFC2119] | Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997. |
[RFC4566] | Handley, M., Jacobson, V. and C. Perkins, "SDP: Session Description Protocol", RFC 4566, DOI 10.17487/RFC4566, July 2006. |
[RFC5646] | Phillips, A. and M. Davis, "Tags for Identifying Languages", BCP 47, RFC 5646, DOI 10.17487/RFC5646, September 2009. |
[I-D.iab-privacy-considerations] | Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., Morris, J., Hansen, M. and R. Smith, "Privacy Considerations for Internet Protocols", Internet-Draft draft-iab-privacy-considerations-09, May 2013. |
[I-D.ietf-slim-multilangcontent] | Tomkinson, N. and N. Borenstein, "Multiple Language Content Type", Internet-Draft draft-ietf-slim-multilangcontent-02, July 2016. |
[RFC3840] | Rosenberg, J., Schulzrinne, H. and P. Kyzivat, "Indicating User Agent Capabilities in the Session Initiation Protocol (SIP)", RFC 3840, DOI 10.17487/RFC3840, August 2004. |
[RFC3841] | Rosenberg, J., Schulzrinne, H. and P. Kyzivat, "Caller Preferences for the Session Initiation Protocol (SIP)", RFC 3841, DOI 10.17487/RFC3841, August 2004. |
The decision to base the proposal at the media negotiation level, and specifically to use SDP, came after significant debate and discussion. It is possible to meet the objectives using a variety of mechanisms, but none are perfect. Using SDP means dealing with the complexity of SDP, and leaves out real-time session protocols that do not use SDP. The major alternative proposal was to use SIP. Using SIP leaves out non-SIP session protocols, but more fundamentally, would occur at a different layer than the media negotiation. This results in a more fragile solution, since the media modality and language would be negotiated using SIP, and then the specific media formats (which inherently include the modality) would be negotiated at a different level (typically SDP, especially in the emergency calling cases), making it easier to have mismatches (such as where the media modality negotiated in SIP doesn't match what was negotiated using SDP).
An alternative proposal was to use the SIP-level Caller Preferences mechanism from RFC 3840 [RFC3840] and RFC 3841 [RFC3841].
The Caller-prefs mechanism includes a priority system; this would allow different combinations of media and languages to be assigned different priorities. The evaluation and decisions on what to do with the call can be done either by proxies along the call path, or by the addressed UA. Evaluation of alternatives for routing is described in RFC 3841 [RFC3841].
The following would be possible without adding any new registered tags:
Potential callers and recipients MAY include, in the Contact field of their SIP registrations, media and language feature tags that reflect the joint capabilities of the UA and the human user, per RFC 3840 [RFC3840].
The most relevant media capability tags are "video", "text" and "audio". Each tag represents a capability to use the media in two-way communication.
Language capabilities are declared as a comma-separated list of languages that can be used in the call, given as the value of the "language" feature tag.
This is an example of how these tags could appear in a SIP REGISTER request (the URI and tag values are illustrative):
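   Contact: <sip:user@192.0.2.25>;audio;video;text;language="en,ase"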
Including this information in SIP REGISTER allows proxies to act on the information. For the problem set addressed by this document, it is not anticipated that proxies will do so using registration data. Further, there are classes of devices (such as cellular mobile phones) that are not anticipated to include this information in their registrations. Hence, use in registration is OPTIONAL.
In a call, a list of acceptable media and language combinations is declared, and a priority assigned to each combination.
This is done by the Accept-Contact header field, which defines different combinations of media and languages and assigns priorities for completing the call with the SIP URI represented by that Contact. A priority is assigned to each set as a so-called "q-value" which ranges from 1 (most preferred) to 0 (least preferred).
Using the Accept-Contact header field in INVITE requests and responses allows these capabilities to be expressed and used during call set-up. Clients SHOULD include this information in INVITE requests and responses.
Example (the q-values shown are illustrative):
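   Accept-Contact: *;video;language="ase";q=1.0
   Accept-Contact: *;text;language="en";q=0.5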
This example shows the highest preference expressed by the caller is to use video with American Sign Language (language code "ase"). As a fallback, it is acceptable to get the call connected with only English text used for human communication. Other media may of course be connected as well, without expectation that it will be usable by the caller for interactive communications (but may still be helpful to the caller).
This system satisfies all the needs described in the previous sections, except that language specifications do not make any distinction between spoken and written language, and the need for directionality in the specification cannot be fulfilled.
To some degree, the lack of a distinction between speech and text in language tags can be compensated for by specifying only the important medium in the Accept-Contact field.
Thus, a user who wants to use English mainly for text might specify:
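   Accept-Contact: *;text;language="en"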
While a user who wants to use English mainly for speech, but will accept it for text, might specify (q-values illustrative):
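   Accept-Contact: *;audio;language="en";q=1.0
   Accept-Contact: *;text;language="en";q=0.5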
However, a user who would like to talk, but receive text back has no way to do it with the existing specification.
In order to be able to specify asymmetric preferences, there are two possibilities. Either new language tags in the style of the humintlang parameters described above for SDP could be registered, or additional media tags describing the asymmetry could be registered.
The following new media feature tags, one per direction for each medium, would need to be defined (the names below are illustrative, not registered values):
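   audio-send,  audio-recv
   text-send,   text-recv
   video-send,  video-recv

Each "-send" tag declares the capability to send the medium, and each "-recv" tag the capability to receive it.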
A user who prefers to talk and get text in return in English might register the following (if including this information in registration data):
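   Contact: <sip:user@192.0.2.25>;audio-send;text-recv;language="en"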
At call time, a user who prefers to talk and get text in return in English would set the Accept-Contact header field to:
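   Accept-Contact: *;audio-recv;text-send;language="en"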
Note that the directions specified here are as viewed from the callee side to match what the callee has registered.
A bridge arranged for invoking a relay service specifically arranged for captioned telephony might register the following for supporting calling users (continuing with the illustrative tag names above):
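   Contact: <sip:captel@relay.example.com>;audio-recv;audio-send;text-send;language="en"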
The same bridge might register the following for supporting called users:
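   Contact: <sip:captel@relay.example.com>;audio-send;audio-recv;text-recv;language="en"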
At call time, these alternatives are included in the list of possible outcomes of the call routing by the SIP proxies, and the proper relay service is invoked.
An alternative is to register new language tags for the purpose of asymmetric language usage.
Instead of using "language=", six new language feature tags, covering each direction for spoken, written, and signed modalities, would be registered (the names below are illustrative):
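   language-spoken-send,   language-spoken-recv
   language-written-send,  language-written-recv
   language-signed-send,   language-signed-recv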
These language tags would be used instead of the regular bidirectional language tags, and users with bidirectional capabilities SHOULD specify values for both directions. Services specifically arranged for supporting users with asymmetric needs SHOULD specify only the asymmetry they support.