<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY rfc3550 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3550.xml">
<!ENTITY rfc3261 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3261.xml">
<!ENTITY rfc2326 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2326.xml">
<!ENTITY rfc4566 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4566.xml">
<!ENTITY rfc2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY rfc2616 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2616.xml">
<!ENTITY rfc3264 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3264.xml">
<!ENTITY rfc3629 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3629.xml">
<!ENTITY rfc5234 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5234.xml">
<!ENTITY rfc4145 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4145.xml">
<!ENTITY rfc4572 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4572.xml">
<!ENTITY rfc3388 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3388.xml">
<!ENTITY rfc5322 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5322.xml">
<!ENTITY rfc2392 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2392.xml">
<!ENTITY rfc2109 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2109.xml">
<!ENTITY rfc2965 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2965.xml">
<!ENTITY rfc4646 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4646.xml">
<!ENTITY rfc5226 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5226.xml">
<!ENTITY rfc1035 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1035.xml">
<!ENTITY rfc4288 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4288.xml">
<!ENTITY rfc3688 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3688.xml">
<!ENTITY rfc4395 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4395.xml">
<!ENTITY rfc4568 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4568.xml">
<!ENTITY synth SYSTEM "http://xml.resource.org/public/rfc/bibxml4/reference.W3C.REC-speech-synthesis-20040907.xml">
<!ENTITY rfc2483 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2483.xml">
<!ENTITY rfc3711 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3711.xml">
<!ENTITY grxml SYSTEM "http://xml.resource.org/public/rfc/bibxml4/reference.W3C.REC-speech-grammar-20040316.xml">
<!ENTITY names SYSTEM "http://xml.resource.org/public/rfc/bibxml4/reference.W3C.REC-xml-names11-20040204.xml">
<!ENTITY rfc4313 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4313.xml">
<!ENTITY rfc4733 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4733.xml">
<!ENTITY voicexml SYSTEM "http://xml.resource.org/public/rfc/bibxml4/reference.W3C.REC-voicexml20-20040316.xml">
<!ENTITY rfc4463 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4463.xml">
<!ENTITY rfc2234 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2234.xml">
<!ENTITY rfc4467 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4467.xml">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt"?>
<?rfc compact="yes"?>
<?rfc toc="yes"?>
<rfc category="std" docName="draft-ietf-speechsc-mrcpv2-21"
ipr="pre5378Trust200902">
<front>
<title abbrev="MRCPv2">Media Resource Control Protocol Version 2
(MRCPv2)</title>
<author fullname="Daniel C. Burnett" initials="D." surname="Burnett">
<organization>Voxeo</organization>
<address>
<postal>
<street>189 South Orange Avenue #2050</street>
<city>Orlando</city>
<region>FL</region>
<code>32801</code>
<country>USA</country>
</postal>
<email>dburnett@voxeo.com</email>
</address>
</author>
<author fullname="Saravanan Shanmugham" initials="S." surname="Shanmugham">
<organization>Cisco Systems, Inc.</organization>
<address>
<postal>
<street>170 W. Tasman Dr.</street>
<city>San Jose</city>
<region>CA</region>
<code>95134</code>
<country>USA</country>
</postal>
<email>sarvi@cisco.com</email>
</address>
</author>
<date day="9" month="July" year="2010" />
<area>Real-time Applications and Infrastructure</area>
<workgroup>SPEECHSC</workgroup>
<abstract>
<t>The MRCPv2 protocol allows client hosts to control media service
resources such as speech synthesizers, recognizers, verifiers and
    identifiers residing in servers on the network. MRCPv2 is not a
    "stand-alone" protocol; it relies on other protocols, such as the
    Session Initiation Protocol (SIP), to rendezvous MRCPv2 clients and
    servers and manage sessions between them, and the Session Description
    Protocol (SDP) to describe, discover, and exchange capabilities. It
    also depends on SIP
and SDP to establish the media sessions and associated parameters
between the media source or sink and the media server. Once this is
done, the MRCPv2 protocol exchange operates over the control session
established above, allowing the client to control the media processing
resources on the speech resource server.</t>
</abstract>
</front>
<middle>
<section title="Introduction">
<t>The MRCPv2 protocol is designed to allow a client device to control
media processing resources on the network. Some of these media
processing resources include speech recognition engines, speech
synthesis engines, speaker verification and speaker identification
engines. MRCPv2 enables the implementation of distributed Interactive
Voice Response platforms using <xref
target="W3C.REC-voicexml20-20040316">VoiceXML</xref> browsers or other
client applications while maintaining separate back-end speech
processing capabilities on specialized speech processing servers. MRCPv2
is based on the earlier <xref target="RFC4463">Media Resource Control
Protocol (MRCP) </xref> developed jointly by Cisco Systems, Inc., Nuance
Communications, and Speechworks Inc.</t>
<t>The protocol requirements of SPEECHSC <xref target="RFC4313"></xref>
include that the solution be capable of reaching a media processing
server and setting up communication channels to the media resources, and
sending and receiving control messages and media streams to/from the
server. The <xref target="RFC3261">Session Initiation Protocol
(SIP)</xref> meets these requirements.</t>
      <t>Note that the requirements document mentioned above, RFC 4313,
      goes into detail on alternatives to SIP, such as <xref
      target="RFC2326">RTSP</xref>, and on why MRCPv2 does not use RTSP
      even though the proprietary version of MRCP ran over RTSP.</t>
<t>MRCPv2 leverages these capabilities by building upon SIP and the
<xref target="RFC4566">Session Description Protocol (SDP)</xref>. MRCPv2
      uses SIP to set up and tear down media and control sessions with the
      server. In addition, the client can use a SIP re-INVITE (an INVITE
      request sent within an existing SIP dialog) to change the
      characteristics of these media and control sessions while maintaining
      the SIP dialog between the client and server. SDP is used to describe the
parameters of the media sessions associated with that dialog. It is
mandatory to support SIP as the session establishment protocol to ensure
interoperability. Other protocols can be used for session establishment
by prior agreement. This document only describes the use of SIP and
SDP.</t>
<t>MRCPv2 uses SIP and SDP to create the speech client/server dialog and
set up the media channels to the server. It also uses SIP and SDP to
establish MRCPv2 control sessions between the client and the server for
each media processing resource required for that dialog. The MRCPv2
protocol exchange between the client and the media resource is carried
on that control session. MRCPv2 protocol exchanges do not change the
state of the SIP dialog, the media sessions, or other parameters of the
      dialog initiated via SIP. Rather, they control and affect the state
      of the media processing resource associated with the MRCPv2
      session(s).</t>
<t>MRCPv2 defines the messages to control the different media processing
resources and the state machines required to guide their operation. It
also describes how these messages are carried over a transport layer
protocol such as TCP or TLS (Note: SCTP is a viable transport for MRCPv2
as well, but the mapping onto SCTP is not described in this
specification).</t>
</section>
<section title="Document Conventions">
<t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in <xref target="RFC2119">
RFC2119</xref>.</t>
      <t>Since many of the definitions and syntax elements are identical to
      those of HTTP/1.1 (<xref target="RFC2616">RFC2616</xref>), this
      specification refers to the sections where they are defined rather
      than copying them. For brevity, [HX.Y] is to be taken to refer to
      Section X.Y of RFC2616.</t>
<t>All the mechanisms specified in this document are described in both
prose and an augmented Backus-Naur form (<xref
target="RFC5234">ABNF</xref>).</t>
<t>The complete message format in ABNF form is provided in <xref
target="S.abnf"></xref> and is the normative format definition.</t>
<section title="Definitions">
<t><list hangIndent="15" style="hanging">
<t hangText="Media Resource"><vspace blankLines="0" />An entity on
the speech processing server that can be controlled through the
MRCPv2 protocol.</t>
<t hangText="MRCP Server"><vspace blankLines="0" />Aggregate of
one or more "Media Resource" entities on a Server, exposed through
the MRCPv2 protocol ("Server" for short).</t>
<t hangText="MRCP Client"><vspace blankLines="0" />An entity
controlling one or more Media Resources through the MRCPv2
protocol ("Client" for short).</t>
<t hangText="DTMF"><vspace blankLines="0" />Dual Tone
Multi-Frequency; a method of transmitting key presses in-band,
either as actual tones (<xref target="Q.23">Q.23</xref>) or as
named tone events (<xref target="RFC4733">RFC4733</xref>).</t>
<t hangText="Endpointing"><vspace blankLines="0" />The process of
automatically detecting the beginning and end of speech in an
audio stream. This is critical both for speech recognition and for
automated recording as one would find in voice mail systems.</t>
<t hangText="Hotword Mode"><vspace blankLines="0" />A mode of
speech recognition where a stream of utterances is evaluated for
match against a small set of command words. This is generally
          employed either to trigger some action or to control the
          subsequent grammar to be used for further recognition.</t>
</list></t>
</section>
<section title="State-Machine Diagrams">
<t>The state-machine diagrams in this document do not show every
possible method call. Rather, they reflect the state of the resource
based on the methods that have moved to IN-PROGRESS or COMPLETE states
(see <xref target="sec.response"></xref>). Note that since PENDING
requests essentially have not affected the resource yet and are in
queue to be processed, they are not reflected in the state-machine
diagrams.</t>
</section>
<section title="URI Schemes">
<t>This document defines many protocol headers that contain URIs or
lists of URIs for referencing media. The entire document, including
the Security Considerations section (<xref
target="sec.securityConsiderations"></xref>), assumes that HTTP/HTTPS
will be used as the URI addressing scheme unless otherwise stated.
However, implementations MAY support other schemes (such as "file")
provided they have addressed any security considerations described in
this document and any others particular to the specific scheme. For
example, implementations where the client and server both reside on
the same physical hardware and the file system is secured by
traditional user-level file access controls could be reasonable
candidates for supporting the "file" scheme.</t>
</section>
</section>
<section title="Architecture">
<t>A system using MRCPv2 consists of a client that requires the
generation and/or consumption of media streams and a media resource
server that has the resources or "engines" to process these streams as
input or generate these streams as output. The client uses SIP and SDP
to establish an MRCPv2 control channel with the server to use its media
processing resources. MRCPv2 servers are addressed using SIP URIs.</t>
    <t>The Session Initiation Protocol (SIP) uses SDP with the offer/answer
model described in <xref target="RFC3264">RFC3264</xref> to set up the
MRCPv2 control channels and describe their characteristics. A separate
MRCPv2 session is needed to control each of the media processing
resources associated with the SIP dialog between the client and server.
Within a SIP dialog, the individual resource control channels for the
different resources are added or removed through SDP offer/answer
carried in a SIP re-INVITE transaction.</t>
<t>The server, through the SDP exchange, provides the client with an
unambiguous channel identifier and a TCP port number. The client MAY
then open a new TCP connection with the server on this port number.
Multiple MRCPv2 channels can share a TCP connection between the client
and the server. All MRCPv2 messages exchanged between the client and the
server carry the specified channel identifier that the server MUST
ensure is unambiguous among all MRCPv2 control channels that are active
on that server. The client uses this channel identifier to indicate the
media processing resource associated with that channel. For information
on message framing, see <xref target="sec.messages"></xref>.</t>
<t>The session management protocol (SIP) also establishes the media
sessions between the client (or other source/sink of media) and the
MRCPv2 server using SDP m-lines. One or more media processing resources
may share a media session under a SIP session, or each media processing
resource may have its own media session.</t>
<t>An MRCP client that merely relays results from one MRCP server to
    another MRCP server can be considered an MRCP proxy. This could be
useful in cases where different resources have their own MRCP servers
that a service aggregator would like to present via a single MRCP
server.</t>
<t>The following diagram shows the general architecture of a system that
uses MRCPv2. To simplify the diagram only a few resources are shown.</t>
<figure anchor="F.arch" title="Architectural Diagram">
<artwork><![CDATA[
MRCPv2 client MRCPv2 Media Resource Server
|--------------------| |------------------------------------|
||------------------|| ||----------------------------------||
|| Application Layer|| ||Synthesis|Recognition|Verification||
||------------------|| || Engine | Engine | Engine ||
||Media Resource API|| || || | || | || ||
||------------------|| ||Synthesis|Recognizer | Verifier ||
|| SIP | MRCPv2 || ||Resource | Resource | Resource ||
||Stack | || || Media Resource Management ||
|| | || ||----------------------------------||
||------------------|| || SIP | MRCPv2 ||
|| TCP/IP Stack ||---MRCPv2---|| Stack | ||
|| || ||----------------------------------||
||------------------||----SIP-----|| TCP/IP Stack ||
|--------------------| || ||
| ||----------------------------------||
SIP |------------------------------------|
| /
|-------------------| RTP
| | /
| Media Source/Sink |------------/
| |
|-------------------|
]]></artwork>
</figure>
<section anchor="sec.resourceTypes" title="MRCPv2 Media Resource Types">
<t>An MRCPv2 server may offer one or more of the following media
processing resources to its clients. <list hangIndent="15"
style="hanging">
<t hangText="Basic Synthesizer"><vspace blankLines="0" />A speech
          synthesizer resource with very limited capabilities that can
generate its media stream exclusively from concatenated audio
clips. The speech data is described using a limited subset of
<xref target="W3C.REC-speech-synthesis-20040907">SSML</xref>
elements. A basic synthesizer MUST support the SSML tags
<speak>, <audio>, <say-as> and <mark>.</t>
<t hangText="Speech Synthesizer"><vspace blankLines="0" />A full
capability speech synthesis resource capable of rendering speech
from text. Such a synthesizer MUST have full <xref
target="W3C.REC-speech-synthesis-20040907">SSML</xref>
support.</t>
<t hangText="Recorder"><vspace blankLines="0" />A resource capable
of recording audio and providing a URI pointer to the recording. A
recorder MUST provide some endpointing capabilities for
suppressing silence at the beginning and end of a recording, and
MAY also suppress silence in the middle of a recording. If such
suppression is done, the recorder MUST maintain timing metadata to
indicate the actual time stamps of the recorded media.</t>
<t hangText="DTMF Recognizer"><vspace blankLines="0" />A
recognition resource capable of extracting and interpreting DTMF
digits in a media stream and matching them against a supplied
          digit grammar. It could also do a semantic interpretation based
          on semantic tags in the grammar.</t>
<t hangText="Speech Recognizer"><vspace blankLines="0" />A full
speech recognition resource that is capable of receiving a media
stream containing audio and interpreting it to recognition
results. It also has a natural language semantic interpreter to
post-process the recognized data according to the semantic data in
the grammar and provide semantic results along with the recognized
input. The recognizer may also support enrolled grammars, where
the client can enroll and create new personal grammars for use in
future recognition operations.</t>
<t hangText="Speaker Verifier"><vspace blankLines="0" />A resource
capable of verifying the authenticity of a claimed identity by
matching a media stream containing spoken input to a pre-existing
voiceprint. This may also involve matching the caller's voice
against more than one voiceprint, also called multi-verification
or speaker identification.</t>
</list></t>
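      <t>For illustration, a basic synthesizer restricted to the SSML tags
      listed above might be driven with a document fragment like the
      following. This is a non-normative sketch; the audio URI and the
      mark name are placeholders.</t>
      <figure>
        <artwork><![CDATA[
<?xml version="1.0"?>
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xml:lang="en-US">
  <audio src="http://www.example.com/prompts/welcome.wav"/>
  <say-as interpret-as="cardinal">42</say-as>
  <mark name="after-number"/>
</speak>
]]></artwork>
      </figure>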
</section>
<section title="Server and Resource Addressing">
<t>The MRCPv2 server is a generic SIP server, and is thus addressed by
a SIP URI.</t>
<figure>
<preamble>For example:</preamble>
<artwork><![CDATA[
sip:mrcpv2@example.net
]]></artwork>
</figure>
</section>
</section>
<section title="MRCPv2 Protocol Basics">
<t>MRCPv2 requires a connection-oriented transport layer protocol such
as TCP or SCTP to guarantee reliable sequencing and delivery of MRCPv2
control messages between the client and the server. In order to meet the
requirements for security enumerated in <xref target="RFC4313">SpeechSC
Requirements</xref>, clients and servers MUST implement TLS as well. One
or more connections between the client and the server can be shared
among different MRCPv2 channels to the server. The individual messages
carry the channel identifier to differentiate messages on different
    channels. MRCPv2 protocol encoding is text-based, with mechanisms to
    carry embedded binary data. This allows arbitrary data, such as
    recognition grammars, recognition results, and synthesizer speech
    markup, to be
carried in MRCPv2 messages. For information on message framing, see
<xref target="sec.messages"></xref>.</t>
<section anchor="sec.connectToServer" title="Connecting to the Server">
<t>MRCPv2 employs a session establishment and management protocol such
as SIP in conjunction with SDP. The client reaches an MRCPv2 server
using conventional INVITE and other SIP requests for establishing,
maintaining, and terminating SIP dialogs. The SDP offer/answer
exchange model over SIP is used to establish a resource control
channel for each resource. The SDP offer/answer exchange is also used
to establish media sessions between the server and the source or sink
of audio.</t>
</section>
<section anchor="sec.resourceControl"
title="Managing Resource Control Channels">
<t>The client needs a separate MRCPv2 resource control channel to
control each media processing resource under the SIP dialog. A unique
channel identifier string identifies these resource control channels.
The channel identifier is an unambiguous, opaque string followed by an
"@", then by a string token specifying the type of resource. The
server generates the channel identifier and MUST make sure it does not
clash with the identifier of any other MRCP channel currently
allocated by that server. MRCPv2 defines the following IANA-registered
types of media processing resources. Additional resource types, their
associated methods/events and state machines may be added as described
below in <xref target="sec.iana"></xref>.</t>
<texttable title="Resource Types">
<ttcol>Resource Type</ttcol>
<ttcol>Resource Description</ttcol>
<ttcol>Described in</ttcol>
<c>speechrecog</c>
<c>Speech Recognizer</c>
<c><xref target="sec.recognizerResource"></xref></c>
<c>dtmfrecog</c>
<c>DTMF Recognizer</c>
<c><xref target="sec.recognizerResource"></xref></c>
<c>speechsynth</c>
<c>Speech Synthesizer</c>
<c><xref target="sec.synthesizerResource"></xref></c>
<c>basicsynth</c>
<c>Basic Synthesizer</c>
<c><xref target="sec.synthesizerResource"></xref></c>
<c>speakverify</c>
<c>Speaker Verification</c>
<c><xref target="sec.verifierResource"></xref></c>
<c>recorder</c>
<c>Speech Recorder</c>
<c><xref target="sec.recorderResource"></xref></c>
</texttable>
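      <t>For example, combining an opaque string generated by the server
      with the "speechsynth" resource type from the table above yields a
      channel identifier such as the following (the opaque portion is
      illustrative):</t>
      <figure>
        <artwork><![CDATA[
32AECB234338@speechsynth
]]></artwork>
      </figure>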
<t>The SIP INVITE or re-INVITE transaction and the SDP offer/answer
exchange it carries contain m-lines describing the resource control
channel to be allocated. There MUST be one SDP m-line for each MRCPv2
resource to be used in the session. This m-line MUST have a media type
field of "application" and a transport type field of either
"TCP/MRCPv2" or "TCP/TLS/MRCPv2". (The usage of SCTP with MRCPv2 may
be addressed in a future specification). The port number field of the
m-line MUST contain the "discard" port of the transport protocol (port
9 for TCP) in the SDP offer from the client and MUST contain the TCP
listen port on the server in the SDP answer. The client may then
either set up a TCP or TLS connection to that server port or share an
already established connection to that port. Since MRCPv2 allows
multiple sessions to share the same TCP connection, multiple m-lines
in a single SDP document may share the same port field value; MRCPv2
servers MUST NOT assume any relationship between resources using the
same port other than the sharing of the communication channel.</t>
<t>MRCPv2 resources do not use the port or format field of the m-line
to distinguish themselves from other resources using the same channel.
The client MUST specify the resource type identifier in the resource
attribute associated with the control m-line of the SDP offer. The
server MUST respond with the full Channel-Identifier (which includes
the resource type identifier and an unambiguous string) in the
"channel" attribute associated with the control m-line of the SDP
answer. To remain backwards compatible with conventional SDP usage,
the format field of the m-line MUST have the arbitrarily-selected
value of "1".</t>
<t>When the client wants to add a media processing resource to the
session, it issues a SIP re-INVITE transaction. The SDP offer/answer
exchange carried by this SIP transaction contains one or more
additional control m-lines for the new resources to be allocated to
the session. The server, on seeing the new m-line, allocates the
resources (if they are available) and responds with a corresponding
control m-line in the SDP answer carried in the SIP response. If the
new resources are not available, the re-INVITE receives an error
message, and existing media processing going on before the re-INVITE
will continue as it was before.</t>
<t>The a=setup attribute, as described in <xref
target="RFC4145">RFC4145</xref>, MUST be "active" for the offer from
the client and MUST be "passive" for the answer from the MRCPv2
server. The a=connection attribute MUST have a value of "new" on the
very first control m-line offer from the client to an MRCPv2 server.
Subsequent control m-line offers from the client to the MRCP server
MAY contain "new" or "existing", depending on whether the client wants
to set up a new connection or share an existing connection,
respectively. If the client specifies a value of "new", the server
MUST respond with a value of "new". If the client specifies a value of
"existing", the server MAY respond with a value of "existing" if it
prefers to share an existing connection or can answer with a value of
"new", in which case the client MUST initiate a new transport
connection.</t>
<t>When the client wants to de-allocate the resource from this
session, it issues a SIP re-INVITE transaction with the server. The
SDP MUST offer the control m-line with port 0. The server MUST then
answer the control m-line with a response of port 0. This de-allocates
the associated MRCPv2 identifier and resource. The server MUST NOT
close the TCP, SCTP or TLS connection if it is currently being shared
among multiple MRCP channels. When all MRCP channels that may be
sharing the connection are released and/or the associated SIP dialog
is terminated, the client or server terminates the connection.</t>
<t>All servers MUST support TLS. Servers MAY support TCP without TLS
in physically secure environments. It is up to the client, through the
SDP offer, to choose which transport it wants to use for an MRCPv2
session. Aside from the exceptions given above, when using TCP the
m-lines MUST conform to <xref target="RFC4145">RFC4145</xref>, which
describes the usage of SDP for connection-oriented transport. When
using TLS the SDP m-line for the control stream MUST conform to <xref
target="RFC4572">comedia over TLS</xref>, which specifies the usage of
SDP for establishing a secure connection-oriented transport over
TLS.</t>
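      <t>As a non-normative sketch, a client offer for a TLS-protected
      control channel could carry an m-line like the following; per RFC
      4572, the certificate fingerprint is conveyed in an "a=fingerprint"
      attribute. The fingerprint value shown here is a placeholder.</t>
      <figure>
        <artwork><![CDATA[
m=application 9 TCP/TLS/MRCPv2 1
a=setup:active
a=connection:new
a=resource:speechsynth
a=fingerprint:SHA-1 \
    D2:9F:6F:1E:CD:D3:09:E8:70:65:1A:51:7C:9D:30:4F:21:E4:4A:8E
]]></artwork>
      </figure>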
</section>
<section anchor="sec.SIPExample" title="SIP session example">
<t>This first example shows the power of using SIP to route to the
appropriate resource. In the example, note the use of a request to a
domain's speech server service in the INVITE to
mresources@example.com. The SIP routing machinery in the domain
locates the actual server, mresources@server.example.com, which gets
returned in the 200 OK. Note that "cmid" is defined in <xref
target="sec.mediaStreams"></xref>.</t>
<figure title="Example: Add Synthesizer Control Channel">
<preamble>This example exchange adds a resource control channel for
a synthesizer. Since a synthesizer also generates an audio stream,
this interaction also creates a receive-only RTP media session for
the server to send audio to. The SIP dialog with the media
source/sink is independent of MRCP and is not shown.</preamble>
<artwork><![CDATA[
C->S: INVITE sip:mresources@example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314161 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842808 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.12
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:new
a=resource:speechsynth
a=cmid:1
m=audio 49170 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=recvonly
a=mid:1
S->C: SIP/2.0 200 OK
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314161 INVITE
Contact:<sip:mresources@server.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=- 2890844526 2890842808 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.11
m=application 32416 TCP/MRCPv2 1
a=setup:passive
a=connection:new
a=channel:32AECB234338@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly
a=mid:1
C->S: ACK sip:mresources@server.example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314162 ACK
Content-Length:...
]]></artwork>
</figure>
<figure title="Add Recognizer example">
<preamble>This example exchange continues from the previous figure
and allocates an additional resource control channel for a
recognizer. Since a recognizer would need to receive an audio stream
for recognition, this interaction also updates the audio stream to
sendrecv, making it a 2-way RTP media session.</preamble>
<artwork><![CDATA[
C->S: INVITE sip:mresources@server.example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.12
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:existing
a=resource:speechsynth
a=cmid:1
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=sendrecv
a=mid:1
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:existing
a=resource:speechrecog
a=cmid:1
S->C: SIP/2.0 200 OK
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:sarvi@example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.11
m=application 32416 TCP/MRCPv2 1
a=setup:passive
a=connection:existing
a=channel:32AECB234338@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=sendrecv
a=mid:1
m=application 32416 TCP/MRCPv2 1
a=setup:passive
a=connection:existing
a=channel:32AECB234338@speechrecog
a=cmid:1
C->S: ACK sip:mresources@server.example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314164 ACK
Content-Length:...
]]></artwork>
</figure>
<figure title="Deallocate Recognizer example">
<preamble>This example exchange continues from the previous figure
        and de-allocates the recognizer channel. Since the recognizer no longer
needs to receive an audio stream, this interaction also updates the
RTP media session to recvonly.</preamble>
<artwork><![CDATA[
C->S: INVITE sip:mresources@server.example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.12
m=application 9 TCP/MRCPv2 1
a=resource:speechsynth
a=cmid:1
m=audio 49170 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=recvonly
a=mid:1
m=application 0 TCP/MRCPv2 1
a=resource:speechrecog
a=cmid:1
S->C: SIP/2.0 200 OK
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:sarvi@example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=-
c=IN IP4 192.0.2.11
m=application 32416 TCP/MRCPv2 1
a=channel:32AECB234338@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly
a=mid:1
m=application 0 TCP/MRCPv2 1
a=channel:32AECB234338@speechrecog
a=cmid:1
C->S: ACK sip:mresources@server.example.com SIP/2.0
Via:SIP/2.0/TCP client.atlanta.example.com:5060;
branch=z9hG4bK74bf9
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314164 ACK
Content-Length:...
]]></artwork>
</figure>
</section>
<section anchor="sec.mediaStreams" title="Media Streams and RTP Ports">
<t>Since MRCPv2 resources either generate or consume media streams,
the client or the server needs to associate media sessions with their
corresponding resource or resources. More than one resource could be
associated with a single media session or each resource could be
assigned a separate media session. Also note that more than one media
session can be associated with a single resource if need be, but this
scenario is not useful for the current set of resources. For example,
a synthesizer and a recognizer could be associated with the same media
session (the m=audio line) if it is opened in "sendrecv" mode.
Alternatively, the recognizer could have its own "sendonly" audio
session and the synthesizer could have its own "recvonly" audio
session.</t>
<t>The association between control channels and their corresponding
media sessions is established using a new "resource channel media
identifier" media-level attribute ("cmid"). Valid values of this
attribute are the values of the "mid" attribute defined in <xref
target="RFC3388">RFC3388</xref>. If there is more than 1 audio m-line,
then each audio m-line MUST have a "mid" attribute. Each control
m-line MAY have one or more "cmid" attributes that match the resource
control channel to the "mid" attributes of the audio m-lines it is
associated with. Note that if a control m-line does not have a "cmid"
attribute, it is not associated with any media; the operations on
such a resource are hence limited. For example, for a recognizer
resource, the RECOGNIZE method requires an associated media stream to
process, while the INTERPRET method does not. The formatting of the
"cmid" attribute is described by the following ABNF:</t>
<figure>
<artwork><![CDATA[
cmid-attribute = "a=cmid:" identification-tag
identification-tag = token]]></artwork>
</figure>
<t>To allow this flexible mapping of media sessions to MRCPv2 control
channels, a single audio m-line can be associated with multiple
resources or each resource can have its own audio m-line. For example,
if the client wants to allocate a recognizer and a synthesizer and
associate them with a single 2-way audio stream, the SDP offer would
contain two control m-lines and a single audio m-line with an
attribute of "sendrecv". Each of the control m-lines would have a
"cmid" attribute whose value matches the "mid" of the audio m-line.
If, on the other hand, the client wants to allocate a recognizer and a
synthesizer each with its own separate audio stream, the SDP offer
would carry two control m-lines (one for the recognizer and another
for the synthesizer) and two audio m-lines (one with the attribute
"sendonly" and another with attribute "recvonly"). The "cmid"
attribute of the recognizer control m-line would match the "mid" value
of the "sendonly" audio m-line and the "cmid" attribute of the
synthesizer control m-line would match the "mid" attribute of the
"recvonly" m-line.</t>
<t>When a server receives media (e.g., audio) on a media session that
is associated with more than one media processing resource, it is the
responsibility of the server to receive and fork the media to the
resources that need to consume it. If multiple resources in an MRCPv2
session are generating audio (or other media) to be sent on a single
associated media session, it is the responsibility of the server to
either multiplex the multiple streams onto the single RTP session or
contain an embedded RTP mixer (see <xref
target="RFC3550">RFC3550</xref>) to combine the multiple streams into
one. In the former case, the media stream will contain RTP packets
generated by different sources, and hence the packets will have
different Synchronization Source identifiers (SSRCs). In the latter
case, the RTP packets will contain multiple Contributing Source
identifiers (CSRCs) corresponding to the original streams before being
combined by the mixer. An MRCPv2
implementation MUST either multiplex or mix unless it cannot correctly
do either, in which case the server MUST disallow the client from
associating multiple such resources to a single audio stream by
rejecting the SDP offer with a SIP 488 "Not Acceptable" error. Note
that there is a large installed base that will return a SIP 501 "Not
Implemented" error in this case. New implementations SHOULD treat a
501 in this context as a 488.</t>
</section>
<section title="MRCPv2 Message Transport">
<t>The MRCPv2 messages defined in this document are transported over a
TCP, TLS, or (in the future) SCTP connection between the client and
the server. The method for setting up this transport connection and the
resource control channel is discussed in <xref
target="sec.connectToServer"> </xref> and <xref
target="sec.resourceControl"> </xref>. Multiple resource control
channels between a client and a server that belong to different SIP
dialogs can share one or more TLS, TCP or SCTP connections between
them; the server and client MUST support this mode of operation. The
individual MRCPv2 messages carry the MRCPv2 channel identifier in
their Channel-Identifier header field, which MUST be used to
differentiate MRCPv2 messages from different resource channels (see
<xref target="sec.channelIdentifier"></xref> for details). All MRCPv2
servers MUST support TLS. Servers MAY support TCP without TLS in
physically secure environments. It is up to the client to choose which
mode of transport it wants to use for an MRCPv2 session.</t>
<t>Most examples from here on show only the MRCPv2 messages and do not
show the SIP messages that may have been used to establish the MRCPv2
control channel.</t>
</section>
</section>
<section anchor="sec.messages" title="MRCPv2 Specification">
<t>MRCPv2 messages are textual using the ISO 10646 character set in the
UTF-8 encoding (<xref target="RFC3629">RFC3629</xref>) to allow many
different languages to be represented. However, to assist in compact
representations, MRCPv2 also allows message bodies to be represented in
other character sets such as ISO 8859-1. This may be useful for
languages such as Chinese where the default character set for most
documents is not UTF-8. The MRCPv2 protocol headers (the first line of
an MRCP message) and header field names use only the US-ASCII subset of
UTF-8. Internationalization applies only to certain fields, such as
grammars, recognition results, and speech markup, and not to MRCPv2 as
a whole.</t>
<t>Lines are terminated by CRLF. Also, some parameters in the message
may contain binary data or a record spanning multiple lines. Such fields
have a length value associated with the parameter, which indicates the
number of octets immediately following the parameter.</t>
<section anchor="sec.common" title="Common Protocol Elements">
<t>The MRCPv2 message set consists of requests from the client to the
server, responses from the server to the client and asynchronous
events from the server to the client. All these messages consist of a
start-line, one or more header fields, an empty line (i.e., a line with
nothing preceding the CRLF) indicating the end of the header fields,
and an optional message body.</t>
<figure>
<artwork><![CDATA[
generic-message  =    start-line
                      message-header
                      CRLF
                      [ message-body ]
start-line       =    request-line / response-line / event-line
message-header   =    1*(generic-header / resource-header)
resource-header  =    recognizer-header
                 /    synthesizer-header
                 /    recorder-header
                 /    verifier-header
]]></artwork>
</figure>
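<t>As a rough Python sketch of the generic-message structure (the
function name is illustrative; real framing would first read the
message-length token to delimit the message, and folded multi-line
header fields are not handled here):</t>
<figure>
<artwork><![CDATA[
```python
def split_mrcp_message(raw: bytes):
    """Split an MRCPv2 message into (start_line, headers, body).

    The message is a start-line, header fields, an empty line, and
    an optional body, all CRLF-terminated.
    """
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("utf-8").split("\r\n")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        # Header field names are case-insensitive.
        headers[name.lower()] = value.strip()
    return lines[0], headers, body
```
]]></artwork>
</figure>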
<t>The message-body contains resource-specific and message-specific
data. The actual Media Types used to carry the data are specified
later in the sections defining the individual messages. Generic header
fields are described in <xref target="sec.genericHeaders"></xref>.</t>
<t>If a message contains a message body, the message MUST contain
content-headers indicating the Media Type and encoding of the data in
the message body.</t>
<t>Request, response and event messages (described in following
sections) include the version of MRCP that the message conforms to.
Version compatibility rules follow [H3.1] regarding version ordering,
compliance requirements, and upgrading of version numbers. The version
information is indicated by "MRCP" (as opposed to "HTTP" in [H3.1]) or
"MRCP/2.0" (as opposed to "HTTP/1.1" in [H3.1]). To be compliant with
this specification, clients and servers sending MRCPv2 messages MUST
indicate an mrcp-version of "MRCP/2.0".</t>
<figure>
<artwork><![CDATA[
mrcp-version = "MRCP" "/" 1*2DIGIT "." 1*2DIGIT
]]></artwork>
</figure>
<t></t>
<t>The message-length field specifies the length of the message in
octets, including the start-line, and MUST be the second token from
the beginning of the message. This makes framing and parsing of the
message simpler. This field specifies the length of the
message including data that may be encoded into the body of the
message. Note that this value MAY be printed as a fixed-length integer
that is zero-padded in front in order to eliminate or reduce
inefficiency in cases where the message-length value would change as a
result of the length of the message-length token itself. This value,
as with all lengths in MRCP, is to be interpreted as a base-10 number.
In particular, leading zeros do not indicate that the value is to be
interpreted as a base-8 number.</t>
<figure>
<artwork><![CDATA[
message-length = 1*19DIGIT
]]></artwork>
</figure>
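<t>Because the message-length token counts its own octets, a sender
must solve for a self-consistent value or, as the text permits,
zero-pad the token to a fixed width. A Python sketch (the function and
parameter names are illustrative):</t>
<figure>
<artwork><![CDATA[
```python
def frame_mrcp_message(version: str, after_length: str,
                       pad_width: int = 0) -> str:
    """Insert the message-length token as the second token.

    `after_length` is everything that follows the length token:
    the rest of the start-line, the headers, and the body.
    """
    # Octets of the message excluding the length token itself
    # (the two separating SP octets are counted here).
    base = len(version) + 2 + len(after_length)
    length = base
    # Iterate until the declared length equals the actual length,
    # since adding digits to the token lengthens the message.
    while base + len(str(length).zfill(pad_width)) != length:
        length = base + len(str(length).zfill(pad_width))
    return "%s %s %s" % (version, str(length).zfill(pad_width),
                         after_length)
```
]]></artwork>
</figure>
<t>With a nonzero pad_width, the zero-padded token keeps its width
stable, which is the inefficiency-avoidance trick the paragraph above
describes.</t>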
<t></t>
<t>All MRCPv2 messages, responses and events MUST carry the
Channel-Identifier header field so the server or client can
differentiate messages from different control channels that may share
the same transport connection.</t>
<t>In the resource-specific header field descriptions in sections
8-11, a header field is disallowed on a method (request, response, or
event) for that resource unless specifically listed as being allowed.
Also, the phrasing "This header field MAY occur on method X" indicates
that the header field is allowed on that method but is not required to
be used in every instance of that method.</t>
</section>
<section anchor="sec.request" title="Request">
<t>An MRCPv2 request consists of a Request line followed by the
message header section and an optional message body containing data
specific to the request message.</t>
<t>The Request message from a client to the server includes within the
first line the method to be applied, a method tag for that request and
the version of the protocol in use.</t>
<figure>
<artwork><![CDATA[
request-line = mrcp-version SP message-length SP method-name
               SP request-id CRLF
]]></artwork>
</figure>
<t>The mrcp-version field is the MRCP protocol version that is being
used by the client.</t>
<t>The message-length field specifies the length of the message,
including the start-line.</t>
<t>Details about the mrcp-version and message-length fields are given
in <xref target="sec.common"></xref>.</t>
<t>The method-name field identifies the specific request that the
client is making to the server. Each resource supports a subset of the
MRCPv2 methods. The subset for each resource is defined in the section
of the specification for the corresponding resource.</t>
<figure>
<artwork><![CDATA[
method-name = generic-method
            / synthesizer-method
            / recorder-method
            / recognizer-method
            / verifier-method
]]></artwork>
</figure>
<t>The request-id field is a unique identifier representable as an
unsigned 32-bit integer, created by the client and sent to the server.
Consecutive requests within an MRCP session MUST use monotonically
increasing request-ids. The request-id space is linear (i.e., not
modulo 2^32), so the space does not wrap and validity can be checked
with a simple unsigned comparison operation. The client may choose any
initial value for its first request, but a small integer is
RECOMMENDED to avoid exhausting the space in long sessions. If the
server receives a duplicate or out-of-order request, the server MUST
reject the request with a response code of 410. Since request-ids are
scoped to the MRCP session, they are unique across all TCP connections
and all resource channels in the session.</t>
<t>The server resource MUST use the client-assigned identifier in its
response to the request. If the request does not complete
synchronously, future asynchronous events associated with this request
MUST carry the client-assigned request-id.</t>
<figure>
<artwork><![CDATA[
request-id = 1*10DIGIT
]]></artwork>
</figure>
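<t>A server-side sketch of this rule in Python (the class and method
names are illustrative): one remembered value per session and an
unsigned comparison are enough, since the request-id space does not
wrap:</t>
<figure>
<artwork><![CDATA[
```python
class RequestIdChecker:
    """Tracks the last request-id accepted on an MRCPv2 session.

    A duplicate or out-of-order request is any id not strictly
    greater than the last accepted one.
    """

    def __init__(self) -> None:
        self.last_id = None  # no request accepted yet

    def check(self, request_id: int) -> int:
        """Return the MRCPv2 status code: 200 accept, 410 reject."""
        if self.last_id is not None and request_id <= self.last_id:
            return 410  # non-monotonic or out-of-order request-id
        self.last_id = request_id
        return 200
```
]]></artwork>
</figure>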
</section>
<section anchor="sec.response" title="Response">
<t>After receiving and interpreting the request message for a method,
the server resource responds with an MRCPv2 response message. The
response consists of a response line followed by the message header
section and an optional message body containing data specific to the
method.</t>
<figure>
<artwork><![CDATA[
response-line = mrcp-version SP message-length SP request-id
                SP status-code SP request-state CRLF
]]></artwork>
</figure>
<t>The mrcp-version field MUST contain the version of the request if
supported; otherwise, it MUST contain the highest version of the
MRCPv2 protocol supported by the server.</t>
<t>The message-length field specifies the length of the message,
including the start-line.</t>
<t>Details about the mrcp-version and message-length fields are given
in <xref target="sec.common"></xref>.</t>
<t>The request-id used in the response MUST match the one sent in the
corresponding request message.</t>
<t>The status-code field is a 3-digit code representing the success or
failure or other status of the request.</t>
<t>The request-state field indicates if the action initiated by the
Request is PENDING, IN-PROGRESS or COMPLETE. The COMPLETE status means
that the Request was processed to completion and that there will be no
more events or other messages from that resource to the client with
that request-id. The PENDING status means that the request has been
placed on a queue and will be processed in first-in-first-out order.
The IN-PROGRESS status means that the request is being processed and
is not yet complete. A PENDING or IN-PROGRESS status indicates that
further Event messages may be delivered with that request-id.</t>
<figure>
<artwork><![CDATA[
request-state = "COMPLETE"
/ "IN-PROGRESS"
/ "PENDING"
]]></artwork>
</figure>
</section>
<section anchor="sec.statusCodes" title="Status Codes">
<t>The status codes are classified as Success (2xx) codes, Client
Failure (4xx) codes, and Server Failure (5xx) codes.</t>
<texttable title="Success 2xx">
<preamble>Success Codes</preamble>
<ttcol width="15%">Code</ttcol>
<ttcol>Meaning</ttcol>
<c>200</c>
<c>Success</c>
<c>201</c>
<c>Success with some optional header fields ignored</c>
</texttable>
<texttable title="Client Failure 4xx">
<preamble>Client Failure 4xx Codes</preamble>
<ttcol width="15%">Code</ttcol>
<ttcol>Meaning</ttcol>
<c>401</c>
<c>Method not allowed</c>
<c>402</c>
<c>Method not valid in this state</c>
<c>403</c>
<c>Unsupported header field</c>
<c>404</c>
<c>Illegal value for header field. This is the error for a syntax
violation.</c>
<c>405</c>
<c>Resource not allocated for this session or does not exist</c>
<c>406</c>
<c>Mandatory Header Field Missing</c>
<c>407</c>
<c>Method or Operation Failed (e.g., grammar compilation failed in
the recognizer; detailed cause codes MAY be available through a
resource-specific header field)</c>
<c>408</c>
<c>Unrecognized or unsupported message entity</c>
<c>409</c>
<c>Unsupported Header Field Value. This is a value that is
syntactically legal but exceeds the implementation's capabilities or
expectations.</c>
<c>410</c>
<c>Non-Monotonic or Out of order sequence number in request.</c>
<c>411-420</c>
<c>Reserved for future assignment</c>
</texttable>
<texttable title="Server Failure 4xx">
<preamble>Server Failure 5xx Codes</preamble>
<ttcol width="15%">Code</ttcol>
<ttcol>Meaning</ttcol>
<c>501</c>
<c>Server Internal Error</c>
<c>502</c>
<c>Protocol Version not supported</c>
<c>503</c>
<c>Proxy Timeout. The MRCP Proxy did not receive a response from the
MRCP server.</c>
<c>504</c>
<c>Message too large</c>
</texttable>
</section>
<section anchor="sec.events" title="Events">
<t>The server resource may need to communicate a change in state or
the occurrence of a certain event to the client. These messages are
used when a request does not complete immediately and the response
returns a status of PENDING or IN-PROGRESS. The intermediate results
and events of the request are indicated to the client through the
event message from the server. The event message consists of an event
header line followed by the message header section and an optional
message body containing data specific to the event message. The event
line contains the request-id of the corresponding request and a
request-state value. The request-state value is COMPLETE if the
request is done and this was the last event; otherwise, it is
IN-PROGRESS.</t>
<figure>
<artwork><![CDATA[
event-line = mrcp-version SP message-length SP event-name
             SP request-id SP request-state CRLF]]></artwork>
</figure>
<t>The mrcp-version used here is identical to the one used in the
Request/Response Line and indicates the version of the MRCPv2 protocol
running on the server.</t>
<t>The message-length field specifies the length of the message,
including the start-line.</t>
<t>Details about the mrcp-version and message-length fields are given
in <xref target="sec.common"></xref>.</t>
<t>The event-name identifies the nature of the event generated by the
media resource. The set of valid event names depends on the resource
generating it. See the corresponding resource-specific section of the
document.</t>
<figure>
<artwork><![CDATA[
event-name = synthesizer-event
           / recognizer-event
           / recorder-event
           / verifier-event]]></artwork>
</figure>
<t>The request-id used in the event MUST match the one sent in the
request that caused this event.</t>
<t>The request-state indicates whether the Request/Command causing
this event is complete or still in progress, and is the same as the
one mentioned in <xref target="sec.response"></xref>. The final event
for a request has a COMPLETE status indicating the completion of the
request.</t>
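<t>The three start-line forms (request-line, response-line, and
event-line) can be told apart by their token shapes: only a
response-line carries the numeric request-id as its third token, and a
request-line has no request-state token. A Python sketch with
illustrative names:</t>
<figure>
<artwork><![CDATA[
```python
def parse_start_line(line: str) -> dict:
    """Classify an MRCPv2 start-line as request, response, or event.

    request-line:  mrcp-version message-length method-name request-id
    response-line: mrcp-version message-length request-id status-code
                   request-state
    event-line:    mrcp-version message-length event-name request-id
                   request-state
    """
    parts = line.split(" ")
    version, length = parts[0], int(parts[1])
    if parts[2].isdigit():  # only a response has request-id third
        return {"kind": "response", "version": version,
                "length": length, "request_id": int(parts[2]),
                "status_code": int(parts[3]),
                "request_state": parts[4]}
    if len(parts) == 4:     # a request has no request-state token
        return {"kind": "request", "version": version,
                "length": length, "method_name": parts[2],
                "request_id": int(parts[3])}
    return {"kind": "event", "version": version, "length": length,
            "event_name": parts[2], "request_id": int(parts[3]),
            "request_state": parts[4]}
```
]]></artwork>
</figure>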
</section>
</section>
<section title="MRCPv2 Generic Methods, Headers, and Result Structure">
<t>MRCPv2 supports a set of methods and header fields that are common to
all resources. These are discussed here; resource-specific methods and
header fields are discussed in the corresponding resource-specific
section of the document.</t>
<section anchor="sec.genericMethods" title="Generic Methods">
<t>MRCPv2 supports two generic methods for reading and writing the
state associated with a resource.</t>
<figure>
<artwork><![CDATA[
generic-method = "SET-PARAMS"
/ "GET-PARAMS"]]></artwork>
</figure>
<t></t>
<t>These are described in the following sub-sections.</t>
<section title="SET-PARAMS">
<t>The <spanx style="verb">SET-PARAMS</spanx> method, from the
client to the server, tells the MRCPv2 resource to define parameters
for the session, such as voice characteristics and prosody on
synthesizers, recognition timers on recognizers, etc. If the server
accepts and sets all parameters it MUST return a Response-Status of
200. If it chooses to ignore some optional header fields that can be
safely ignored without affecting operation of the server it MUST
return 201.</t>
<t>If one or more of the header fields being sent is incorrect,
error 403, 404, or 409 MUST be returned as follows:<list
style="symbols">
<t>If one or more of the header fields being set has an illegal
value, the server MUST reject the request with a 404 Illegal
Value for Header Field.</t>
<t>If one or more of the header fields being set is unsupported
for the resource, the server MUST reject the request with a 403
Unsupported Header Field, except as described in the next
paragraph.</t>
<t>If one or more of the header fields being set has an
unsupported value, the server MUST reject the request with a 409
Unsupported Header Field Value, except as described in the next
paragraph.</t>
</list></t>
<t>If both error 404 and another error have occurred, only error 404
MUST be returned. If both errors 403 and 409 have occurred, but not
error 404, only error 403 MUST be returned.</t>
<t>If error 403, 404, or 409 is returned, the response MUST include
the bad or unsupported header fields and their values exactly as
they were sent from the client. Session parameters modified using
<spanx style="verb">SET-PARAMS</spanx> do not override parameters
explicitly specified on individual requests or on requests that are
IN-PROGRESS.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 124 SET-PARAMS 543256
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:female
Voice-variant:3
S->C: MRCP/2.0 47 543256 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
]]></artwork>
</figure>
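<t>The response-code precedence described above can be condensed into
one small decision function (a Python sketch; the flag names are
illustrative and would come from validating each header field in the
request):</t>
<figure>
<artwork><![CDATA[
```python
def set_params_status(any_illegal_value: bool,
                      any_unsupported_field: bool,
                      any_unsupported_value: bool,
                      ignored_optional_fields: bool) -> int:
    """Pick the SET-PARAMS response code per the stated precedence.

    404 wins over everything; 403 wins over 409; otherwise 201 when
    safely ignorable optional fields were skipped, else 200.
    """
    if any_illegal_value:
        return 404
    if any_unsupported_field:
        return 403
    if any_unsupported_value:
        return 409
    return 201 if ignored_optional_fields else 200
```
]]></artwork>
</figure>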
</section>
<section title="GET-PARAMS">
<t>The <spanx style="verb">GET-PARAMS</spanx> method, from the
client to the server, asks the MRCPv2 resource for its current
session parameters, such as voice characteristics and prosody on
synthesizers, recognition-timer on recognizers, etc. For every
header field the client sends in the request without a value, the
server MUST include the corresponding header fields and their values
in the response. If no parameter header fields are specified by the
client then the server MUST return all the settable parameters and
their values in the corresponding header section of the response,
including vendor-specific parameters. Such wildcard parameter
requests can be very processing-intensive, since the number of
settable parameters can be large depending on the implementation.
Hence, it is RECOMMENDED that the client not use the wildcard <spanx
style="verb">GET-PARAMS</spanx> operation very often. Note that
<spanx style="verb">GET-PARAMS</spanx> returns header field values
that apply to the whole session and not values that have a request
level scope. For example, Input-Waveform-URI is a request-level
header field and thus would not be returned by GET-PARAMS.</t>
<t>If all of the header fields requested are supported, the server
MUST return a Response-Status of 200. If some of the header fields
being retrieved are unsupported for the resource, the server MUST
reject the request with a 403 Unsupported Header Field. Such a
response MUST include the (empty) unsupported header fields exactly
as they were sent from the client.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 136 GET-PARAMS 543256
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:
Voice-variant:
Vendor-Specific-Parameters:com.example.param1;
    com.example.param2
S->C: MRCP/2.0 163 543256 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:female
Voice-variant:3
Vendor-Specific-Parameters:com.example.param1="Company Name";
com.example.param2="124324234@example.com"
]]></artwork>
</figure>
</section>
</section>
<section anchor="sec.genericHeaders" title="Generic Message Headers">
<t>All MRCPv2 header fields, which include both the generic-headers
defined in the following sub-sections and the resource-specific header
fields defined later, follow the same generic format as that given in
Section 3.1 of <xref target="RFC5322">RFC5322</xref>. Each header
field consists of a name followed by a colon (":") and the value.
Header field names are case-insensitive. The value MAY be preceded by
any amount of LWS, though a single SP is preferred. Header fields may
extend over multiple lines by preceding each extra line with at least
one SP or HT.</t>
<figure>
<artwork><![CDATA[
message-header = field-name ":" [ field-value ]
field-name = token
field-value = *LWS field-content *( CRLF 1*LWS field-content)
field-content = <the OCTETs making up the field-value
                and consisting of either *TEXT or combinations
                of token, separators, and quoted-string>
]]></artwork>
</figure>
<t>The field-content does not include any leading or trailing LWS
(i.e. linear white space occurring before the first non-whitespace
character of the field-value or after the last non-whitespace
character of the field-value). Such leading or trailing LWS MAY be
removed without changing the semantics of the field value. Any LWS
that occurs between field-content MAY be replaced with a single SP
before interpreting the field value or forwarding the message
downstream.</t>
<t>MRCPv2 servers and clients MUST NOT depend on header field order.
It is "good practice" to send general-header fields first, followed by
request-header or response-header fields, and ending with the
entity-header fields. However, MRCPv2 servers and clients MUST be
prepared to process the header fields in any order. The only exception
to this rule is when there are multiple header fields with the same
name in a message.</t>
<t>Multiple header fields with the same name MAY be present in a
message if and only if the entire value for that header field is
defined as a comma-separated list [i.e., #(values)].</t>
<t>Since vendor-specific parameters may be order-dependent, it MUST be
possible to combine multiple header fields of the same name into one
"name:value" pair without changing the semantics of the message, by
appending each subsequent value to the first, each separated by a
comma. The order in which header fields with the same name are
received is therefore significant to the interpretation of the
combined header field value, and thus an intermediary MUST NOT change
the order of these values when a message is forwarded.</t>
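<t>A sketch of that combining rule in Python (the function name is
illustrative): repeated fields are folded into the first occurrence,
preserving the order in which values were received:</t>
<figure>
<artwork><![CDATA[
```python
def combine_repeated_headers(pairs):
    """Fold repeated header fields into single name/value pairs.

    `pairs` is a list of (name, value) tuples in received order.
    Values of a repeated field are appended to the first occurrence,
    comma-separated, so order-dependent values keep their order.
    """
    combined = {}
    order = []
    for name, value in pairs:
        key = name.lower()  # field names are case-insensitive
        if key in combined:
            combined[key] += "," + value
        else:
            combined[key] = value
            order.append(key)
    return [(key, combined[key]) for key in order]
```
]]></artwork>
</figure>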
<figure>
<artwork><![CDATA[
generic-header = channel-identifier
               / accept
               / active-request-id-list
               / proxy-sync-id
               / accept-charset
               / content-type
               / content-id
               / content-base
               / content-encoding
               / content-location
               / content-length
               / fetch-timeout
               / cache-control
               / logging-tag
               / set-cookie
               / set-cookie2
               / vendor-specific
]]></artwork>
</figure>
<section anchor="sec.channelIdentifier" title="Channel-Identifier">
<t>All MRCPv2 requests, responses and events MUST contain the
Channel-Identifier header field. The value is allocated by the
server when a control channel is added to the session and
communicated to the client by the "a=channel" attribute in the SDP
answer from the server. The header field value consists of 2 parts
separated by the '@' symbol. The first part is an unambiguous string
identifying the MRCPv2 session. The second part is a string token
which specifies one of the media processing resource types listed in
<xref target="sec.resourceTypes"></xref>. The unambiguous string
(first part) MUST be unique among the resource instances managed by
the server and is common to all resource channels with that server
established through a single SIP dialog.</t>
<figure>
<artwork><![CDATA[
channel-identifier = "Channel-Identifier" ":" channel-id CRLF
channel-id = 1*alphanum "@" 1*alphanum
]]></artwork>
</figure>
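<t>A validation sketch for this value in Python (the function name is
illustrative; note that Python's isalnum accepts non-ASCII letters, so
a strict implementation would restrict the check to ASCII alphanum):</t>
<figure>
<artwork><![CDATA[
```python
def parse_channel_identifier(value: str):
    """Split a channel-id into (session_string, resource_type).

    Both halves must be non-empty and alphanumeric per the ABNF
    above; the '@' separator is mandatory.
    """
    session_id, sep, resource_type = value.partition("@")
    if not sep or not session_id.isalnum() \
            or not resource_type.isalnum():
        raise ValueError("malformed channel-id: %r" % value)
    return session_id, resource_type
```
]]></artwork>
</figure>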
</section>
<section title="Accept">
<t>The Accept header field follows the syntax defined in [H14.1].
The semantics are also identical, with the exception that if no
Accept header field is present, the server MUST assume a default
value that is specific to the resource type that is being
controlled. This default value can be changed for a resource on a
session by sending this header field in a SET-PARAMS method. The
current default value of this header field for a resource in a
session can be found through a GET-PARAMS method. This header field
MAY occur on any request.</t>
</section>
<section title="Active-Request-Id-List">
<t>In a request, this header field indicates the list of request-ids
to which the request applies. This is useful when there are multiple
requests that are PENDING or IN-PROGRESS and the client wants this
request to apply to one or more of these specifically.</t>
<t>In a response, this header field returns the list of request-ids
that the method modified or affected. There could be one or more
requests in a request-state of PENDING or IN-PROGRESS. When a method
affecting one or more PENDING or IN-PROGRESS requests is sent from
the client to the server, the response MUST contain the list of
request-ids that were affected or modified by this command in its
header section.</t>
<t>The active-request-id-list is only used in requests and
responses, not in events.</t>
<t>For example, if a <spanx style="verb">STOP</spanx> request with
no active-request-id-list is sent to a synthesizer resource which
has one or more <spanx style="verb">SPEAK</spanx> requests in the
PENDING or IN-PROGRESS state, all <spanx style="verb">SPEAK</spanx>
requests MUST be cancelled, including the one IN-PROGRESS. The
response to the <spanx style="verb">STOP</spanx> request contains in
the active-request-id-list the request-ids of all the <spanx
style="verb">SPEAK</spanx> requests that were terminated. After
sending the STOP response, the server MUST NOT send any
SPEAK-COMPLETE or RECOGNITION-COMPLETE events for the terminated
requests.</t>
<figure>
<artwork><![CDATA[
active-request-id-list = "Active-Request-Id-List" ":"
request-id *("," request-id) CRLF
]]></artwork>
</figure>
</section>
<section title="Proxy-Sync-Id">
<t>When any server resource generates a barge-in-able event, it also
generates a unique tag. The tag is sent as this header field's value
in an event to the client. The client then acts as an intermediary
among the server resources and sends a BARGE-IN-OCCURRED method to
the synthesizer server resource with the Proxy-Sync-Id it received
from the server resource. When the recognizer and synthesizer
resources are part of the same session, they may choose to work
together to achieve quicker interaction and response. Here the
proxy-sync-id helps the resource receiving the event, intermediated
by the client, to decide if this event has been processed through a
direct interaction of the resources. This header field MAY occur
only on events and the BARGE-IN-OCCURRED method.</t>
<figure>
<artwork><![CDATA[
proxy-sync-id = "Proxy-Sync-Id" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Accept-Charset">
<t>See [H14.2]. This specifies the acceptable character sets for
entities returned in the response or events associated with this
request. This is useful in specifying the character set to use in
the NLSML results of a <spanx
style="verb">RECOGNITION-COMPLETE</spanx> event. This header field
is only used on requests.</t>
</section>
<section title="Content-Type">
<t>See [H14.17]. MRCPv2 supports a restricted set of registered
Media Types for content, including speech markup, grammar, and
recognition results. The content types applicable to each MRCPv2
resource-type are specified in the corresponding section of the
document. The multipart content type "multipart/mixed" is
supported to communicate multiple of the above-mentioned content
types, in which case the body parts MUST NOT contain any
MRCPv2-specific header fields. This header field MAY occur on all
messages.</t>
</section>
<section anchor="sec.Content-ID" title="Content-ID">
<t>This header field contains an ID or name for the content by which
it can be referenced. This header field operates according to the
specification in <xref target="RFC2392">RFC2392</xref> and is
required for content disambiguation in multipart messages. In
MRCPv2, whenever the associated content is stored, by either the
client or the server, it MUST be retrievable using this ID. Such
content can be referenced later in a session by addressing it with
the <spanx style="verb">session:</spanx> URI scheme described in
<xref target="sec.sessionURIScheme"></xref>. This header field MAY
occur on all messages.</t>
</section>
<section title="Content-Base">
<t>The content-base entity-header MAY be used to specify the base
URI for resolving relative URLs within the entity.</t>
<figure>
<artwork><![CDATA[
content-base = "Content-Base" ":" absoluteURI CRLF
]]></artwork>
</figure>
<t>Note, however, that the base URI of the contents within the
entity-body may be redefined within that entity-body. An example of
this would be multi-part media, which in turn can have multiple
entities within it. This header field MAY occur on all messages.</t>
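<t>For example, a client sending an inline grammar whose rule
references are relative might supply (the URI shown is
illustrative):</t>
<figure>
<artwork><![CDATA[
Content-Base:http://www.example.com/grammars/
]]></artwork>
</figure>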
</section>
<section title="Content-Encoding">
<t>The content-encoding entity-header is used as a modifier to the
media-type. When present, its value indicates what additional
content encoding has been applied to the entity-body, and thus what
decoding mechanisms must be applied in order to obtain the
media-type referenced by the content-type header field.
Content-encoding is primarily used to allow a document to be
compressed without losing the identity of its underlying media type.
Note that the SDP session can be used to determine accepted
encodings (see <xref target="sec.resourceDiscovery"></xref>). This
header field MAY occur on all messages.</t>
<figure>
<artwork><![CDATA[
content-encoding = "Content-Encoding" ":"
*WSP content-coding
*(*WSP "," *WSP content-coding *WSP )
CRLF
]]></artwork>
</figure>
<t>Content-coding is defined in [H3.5]. An example of its use is</t>
<figure>
<artwork><![CDATA[Content-Encoding:gzip]]></artwork>
</figure>
<t>If multiple encodings have been applied to an entity, the content
encodings MUST be listed in the order in which they were
applied.</t>
</section>
<section title="Content-Location">
<t>The content-location entity-header MAY be used to supply the
resource location for the entity enclosed in the message when that
entity is accessible from a location separate from the requested
resource's URI. Refer to [H14.14].</t>
<figure>
<artwork><![CDATA[
content-location = "Content-Location" ":"
( absoluteURI / relativeURI ) CRLF
]]></artwork>
</figure>
<t>The content-location value is a statement of the location of the
resource corresponding to this particular entity at the time of the
request. This header field is provided for optimization purposes
only. The receiver of this header field MAY assume that the entity
being sent is identical to what would have been retrieved or might
already have been retrieved from the content-location URI.</t>
<t>For example, if the client provided a grammar markup inline, and
it had previously retrieved it from a certain URI, that URI can be
provided as part of the entity, using the content-location header
field. This allows a resource like the recognizer to look into its
cache to see if this grammar was previously retrieved, compiled and
cached. In this case, it might optimize by using the previously
compiled grammar object.</t>
<t>If the content-location is a relative URI, the relative URI is
interpreted relative to the content-base URI. This header field MAY
occur on all messages.</t>
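<t>For example, a client providing an inline grammar that it
originally fetched from a known location might add (the URI shown is
illustrative):</t>
<figure>
<artwork><![CDATA[
Content-Location:http://www.example.com/grammars/date.grxml
]]></artwork>
</figure>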
</section>
<section title="Content-Length">
<t>This header field contains the length of the content of the
message body (i.e., after the double CRLF following the last header
field). Unlike HTTP, it MUST be included in all messages that carry
content beyond the header section. If it is missing, a default value
of zero is assumed. Otherwise, it is interpreted according to
[H14.13]. When a message that has no use for a message body
nevertheless contains one, i.e., the Content-Length is non-zero, the
receiver MUST ignore the content of the message body. This header
field MAY occur on all messages.</t>
</section>
<section title="Fetch Timeout">
<t>When the recognizer or synthesizer needs to fetch documents or
other resources this header field controls the corresponding URI
access properties. This defines the timeout for content that the
server may need to fetch over the network. The value is interpreted
to be in milliseconds and ranges from 0 to an
implementation-specific maximum value. The default value for this
header field is implementation-specific. This header field MAY occur
in <spanx style="verb">DEFINE-GRAMMAR</spanx>, <spanx
style="verb">RECOGNIZE</spanx>, <spanx style="verb">SPEAK</spanx>,
<spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
fetch-timeout = "Fetch-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
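<t>For example, a client willing to wait at most ten seconds for
grammar or markup fetches might send (the value shown is
illustrative):</t>
<figure>
<artwork><![CDATA[
Fetch-Timeout:10000
]]></artwork>
</figure>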
</section>
<section title="Cache-Control">
<t>If the server implements content caching, it MUST adhere to the
cache correctness rules of <xref target="RFC2616">HTTP 1.1</xref>
when accessing and caching stored content. In particular, the
"expires" and "cache-control" header fields of the cached URI or
document MUST be honored and take precedence over the Cache-Control
defaults set by this header field. The cache-control directives are
used to define the default caching algorithms on the server for the
session or request. The scope of the directive is based on the
method it is sent on. If the directives are sent on a <spanx
style="verb">SET-PARAMS</spanx> method, they apply to all requests
for external documents the server makes during that session, unless
overridden by a cache-control header field on an individual request.
If the directives are sent on any other requests they apply only to
external document requests the server makes for that request. An
empty cache-control header field on the <spanx
style="verb">GET-PARAMS</spanx> method is a request for the server
to return the current cache-control directives setting on the
server. This header field MAY occur only on requests.</t>
<figure>
<artwork><![CDATA[
cache-control = "Cache-Control" ":" cache-directive
*("," *LWS cache-directive) CRLF
cache-directive = "max-age" "=" delta-seconds
/ "max-stale" [ "=" delta-seconds ]
/ "min-fresh" "=" delta-seconds
delta-seconds = 1*19DIGIT
]]></artwork>
</figure>
<t>Here delta-seconds is a decimal time value specifying the number
of seconds since the instant the message response or data was
received by the server.</t>
<t>The cache-directives allow the client to ask the server to
override the default cache expiration mechanisms. <list
hangIndent="15" style="hanging">
<t hangText="max-age">Indicates that the client can tolerate the
server using content whose age is no greater than the specified
time in seconds. Unless a max-stale directive is also included,
the client is not willing to accept a response based on stale
data.</t>
<t hangText="min-fresh">Indicates that the client is willing to
accept a server response with cached data whose expiration is no
less than its current age plus the specified time in seconds. If
the server's cache time to live exceeds the client-supplied
min-fresh value, the server MUST NOT utilize cached content.</t>
<t hangText="max-stale">Indicates that the client is willing to
allow a server to utilize cached data that has exceeded its
expiration time. If max-stale is assigned a value, then the
client is willing to allow the server to use cached data that
has exceeded its expiration time by no more than the specified
number of seconds. If no value is assigned to max-stale, then
the client is willing to allow the server to use stale data of
any age.</t>
</list></t>
<t>The server cache MAY be requested to use stale response/data
without validation, but only if this does not conflict with any
"MUST"-level requirements concerning cache validation (e.g., a
"must-revalidate" cache-control directive in the HTTP 1.1
specification pertaining to the corresponding URI).</t>
<t>If both the MRCPv2 cache-control directive and the cached entry
on the server include "max-age" directives, then the lesser of the
two values is used for determining the freshness of the cached entry
for that request.</t>
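<t>For example, a client that can tolerate documents cached for up
to a day, or stale by at most a minute, might send (the values shown
are illustrative):</t>
<figure>
<artwork><![CDATA[
Cache-Control:max-age=86400,max-stale=60
]]></artwork>
</figure>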
</section>
<section title="Logging-Tag">
<t>This header field MAY be sent as part of a <spanx
style="verb">SET-PARAMS</spanx>/<spanx
style="verb">GET-PARAMS</spanx> method to set or retrieve the
logging tag for logs generated by the server. Once set, the value
persists until a new value is set or the session ends. The MRCPv2
server MAY provide a mechanism to subset its output logs so that
system administrators can examine or extract only the log file
portion during which the logging tag was set to a certain value.</t>
<t>It is RECOMMENDED that clients have some identifying information
in the logging tag, so that one can determine which client request
generated a given log message at the server.</t>
<figure>
<artwork><![CDATA[
logging-tag = "Logging-Tag" ":" 1*UTFCHAR CRLF
]]></artwork>
</figure>
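<t>For example, a client might tag server logs with an identifier
for the call it is servicing (the value shown is illustrative):</t>
<figure>
<artwork><![CDATA[
Logging-Tag:client24-call-83546
]]></artwork>
</figure>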
</section>
<section anchor="sec.SetCookie" title="Set-Cookie and Set-Cookie2">
<t>Since the associated HTTP client on an MRCPv2 server fetches
documents for processing on behalf of the MRCPv2 client, the cookie
store in the HTTP client of the MRCPv2 server is treated as an
extension of the cookie store in the HTTP client of the MRCPv2
client. This requires that the MRCPv2 client and server be able to
synchronize their common cookie store as needed. To enable the
MRCPv2 client to push its stored cookies to the MRCPv2 server and
get new cookies from the MRCPv2 server stored back to the MRCPv2
client, the set-cookie and set-cookie2 entity-header fields MAY be
included in MRCPv2 requests to update the cookie store on a server
and be returned in final MRCPv2 responses or events to subsequently
update the client's own cookie store. The stored cookies on the
server persist for the duration of the MRCPv2 session and MUST be
destroyed at the end of the session. To ensure support for the type
of cookie header field dictated by the HTTP origin server, MRCPv2
clients and servers MUST support both the set-cookie and set-cookie2
entity header fields.</t>
<figure>
<artwork><![CDATA[
set-cookie = "Set-Cookie:" cookies CRLF
cookies = cookie *("," *LWS cookie)
cookie = attribute "=" value *(";" cookie-av)
cookie-av = "Comment" "=" value
/ "Domain" "=" value
/ "Max-Age" "=" value
/ "Path" "=" value
/ "Secure"
/ "Version" "=" 1*19DIGIT
/ "Age" "=" delta-seconds
set-cookie2 = "Set-Cookie2:" cookies2 CRLF
cookies2 = cookie2 *("," *LWS cookie2)
cookie2 = attribute "=" value *(";" cookie-av2)
cookie-av2 = "Comment" "=" value
/ "CommentURL" "=" DQUOTE uri DQUOTE
/ "Discard"
/ "Domain" "=" value
/ "Max-Age" "=" value
/ "Path" "=" value
/ "Port" [ "=" DQUOTE portlist DQUOTE ]
/ "Secure"
/ "Version" "=" 1*19DIGIT
/ "Age" "=" delta-seconds
portlist = portnum *("," *LWS portnum)
portnum = 1*19DIGIT
]]></artwork>
</figure>
<t>The set-cookie and set-cookie2 header fields are specified in
<xref target="RFC2109">RFC2109</xref> and <xref
target="RFC2965">RFC2965</xref>, respectively. The "Age" attribute
is introduced in this specification to indicate the age of the
cookie and is optional. An MRCPv2 client or server MUST calculate
the age of the cookie according to the age calculation rules in the
<xref target="RFC2616">HTTP/1.1 specification</xref> and append the
"Age" attribute accordingly.</t>
<t>The MRCPv2 client or server MUST supply defaults for the Domain
and Path attributes if omitted by the HTTP origin server as
specified in RFC2109 (set-cookie) and RFC2965 (set-cookie2). Note
that there is no leading dot present in the Domain attribute value
in this case. Although an explicitly specified Domain value received
via the HTTP protocol may be modified to include a leading dot, an
MRCPv2 client or server MUST NOT modify the Domain value when
received via the MRCPv2 protocol.</t>
<t>An MRCPv2 client or server MAY combine multiple cookie header
fields of the same type into a single "field-name:field-value" pair
as described in <xref target="sec.genericHeaders"></xref>.</t>
<t>The set-cookie and set-cookie2 header fields MAY be specified in
any request that subsequently results in the server performing an
HTTP access. When a server receives new cookie information from an
HTTP origin server, and assuming the cookie store is modified
according to RFC2109 or RFC2965, the server MUST return the new cookie
information in the MRCPv2 COMPLETE response or event as appropriate
to allow the client to update its own cookie store.</t>
<t>The <spanx style="verb">SET-PARAMS</spanx> request MAY specify
the set-cookie and set-cookie2 header fields to update the cookie
store on a server. The GET-PARAMS request MAY be used to return the
entire cookie store of "Set-Cookie" or "Set-Cookie2" type cookies to
the client.</t>
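<t>For example, a client pushing a cookie it previously received
from an HTTP origin server might include (all values shown are
illustrative):</t>
<figure>
<artwork><![CDATA[
Set-Cookie:session_id=1234;Domain=example.com;Path=/;Age=120
]]></artwork>
</figure>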
</section>
<section title="Vendor Specific Parameters">
<t>This set of header fields allows for the client to set or
retrieve Vendor Specific parameters.</t>
<figure>
<artwork><![CDATA[
vendor-specific = "Vendor-Specific-Parameters" ":"
[vendor-specific-av-pair
*(";" vendor-specific-av-pair)] CRLF
vendor-specific-av-pair = vendor-av-pair-name "="
value
]]></artwork>
</figure>
<t>Header fields of this form MAY be sent in any method (request)
and are used to manage implementation-specific parameters on the
server side. The vendor-av-pair-name follows the reverse Internet
Domain Name convention (see <xref
target="sec.vendorSpecificRegistration"></xref> for syntax and
registration information). The value of the vendor attribute is
specified after the "=" symbol and MAY be quoted. For example:</t>
<figure>
<artwork><![CDATA[
com.example.companyA.paramxyz=256
com.example.companyA.paramabc=High
com.example.companyB.paramxyz=Low
]]></artwork>
</figure>
<t>When used in GET-PARAMS to get the current value of these
parameters from the server, this header field value may contain a
semicolon-separated list of implementation-specific attribute
names.</t>
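<t>For example, a GET-PARAMS request asking the server for the
current values of two implementation-specific parameters might carry
(the parameter names are illustrative):</t>
<figure>
<artwork><![CDATA[
Vendor-Specific-Parameters:com.example.companyA.paramxyz;
                           com.example.companyA.paramabc
]]></artwork>
</figure>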
</section>
</section>
<section anchor="sec.result" title="Generic Result Structure">
<t>Result data from the server for the Recognizer and Verification
resources is carried as a typed media entity in the MRCPv2 message
body of various events. The Natural Language Semantics Markup Language
(NLSML), an XML markup based on an early draft from the W3C, is the
default standard for returning results back to the client. Hence, all
servers implementing these resource types MUST support the Media Type
application/nlsml+xml. The <xref
target="W3C.REC-emma-20090210">Extensible MultiModal Annotation</xref>
format can be used to return results as well. This can be done by
negotiating the format at session establishment time with SDP
(a=resultformat:application/emma+xml) or with SIP (Allow/Accept). With
SIP, for example, if a client wants results in EMMA, an MRCPv2 proxy
can route the request to a server that supports EMMA by inspecting the
SIP header fields, rather than having to introspect into the SDP.</t>
<t>MRCPv2 uses this representation to convey content among the clients
and servers that generate and make use of the markup. MRCPv2 uses
NLSML specifically to convey recognition, enrollment, and verification
results between the corresponding resource on the MRCPv2 server and
the MRCPv2 client. Details of this result format are fully described
in <xref target="sec.NLSML"></xref>.</t>
<figure title="Result Example">
<artwork><![CDATA[
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="http://theYesNoGrammar">
<interpretation>
<instance>
<ex:response>yes</ex:response>
</instance>
<input>ok</input>
</interpretation>
</result>
]]></artwork>
</figure>
<section anchor="sec.NLSML"
title="Natural Language Semantics Markup Language">
<t>The Natural Language Semantics Markup Language (NLSML) is an XML
data structure with elements and attributes designed to carry result
information from recognizer (including enrollment) and verification
resources. The normative definition of NLSML is the RelaxNG schema
in <xref target="sec.schema.NLSML"></xref>. Note that the elements
and attributes of this format are defined in the MRCPv2 namespace.
In the result structure, they must either be prefixed by a namespace
prefix declared within the result or must be children of an element
identified as belonging to the respective namespace. For details on
how to use XML Namespaces, see <xref
target="W3C.REC-xml-names11-20040204"></xref>. Section 2 of <xref
target="W3C.REC-xml-names11-20040204"></xref> provides details on
how to declare namespaces and namespace prefixes.</t>
<t>The root element of NLSML is &lt;result&gt;. Optional child
elements are &lt;interpretation&gt;, &lt;enrollment-result&gt;, and
&lt;verification-result&gt;, at least one of which must be present.
A single &lt;result&gt; may contain all of the optional child
elements. Details of the &lt;result&gt; and &lt;interpretation&gt;
elements and their subelements and attributes can be found in <xref
target="sec.recognizerResults"></xref>. Details of the
&lt;enrollment-result&gt; element and its subelements can be found
in <xref target="sec.enrollmentResults"></xref>. Details of the
&lt;verification-result&gt; element and its subelements can be found
in <xref target="sec.verificationResults"></xref>.</t>
</section>
</section>
</section>
<section anchor="sec.resourceDiscovery" title="Resource Discovery">
<t>Server resources may be discovered and their capabilities learned by
clients through standard SIP machinery. The client can issue a SIP
OPTIONS transaction to a server, which has the effect of requesting the
capabilities of the server. The server MUST respond to such a request
with an SDP-encoded description of its capabilities according to <xref
target="RFC3264">RFC3264</xref>. The MRCPv2 capabilities are described
by a single m-line containing the media type "application" and transport
type "TCP/TLS/MRCPv2" or "TCP/MRCPv2". There MUST be one "resource"
attribute for each media resource that the server supports with the
resource type identifier as its value.</t>
<t>The SDP description MUST also contain m-lines describing the audio
capabilities and the coders the server supports.</t>
<figure title="Using SIP OPTIONS for MRCPv2 Server Capability Discovery">
<preamble>In this example, the client uses the SIP OPTIONS method to
query the capabilities of the MRCPv2 server.</preamble>
<artwork><![CDATA[
C->S:
OPTIONS sip:mrcp@server.example.com SIP/2.0
Max-Forwards:6
To:<sip:mrcp@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:63104 OPTIONS
Contact:<sip:sarvi@client.example.com>
Accept:application/sdp
Content-Length:...
S->C:
SIP/2.0 200 OK
To:<sip:mrcp@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:63104 OPTIONS
Contact:<sip:mrcp@server.example.com>
Allow:INVITE, ACK, CANCEL, OPTIONS, BYE
Accept:application/sdp
Accept-Encoding:gzip
Accept-Language:en
Supported:foo
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842807 IN IP4 192.0.2.4
s=
i=MRCPv2 server capabilities
c=IN IP4 192.0.2.12/127
m=application 0 TCP/TLS/MRCPv2 1
a=resource:speechsynth
a=resource:speechrecog
a=resource:speakverify
m=audio 0 RTP/AVP 0 3
a=rtpmap:0 PCMU/8000
a=rtpmap:3 GSM/8000
]]></artwork>
</figure>
</section>
<section anchor="sec.synthesizerResource"
title="Speech Synthesizer Resource">
<t>This resource processes text markup provided by the client and
generates a stream of synthesized speech in real-time. Depending upon
the server implementation and capability of this resource, the client
can also dictate parameters of the synthesized speech such as voice
characteristics, speaker speed, etc.</t>
<t>The synthesizer resource is controlled by MRCPv2 requests from the
client. Similarly, the resource can respond to these requests or
generate asynchronous events to the client to indicate conditions of
interest to the client during the generation of the synthesized speech
stream.</t>
<t>This section applies for the following resource types: <list>
<t>speechsynth</t>
<t>basicsynth</t>
</list></t>
<t>The capabilities of these resources are defined in <xref
target="sec.resourceTypes"></xref>.</t>
<section title="Synthesizer State Machine">
<t>The synthesizer maintains a state machine to process MRCPv2
requests from the client. The state transitions shown below describe
the states of the synthesizer and reflect the state of the request at
the head of the synthesizer resource queue. A <spanx
style="verb">SPEAK</spanx> request in the PENDING state can be deleted
or stopped by a <spanx style="verb">STOP</spanx> request without
affecting the state of the resource.</t>
<figure title="Synthesizer State Machine">
<artwork><![CDATA[
Idle Speaking Paused
State State State
| | |
|----------SPEAK-------->| |--------|
|<------STOP-------------| CONTROL |
|<----SPEAK-COMPLETE-----| |------->|
|<----BARGE-IN-OCCURRED--| |
| |---------| |
| CONTROL |-----------PAUSE--------->|
| |-------->|<----------RESUME---------|
| | |----------|
|----------| | PAUSE |
| BARGE-IN-OCCURRED | |--------->|
|<---------| |----------| |
| | SPEECH-MARKER |
| |<---------| |
|----------| |----------| |
| STOP | RESUME |
| | |<---------| |
|<---------| | |
|<---------------------STOP-------------------------|
|----------| | |
| DEFINE-LEXICON | |
| | | |
|<---------| | |
|<---------------BARGE-IN-OCCURRED------------------|
]]></artwork>
</figure>
</section>
<section title="Synthesizer Methods">
<t>The synthesizer supports the following methods.</t>
<figure>
<artwork><![CDATA[
synthesizer-method = "SPEAK"
/ "STOP"
/ "PAUSE"
/ "RESUME"
/ "BARGE-IN-OCCURRED"
/ "CONTROL"
/ "DEFINE-LEXICON"
]]></artwork>
</figure>
</section>
<section title="Synthesizer Events">
<t>The synthesizer may generate the following events.</t>
<figure>
<artwork><![CDATA[
synthesizer-event = "SPEECH-MARKER"
/ "SPEAK-COMPLETE"
]]></artwork>
</figure>
</section>
<section anchor="sec.synthesizeHeaders"
title="Synthesizer Header Fields">
<t>A synthesizer method may contain header fields containing request
options and information to augment the Request, Response or Event it
is associated with.</t>
<figure>
<artwork><![CDATA[
synthesizer-header = jump-size
/ kill-on-barge-in
/ speaker-profile
/ completion-cause
/ completion-reason
/ voice-parameter
/ prosody-parameter
/ speech-marker
/ speech-language
/ fetch-hint
/ audio-fetch-hint
/ failed-uri
/ failed-uri-cause
/ speak-restart
/ speak-length
/ load-lexicon
/ lexicon-search-order
]]></artwork>
</figure>
<section title="Jump-Size">
<t>This header field MAY be specified in a CONTROL method and
controls the amount to jump forward or backward in an active <spanx
style="verb">SPEAK</spanx> request. A + or - indicates a relative
value to what is being currently played. This header field MAY also
be specified in a <spanx style="verb">SPEAK</spanx> request as a desired offset into the
synthesized speech. In this case, the synthesizer MUST begin
speaking from this amount of time into the speech markup. Note that
an offset that extends beyond the end of the produced speech will
result in audio of length zero. The different speech length units
supported are dependent on the synthesizer implementation. If the
synthesizer resource does not support a unit or the operation, the
resource MUST respond with a status code of 409 "Unsupported Header
Field Value".</t>
<figure>
<artwork><![CDATA[
jump-size = "Jump-Size" ":" speech-length-value CRLF
speech-length-value = numeric-speech-length
/ text-speech-length
text-speech-length = 1*UTFCHAR SP "Tag"
numeric-speech-length = ("+" / "-") positive-speech-length
positive-speech-length = 1*19DIGIT SP numeric-speech-unit
numeric-speech-unit = "Second"
/ "Word"
/ "Sentence"
/ "Paragraph"
]]></artwork>
</figure>
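<t>For example, a CONTROL method asking the synthesizer to skip
ahead ten seconds in the active <spanx style="verb">SPEAK</spanx>
request might carry (the value shown is illustrative):</t>
<figure>
<artwork><![CDATA[
Jump-Size:+10 Second
]]></artwork>
</figure>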
</section>
<section anchor="sec.kill-on-barge-in" title="Kill-On-Barge-In">
<t>This header field MAY be sent as part of the <spanx
style="verb">SPEAK</spanx> method to enable kill-on-barge-in
support. If enabled, the <spanx style="verb">SPEAK</spanx> method is
interrupted by DTMF input detected by a signal detector resource or
by the start of speech sensed or recognized by the speech recognizer
resource.</t>
<figure>
<artwork><![CDATA[
kill-on-barge-in = "Kill-On-Barge-In" ":" BOOLEAN CRLF
]]></artwork>
</figure>
<t>The client MUST send a BARGE-IN-OCCURRED method to the
synthesizer resource when it receives a barge-in-able event from any
source. This source could be a synthesizer resource or signal
detector resource and MAY be either local or distributed. If this
header field is not specified in a <spanx style="verb">SPEAK</spanx>
request or explicitly set by a <spanx
style="verb">SET-PARAMS</spanx>, the default value for this header
field is "true".</t>
<t>If the recognizer or signal detector resource is on the same
server as the synthesizer and both are part of the same session, the
server MAY work with both to provide internal notification to the
synthesizer so that audio may be stopped without having to wait for
the client's BARGE-IN-OCCURRED event.</t>
</section>
<section title="Speaker Profile">
<t>This header field MAY be part of the <spanx
style="verb">SET-PARAMS</spanx>/<spanx
style="verb">GET-PARAMS</spanx> or <spanx style="verb">SPEAK</spanx>
request from the client to the server and specifies a URI which
references the profile of the speaker. Speaker profiles are
collections of voice parameters like gender, accent etc.</t>
<figure>
<artwork><![CDATA[
speaker-profile = "Speaker-Profile" ":" uri CRLF]]></artwork>
</figure>
</section>
<section title="Completion Cause">
<t>This header field MUST be specified in a <spanx
style="verb">SPEAK-COMPLETE</spanx> event coming from the
synthesizer resource to the client. This indicates the reason the
<spanx style="verb">SPEAK</spanx> request completed.</t>
<figure>
<artwork><![CDATA[
completion-cause = "Completion-Cause" ":" 3DIGIT SP
1*VCHAR CRLF
]]></artwork>
</figure>
<texttable title="Synthesizer Resource Completion Cause Codes">
<ttcol width="10%">Cause-Code</ttcol>
<ttcol width="35%">Cause-Name</ttcol>
<ttcol>Description</ttcol>
<c>000</c>
<c>normal</c>
<c>SPEAK completed normally.</c>
<c>001</c>
<c>barge-in</c>
<c>SPEAK request was terminated because of barge-in.</c>
<c>002</c>
<c>parse-failure</c>
<c>SPEAK request terminated because of a failure to parse the
speech markup text.</c>
<c>003</c>
<c>uri-failure</c>
<c>SPEAK request terminated because access to one of the URIs
failed.</c>
<c>004</c>
<c>error</c>
<c>SPEAK request terminated prematurely due to synthesizer
error.</c>
<c>005</c>
<c>language-unsupported</c>
<c>Language not supported.</c>
<c>006</c>
<c>lexicon-load-failure</c>
<c>Lexicon loading failed.</c>
<c>007</c>
<c>cancelled</c>
<c>A prior SPEAK request failed while this one was still in the
queue.</c>
</texttable>
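<t>For example, a <spanx style="verb">SPEAK-COMPLETE</spanx> event
for a request interrupted by caller input would carry:</t>
<figure>
<artwork><![CDATA[
Completion-Cause:001 barge-in
]]></artwork>
</figure>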
</section>
<section title="Completion Reason">
<t>This header field MAY be specified in a <spanx
style="verb">SPEAK-COMPLETE</spanx> event coming from the
synthesizer resource to the client. This contains the reason text
behind the <spanx style="verb">SPEAK</spanx> request completion.
This header field communicates text describing the reason for the
failure, such as an error in parsing the speech markup text.</t>
<figure>
<artwork><![CDATA[
completion-reason = "Completion-Reason" ":"
quoted-string CRLF
]]></artwork>
</figure>
<t>The completion reason text is provided for client use in logs and
for debugging and instrumentation purposes. Clients MUST NOT
interpret the completion reason text.</t>
</section>
<section title="Voice-Parameter">
<t>This set of header fields defines the voice of the speaker.</t>
<figure>
<artwork><![CDATA[
voice-parameter = voice-gender
/ voice-age
/ voice-variant
/ voice-name
voice-gender = "Voice-Gender:" voice-gender-value CRLF
voice-gender-value = "male"
/ "female"
/ "neutral"
voice-age = "Voice-Age:" 1*3DIGIT CRLF
voice-variant = "Voice-Variant:" 1*19DIGIT CRLF
voice-name = "Voice-Name:"
1*UTFCHAR *(1*WSP 1*UTFCHAR) CRLF
]]></artwork>
</figure>
<t>The Voice- parameters are derived from the similarly-named
attributes of the voice element specified in W3C's <xref
target="W3C.REC-speech-synthesis-20040907">Speech Synthesis Markup
Language Specification</xref>. Legal values for these parameters are
as defined in that specification.</t>
<t>These header fields MAY be sent in <spanx
style="verb">SET-PARAMS</spanx>/<spanx
style="verb">GET-PARAMS</spanx> request to define/get default values
for the entire session or MAY be sent in the <spanx
style="verb">SPEAK</spanx> request to define default values for that
speak request. Note that SSML content can itself set these values
internal to the SSML document, of course.</t>
<t>Voice parameter header fields MAY also be sent in a CONTROL
method to affect a <spanx style="verb">SPEAK</spanx> request in
progress and change its behavior on the fly. If the synthesizer
resource does not support this operation, it MUST reject the request
with a status of 403 "Unsupported Header Field".</t>
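<t>For example, a <spanx style="verb">SPEAK</spanx> request
selecting a particular voice might carry (the values shown are
illustrative):</t>
<figure>
<artwork><![CDATA[
Voice-Gender:female
Voice-Age:25
Voice-Name:Mary
]]></artwork>
</figure>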
</section>
<section title="Prosody-Parameters">
<t>This set of header fields defines the prosody of the speech.</t>
<figure>
<artwork><![CDATA[
prosody-parameter = "Prosody-" prosody-param-name ":"
prosody-param-value CRLF
]]></artwork>
</figure>
<t>prosody-param-name is any one of the attribute names under the
prosody element specified in W3C's <xref
target="W3C.REC-speech-synthesis-20040907">Speech Synthesis Markup
Language Specification</xref>. The prosody-param-value is any one of
the value choices of the corresponding prosody element attribute
specified in the above section.</t>
<t>These header fields MAY be sent in <spanx
style="verb">SET-PARAMS</spanx>/<spanx
style="verb">GET-PARAMS</spanx> request to define/get default values
for the entire session or MAY be sent in the <spanx
style="verb">SPEAK</spanx> request to define default values for that
speak request. Furthermore, these attributes can be part of the
speech text marked up in SSML.</t>
<t>The prosody parameter header fields in the <spanx
style="verb">SET-PARAMS</spanx> or <spanx style="verb">SPEAK</spanx>
request only apply if the speech data is of type text/plain and does
not use a speech markup format.</t>
<t>These prosody parameter header fields MAY also be sent in a
CONTROL method to affect a <spanx style="verb">SPEAK</spanx> request
in progress and change its behavior on the fly. If the synthesizer
resource does not support this operation, it MUST respond back to
the client with a status of 403 "Unsupported Header Field".</t>
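<t>For example, a <spanx style="verb">SPEAK</spanx> request carrying
plain text might slow the speaking rate and lower the volume with
(the values shown are illustrative choices from the SSML prosody
attribute values):</t>
<figure>
<artwork><![CDATA[
Prosody-rate:slow
Prosody-volume:soft
]]></artwork>
</figure>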
</section>
<section title="Speech Marker">
<t>This header field contains timestamp information in a "timestamp"
field. This is an NTP timestamp, a 64-bit number in decimal form. It
MUST be synced with the RTP timestamp of the media stream through
RTCP.</t>
<t>Markers are bookmarks that are defined within the markup. Most
speech markup formats provide mechanisms to embed marker fields
within speech texts. The synthesizer generates SPEECH-MARKER events
when it reaches these marker fields. This header field MUST be part
of the SPEECH-MARKER event and contain the marker tag value after
the timestamp, separated by a semicolon. In these events the
timestamp marks the time the text corresponding to the marker was
emitted as speech by the synthesizer.</t>
<t>This header field MUST also be returned in responses to STOP,
CONTROL, and BARGE-IN-OCCURRED methods, in the <spanx
style="verb">SPEAK-COMPLETE</spanx> event, and in an IN-PROGRESS
SPEAK response. In these messages, if any markers have been
encountered for the current SPEAK, the marker tag value MUST be the
last embedded marker encountered. If no markers have yet been
encountered for the current SPEAK, only the timestamp is REQUIRED.
Note that in these events the purpose of this header field is to
provide timestamp information associated with important events
within the lifecycle of a request (start of SPEAK processing, end of
SPEAK processing, receipt of CONTROL/STOP/BARGE-IN-OCCURRED).</t>
<figure>
<artwork><![CDATA[
timestamp = "timestamp" "=" time-stamp-value
time-stamp-value = 1*20DIGIT
speech-marker = "Speech-Marker" ":"
timestamp
[";" 1*(UTFCHAR / %x20)] CRLF
]]></artwork>
</figure>
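<t>For example, a SPEECH-MARKER event raised upon reaching an
embedded marker named "marker1" might carry (the timestamp shown is
illustrative):</t>
<figure>
<artwork><![CDATA[
Speech-Marker:timestamp=857206027059;marker1
]]></artwork>
</figure>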
</section>
<section title="Speech Language">
<t>This header field specifies the default language of the speech
data if the language is not specified in the markup. The value of
this header field MUST follow <xref target="RFC4646">RFC4646</xref>
for its values. The header field MAY occur in <spanx
style="verb">SPEAK</spanx>, <spanx style="verb">SET-PARAMS</spanx>
or <spanx style="verb">GET-PARAMS</spanx> requests.</t>
<figure>
<artwork><![CDATA[
speech-language = "Speech-Language" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
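<t>For example, a client defaulting the session to US English might
send:</t>
<figure>
<artwork><![CDATA[
Speech-Language:en-US
]]></artwork>
</figure>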
</section>
<section title="Fetch Hint">
<t>When the synthesizer needs to fetch documents or other resources
like speech markup or audio files, this header field controls the
corresponding URI access properties. This provides client policy on
when the synthesizer should retrieve content from the server. A
value of "prefetch" indicates the content MAY be downloaded when the
request is received, whereas "safe" indicates that content MUST NOT
be downloaded until actually referenced. The default value is
"prefetch". This header field MAY occur in <spanx
style="verb">SPEAK</spanx>, <spanx style="verb">SET-PARAMS</spanx>
or <spanx style="verb">GET-PARAMS</spanx> requests.</t>
<figure>
<artwork><![CDATA[
fetch-hint = "Fetch-Hint" ":" ("prefetch" / "safe") CRLF
]]></artwork>
</figure>
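<t>For example, a client that wants content retrieved only when it is
actually referenced would include:</t>
<figure>
<artwork><![CDATA[
Fetch-Hint:safe
]]></artwork>
</figure>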
</section>
<section title="Audio Fetch Hint">
<t>When the synthesizer needs to fetch documents or other resources
like speech audio files, this header field controls the
corresponding URI access properties. This provides a client policy on
whether the synthesizer may attempt to optimize speech output by
pre-fetching audio. The value is either "safe", indicating that audio
is fetched only when it is referenced, never before; "prefetch",
permitting, but not requiring, the implementation to pre-fetch the
audio; or "stream", allowing the implementation to stream the audio
fetches. The default
value is "prefetch". This header field MAY occur in <spanx
style="verb">SPEAK</spanx>, <spanx style="verb">SET-PARAMS</spanx>
or <spanx style="verb">GET-PARAMS</spanx> requests.</t>
<figure>
<artwork><![CDATA[
audio-fetch-hint = "Audio-Fetch-Hint" ":"
("prefetch" / "safe" / "stream") CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI">
<t>When a synthesizer method requires the synthesizer to fetch or
access a URI and the access fails, the server SHOULD provide the
failed URI in this header field in the method response. If there are
multiple URI failures, the server MUST provide one of the failed URIs
in this header field in the method response.</t>
<figure>
<artwork><![CDATA[
failed-uri = "Failed-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI Cause">
<t>When a synthesizer method requires the synthesizer to fetch or
access a URI and the access fails, the server MUST use this header
field in the method response to provide the URI-specific or
protocol-specific response code for the URI given in the Failed-URI
header field. The value encoding is UTF-8 to accommodate any access
protocol; some protocols might have a response string instead of a
numeric response code.</t>
<figure>
<artwork><![CDATA[failed-uri-cause = "Failed-URI-Cause" ":" 1*UTFCHAR CRLF]]></artwork>
</figure>
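<t>For example, if an HTTP fetch of a (hypothetical) SSML document
failed because the document did not exist, the method response could
carry:</t>
<figure>
<artwork><![CDATA[
Failed-URI:http://www.example.com/missing.ssml
Failed-URI-Cause:404 Not Found
]]></artwork>
</figure>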
</section>
<section title="Speak Restart">
<t>When a CONTROL request to jump backward is issued to a currently
speaking synthesizer resource, and the target jump point is before
the start of the current <spanx style="verb">SPEAK</spanx> request,
the current <spanx style="verb">SPEAK</spanx> request MUST restart
from the beginning of its speech data and the response to the
CONTROL request MUST contain this header field with a value of
"true" indicating a restart.</t>
<figure>
<artwork><![CDATA[
speak-restart = "Speak-Restart" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Speak Length">
<t>This header field MAY be specified in a CONTROL method to control
the length of speech to speak, relative to the current speaking
point in the currently active <spanx style="verb">SPEAK</spanx>
request. If numeric, the value MUST be a positive integer. If a
header field with a Tag unit is specified, then the speech output
continues until the tag is reached or the <spanx
style="verb">SPEAK</spanx> request completes, whichever comes first.
This header field MAY be specified in a <spanx
style="verb">SPEAK</spanx> request to indicate the length to speak
from the speech data and is relative to the point in speech that the
<spanx style="verb">SPEAK</spanx> request starts. The different
speech length units supported are synthesizer implementation
dependent. If a server does not support the specified unit, the
resource MUST respond with a status code of 409 "Unsupported Header
Field Value".</t>
<figure>
<artwork><![CDATA[
speak-length = "Speak-Length" ":" positive-length-value
CRLF
positive-length-value = positive-speech-length
/ text-speech-length
text-speech-length = 1*UTFCHAR SP "Tag"
positive-speech-length = 1*19DIGIT SP numeric-speech-unit
numeric-speech-unit = "Second"
/ "Word"
/ "Sentence"
/ "Paragraph"
]]></artwork>
</figure>
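<t>For example, a client could limit the speech output to the next
two sentences of the active request:</t>
<figure>
<artwork><![CDATA[
Speak-Length:2 Sentence
]]></artwork>
</figure>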
</section>
<section anchor="load-lexicon" title="Load-Lexicon">
<t>This header field is used to indicate whether a lexicon has to be
loaded or unloaded. The default value for this header field is
"true". This header field MAY be specified in a DEFINE-LEXICON
method.</t>
<figure>
<artwork><![CDATA[
load-lexicon = "Load-Lexicon" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section anchor="lexicon-search-order" title="Lexicon-Search-Order">
<t>This header field is used to specify a list of active Lexicon
URIs and the search order among the active lexicons. Lexicons
specified within the SSML document take precedence over the lexicons
specified in this header field. This header field MAY be specified
in the SPEAK, SET-PARAMS, and GET-PARAMS methods.</t>
<figure>
<artwork><![CDATA[
lexicon-search-order = "Lexicon-Search-Order" ":"
"<" absoluteURI ">" *(" " "<" absoluteURI ">") CRLF
]]></artwork>
</figure>
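<t>For example, a client could activate two (hypothetical) lexicons,
with the first taking precedence in the search order:</t>
<figure>
<artwork><![CDATA[
Lexicon-Search-Order:<http://www.example.com/lex1.pls> <http://www.example.com/lex2.pls>
]]></artwork>
</figure>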
</section>
</section>
<section anchor="sec.synthMessageBody" title="Synthesizer Message Body">
<t>A synthesizer message may contain additional information associated
with the Request, Response or Event in its message body.</t>
<section title="Synthesizer Speech Data">
<t>Marked-up text for the synthesizer to speak is specified as a
typed media entity in the message body. The speech data to be spoken
by the synthesizer can be specified inline by embedding the data in
the message body or by reference by providing a URI for accessing
the data. In either case the data and the format used to markup the
speech needs to be of a content type supported by the server.</t>
<t>All MRCPv2 servers containing synthesizer resources MUST support
both plain text speech data and W3C's <xref
target="W3C.REC-speech-synthesis-20040907">Speech Synthesis Markup
Language</xref> and hence MUST support the Media Types text/plain
and application/ssml+xml. Other formats MAY be supported.</t>
<t>If the speech data is to be fetched by URI reference, the Media
Type text/uri-list <xref target="RFC2483">RFC2483</xref> is used to
indicate one or more URIs that, when dereferenced, will contain the
content to be spoken. If a list of speech URIs is specified, speech
data provided by each URI MUST be spoken in the order in which the
URIs are specified in the content.</t>
<t>A mix of URI and inline speech data may be indicated through the
multipart/mixed Media Type. Embedded within the multipart there MAY
be content for the text/uri-list, application/ssml+xml and/or
text/plain media types. The character set and encoding used in the
speech data is specified according to standard Media Type
definitions. The multi-part content MAY also contain actual audio
data. Clients may have recorded audio clips stored in memory or on a
local device and wish to play them as part of the <spanx
style="verb">SPEAK</spanx> request. The audio portions MAY be sent
by the client as part of the multi-part content block. This audio is
referenced in the speech markup data that is another part in the
multi-part content block according to the multipart/mixed Media Type
specification.</t>
<figure title="URI List Example">
<artwork><![CDATA[
Content-Type:text/uri-list
Content-Length:...
http://www.example.com/ASR-Introduction.ssml
http://www.example.com/ASR-Document-Part1.ssml
http://www.example.com/ASR-Document-Part2.ssml
http://www.example.com/ASR-Conclusion.ssml
]]></artwork>
</figure>
<figure title="SSML Example">
<artwork><![CDATA[
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is <prosody
rate="-20%">ski trip</prosody></s>
</p>
</speak>
]]></artwork>
</figure>
<figure title="Multipart Example">
<artwork><![CDATA[
Content-Type:multipart/mixed; boundary="break"
--break
Content-Type:text/uri-list
Content-Length:...
http://www.example.com/ASR-Introduction.ssml
http://www.example.com/ASR-Document-Part1.ssml
http://www.example.com/ASR-Document-Part2.ssml
http://www.example.com/ASR-Conclusion.ssml
--break
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is <prosody
rate="-20%">ski trip</prosody></s>
</p>
</speak>
--break--
]]></artwork>
</figure>
</section>
<section anchor="sec.lexiconData" title="Lexicon Data">
<t>Synthesizer lexicon data from the client to the server can be
provided inline or by reference. Either way they are carried as
typed media in the message body of the MRCPv2 request message (see
<xref target="sec.methodDefineLexicon"></xref>).</t>
<t>When a lexicon is specified in-line in the message, the client
MUST provide a Content-ID for that lexicon as part of the content
header fields. The server MUST store the lexicon associated with
that Content-ID for the duration of the session. A stored lexicon
can be overwritten by defining a new lexicon with the same
Content-ID. Lexicons that have been associated with a Content-ID can
be referenced through the <spanx style="verb">session:</spanx> URI
scheme (see <xref target="sec.sessionURIScheme"></xref>).</t>
<t>If lexicon data is specified by external URI reference, the Media
Type text/uri-list <xref target="RFC2483">RFC2483</xref> is used to
list the one or more URIs that may be dereferenced to obtain the
lexicon data. All MRCPv2 servers MUST support the HTTP and HTTPS URI
access mechanisms and MAY support other mechanisms.</t>
<t>If the data in the message body consists of a mix of URI and
inline lexicon data the multipart/mixed Media Type is used. The
character set and encoding used in the lexicon data may be specified
according to standard Media Type definitions.</t>
</section>
</section>
<section title="SPEAK Method">
<t>The <spanx style="verb">SPEAK</spanx> Request provides the
synthesizer resource with the speech text and initiates speech
synthesis and streaming. The <spanx style="verb">SPEAK</spanx> method
can carry voice and prosody header fields that alter the behavior of
the voice being synthesized, as well as a typed media message body
containing the actual marked-up text to be spoken.</t>
<t>The SPEAK method implementation MUST do a fetch of all external
URIs that are part of that operation. If caching is implemented, this
URI fetching MUST conform to the cache control hints and parameter
header fields associated with the method in deciding whether it is to
be fetched from the cache or from the external server. If these
hints/parameters are not specified in the method, the values set for
the session using SET-PARAMS/GET-PARAMS apply. If they were not set
for the session, their default values apply.</t>
<t>When applying voice parameters, there are three levels of
precedence. Highest precedence goes to parameters specified within
the speech markup text, followed by those specified in the header
fields of the <spanx style="verb">SPEAK</spanx> request (which
therefore apply to that <spanx style="verb">SPEAK</spanx> request
only), followed by the session default values, which can be set using
the <spanx style="verb">SET-PARAMS</spanx> request and apply to
subsequent methods invoked during the session.</t>
<t>If the resource was idle when the <spanx
style="verb">SPEAK</spanx> request arrived at the server, the request
is processed immediately, and the resource responds with a success
status code and a request-state of IN-PROGRESS.</t>
<t>If the resource is in the speaking or paused state when the <spanx
style="verb">SPEAK</spanx> method arrives at the server, i.e. it is in
the middle of processing a previous <spanx style="verb">SPEAK</spanx>
request, the status returns success with a request-state of PENDING.
The server places the <spanx style="verb">SPEAK</spanx> request in the
synthesizer resource request queue. The request queue operates
strictly FIFO: requests are processed serially in order of receipt. If
the current SPEAK fails, all SPEAK methods in the pending queue are
cancelled and each generates a SPEAK-COMPLETE event with a
Completion-Cause of "cancelled".</t>
<t>For the synthesizer resource, <spanx style="verb">SPEAK</spanx> is
the only method that can return a request-state of IN-PROGRESS or
PENDING. When the text has been synthesized and played into the media
stream, the resource issues a <spanx
style="verb">SPEAK-COMPLETE</spanx> event with the request-id of the
<spanx style="verb">SPEAK</spanx> request and a request-state of
COMPLETE.</t>
<figure title="SPEAK Example">
<artwork><![CDATA[
C->S: MRCP/2.0 489 SPEAK 543257
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-Age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams and arrived at
<break/>
<say-as interpret-as="vxml:time">0345p</say-as>.
</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody>
</s>
</p>
</speak>
S->C: MRCP/2.0 28 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
S->C: MRCP/2.0 79 SPEAK-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Completion-Cause:000 normal
Speech-Marker:timestamp=857206027059
]]></artwork>
</figure>
</section>
<section title="STOP">
<t>The <spanx style="verb">STOP</spanx> method from the client to the
server tells the synthesizer resource to stop speaking if it is
speaking something.</t>
<t>The <spanx style="verb">STOP</spanx> request can be sent with an
active-request-id-list header field to stop zero or more specific
<spanx style="verb">SPEAK</spanx> requests that may be in the queue;
the server returns a response code of 200 (Success). If no
active-request-id-list header field is sent in the <spanx
style="verb">STOP</spanx> request, the server terminates all
outstanding <spanx style="verb">SPEAK</spanx> requests.</t>
<t>If a <spanx style="verb">STOP</spanx> request successfully
terminated one or more PENDING or IN-PROGRESS <spanx
style="verb">SPEAK</spanx> requests, then the response MUST contain an
active-request-id-list header field enumerating the <spanx
style="verb">SPEAK</spanx> request-ids that were terminated. Otherwise
there is no active-request-id-list header field in the response. No
<spanx style="verb">SPEAK-COMPLETE</spanx> events are sent for such
terminated requests.</t>
<t>If a <spanx style="verb">SPEAK</spanx> request that was IN-PROGRESS
and speaking was stopped, the next pending <spanx
style="verb">SPEAK</spanx> request, if any, becomes IN-PROGRESS at the
resource and enters the speaking state.</t>
<t>If a <spanx style="verb">SPEAK</spanx> request that was IN-PROGRESS
and paused was stopped, the next pending <spanx
style="verb">SPEAK</spanx> request, if any, becomes IN-PROGRESS and
enters the paused state.</t>
<figure title="STOP Example">
<artwork><![CDATA[
C->S: MRCP/2.0 423 SPEAK 543258
Channel-Identifier:32AECB23433802@speechsynth
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams and arrived at
<break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 48 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
C->S: MRCP/2.0 44 STOP 543259
Channel-Identifier:32AECB23433802@speechsynth
S->C: MRCP/2.0 66 543259 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
Speech-Marker:timestamp=857206039059
]]></artwork>
</figure>
</section>
<section title="BARGE-IN-OCCURRED">
<t>The BARGE-IN-OCCURRED method, when used with the synthesizer
resource, provides a client which has detected a barge-in-able event a
means to communicate the occurrence of the event to the synthesizer
resource.</t>
<t>This method is useful in two scenarios: <list style="numbers">
<t>The client has detected DTMF digits in the input media or some
other barge-in-able event and wants to communicate that to the
synthesizer resource.</t>
<t>The recognizer resource and the synthesizer resource are in
different servers. In this case the client acts as an intermediary
for the two servers. It receives an event from the recognition
resource and sends a BARGE-IN-OCCURRED request to the synthesizer.
In such cases, the BARGE-IN-OCCURRED method would also have a
proxy-sync-id header field received from the resource generating
the original event.</t>
</list></t>
<t>If a <spanx style="verb">SPEAK</spanx> request is active with
kill-on-barge-in enabled (see <xref
target="sec.kill-on-barge-in"></xref>), and a BARGE-IN-OCCURRED
request is received, the synthesizer MUST immediately stop streaming out
audio. It MUST also terminate any speech requests queued behind the
current active one, irrespective of whether they have barge-in enabled
or not. If a barge-in-able <spanx style="verb">SPEAK</spanx> request
was playing and was terminated, the response MUST contain an
active-request-id-list header field listing the request-ids of all <spanx
style="verb">SPEAK</spanx> requests that were terminated. The server
generates no <spanx style="verb">SPEAK-COMPLETE</spanx> events for
these requests.</t>
<t>If there were no <spanx style="verb">SPEAK</spanx> requests
terminated by the synthesizer resource as a result of the
BARGE-IN-OCCURRED method, the server responds to the BARGE-IN-OCCURRED
with a 200 success which MUST NOT contain an active-request-id-list
header field.</t>
<t>If the synthesizer and recognizer resources are part of the same
MRCPv2 session, they can be optimized for a quicker kill-on-barge-in
response if the recognizer and synthesizer interact directly. In these
cases, the client MUST still react to a START-OF-INPUT event from the
recognizer by invoking the BARGE-IN-OCCURRED method to the
synthesizer. The client MUST invoke the BARGE-IN-OCCURRED method if it has
any outstanding requests to the synthesizer resource in either the
PENDING or IN-PROGRESS state.</t>
<figure title="BARGE-IN-OCCURRED Example">
<artwork><![CDATA[
C->S: MRCP/2.0 433 SPEAK 543258
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-Age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams and arrived at
<break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 47 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
C->S: MRCP/2.0 69 BARGE-IN-OCCURRED 543259
Channel-Identifier:32AECB23433802@speechsynth
Proxy-Sync-Id:987654321
S->C:MRCP/2.0 72 543259 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
Speech-Marker:timestamp=857206039059
]]></artwork>
</figure>
</section>
<section title="PAUSE">
<t>The PAUSE method from the client to the server tells the
synthesizer resource to pause speech output if it is speaking
something. If a PAUSE method is issued on a session when a <spanx
style="verb">SPEAK</spanx> is not active the server MUST respond with
a status of 402 "Method not valid in this state". If a PAUSE method is
issued on a session when a <spanx style="verb">SPEAK</spanx> is active
and paused the server MUST respond with a status of 200 "Success". If
a <spanx style="verb">SPEAK</spanx> request was active the server MUST
return an active-request-id-list header field with the request-id of
the <spanx style="verb">SPEAK</spanx> request that was paused.</t>
<figure title="PAUSE Example">
<artwork><![CDATA[
C->S: MRCP/2.0 434 SPEAK 543258
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-Age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams and arrived at
<break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 48 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
C->S: MRCP/2.0 43 PAUSE 543259
Channel-Identifier:32AECB23433802@speechsynth
S->C: MRCP/2.0 68 543259 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
]]></artwork>
</figure>
</section>
<section title="RESUME">
<t>The RESUME method from the client to the server tells a paused
synthesizer resource to resume speaking. If a RESUME request is issued
on a session with no active <spanx style="verb">SPEAK</spanx> request,
the server MUST respond with a status of 402 "Method not valid in this
state". If a RESUME request is issued on a session with an active
<spanx style="verb">SPEAK</spanx> request that is speaking (i.e., not
paused) the server MUST respond with a status of 200 "Success". If a
<spanx style="verb">SPEAK</spanx> request was paused the server MUST
return an active-request-id-list header field with the request-id of
the <spanx style="verb">SPEAK</spanx> request that was resumed.</t>
<figure title="RESUME Example">
<artwork><![CDATA[
C->S: MRCP/2.0 434 SPEAK 543258
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams and arrived at
<break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 48 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
C->S: MRCP/2.0 44 PAUSE 543259
Channel-Identifier:32AECB23433802@speechsynth
S->C: MRCP/2.0 47 543259 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
C->S: MRCP/2.0 44 RESUME 543260
Channel-Identifier:32AECB23433802@speechsynth
S->C: MRCP/2.0 66 543260 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
]]></artwork>
</figure>
</section>
<section title="CONTROL">
<t>The CONTROL method from the client to the server tells a
synthesizer that is speaking to modify what it is speaking on the fly.
This method is used to request the synthesizer to jump forward or
backward in what it is speaking, change the speaking rate, change
voice parameters, etc. It affects only the currently IN-PROGRESS <spanx
style="verb">SPEAK</spanx> request. Depending on the implementation
and capability of the synthesizer resource it may or may not support
the various modifications indicated by header fields in the CONTROL
request.</t>
<t>When a client invokes a CONTROL method to jump forward and the
operation goes beyond the end of the active <spanx
style="verb">SPEAK</spanx> method's text, the CONTROL request still
succeeds. The active <spanx style="verb">SPEAK</spanx> request
completes and returns a <spanx style="verb">SPEAK-COMPLETE</spanx>
event following the response to the CONTROL method. If there are more
<spanx style="verb">SPEAK</spanx> requests in the queue, the
synthesizer resource starts at the beginning of the next <spanx
style="verb">SPEAK</spanx> request in the queue.</t>
<t>When a client invokes a CONTROL method to jump backward and the
operation jumps to the beginning or beyond the beginning of the speech
data of the active <spanx style="verb">SPEAK</spanx> method, the
CONTROL request still succeeds. The response to the CONTROL request
contains the speak-restart header field, and the active <spanx
style="verb">SPEAK</spanx> request restarts from the beginning of its
speech data.</t>
<t>These two behaviors can be used to rewind or fast-forward across
multiple speech requests, if the client wants to break up a speech
markup text to multiple <spanx style="verb">SPEAK</spanx>
requests.</t>
<t>If a <spanx style="verb">SPEAK</spanx> request was active when the
CONTROL method was received the server MUST return an
active-request-id-list header field with the request-id of the <spanx
style="verb">SPEAK</spanx> request that was active.</t>
<figure title="CONTROL Example">
<artwork><![CDATA[
C->S: MRCP/2.0 434 SPEAK 543258
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is <prosody
rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 47 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857205016059
C->S: MRCP/2.0 63 CONTROL 543259
Channel-Identifier:32AECB23433802@speechsynth
Prosody-rate:fast
S->C: MRCP/2.0 67 543259 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
Speech-Marker:timestamp=857206027059
C->S: MRCP/2.0 68 CONTROL 543260
Channel-Identifier:32AECB23433802@speechsynth
Jump-Size:-15 Words
S->C: MRCP/2.0 69 543260 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Active-Request-Id-List:543258
Speech-Marker:timestamp=857206039059
]]></artwork>
</figure>
</section>
<section title="SPEAK-COMPLETE">
<t>This is an Event message from the synthesizer resource to the
client indicating that the corresponding <spanx
style="verb">SPEAK</spanx> request was completed. The request-id
header field matches the request-id of the <spanx
style="verb">SPEAK</spanx> request that initiated the speech that just
completed. The request-state field is set to COMPLETE by the server,
indicating that this is the last event with the corresponding
request-id. The completion-cause header field specifies the cause code
pertaining to the status and reason of request completion such as the
<spanx style="verb">SPEAK</spanx> completed normally or because of an
error, kill-on-barge-in etc.</t>
<figure title="SPEAK-COMPLETE Example">
<artwork><![CDATA[
C->S: MRCP/2.0 434 SPEAK 543260
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is
<prosody rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C: MRCP/2.0 48 543260 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059
S->C: MRCP/2.0 73 SPEAK-COMPLETE 543260 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Completion-Cause:000 normal
Speech-Marker:timestamp=857206039059
]]></artwork>
</figure>
</section>
<section title="SPEECH-MARKER">
<t>This is an event generated by the synthesizer resource to the
client when the synthesizer encounters a marker tag in the speech
markup it is currently processing. The request-id field in the header
field matches the corresponding <spanx style="verb">SPEAK</spanx>
request. The request-state field indicates IN-PROGRESS, as the speech
is not yet complete. The value of the speech marker tag encountered,
describing where the synthesizer is in the speech markup, is returned
in the speech-marker header field, along with an NTP timestamp
indicating the instant in the output speech stream that the marker was
encountered. The SPEECH-MARKER event MUST also be generated with a
null marker value and output NTP timestamp when a SPEAK request in
Pending-State (i.e. in the queue) changes state to IN-PROGRESS and
starts speaking. The NTP timestamp MUST be synchronized with the RTP
timestamp used to generate the speech stream through standard RTCP
machinery.</t>
<figure title="SPEECH-MARKER Example">
<artwork><![CDATA[
C->S: MRCP/2.0 434 SPEAK 543261
Channel-Identifier:32AECB23433802@speechsynth
Voice-gender:neutral
Voice-age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<mark name="here"/>
<s>The subject is
<prosody rate="-20%">ski trip</prosody>
</s>
<mark name="ANSWER"/>
</p>
</speak>
S->C: MRCP/2.0 48 543261 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857205015059
S->C: MRCP/2.0 73 SPEECH-MARKER 543261 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206027059;here
S->C: MRCP/2.0 74 SPEECH-MARKER 543261 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechsynth
Speech-Marker:timestamp=857206039059;ANSWER
S->C: MRCP/2.0 73 SPEAK-COMPLETE 543261 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
Completion-Cause:000 normal
Speech-Marker:timestamp=857207689259;ANSWER
]]></artwork>
</figure>
</section>
<section anchor="sec.methodDefineLexicon" title="DEFINE-LEXICON">
<t>The DEFINE-LEXICON method, from the client to the server, provides
a lexicon and tells the server to load or unload the lexicon (see
<xref target="load-lexicon"></xref>). The media type of the lexicon is
provided in the Content-Type header (see <xref
target="sec.lexiconData"></xref>). One such media type is PLS <xref
target="W3C.REC-pronunciation-lexicon-20081014"></xref>.</t>
<t>If the server resource is in the speaking or paused state, the
server MUST respond with a 402 (Method not valid in this state)
failure status.</t>
<t>If the resource is in the idle state and is able to successfully
load or unload the lexicon, the server MUST return a success status
code, and the request-state MUST be COMPLETE.</t>
<t>If the synthesizer could not define the lexicon for some reason,
for example because the download failed or the lexicon was in an
unsupported form, the server MUST respond with a failure status code
of 407, and a Completion-Cause header field describing the failure
reason.</t>
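<t>A hypothetical DEFINE-LEXICON exchange is sketched below. The
request-id, message lengths, and lexicon content are illustrative
only.</t>
<figure title="DEFINE-LEXICON Example">
<artwork><![CDATA[
C->S: MRCP/2.0 ... DEFINE-LEXICON 543266
Channel-Identifier:32AECB23433802@speechsynth
Content-Type:application/pls+xml
Content-Length:...

<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
      xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
      alphabet="ipa" xml:lang="en-US">
<lexeme>
<grapheme>judgment</grapheme>
<grapheme>judgement</grapheme>
<phoneme>...</phoneme>
</lexeme>
</lexicon>

S->C: MRCP/2.0 ... 543266 200 COMPLETE
Channel-Identifier:32AECB23433802@speechsynth
]]></artwork>
</figure>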
</section>
</section>
<section anchor="sec.recognizerResource"
title="Speech Recognizer Resource">
<t>The speech recognizer resource receives an incoming voice stream and
provides the client with an interpretation of what was spoken in textual
form.</t>
<t>The recognizer resource is controlled by MRCPv2 requests from the
client. The recognizer resource can both respond to these requests and
generate asynchronous events to the client to indicate conditions of
interest during the processing of the method.</t>
<t>This section applies to the following resource types. <list
style="numbers">
<t>speechrecog</t>
<t>dtmfrecog</t>
</list></t>
<t>The difference between the above two resources is in their level of
support for recognition grammars. The "dtmfrecog" resource type is
capable of recognizing only DTMF digits and hence accepts only DTMF
grammars. It only generates barge-in for DTMF inputs and ignores speech.
The "speechrecog" resource type can recognize regular speech as well as
DTMF digits and hence MUST support grammars describing either speech or
DTMF. This resource generates barge-in events for speech and/or DTMF. By
analyzing the grammars that are activated by the RECOGNIZE method, it
determines whether a barge-in should occur for speech and/or DTMF. When the
recognizer decides it needs to generate barge-in it also generates a
START-OF-INPUT event to the client. The recognition resource may support
recognition in the normal or hotword modes or both (although note that a
single speechrecog resource does not perform normal and hotword mode
recognition simultaneously). For implementations where a single
recognition resource does not support both modes, or simultaneous normal
and hotword recognition is desired, the two modes can be invoked through
separate resources allocated to the same SIP dialog (with different MRCP
session identifiers) and share the RTP audio feed.</t>
<t>The capabilities of the recognition resource are enumerated
below:</t>
<t><list style="hanging">
<t hangText="Normal Mode Recognition">Normal mode recognition tries
to match all of the speech or DTMF against the grammar and returns a
no-match status if the input fails to match or the method times
out.</t>
<t hangText="Hotword Mode Recognition">In hotword mode, the
recognizer looks for a match against a specific speech grammar or
DTMF sequence and ignores speech or DTMF that does not match. The
recognition completes only on a successful match of the grammar, if
the client cancels the request, or if there is a no-input or
recognition timeout.</t>
<t hangText="Voice Enrolled Grammars">A recognition resource may
optionally support Voice Enrolled Grammars. With this functionality,
enrollment is performed using a person's voice. For example, a list
of contacts can be created and maintained by recording the names
using the caller's voice. This technique is sometimes also
called speaker-dependent recognition.</t>
<t hangText="Interpretation">A recognition resource may be employed
strictly for its natural language interpretation capabilities by
supplying it with a text string as input instead of speech. In this
mode the resource takes text as input and produces an
"interpretation" of the input according to the supplied grammar.</t>
</list></t>
<t>Voice Enrollment has the concept of an enrollment session. A
session to add a new phrase to a personal grammar involves an
initial enrollment utterance followed by enough repetitions of that
utterance before the new phrase is committed to the personal
grammar. Each time an utterance is recorded, it is compared for
similarity with the other samples, and a clash test is performed
against other entries in the personal grammar to ensure there are no
similar, confusable entries.</t>
<t>Enrollment is done using a recognizer resource. Controlling which
utterances are to be considered for enrollment of a new phrase is done
by setting a header field (see <xref target="sec.phraseID"></xref>) in
the Recognize request.</t>
<t>Interpretation is accomplished through the INTERPRET method (<xref
target="sec.interpret"></xref>) and the interpret-text header field
(<xref target="sec.interpretText"></xref>).</t>
<section title="Recognizer State Machine">
<t>The recognizer resource maintains a state machine to process MRCPv2
requests from the client.</t>
<figure title="Recognizer State Machine">
<artwork><![CDATA[
Idle Recognizing Recognized
State State State
| | |
|---------RECOGNIZE---->|---RECOGNITION-COMPLETE-->|
|<------STOP------------|<-----RECOGNIZE-----------|
| | |
| |--------| |-----------|
| START-OF-INPUT | GET-RESULT |
| |------->| |---------->|
|------------| | |
| DEFINE-GRAMMAR |----------| |
|<-----------| | START-INPUT-TIMERS |
| |<---------| |
|------| | |
| INTERPRET | |
|<-----| |------| |
| | RECOGNIZE |
|-------| |<-----| |
| STOP |
|<------| |
|<-------------------STOP--------------------------|
|<-------------------DEFINE-GRAMMAR----------------|
]]></artwork>
</figure>
<t>If a recognition resource supports voice enrolled grammars,
starting an enrollment session does not change the state of the
recognizer resource. Once an enrollment session is started, then
utterances are enrolled by calling the RECOGNIZE method repeatedly.
The state of the speech recognizer resource goes from the Idle
state to the Recognizing state each time RECOGNIZE is called.</t>
</section>
<section title="Recognizer Methods">
<t>The recognizer supports the following methods.</t>
<figure>
<artwork><![CDATA[
recognizer-method = recog-only-method
/ enrollment-method
recog-only-method = "DEFINE-GRAMMAR"
/ "RECOGNIZE"
/ "INTERPRET"
/ "GET-RESULT"
/ "START-INPUT-TIMERS"
/ "STOP"
]]></artwork>
</figure>
<t>It is OPTIONAL for a recognizer resource to support voice enrolled
grammars. If the recognizer resource does support voice enrolled
grammars it MUST support the following methods.</t>
<figure>
<artwork><![CDATA[
enrollment-method = "START-PHRASE-ENROLLMENT"
/ "ENROLLMENT-ROLLBACK"
/ "END-PHRASE-ENROLLMENT"
/ "MODIFY-PHRASE"
/ "DELETE-PHRASE"
]]></artwork>
</figure>
</section>
<section title="Recognizer Events">
<t>The recognizer may generate the following events.</t>
<figure>
<artwork><![CDATA[
recognizer-event = "START-OF-INPUT"
/ "RECOGNITION-COMPLETE"
/ "INTERPRETATION-COMPLETE"
]]></artwork>
</figure>
</section>
<section anchor="sec.recognizerHeaders" title="Recognizer Header Fields">
<t>A recognizer message may contain header fields containing request
options and information to augment the Method, Response or Event
message it is associated with.</t>
<figure>
<artwork><![CDATA[
recognizer-header = recog-only-header
/ enrollment-header
recog-only-header = confidence-threshold
/ sensitivity-level
/ speed-vs-accuracy
/ n-best-list-length
/ no-input-timeout
/ input-type
/ recognition-timeout
/ waveform-uri
/ input-waveform-uri
/ completion-cause
/ completion-reason
/ recognizer-context-block
/ start-input-timers
/ speech-complete-timeout
/ speech-incomplete-timeout
/ dtmf-interdigit-timeout
/ dtmf-term-timeout
/ dtmf-term-char
/ failed-uri
/ failed-uri-cause
/ save-waveform
/ media-type
/ new-audio-channel
/ speech-language
/ ver-buffer-utterance
/ recognition-mode
/ cancel-if-queue
/ hotword-max-duration
/ hotword-min-duration
/ interpret-text
/ dtmf-buffer-time
/ clear-dtmf-buffer
/ early-no-match
]]></artwork>
</figure>
<t>If a recognition resource supports voice enrolled grammars, the
following header fields are also used.</t>
<figure>
<artwork><![CDATA[
enrollment-header = num-min-consistent-pronunciations
/ consistency-threshold
/ clash-threshold
/ personal-grammar-uri
/ enroll-utterance
/ phrase-id
/ phrase-nl
/ weight
/ save-best-waveform
/ new-phrase-id
/ confusable-phrases-uri
/ abort-phrase-enrollment
]]></artwork>
</figure>
<t>For enrollment-specific header fields that can appear as part of
<spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx> methods, the following general rule
applies: the START-PHRASE-ENROLLMENT method must be invoked before
these header fields may be set through the <spanx
style="verb">SET-PARAMS</spanx> method or retrieved through the <spanx
style="verb">GET-PARAMS</spanx> method.</t>
<t>Note that the Waveform-URI header field of the Recognizer resource
can also appear in the response to the END-PHRASE-ENROLLMENT
method.</t>
<section anchor="sec.confidenceThreshold" title="Confidence Threshold">
<t>When a recognition resource recognizes or matches a spoken phrase
with some portion of the grammar, it associates a confidence level
with that match. The confidence-threshold header field tells the
recognizer resource what confidence level the client considers a
successful match. This is a float value between 0.0 and 1.0 indicating
the recognizer's confidence in the recognition. If the recognizer
determines that there is no candidate match with a confidence that
is greater than the confidence threshold, then it MUST return
no-match as the recognition result. This header field MAY occur in
RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The default value for this header
field is implementation specific, as is the interpretation of any
specific value for this header field. Although values for servers
from different vendors are not comparable, it is expected that
clients will tune this value over time for a given server.</t>
<figure>
<artwork><![CDATA[
confidence-threshold = "Confidence-Threshold" ":" FLOAT CRLF]]></artwork>
</figure>
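<t>For example, a client willing to accept only fairly confident
matches might set (the value shown is illustrative; its
interpretation is implementation specific):</t>
<figure>
<artwork><![CDATA[
Confidence-Threshold:0.75
]]></artwork>
</figure>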
</section>
<section title="Sensitivity Level">
<t>To filter out background noise and not mistake it for speech, the
recognizer may support a variable level of sound sensitivity. The
sensitivity-level header field is a float value between 0.0 and 1.0
and allows the client to set the sensitivity level for the
recognizer. This header field MAY occur in RECOGNIZE, <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. A higher value for this header
field means higher sensitivity. The default value for this header
field is implementation specific, as is the interpretation of any
specific value for this header field. Although values for servers
from different vendors are not comparable, it is expected that
clients will tune this value over time for a given server.</t>
<figure>
<artwork><![CDATA[
sensitivity-level = "Sensitivity-Level" ":" FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="Speed Vs Accuracy">
<t>Depending on the implementation and capability of the recognizer
resource, it may be tunable towards performance or accuracy. Higher
accuracy may mean more processing and higher CPU utilization,
meaning fewer active sessions per server, and vice versa. The value
is a float between 0.0 and 1.0. A value of 0.0 means fastest
recognition. A value of 1.0 means best accuracy. This header field
MAY occur in RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or
<spanx style="verb">GET-PARAMS</spanx>. The default value for this
header field is implementation specific. Although values for servers
from different vendors are not comparable, it is expected that
clients will tune this value over time for a given server.</t>
<figure>
<artwork><![CDATA[
speed-vs-accuracy = "Speed-Vs-Accuracy" ":" FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="N Best List Length">
<t>When the recognizer matches an incoming stream with the grammar,
it may come up with more than one alternative match because of
confidence levels in certain words or conversation paths. If this
header field is not specified, by default, the recognition resource
returns only the best match above the confidence threshold. The
client, by setting this header field, can ask the recognition
resource to send more than one alternative. All alternatives must
still be above the confidence-threshold. A value greater than one
does not guarantee that the recognizer will provide the requested
number of alternatives. This header field MAY occur in RECOGNIZE,
<spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The minimum value for this header
field is 1. The default value for this header field is 1.</t>
<figure>
<artwork><![CDATA[
n-best-list-length = "N-Best-List-Length" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Input Type">
<t>When the recognizer detects barge-in-able input and generates a
START-OF-INPUT event, that event MUST carry this header field to
specify whether the input that caused the barge-in was DTMF or
speech.</t>
<figure>
<artwork><![CDATA[
input-type = "Input-Type" ":" inputs CRLF
inputs = "speech" / "dtmf"
]]></artwork>
</figure>
</section>
<section title="No Input Timeout">
<t>When recognition is started and there is no speech detected for a
certain period of time, the recognizer can send a
RECOGNITION-COMPLETE event to the client with a Completion-Cause of
"no-input-timeout" and terminate the recognition operation. The
client can use the no-input-timeout header field to set this
timeout. The value is in milliseconds and may range from 0 to an
implementation specific maximum value. This header field MAY occur
in RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The default value is implementation
specific.</t>
<figure>
<artwork><![CDATA[
no-input-timeout = "No-Input-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Recognition Timeout">
<t>When recognition is started and there is no match for a certain
period of time, the recognizer can send a RECOGNITION-COMPLETE event
to the client and terminate the recognition operation. The
Recognition-Timeout header field allows the client to set this
timeout value. The value is in milliseconds. The value for this
header field ranges from 0 to an implementation specific maximum
value. The default value is 10 seconds. This header field MAY occur
in RECOGNIZE, SET-PARAMS or GET-PARAMS.</t>
<figure>
<artwork><![CDATA[
recognition-timeout = "Recognition-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Waveform URI">
<t>If the Save-Waveform header field is set to true, the recognizer
MUST record the incoming audio stream of the recognition into a
stored form and provide a URI for the client to access it. This
header field MUST be present in the RECOGNITION-COMPLETE event if
the Save-Waveform header field was set to true. The value of the
header field MUST be empty if there was some error condition
preventing the server from recording. Otherwise, the URI generated
by the server MUST be unambiguous across the server and all its
recognition sessions. The content associated with the URI MUST be
available to the client until the MRCPv2 session terminates.</t>
<t>Similarly, if the Save-Best-Waveform header field is set to true,
the recognizer MUST save the audio stream for the best repetition of
the phrase that was used during the enrollment session. The
recognizer MUST then record the recognized audio and make it
available to the client by returning a URI in the Waveform-URI
header field in the response to the END-PHRASE-ENROLLMENT method.
The value of the header field MUST be empty if there was some error
condition preventing the server from recording. Otherwise, the URI
generated by the server MUST be unambiguous across the server and
all its recognition sessions. The content associated with the URI
MUST be available to the client until the MRCPv2 session terminates.
See the discussion on the sensitivity of saved waveforms in <xref
target="sec.securityConsiderations"></xref>.</t>
<t>The server MUST also return the size in octets and the duration
in milliseconds of the recorded audio waveform as parameters
associated with the header field.</t>
<figure>
<artwork><![CDATA[
waveform-uri = "Waveform-URI" ":" ["<" Uri ">"
";" "size" "=" 1*19DIGIT
";" "duration" "=" 1*19DIGIT] CRLF
]]></artwork>
</figure>
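<t>For example (the URI and parameter values are illustrative), a
server might return:</t>
<figure>
<artwork><![CDATA[
Waveform-URI:<http://web.media.com/session123/audio.wav>;
size=342456;duration=25435
]]></artwork>
</figure>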
</section>
<section title="Media Type">
<t>This header field MAY be specified in the SET-PARAMS, GET-PARAMS
or RECOGNIZE methods and tells the server resource the media type
in which to store captured audio or video, such as the audio
captured and returned via the Waveform-URI header field.</t>
<figure>
<artwork><![CDATA[
Media-type = "Media-Type" ":" media-type-value
CRLF
]]></artwork>
</figure>
</section>
<section title="Input-Waveform-URI">
<t>This optional header field specifies a URI pointing to audio
content to be processed by the RECOGNIZE operation. This enables the
client to request recognition from a specified buffer or audio
file.</t>
<figure>
<artwork><![CDATA[
input-waveform-uri = "Input-Waveform-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Completion Cause">
<t>This header field MUST be part of a RECOGNITION-COMPLETE event
coming from the recognizer resource to the client. It indicates the
reason behind the RECOGNIZE method completion. This header field
MUST be sent in the DEFINE-GRAMMAR and RECOGNIZE responses, if they
return with a failure status and a COMPLETE state.</t>
<figure>
<artwork><![CDATA[
completion-cause = "Completion-Cause" ":" 3DIGIT SP
1*VCHAR CRLF
]]></artwork>
</figure>
<texttable>
<ttcol width="10%">Cause-Code</ttcol>
<ttcol width="35%">Cause-Name</ttcol>
<ttcol>Description</ttcol>
<c>000</c>
<c>success</c>
<c>RECOGNIZE completed with a match or DEFINE-GRAMMAR succeeded in
downloading and compiling the grammar</c>
<c>001</c>
<c>no-match</c>
<c>RECOGNIZE completed, but no match was found</c>
<c>002</c>
<c>no-input-timeout</c>
<c>RECOGNIZE completed without a match due to a
no-input-timeout</c>
<c>003</c>
<c>hotword-maxtime</c>
<c>RECOGNIZE in hotword mode completed without a match due to a
recognition-timeout</c>
<c>004</c>
<c>grammar-load-failure</c>
<c>RECOGNIZE failed due to a grammar load failure.</c>
<c>005</c>
<c>grammar-compilation-failure</c>
<c>RECOGNIZE failed due to grammar compilation failure.</c>
<c>006</c>
<c>recognizer-error</c>
<c>RECOGNIZE request terminated prematurely due to a recognizer
error.</c>
<c>007</c>
<c>speech-too-early</c>
<c>RECOGNIZE request terminated because speech was too early. This
happens when the audio stream was already "in-speech" when the
RECOGNIZE request was received.</c>
<c>008</c>
<c>success-maxtime</c>
<c>RECOGNIZE request terminated because speech was too long but
whatever was spoken till that point was a full match.</c>
<c>009</c>
<c>uri-failure</c>
<c>Failure accessing a URI.</c>
<c>010</c>
<c>language-unsupported</c>
<c>Language not supported.</c>
<c>011</c>
<c>cancelled</c>
<c>A new RECOGNIZE cancelled this one, or a prior RECOGNIZE failed
while this one was still in the queue.</c>
<c>012</c>
<c>semantics-failure</c>
<c>Recognition succeeded but semantic interpretation of the
recognized input failed. The RECOGNITION-COMPLETE event MUST
contain the Recognition result with only input text and no
interpretation.</c>
<c>013</c>
<c>partial-match</c>
<c>The speech incomplete timeout expired before there was a full
match, but whatever was spoken up to that point was a partial
match to one or more grammars.</c>
<c>014</c>
<c>partial-match-maxtime</c>
<c>The recognition timer expired before a full match was achieved,
but whatever was spoken up to that point was a partial match to one
or more grammars.</c>
<c>015</c>
<c>no-match-maxtime</c>
<c>The recognition timer expired, and whatever was spoken up to
that point did not match any of the grammars. This cause could
also be returned if the recognizer does not support detecting
partial grammar matches.</c>
<c>016</c>
<c>grammar-definition-failure</c>
<c>Any DEFINE-GRAMMAR error other than grammar-load-failure and
grammar-compilation-failure.</c>
</texttable>
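<t>For example, a RECOGNITION-COMPLETE event reporting that the
no-input timer expired before any input was detected would carry:</t>
<figure>
<artwork><![CDATA[
Completion-Cause:002 no-input-timeout
]]></artwork>
</figure>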
</section>
<section title="Completion Reason">
<t>This header field MAY be specified in a RECOGNITION-COMPLETE
event coming from the recognizer resource to the client. This
contains the reason text behind the RECOGNIZE request completion.
The server uses this header field to communicate text describing the
reason for the failure, such as the specific error encountered in
parsing a grammar markup.</t>
<t>The completion reason text is provided for client use in logs and
for debugging and instrumentation purposes. Clients MUST NOT
interpret the completion reason text.</t>
<figure>
<artwork><![CDATA[
completion-reason = "Completion-Reason" ":"
quoted-string CRLF
]]></artwork>
</figure>
</section>
<section title="Recognizer Context Block">
<t>This header field MAY be sent as part of the <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx> request. If the <spanx
style="verb">GET-PARAMS</spanx> method contains this header field
with no value, then it is a request to the recognizer to return the
recognizer context block. The response to such a message MAY contain
a recognizer context block as a typed media message body. If the
server returns a recognizer context block, the response MUST contain
this header field and its value MUST match the Content-ID of the
corresponding media block.</t>
<t>If the <spanx style="verb">SET-PARAMS</spanx> method contains
this header field, it MUST also contain a message body containing
the recognizer context data, and a Content-ID matching this header
field value. This Content-ID MUST match the Content-ID that came
with the context data during the <spanx
style="verb">GET-PARAMS</spanx> operation.</t>
<t>An implementation choosing to use this mechanism to hand off
recognizer context data between servers MUST distinguish its
implementation-specific block of data by using an IANA-registered
content type in the IANA Media Type vendor tree.</t>
<figure>
<artwork><![CDATA[
recognizer-context-block = "Recognizer-Context-Block" ":"
1*VCHAR CRLF
]]></artwork>
</figure>
</section>
<section anchor="sec.startInputTimers" title="Start Input Timers">
<t>This header field MAY be sent as part of the RECOGNIZE request. A
value of false tells the recognizer to start recognition, but not to
start the no-input timer yet. The recognizer MUST NOT start the
timers until the client sends a START-INPUT-TIMERS request to the
recognizer. This is useful in the scenario when the recognizer and
synthesizer engines are not part of the same session. In such
configurations, when a kill-on-barge-in prompt is being played (see
<xref target="sec.kill-on-barge-in"></xref>), the client wants the
RECOGNIZE request to be simultaneously active so that it can detect
and implement kill-on-barge-in. However, the recognizer ought not
start the no-input timers until the prompt is finished. The default
value is "true".</t>
<figure>
<artwork><![CDATA[
start-input-timers = "Start-Input-Timers" ":" BOOLEAN CRLF
]]></artwork>
</figure>
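<t>As an illustrative sketch (the channel identifier, request-ids,
and grammar content are hypothetical), the client issues RECOGNIZE
with Start-Input-Timers set to "false" while a kill-on-barge-in
prompt is playing, and then starts the input timers once the prompt
completes:</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... RECOGNIZE 543260
Channel-Identifier:32AECB23433802@speechrecog
Start-Input-Timers:false
Content-Type:application/srgs+xml
Content-Length:...

...grammar document...

S->C: MRCP/2.0 ... 543260 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@speechrecog

(prompt finishes playing)

C->S: MRCP/2.0 ... START-INPUT-TIMERS 543261
Channel-Identifier:32AECB23433802@speechrecog

S->C: MRCP/2.0 ... 543261 200 COMPLETE
Channel-Identifier:32AECB23433802@speechrecog
]]></artwork>
</figure>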
</section>
<section anchor="sec.speechCompleteTimeout"
title="Speech Complete Timeout">
<t>This header field specifies the length of silence required
following user speech before the speech recognizer finalizes a
result (either accepting it or generating a nomatch event). The
speech-complete-timeout value applies when the recognizer currently
has a complete match against an active grammar, and specifies how
long the recognizer MUST wait for more input before declaring a
match. By contrast, the incomplete timeout is used when the speech
is an incomplete match to an active grammar. The value is in
milliseconds.</t>
<figure>
<artwork><![CDATA[
speech-complete-timeout = "Speech-Complete-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
<t>A long speech-complete-timeout value delays the result to the
client and therefore makes the application's response to a user
slow. A short speech-complete-timeout may lead to an utterance being
broken up inappropriately. Reasonable speech complete timeout values
are typically in the range of 0.3 seconds to 1.0 seconds. The value
for this header field ranges from 0 to an implementation specific
maximum value. The default value for this header field is
implementation specific. This header field MAY occur in RECOGNIZE,
<spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
</section>
<section title="Speech Incomplete Timeout">
<t>This header field specifies the required length of silence
following user speech after which a recognizer finalizes a result.
The incomplete timeout applies when the speech prior to the silence
is an incomplete match of all active grammars. In this case, once
the timeout is triggered, the partial result is rejected (with a
Completion-Cause of "partial-match"). The value is in milliseconds.
The value for this header field ranges from 0 to an implementation
specific maximum value. The default value for this header field is
implementation specific.</t>
<figure>
<artwork><![CDATA[
speech-incomplete-timeout = "Speech-Incomplete-Timeout" ":" 1*19DIGIT
CRLF
]]></artwork>
</figure>
<t>The speech-incomplete-timeout also applies when the speech prior
to the silence is a complete match of an active grammar, but where
it is possible to speak further and still match the grammar. By
contrast, the complete timeout is used when the speech is a complete
match to an active grammar and no further spoken words can continue
to represent a match.</t>
<t>A long speech-incomplete-timeout value delays the result to the
client and therefore makes the application's response to a user
slow. A short speech-incomplete-timeout may lead to an utterance
being broken up inappropriately.</t>
<t>The speech-incomplete-timeout is usually longer than the
speech-complete-timeout to allow users to pause mid-utterance (for
example, to breathe). This header field MAY occur in RECOGNIZE,
<spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
</section>
<section title="DTMF Interdigit Timeout">
<t>This header field specifies the inter-digit timeout value to use
when recognizing DTMF input. The value is in milliseconds. The value
for this header field ranges from 0 to an implementation specific
maximum value. The default value is 5 seconds. This header field MAY
occur in RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
dtmf-interdigit-timeout = "DTMF-Interdigit-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="DTMF Term Timeout">
<t>This header field specifies the terminating timeout to use when
recognizing DTMF input. The DTMF-Term-Timeout applies only when no
additional input is allowed by the grammar; otherwise, the
DTMF-Interdigit-Timeout applies. The value is in milliseconds. The
value for this header field ranges from 0 to an implementation
specific maximum value. The default value is 10 seconds. This header
field MAY occur in RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx>
or <spanx style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
dtmf-term-timeout = "DTMF-Term-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="DTMF-Term-Char">
<t>This header field specifies the terminating DTMF character for
DTMF input recognition. The default value is NULL, which is indicated
by an empty header field value. This header field MAY occur in
RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
dtmf-term-char = "DTMF-Term-Char" ":" VCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI">
<t>When a recognizer needs to fetch or access a URI and the access
fails, the server SHOULD provide the failed URI in this header field
in the method response, unless there are multiple URI failures, in
which case one of the failed URIs MUST be provided in this header
field in the method response.</t>
<figure>
<artwork><![CDATA[
failed-uri = "Failed-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI Cause">
<t>When a recognizer method requires the server to fetch or access a
URI and the access fails, the server MUST provide, through this
header field in the method response, the URI-specific or
protocol-specific response code for the URI given in the Failed-URI
header field. The value encoding is UTF-8 to accommodate any access
protocol, some of which might have a response string instead of a
numeric response code.</t>
<figure>
<artwork><![CDATA[
failed-uri-cause = "Failed-URI-Cause" ":" 1*UTFCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Save Waveform">
<t>This header field allows the client to request the recognizer
resource to save the audio input to the recognizer. The recognizer
resource MUST then attempt to record the recognized audio, without
endpointing, and make it available to the client in the form of a
URI returned in the Waveform-URI header field in the
RECOGNITION-COMPLETE event. If there was an error in recording the
stream or the audio content is otherwise not available, the
recognizer MUST return an empty Waveform-URI header field. The
default value for this field is "false". This header field MAY occur
in RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. See the discussion on the
sensitivity of saved waveforms in <xref
target="sec.securityConsiderations"></xref>.</t>
<figure>
<artwork><![CDATA[
save-waveform = "Save-Waveform" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section anchor="sec.newAudioChannel" title="New Audio Channel">
<t>This header field MAY be specified in a RECOGNIZE request and
allows the client to tell the server that, from this point on,
further input audio comes from a different audio source, channel or
speaker. If the recognition resource had collected any input
statistics or adaptation state, the recognition resource MUST do
what is appropriate for the specific recognition technology, which
includes but is not limited to discarding any collected input
statistics or adaptation state before starting the RECOGNIZE
request. Note that if there are multiple resources that are sharing
a media stream and are collecting or using this data, and the client
issues this header field to one of the resources, the reset
operation applies to all resources that use the shared media stream.
This helps in a number of use cases, including where the client
wishes to reuse an open recognition session with an existing media
session for multiple telephone calls.</t>
<figure>
<artwork><![CDATA[
new-audio-channel = "New-Audio-Channel" ":" BOOLEAN
CRLF
]]></artwork>
</figure>
</section>
<section title="Speech-Language">
<t>This header field specifies the language of recognition grammar
data within a session or request, if it is not specified within the
data. The value of this header field MUST follow <xref
target="RFC4646">RFC4646</xref> for its values. This MAY occur in
DEFINE-GRAMMAR, RECOGNIZE, <spanx style="verb">SET-PARAMS</spanx> or
<spanx style="verb">GET-PARAMS</spanx> request.</t>
<figure>
<artwork><![CDATA[
speech-language = "Speech-Language" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Ver-Buffer-Utterance">
<t>This header field lets the client request the server to buffer
the utterance associated with this recognition request into a buffer
available to a co-resident verification resource. The buffer is
shared across resources within a session and is allocated when a
verification resource is added to this session. The client MUST NOT
send this header field unless a verification resource is
instantiated for the session. The buffer is released when the
verification resource is released from the session.</t>
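<figure>
<artwork><![CDATA[
ver-buffer-utterance = "Ver-Buffer-Utterance" ":" BOOLEAN CRLF
]]></artwork>
</figure>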
</section>
<section title="Recognition-Mode">
<t>This header field specifies what mode the RECOGNIZE method will
operate in. The value choices are "normal" or "hotword". If the
value is "normal", the RECOGNIZE starts matching speech and DTMF to
the grammars specified in the RECOGNIZE request. If any portion of
the speech does not match the grammar, the RECOGNIZE command
completes with a no-match status. Timers may be active to detect
speech in the audio (see <xref
target="sec.startInputTimers"></xref>), so the RECOGNIZE method may
complete because of a timeout waiting for speech. If the value of
this header field is "hotword", the RECOGNIZE method operates in
hotword mode, where it only looks for the particular keywords or
DTMF sequences specified in the grammar and ignores silence or other
speech in the audio stream. The default value for this header field
is "normal". This header field MAY occur on the RECOGNIZE
method.</t>
<figure>
<artwork><![CDATA[
recognition-mode = "Recognition-Mode" ":" 1*ALPHA CRLF
]]></artwork>
</figure>
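<t>For illustration, a client requesting hotword recognition might
issue a RECOGNIZE request such as the following (the request-id,
channel identifier, and grammar URI are illustrative):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... RECOGNIZE 543260
      Channel-Identifier:32AECB23433801@speechrecog
      Recognition-Mode:hotword
      Cancel-If-Queue:false
      Content-Type:text/uri-list
      Content-Length:...

      http://www.example.com/hotword-command-list.grxml
]]></artwork>
</figure>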
</section>
<section title="Cancel-If-Queue">
<t>This header field specifies what will happen if the client
attempts to invoke another RECOGNIZE method when this RECOGNIZE
request is already in progress for the resource. The value for this
header field is Boolean. A value of "true" means the server MUST
terminate this RECOGNIZE request, with a Completion-Cause of
"cancelled", if the client issues another RECOGNIZE request for the
same resource. A value of "false" for this header field indicates to
the server that this RECOGNIZE request will continue to completion
and if the client issues more RECOGNIZE requests to the same
resource, they are queued. When the currently active RECOGNIZE
request is stopped or completes with a successful match, the first
RECOGNIZE method in the queue becomes active. If the current
RECOGNIZE fails, all RECOGNIZE methods in the pending queue are
cancelled and each generates a RECOGNITION-COMPLETE event with a
Completion-Cause of "cancelled". This header field MUST be present
in every RECOGNIZE request. There is no default value.</t>
<figure>
<artwork><![CDATA[
cancel-if-queue = "Cancel-If-Queue" ":" BOOLEAN CRLF
]]></artwork>
</figure>
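<t>For example, with Cancel-If-Queue set to "false", a second
RECOGNIZE issued while the first is still active is queued and
acknowledged with a request state of PENDING (the request-ids,
channel identifier, and grammar URIs below are illustrative):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... RECOGNIZE 543261
      Channel-Identifier:32AECB23433801@speechrecog
      Cancel-If-Queue:false
      Content-Type:text/uri-list
      Content-Length:...

      http://www.example.com/first-menu.grxml

S->C: MRCP/2.0 ... 543261 200 IN-PROGRESS
      Channel-Identifier:32AECB23433801@speechrecog

C->S: MRCP/2.0 ... RECOGNIZE 543262
      Channel-Identifier:32AECB23433801@speechrecog
      Cancel-If-Queue:false
      Content-Type:text/uri-list
      Content-Length:...

      http://www.example.com/second-menu.grxml

S->C: MRCP/2.0 ... 543262 200 PENDING
      Channel-Identifier:32AECB23433801@speechrecog
]]></artwork>
</figure>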
</section>
<section title="Hotword-Max-Duration">
<t>This header field MAY be sent in a hotword mode RECOGNIZE
request. It specifies the maximum length of an utterance, in
milliseconds, that will be considered for hotword recognition. This
header field, along with Hotword-Min-Duration, can be used to tune
performance by preventing the recognizer from evaluating utterances
that are too short or too long to be one of the hotwords in the
grammar(s). The default value is implementation dependent. If
present in a RECOGNIZE request
specifying a mode other than "hotword", the header field is
ignored.</t>
<figure>
<artwork><![CDATA[
hotword-max-duration = "Hotword-Max-Duration" ":" 1*19DIGIT
CRLF
]]></artwork>
</figure>
</section>
<section title="Hotword-Min-Duration">
<t>This header field MAY be sent in a hotword mode RECOGNIZE
request. It specifies the minimum length of an utterance, in
milliseconds, that will be considered for hotword recognition. This
header field, along with Hotword-Max-Duration, can be used to tune
performance by preventing the recognizer from evaluating utterances
that are too short or too long to be one of the hotwords in the
grammar(s). The default value is implementation dependent. If
present in a RECOGNIZE request
specifying a mode other than "hotword", the header field is
ignored.</t>
<figure>
<artwork><![CDATA[
hotword-min-duration = "Hotword-Min-Duration" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section anchor="sec.interpretText" title="Interpret-Text">
<t>The value of this header field is used to provide a pointer to
the text for which a natural language interpretation is desired. The
value is either a URI or text. If the value is a URI, it MUST be a
Content-ID that refers to an entity of type text/plain in the body
of the message. Otherwise, the server MUST treat the value as the
text to be interpreted. This header field MUST be used when invoking
the INTERPRET method.</t>
<figure>
<artwork><![CDATA[
interpret-text = "Interpret-Text" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
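<t>For illustration, an INTERPRET request carrying the text directly
in the header field might look like the following (the request-id,
channel identifier, and grammar content are illustrative):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... INTERPRET 543266
      Channel-Identifier:32AECB23433801@speechrecog
      Interpret-Text:may I speak to Andre Roy
      Content-Type:application/srgs+xml
      Content-ID:<request1@form-level.store>
      Content-Length:...

      <?xml version="1.0"?>
      <grammar xmlns="http://www.w3.org/2001/06/grammar"
               xml:lang="en-US" version="1.0" root="request">
        <rule id="request">
          may I speak to
          <one-of>
            <item>Michel Tremblay</item>
            <item>Andre Roy</item>
          </one-of>
        </rule>
      </grammar>
]]></artwork>
</figure>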
</section>
<section title="DTMF-Buffer-Time">
<t>This header field MAY be specified in a GET-PARAMS or SET-PARAMS
method and is used to specify the size in time, in milliseconds, of
the typeahead buffer for the recognizer. This is the buffer that
collects DTMF digits as they are pressed even when there is no
RECOGNIZE command active. When a subsequent RECOGNIZE method is
received, the recognizer MAY match against the digits collected in
this buffer. If the buffered digits are not sufficient to match the
grammar, the recognizer can continue to listen for more digits. The
default size of this DTMF buffer is platform specific.</t>
<figure>
<artwork><![CDATA[
dtmf-buffer-time = "DTMF-Buffer-Time" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Clear-DTMF-Buffer">
<t>This header field MAY be specified in a RECOGNIZE method and is
used to tell the recognizer to clear the DTMF type-ahead buffer
before starting recognition. The default value of this header field
is FALSE, which means the typeahead buffer is not cleared before the
RECOGNIZE method starts. If this header field is TRUE, the
recognizer clears the DTMF buffer before starting recognition. This
means digits pressed by the caller before the RECOGNIZE command was
issued are discarded.</t>
<figure>
<artwork><![CDATA[
clear-dtmf-buffer = "Clear-DTMF-Buffer" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Early-No-Match">
<t>This header field MAY be specified in a RECOGNIZE method and is
used to tell the recognizer that it MUST NOT wait for the end of
speech before processing the collected speech to match active
grammars. A value of TRUE indicates the recognizer MUST do early
matching. The default value for this header field if not specified
is FALSE. If the recognizer does not support the processing of the
collected audio before the end of speech this header field can be
safely ignored.</t>
<figure>
<artwork><![CDATA[
early-no-match = "Early-No-Match" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Num-Min-Consistent-Pronunciations">
<t>This header field MAY be specified in a START-PHRASE-ENROLLMENT,
<spanx style="verb">SET-PARAMS</spanx>, or <spanx
style="verb">GET-PARAMS</spanx> method and is used to specify the
minimum number of consistent pronunciations that must be obtained to
voice enroll a new phrase. The minimum value is 1. The default value
is implementation specific and MAY be greater than 1.</t>
<figure>
<artwork><![CDATA[
num-min-consistent-pronunciations =
"Num-Min-Consistent-Pronunciations" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Consistency-Threshold">
<t>This header field MAY be sent as part of the
START-PHRASE-ENROLLMENT, <spanx style="verb">SET-PARAMS</spanx>, or
<spanx style="verb">GET-PARAMS</spanx> method. Used during
voice-enrollment, this header field specifies how similar to a
previously enrolled pronunciation of the same phrase an utterance
needs to be in order to be considered "consistent." The higher the
threshold, the closer the match between an utterance and previous
pronunciations must be for the pronunciation to be considered
consistent. The value of this threshold is a float in the range 0.0
to 1.0. The default value for this header field is
implementation specific.</t>
<figure>
<artwork><![CDATA[
consistency-threshold = "Consistency-Threshold" ":" FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="Clash-Threshold">
<t>This header field MAY be sent as part of the
START-PHRASE-ENROLLMENT, SET-PARAMS, or <spanx
style="verb">GET-PARAMS</spanx> method. Used during
voice-enrollment, this header field specifies how similar the
pronunciations of two different phrases can be before they are
considered to be clashing. For example, pronunciations of phrases
such as "John Smith" and "Jon Smits" may be so similar that they are
difficult to distinguish correctly. A smaller threshold reduces the
number of clashes detected. The value of this threshold is a float
in the range 0.0 to 1.0. The default value for this header field
is implementation specific. Clash testing can be turned off
completely by setting the Clash-Threshold header field value to
0.</t>
<figure>
<artwork><![CDATA[
clash-threshold = "Clash-Threshold" ":" FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="Personal-Grammar-URI">
<t>This header field specifies the speaker-trained grammar to be
used or referenced during enrollment operations. Phrases are added
to this grammar during enrollment. For example, a contact list for
user "Jeff" could be stored at the Personal-Grammar-URI
"http://myserver.example.com/myenrollmentdb/jeff-list". The
generated grammar syntax MAY be implementation specific. There is no
default value for this header field. This header field MAY be sent
as part of the START-PHRASE-ENROLLMENT, SET-PARAMS, or <spanx
style="verb">GET-PARAMS</spanx> method.</t>
<figure>
<artwork><![CDATA[
personal-grammar-uri = "Personal-Grammar-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Enroll-Utterance">
<t>This header field MAY be specified in the RECOGNIZE method. If
this header field is set to "true" and an Enrollment is active, the
RECOGNIZE command MUST add the collected utterance to the personal
grammar that is being enrolled. The way in which this occurs is
engine-specific and may be an area of future standardization. The
default value for this header field is "false".</t>
<figure>
<artwork><![CDATA[
enroll-utterance = "Enroll-Utterance" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section anchor="sec.phraseID" title="Phrase-Id">
<t>This header field in a request identifies a phrase in an existing
personal grammar for which enrollment is desired. It is also
returned to the client in the RECOGNITION-COMPLETE event. This header
field MAY occur in START-PHRASE-ENROLLMENT, MODIFY-PHRASE or
DELETE-PHRASE requests. There is no default value for this header
field.</t>
<figure>
<artwork><![CDATA[
phrase-id = "Phrase-ID" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
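<t>As an illustration, a client beginning enrollment of a new phrase
might issue a START-PHRASE-ENROLLMENT request such as the following
(all values are illustrative):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... START-PHRASE-ENROLLMENT 543267
      Channel-Identifier:32AECB23433801@speechrecog
      Num-Min-Consistent-Pronunciations:2
      Consistency-Threshold:0.5
      Clash-Threshold:0.3
      Personal-Grammar-URI:http://myserver.example.com/jeff-list
      Phrase-Id:phrase-0001
      Phrase-NL:Jeff Smith
]]></artwork>
</figure>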
</section>
<section title="Phrase-NL">
<t>This string specifies the interpreted text to be returned when
the phrase is recognized. This header field MAY occur in
START-PHRASE-ENROLLMENT and MODIFY-PHRASE requests. There is no
default value for this header field.</t>
<figure>
<artwork><![CDATA[
phrase-nl = "Phrase-NL" ":" 1*UTFCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Weight">
<t>The value of this header field represents the occurrence
likelihood of a phrase in an enrolled grammar. When using grammar
enrollment, the system is essentially constructing a grammar segment
consisting of a list of possible match phrases. This can be thought
of as similar to the dynamic construction of a <one-of> tag
in the W3C grammar specification. Each enrolled phrase becomes an
item in the list that can be matched against spoken input, similar
to the <item> within a <one-of> list. This header
field allows the client to assign a weight to the phrase (i.e.,
<item> entry) in the <one-of> list that is enrolled.
Grammar weights are normalized to a sum of one at grammar
compilation time, so a weight value of 1 for each phrase in an
enrolled grammar list indicates all items in that list have the same
weight. For example, three phrases enrolled with weights of 2, 1,
and 1 are normalized to effective weights of 0.5, 0.25, and 0.25.
This header field MAY occur in START-PHRASE-ENROLLMENT and
MODIFY-PHRASE requests. The default value for this header field is
implementation
specific.</t>
<figure>
<artwork><![CDATA[
weight = "Weight" ":" weight-value CRLF
weight-value = FLOAT
]]></artwork>
</figure>
</section>
<section title="Save-Best-Waveform">
<t>This header field allows the client to request the recognizer
resource to save the audio stream for the best repetition of the
phrase that was used during the enrollment session. The recognizer
MUST attempt to record the recognized audio and make it available to
the client in the form of a URI returned in the Waveform-URI header
field in the response to the END-PHRASE-ENROLLMENT method. If there
was an error in recording the stream or the audio data is otherwise
not available, the recognizer MUST return an empty Waveform-URI
header field. This header field MAY occur in the
START-PHRASE-ENROLLMENT, SET-PARAMS, and GET-PARAMS methods.</t>
<figure>
<artwork><![CDATA[
save-best-waveform = "Save-Best-Waveform" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="New-Phrase-Id">
<t>This header field replaces the id used to identify the phrase in
a personal grammar. The recognizer returns the new id when using an
enrollment grammar. This header field MAY occur in MODIFY-PHRASE
requests.</t>
<figure>
<artwork><![CDATA[
new-phrase-id = "New-Phrase-ID" ":" 1*VCHAR CRLF
]]></artwork>
</figure>
</section>
<section title="Confusable-Phrases-URI">
<t>This header field specifies a grammar that defines invalid
phrases for enrollment. For example, typical applications do not
allow an enrolled phrase that is also a command word. This header
field MAY occur in RECOGNIZE requests that are part of an enrollment
session.</t>
<figure>
<artwork><![CDATA[
confusable-phrases-uri = "Confusable-Phrases-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Abort-Phrase-Enrollment">
<t>This header field can optionally be specified in the
END-PHRASE-ENROLLMENT method to abort the phrase enrollment, rather
than committing the phrase to the personal grammar.</t>
<figure>
<artwork><![CDATA[
abort-phrase-enrollment = "Abort-Phrase-Enrollment" ":"
BOOLEAN CRLF
]]></artwork>
</figure>
</section>
</section>
<section anchor="sec.recMessageBody" title="Recognizer Message Body">
<t>A recognizer message may carry additional data associated with the
request, response or event. The client may provide the grammar to be
recognized in DEFINE-GRAMMAR or RECOGNIZE requests. When one or more
grammars are specified using the DEFINE-GRAMMAR method, the server
MUST attempt to fetch, compile and optimize the grammar before
returning a response to the DEFINE-GRAMMAR method. A RECOGNIZE request
MUST completely specify the grammars to be active during the
recognition operation, except when the RECOGNIZE method is being used
to enroll a grammar. During grammar enrollment, such grammars are
optional. The server resource may send the recognition results in the
RECOGNITION-COMPLETE event or the GET-RESULT response. Grammars and
recognition results are carried in the message body of the
corresponding MRCPv2 messages.</t>
<section anchor="sec.grammarData" title="Recognizer Grammar Data">
<t>Recognizer grammar data from the client to the server can be
provided inline or by reference. Either way, grammar data is carried
as typed media entities in the message body of the RECOGNIZE or
DEFINE-GRAMMAR request. All MRCPv2 servers MUST accept grammars in
the XML form (Media Type application/srgs+xml) of the W3C's
XML-based <xref target="W3C.REC-speech-grammar-20040316">Speech
Grammar Markup Format (SRGS)</xref> and MAY accept grammars in other
formats. Examples include but are not limited to:<list
style="symbols">
<t>the ABNF form (Media Type application/srgs) of SRGS</t>
<t>Sun's <xref target="refs.javaSpeechGrammarFormat">Java Speech
Grammar Format</xref></t>
</list>Additionally, MRCPv2 servers MAY support the <xref
target="W3C.REC-semantic-interpretation-20070405">Semantic
Interpretation for Speech Recognition (SISR)</xref>
specification.</t>
<t>When a grammar is specified inline in the request, the client
MUST provide a Content-ID for that grammar as part of the content
header fields. If there is no space on the server to store the
inline grammar, the server MUST respond with a Completion-Cause code
of 016 "grammar-definition-failure". Otherwise, the server MUST
associate the inline grammar block with that Content-ID and MUST
store it on the server for the duration of the session. However, if
the Content-ID is redefined later in the session through a
subsequent DEFINE-GRAMMAR, the inline grammar previously associated
with the Content-ID MUST be freed. If the Content-ID is redefined
through a subsequent DEFINE-GRAMMAR with an empty message body (i.e.
no grammar definition), then in addition to freeing any grammar
previously associated with the Content-ID the server MUST clear all
bindings and associations to the Content-ID. Unless and until
subsequently redefined, this URI MUST be interpreted by the server
as one that has never been set.</t>
<t>Grammars that have been associated with a Content-ID can be
referenced through the <spanx style="verb">session:</spanx> URI
scheme (see <xref target="sec.sessionURIScheme"></xref>). For
example:</t>
<figure>
<artwork><![CDATA[session:help@root-level.store
]]></artwork>
</figure>
<t>Grammar data MAY be specified using external URI references. To
do so, the client uses a body of Media Type text/uri-list <xref
target="RFC2483">RFC2483</xref> to list the one or more URIs that
point to the grammar data. The client can use a body of Media Type
text/grammar-ref-list if it wants to assign weights to the list of
grammar URIs. All MRCPv2 servers MUST support grammar access using
the HTTP and HTTPS URI schemes.</t>
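<t>For example, a client assigning weights to a list of grammar URIs
might use a body such as the following (the URIs and weights are
illustrative):</t>
<figure>
<artwork><![CDATA[
Content-Type:text/grammar-ref-list
Content-Length:...

<http://www.example.com/Directory-Name-List.grxml>;weight="2.0"
<http://www.example.com/Department-List.grxml>;weight="1.0"
]]></artwork>
</figure>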
<t>If the grammar data the client wishes to use in a request
consists of a mix of URI references and inline grammar data, the
client uses the multipart/mixed Media Type to enclose the
text/uri-list,
application/srgs or application/srgs+xml content entities. The
character set and encoding used in the grammar data are specified
according to the standard Media Type definitions.</t>
<t>When more than one grammar URI or inline grammar block is
specified in a message body of the RECOGNIZE request, the server
interprets this as a list of grammar alternatives to match
against.</t>
<figure title="SRGS Grammar Example">
<artwork><![CDATA[
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
<!-- multiple language attachment to a token -->
<rule id="people1">
<token lexicon="en-US,fr-CA"> Robert </token>
</rule>
<!-- the equivalent single-language attachment expansion -->
<rule id="people2">
<one-of>
<item xml:lang="en-US">Robert</item>
<item xml:lang="fr-CA">Robert</item>
</one-of>
</rule>
</grammar>
]]></artwork>
</figure>
<figure title="Grammar Reference Example">
<artwork><![CDATA[
Content-Type:text/uri-list
Content-Length:...
session:help@root-level.store
http://www.example.com/Directory-Name-List.grxml
http://www.example.com/Department-List.grxml
http://www.example.com/TAC-Contact-List.grxml
session:menu1@menu-level.store
]]></artwork>
</figure>
<figure title="Mixed Grammar Reference Example">
<artwork><![CDATA[
Content-Type:multipart/mixed; boundary="break"
--break
Content-Type:text/uri-list
Content-Length:...
http://www.example.com/Directory-Name-List.grxml
http://www.example.com/Department-List.grxml
http://www.example.com/TAC-Contact-List.grxml
--break
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
<!-- multiple language attachment to a token -->
<rule id="people1">
<token lexicon="en-US,fr-CA"> Robert </token>
</rule>
<!-- the equivalent single-language attachment expansion -->
<rule id="people2">
<one-of>
<item xml:lang="en-US">Robert</item>
<item xml:lang="fr-CA">Robert</item>
</one-of>
</rule>
</grammar>
--break--
]]></artwork>
</figure>
</section>
<section title="Recognizer Result Data">
<t>Recognition results are returned to the client in the message
body of the RECOGNITION-COMPLETE event or the GET-RESULT response
message as described in <xref target="sec.result"></xref>. Element
and attribute descriptions for the recognition portion of the NLSML
format are provided in <xref target="sec.recognizerResults"></xref>
with a normative definition of the schema in <xref
target="sec.schema.NLSML"></xref>.</t>
<figure title="Result Example">
<artwork><![CDATA[
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="http://www.example.com/theYesNoGrammar">
<interpretation>
<instance>
<ex:response>yes</ex:response>
</instance>
<input>ok</input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="Enrollment Result Data">
<t>Enrollment results are returned to the client in the message body
of the RECOGNITION-COMPLETE event as described in <xref
target="sec.result"></xref>. Element and attribute descriptions for
the enrollment portion of the NLSML format are provided in <xref
target="sec.enrollmentResults"></xref> with a normative definition
of the schema in <xref
target="sec.enrollmentResultsSchema"></xref>.</t>
</section>
<section title="Recognizer Context Block">
<t>When a client changes servers while operating on behalf of the
same incoming communication session, the Recognizer-Context-Block
header field allows the client to collect a block of opaque data
from one server and provide it to another server. This capability is
desirable if, for example, the client needs different language
support or the server issued a redirect. Here the first recognizer
resource may have collected
acoustic and other data during its execution of recognition methods.
After a server switch, communicating this data may allow the
recognition resource on the new server to provide better
recognition. This block of data is implementation-specific and MUST
be carried as Media Type application/octets in the body of the
message.</t>
<t>This block of data is communicated in the <spanx
style="verb">SET-PARAMS</spanx> and <spanx
style="verb">GET-PARAMS</spanx> method/response messages. In the
<spanx style="verb">GET-PARAMS</spanx> method, if an empty
recognizer-context-block header field is present, then the
recognizer SHOULD return its vendor-specific context block, if any,
in the message body as an entity of Media Type application/octets
with a specific Content-ID. The Content-ID value MUST also be
specified in the recognizer-context-block header field in the <spanx
style="verb">GET-PARAMS</spanx> response. The <spanx
style="verb">SET-PARAMS</spanx> request wishing to provide this
vendor-specific data MUST send it in the message body as a typed
entity with the same Content-ID that it received from the <spanx
style="verb">GET-PARAMS</spanx>. The Content-ID MUST also be sent in
the recognizer-context-block header field of the <spanx
style="verb">SET-PARAMS</spanx> message.</t>
<t>Each speech recognition implementation choosing to use this
mechanism to hand off recognizer context data among servers MUST
distinguish its implementation-specific block of data from other
implementations by choosing a Content-ID that is recognizable among
the participating servers and unlikely to collide with values chosen
by another implementation.</t>
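<t>For illustration, such an exchange might look like the following
(the request-ids, channel identifier, and Content-ID are
illustrative, and the context data itself is opaque and
vendor-specific):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 ... GET-PARAMS 543270
      Channel-Identifier:32AECB23433801@speechrecog
      Recognizer-Context-Block:

S->C: MRCP/2.0 ... 543270 200 COMPLETE
      Channel-Identifier:32AECB23433801@speechrecog
      Recognizer-Context-Block:<vendor-ctx@server.example.com>
      Content-Type:application/octets
      Content-ID:<vendor-ctx@server.example.com>
      Content-Length:...

      (opaque vendor-specific data)
]]></artwork>
</figure>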
</section>
</section>
<section anchor="sec.recognizerResults" title="Recognizer Results">
<t>The recognizer portion of NLSML (see <xref
target="sec.NLSML"></xref>) represents information automatically
extracted from a user's utterances by a semantic interpretation
component, where "utterance" is to be taken in the general sense of a
meaningful user input in any modality supported by the MRCPv2
implementation.</t>
<section title="Markup Functions">
<t>MRCPv2 recognition resources employ the Natural Language
Semantics Markup Language to interpret natural language speech input
and to format the interpretation for consumption by an MRCPv2
client.</t>
<t>The elements of the markup fall into the following general
functional categories: Interpretation, Side Information, and
Multi-Modal Integration.</t>
<section title="Interpretation">
<t>Elements and attributes represent the semantics of a user's
utterance, including the <result>, <interpretation>,
and <instance> elements. The <result> element contains
the full result of processing one utterance. It may contain
multiple <interpretation> elements if the interpretation of
the utterance results in multiple alternative meanings due to
uncertainty in speech recognition or natural language
understanding. There are at least two reasons for providing
multiple interpretations: <list style="numbers">
<t>the client application might have additional information,
for example, information from a database, that would allow it
to select a preferred interpretation from among the possible
interpretations returned from the semantic interpreter.</t>
<t>a client-based dialog manager (e.g. VXML) that was unable
to select between several competing interpretations could use
this information to go back to the user and find out what was
intended. For example, it could issue a <spanx
style="verb">SPEAK</spanx> request to a synthesizer resource
to emit "Did you say 'Boston' or 'Austin'?"</t>
</list></t>
</section>
<section title="Side Information">
<t>These are elements and attributes representing additional
information about the interpretation, over and above the
interpretation itself. Side information includes: <list
style="numbers">
<t>Whether an interpretation was achieved (the <nomatch>
element) and the system's confidence in an interpretation (the
"confidence" attribute of <interpretation>).</t>
<t>Alternative interpretations (<interpretation>)</t>
<t>Input formats and ASR information: the <input>
element, representing the input to the semantic
interpreter.</t>
</list></t>
</section>
<section title="Multi-Modal Integration">
<t>When more than one modality is available for input, the
interpretation of the inputs needs to be coordinated. The "mode"
attribute of <input> supports this by indicating whether the
utterance was input by speech, DTMF, pointing, etc. The
"timestamp-start" and "timestamp-end" attributes of
<input> also provide for temporal coordination by
indicating when inputs occurred.</t>
</section>
</section>
<section title="Overview of Recognizer Result Elements and their Relationships">
<t>The recognizer elements in NLSML fall into two categories: <list
style="numbers">
<t>description of the input that was processed.</t>
<t>description of the meaning which was extracted from the
input.</t>
</list> Each element has its own set of attributes. In addition,
some elements can contain multiple instances of other elements. For
example, a <result> can contain multiple
<interpretation> elements, each of which is taken to be an
alternative. Similarly, <input> can contain multiple child
<input> elements, which are taken to be cumulative. To
illustrate the basic usage of these elements, as a simple example,
consider the utterance "ok" (interpreted as "yes"). The example
illustrates how that utterance and its interpretation would be
represented in the NL Semantics markup.</t>
<figure>
<artwork><![CDATA[
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="http://www.example.com/theYesNoGrammar">
<interpretation>
<instance>
<ex:response>yes</ex:response>
</instance>
<input>ok</input>
</interpretation>
</result>
]]></artwork>
</figure>
<t>This example includes only the minimum required information.
There is an overall <result> element which includes one
interpretation and an input element. The interpretation contains the
application-specific element "<response>" which is the
semantically interpreted result.</t>
</section>
<section title="Elements and Attributes">
<section title="RESULT Root Element">
<t>The root element of the markup is <result>. The
<result> element includes one or more <interpretation>
elements. Multiple interpretations can result from ambiguities in
the input or in the semantic interpretation. If the "grammar"
attribute does not apply to all of the interpretations in the
result it can be overridden for individual interpretations at the
<interpretation> level.</t>
<t>Attributes: <list style="numbers">
<t>grammar: The grammar or recognition rule matched by this
result. The format of the grammar attribute will match the
rule reference semantics defined in the grammar specification.
Specifically, the rule reference is in the external XML form
for grammar rule references. The markup interpreter needs to
know the grammar rule that is matched by the utterance because
multiple rules may be simultaneously active. The value is the
grammar URI used by the markup interpreter to specify the
grammar. The grammar can be overridden by a grammar attribute
in the <interpretation> element if the input was
ambiguous as to which grammar it matched. If all
interpretation elements within the result element carry their
own grammar attributes, the attribute can be
dropped from the result element.</t>
</list></t>
<figure>
<artwork><![CDATA[
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="http://www.example.com/grammar">
<interpretation>
....
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="INTERPRETATION Element">
<t>An <interpretation> element contains a single semantic
interpretation.</t>
<t>Attributes: <list style="numbers">
<t>confidence: A float value in the range 0.0 to 1.0 indicating the
semantic analyzer's confidence in this interpretation. A value
of 1.0 indicates maximum confidence. The values are
implementation-dependent, but are intended to align with the
value interpretation for the confidence MRCPv2 header field
defined in <xref target="sec.confidenceThreshold"></xref>.
This attribute is optional.</t>
<t>grammar: The grammar or recognition rule matched by this
interpretation. This attribute is only needed under
<interpretation> if it is necessary to override a grammar
that was defined at the <result> level. Note that the
grammar attribute for the interpretation element is OPTIONAL if
and only if the grammar attribute is specified in the result
element.</t>
</list></t>
<t>Interpretations MUST be sorted best-first by some measure of
"goodness". The goodness measure is "confidence" if present,
otherwise, it is some implementation-specific indication of
quality.</t>
<t>The grammar is expected to be specified most frequently at the
<result> level. However, it can be overridden at the
<interpretation> level because it is possible that different
interpretations may match different grammar rules.</t>
<t>The <interpretation> element includes an optional
<input> element which contains the input being analyzed, and
an <instance> element containing the interpretation of the
utterance.</t>
<figure>
<artwork><![CDATA[
<interpretation confidence="0.75"
grammar="http://www.example.com/grammar">
...
</interpretation>
]]></artwork>
</figure>
</section>
<section title="INSTANCE Element">
<t>The <instance> element contains the interpretation of the
utterance. When the Semantic Interpretation for Speech Recognition
format is used, the <instance> element contains the XML
serialization of the result using the approach defined in that
specification. When there is semantic markup in the grammar that
does not create semantic objects, but instead only does a semantic
translation of a portion of the input, such as translating "coke"
to "coca-cola", the instance contains the whole input but with the
translation applied. The NLSML looks like the markup in <xref
target="fig.nslmlExample2"></xref> below. If there are no
semantic objects created, nor any semantic translation, the
instance value is the same as the input value.</t>
<t>Attributes: <list style="numbers">
<t>confidence: Each element of the instance may have a
confidence attribute, defined in the NL semantics namespace.
The confidence attribute contains a float value in the range
0.0 to 1.0 reflecting the system's confidence in the
analysis of that slot. A value of 1.0 indicates maximum
confidence. The values are implementation-dependent, but are
intended to align with the value interpretation for the
confidence MRCPv2 header field defined in <xref
target="sec.confidenceThreshold"></xref>. This attribute is
optional.</t>
</list></t>
<figure>
<artwork><![CDATA[
<instance>
<nameAddress>
<street confidence="0.75">123 Maple Street</street>
<city>Mill Valley</city>
<state>CA</state>
<zip>90952</zip>
</nameAddress>
</instance>
<input>
My address is 123 Maple Street,
Mill Valley, California, 90952
</input>
]]></artwork>
</figure>
<figure anchor="fig.nslmlExample2" title="NLSML Example">
<artwork><![CDATA[
<instance>
I would like to buy a coca-cola
</instance>
<input>
I would like to buy a coke
</input>
]]></artwork>
</figure>
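<t>A consumer of these results extracts slot values and per-slot
confidence from the <instance> markup. The following is a minimal
sketch using the Python standard library parser; the namespace is
omitted for brevity, whereas a real NLSML document qualifies these
elements.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: reading a slot and its confidence from an <instance>.
import xml.etree.ElementTree as ET

NLSML = """
<instance>
  <nameAddress>
    <street confidence="0.75">123 Maple Street</street>
    <city>Mill Valley</city>
  </nameAddress>
</instance>
"""

root = ET.fromstring(NLSML)
street = root.find("./nameAddress/street")
slot_text = street.text            # the recognized slot value
slot_confidence = float(street.get("confidence"))
```
]]></artwork>
</figure>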
</section>
<section title="INPUT Element">
<t>The <input> element is the text representation of a
user's input. It includes an optional "confidence" attribute which
indicates the recognizer's confidence in the recognition result
(as opposed to the confidence in the interpretation, which is
indicated by the "confidence" attribute of
<interpretation>). Optional "timestamp-start" and
"timestamp-end" attributes indicate the start and end times of a
spoken utterance, in ISO 8601 format.</t>
<t>Attributes: <list style="numbers">
<t>timestamp-start: The time at which the input began.
(optional)</t>
<t>timestamp-end: The time at which the input ended.
(optional)</t>
<t>mode: The modality of the input, for example, speech, dtmf,
etc. (optional)</t>
<t>confidence: the confidence of the recognizer in the
correctness of the input in the range 0.0 to 1.0
(optional)</t>
</list>Note that it may not make sense for temporally
overlapping inputs to have the same mode; however, this constraint
is not expected to be enforced by implementations.</t>
<t>When there is no time zone designator, ISO 8601 time
representations default to local time.</t>
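<t>The timestamp attributes allow a client to derive per-utterance
timing. The following is a small sketch, assuming zone-less ISO 8601
values interpreted as local time as described above.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: utterance duration from timestamp-start/timestamp-end.
from datetime import datetime

def utterance_duration_seconds(start, end):
    t0 = datetime.fromisoformat(start)   # local time if no zone
    t1 = datetime.fromisoformat(end)
    return (t1 - t0).total_seconds()

dur = utterance_duration_seconds(
    "2000-04-03T00:00:00.250", "2000-04-03T00:00:00.600")
```
]]></artwork>
</figure>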
<t>There are three possible formats for the <input> element.
<list style="numbers">
<t>The <input> element can contain simple text: <figure>
<artwork><![CDATA[<input>onions</input>]]></artwork>
</figure>A future possibility is for <input> to
contain not only text but additional markup that represents
prosodic information that was contained in the original
utterance and extracted by the speech recognizer. This depends
on the availability of ASRs that are capable of producing
prosodic information. MRCPv2 clients MUST be prepared to
receive such markup and MAY make use of it.</t>
<t>An <input> tag can also contain additional
<input> tags. Having additional input elements allows
the representation to support future multi-modal inputs as
well as finer-grained speech information, such as timestamps
for individual words and word-level confidences. <figure>
<artwork><![CDATA[<input>
<input mode="speech" confidence="0.5"
timestamp-start="2000-04-03T00:00:00"
timestamp-end="2000-04-03T00:00:00.2">fried</input>
<input mode="speech" confidence="1.0"
timestamp-start="2000-04-03T00:00:00.25"
timestamp-end="2000-04-03T00:00:00.6">onions</input>
</input>]]></artwork>
</figure></t>
<t>Finally, the <input> element can contain
<nomatch> and <noinput> elements, which describe
situations in which the speech recognizer received input that
it was unable to process, or did not receive any input at all,
respectively.</t>
</list></t>
</section>
<section title="NOMATCH Element">
<t>The <nomatch> element under <input> is used to
indicate that the semantic interpreter was unable to successfully
match any input with confidence above the threshold. It can
optionally contain the text of the best of the (rejected)
matches.</t>
<figure>
<artwork><![CDATA[
<interpretation>
<instance/>
<input confidence="0.1">
<nomatch/>
</input>
</interpretation>
<interpretation>
<instance/>
<input mode="speech" confidence="0.1">
<nomatch>I want to go to New York</nomatch>
</input>
</interpretation>
]]></artwork>
</figure>
</section>
<section title="NOINPUT Element">
<t><noinput> indicates that there was no input: a timeout
occurred in the speech recognizer due to silence.</t>
<figure>
<artwork><![CDATA[<interpretation>
<instance/>
<input>
<noinput/>
</input>
</interpretation>]]></artwork>
</figure>
<t>If there are multiple levels of inputs, the most natural place
for <nomatch> and <noinput> elements to appear is
under the highest level of <input> for <noinput>, and
under the appropriate level of <interpretation> for
<nomatch>. So <noinput> means "no input at all" and
<nomatch> means "no match in speech modality" or "no match
in dtmf modality". For example, to represent garbled speech
combined with dtmf "1 2 3 4", the markup would be:</t>
<figure>
<artwork><![CDATA[<input>
<input mode="speech"><nomatch/></input>
<input mode="dtmf">1 2 3 4</input>
</input>]]></artwork>
</figure>
<t>Note: while <noinput> could be represented as an
attribute of input, <nomatch> cannot, since it could
potentially include PCDATA content with the best match. For
parallelism, <noinput> is also an element.</t>
</section>
</section>
</section>
<section anchor="sec.enrollmentResults" title="Enrollment Results">
<t>All enrollment elements are contained within a single
<enrollment-result> element under <result>. The elements
are described below and have the schema defined in <xref
target="sec.enrollmentResultsSchema"></xref>. The following elements
are defined:</t>
<t><list style="numbers">
<t>num-clashes</t>
<t>num-good-repetitions</t>
<t>num-repetitions-still-needed</t>
<t>consistency-status</t>
<t>clash-phrase-ids</t>
<t>transcriptions</t>
<t>confusable-phrases</t>
</list></t>
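<t>A client typically inspects these elements to decide whether the
phrase can be committed by ending the enrollment session. The
following sketch operates on a dictionary standing in for a parsed
<enrollment-result>; the element names are those defined above.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: may the enrollment session commit the new phrase?
def can_commit(enrollment_result):
    # num-repetitions-still-needed must reach 0, and the collected
    # repetitions must have been judged consistent.
    return (enrollment_result["num-repetitions-still-needed"] == 0
            and enrollment_result["consistency-status"] == "consistent")

ready = can_commit({
    "num-clashes": 0,
    "num-good-repetitions": 3,
    "num-repetitions-still-needed": 0,
    "consistency-status": "consistent",
})
```
]]></artwork>
</figure>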
<section title="NUM-CLASHES Element">
<t>The <num-clashes> element contains the number of clashes
that this pronunciation has with other pronunciations in an active
enrollment session. The associated Clash-Threshold header field
determines the sensitivity of the clash measurement. Note that clash
testing can be turned off completely by setting the Clash-Threshold
header field value to 0.</t>
</section>
<section title="NUM-GOOD-REPETITIONS Element">
<t>The <num-good-repetitions> element contains the number of
consistent pronunciations obtained so far in an active enrollment
session.</t>
</section>
<section title="NUM-REPETITIONS-STILL-NEEDED Element">
<t>The <num-repetitions-still-needed> element contains the
number of consistent pronunciations that must still be obtained
before the new phrase can be added to the enrollment grammar. The
number of consistent pronunciations required is specified by the
client in the request header field
Num-Min-Consistent-Pronunciations. The returned value must be 0
before the client can successfully commit a phrase to the grammar by
ending the enrollment session.</t>
</section>
<section title="CONSISTENCY-STATUS Element">
<t>The <consistency-status> element is used to indicate how
consistent the repetitions are when learning a new phrase. It can
have the values of consistent, inconsistent, and undecided.</t>
</section>
<section title="CLASH-PHRASE-IDS Element">
<t>The <clash-phrase-ids> element contains the phrase ids of
clashing pronunciation(s), if any. This element is absent if there
are no clashes.</t>
</section>
<section title="TRANSCRIPTIONS Element">
<t>The <transcriptions> element contains the transcriptions
returned in the last repetition of the phrase being enrolled.</t>
</section>
<section title="CONFUSABLE-PHRASES Element">
<t>The <confusable-phrases> element contains a list of phrases
from a command grammar that are confusable with the phrase being
added to the personal grammar. This element may be absent if there
are no confusable phrases.</t>
</section>
</section>
<section title="DEFINE-GRAMMAR">
<t>The DEFINE-GRAMMAR method, from the client to the server, provides
one or more grammars and requests the server to access, fetch, and
compile the grammars as needed. The DEFINE-GRAMMAR method
implementation MUST do a fetch of all external URIs that are part of
that operation. If caching is implemented, this URI fetching MUST
conform to the cache control hints and parameter header fields
associated with the method in deciding whether it should be fetched
from cache or from the external server. If these hints/parameters are
not specified in the method, the values set for the session using
SET-PARAMS/GET-PARAMS apply. If they were not set for the
session, their default values apply.</t>
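<t>The fallback order for these cache-control hints can be sketched
as follows: a value carried in the method wins, then the session
value set via SET-PARAMS, then the defined default. The header-field
names in the example are illustrative.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: method header -> session parameter -> default.
def effective_param(name, method_headers, session_params, defaults):
    if name in method_headers:
        return method_headers[name]
    if name in session_params:
        return session_params[name]
    return defaults[name]

value = effective_param(
    "Fetch-Timeout",
    method_headers={},                       # not set on the method
    session_params={"Fetch-Timeout": 20},    # set via SET-PARAMS
    defaults={"Fetch-Timeout": 10},
)
```
]]></artwork>
</figure>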
<t>If the server resource is in the recognition state, the
server MUST respond to the DEFINE-GRAMMAR request with a failure
status.</t>
<t>If the resource is in the idle state and is able to successfully
process the supplied grammars, the server MUST return a success code
status and the request-state MUST be COMPLETE.</t>
<t>If the recognizer resource could not define the grammar for some
reason, for example if the download failed, the grammar failed to
compile, or the grammar was in an unsupported form, the MRCPv2
response for the DEFINE-GRAMMAR method MUST contain a failure status
code of 407, and contain a completion-cause header field describing
the failure reason.</t>
<figure title="Define Grammar Example">
<artwork><![CDATA[
C->S:MRCP/2.0 589 DEFINE-GRAMMAR 543257
Channel-Identifier:32AECB23433801@speechrecog
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C:MRCP/2.0 73 543257 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
C->S:MRCP/2.0 334 DEFINE-GRAMMAR 543258
Channel-Identifier:32AECB23433801@speechrecog
Content-Type:application/srgs+xml
Content-ID:<helpgrammar@root-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0">
<rule id="request">
I need help
</rule>
</grammar>
S->C:MRCP/2.0 73 543258 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
C->S:MRCP/2.0 723 DEFINE-GRAMMAR 543259
Channel-Identifier:32AECB23433801@speechrecog
Content-Type:application/srgs+xml
Content-ID:<request2@field-level.store>
Content-Length:...
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE grammar PUBLIC "-//W3C//DTD GRAMMAR 1.0//EN"
"http://www.w3.org/TR/speech-grammar/grammar.dtd">
<grammar xmlns="http://www.w3.org/2001/06/grammar" xml:lang="en"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/06/grammar
http://www.w3.org/TR/speech-grammar/grammar.xsd"
version="1.0" mode="voice" root="basicCmd">
<meta name="author" content="Stephanie Williams"/>
<rule id="basicCmd" scope="public">
<example> please move the window </example>
<example> open a file </example>
<ruleref
uri="http://grammar.example.com/politeness.grxml#startPolite"/>
<ruleref uri="#command"/>
<ruleref
uri="http://grammar.example.com/politeness.grxml#endPolite"/>
</rule>
<rule id="command">
<ruleref uri="#action"/> <ruleref uri="#object"/>
</rule>
<rule id="action">
<one-of>
<item weight="10"> open <tag>open</tag> </item>
<item weight="2"> close <tag>close</tag> </item>
<item weight="1"> delete <tag>delete</tag> </item>
<item weight="1"> move <tag>move</tag> </item>
</one-of>
</rule>
<rule id="object">
<item repeat="0-1">
<one-of>
<item> the </item>
<item> a </item>
</one-of>
</item>
<one-of>
<item> window </item>
<item> file </item>
<item> menu </item>
</one-of>
</rule>
</grammar>
S->C:MRCP/2.0 69 543259 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
C->S:MRCP/2.0 155 RECOGNIZE 543260
Channel-Identifier:32AECB23433801@speechrecog
N-Best-List-Length:2
Content-Type:text/uri-list
Content-Length:...
session:request1@form-level.store
session:request2@field-level.store
session:helpgrammar@root-level.store
S->C:MRCP/2.0 48 543260 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C:MRCP/2.0 48 START-OF-INPUT 543260 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C:MRCP/2.0 486 RECOGNITION-COMPLETE 543260 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Waveform-URI:<http://web.media.com/session123/audio.wav>;
size=124535;duration=2340
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section anchor="sec.methodRecognize" title="RECOGNIZE">
<t>The RECOGNIZE method from the client to the server requests the
recognizer to start recognition and provides it with one or more
grammar references for grammars to match against the input media. The
RECOGNIZE method can carry header fields to control the sensitivity,
confidence level and the level of detail in results provided by the
recognizer. These header field values override the current values set
by a previous <spanx style="verb">SET-PARAMS</spanx> method.</t>
<t>The RECOGNIZE method can request the recognizer resource to operate
in normal or hotword mode as specified by the Recognition-Mode header
field. The default value is "normal". If the resource could not start
a recognition, the server MUST respond with a failure status code of
407 and a completion-cause header field in the response describing the
cause of failure.</t>
<t>The RECOGNIZE request uses the message body to specify the grammars
applicable to the request. The active grammar(s) for the request can
be specified in one of three ways. If the client needs to explicitly
control grammar weights for the recognition operation, it must employ
method 3 below. The order of these grammars specifies the precedence
of the grammars which is used when more than one grammar in the list
matches the speech; in this case, the grammar with the higher
precedence is returned as a match. This precedence capability is
useful in applications like VoiceXML browsers to order grammars
specified at the dialog, document and root level of a VoiceXML
application.<list style="numbers">
<t>The grammar may be placed directly in the message body as typed
content. If more than one grammar is included in the body, the
order of inclusion controls the corresponding precedence for the
grammars during recognition, with earlier grammars in the body
having a higher precedence than later ones.</t>
<t>The body may contain a list of grammar URIs specified in
content of Media Type text/uri-list <xref
target="RFC2483">RFC2483</xref>. The order of the URIs determines
the corresponding precedence for the grammars during recognition,
with highest-precedence first and decreasing for each URI
thereafter.</t>
<t>The body may contain a list of grammar URIs specified in
content of Media Type text/grammar-ref-list. This type defines a
list of grammar URIs and allows each grammar URI to be assigned a
weight in the list. This weight has the same meaning as the W3C
grammar weights.</t>
</list>In addition to performing recognition on the input, the
recognizer may also enroll the collected utterance in a personal
grammar if the Enroll-Utterance header field is set to true and an
Enrollment is active (via an earlier execution of the
START-PHRASE-ENROLLMENT method). If so, and if the RECOGNIZE request
contains a Content-ID header field, then the resulting grammar (which
includes the personal grammar as a sub-grammar) can be referenced
through the <spanx style="verb">session:</spanx> URI scheme (see <xref
target="sec.sessionURIScheme"></xref>).</t>
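<t>With the second method, precedence follows the line order of the
text/uri-list body. The sketch below composes such a body; the
session: URIs are the ones from the examples in this document, and
CRLF line separation is assumed.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: a text/uri-list RECOGNIZE body; the URI order encodes
# grammar precedence, highest first.
def uri_list_body(grammar_uris):
    # One URI per line; earlier lines take precedence when more
    # than one grammar matches the speech.
    return "\r\n".join(grammar_uris)

body = uri_list_body([
    "session:request1@form-level.store",    # highest precedence
    "session:request2@field-level.store",
    "session:helpgrammar@root-level.store", # lowest precedence
])
```
]]></artwork>
</figure>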
<t>If the resource was able to successfully start the recognition, the
server MUST return a success code and a request-state of IN-PROGRESS.
This means that the recognizer is active and that the client MUST be
prepared to receive further events with this request-id.</t>
<t>If the resource was able to queue the request the server MUST
return a success code and request-state of PENDING. This means that
the recognizer is currently active with another request and that this
request has been queued for processing.</t>
<t>If the resource could not start a recognition, the server MUST
respond with a failure status code of 407 and a completion-cause
header field in the response describing the cause of failure.</t>
<t>For the recognizer resource, RECOGNIZE and INTERPRET are the only
requests that return a request-state of IN-PROGRESS, meaning that
recognition is in progress. When the recognition completes by matching
one of the grammar alternatives or by a time-out without a match or
for some other reason, the recognizer resource MUST send the client a
RECOGNITION-COMPLETE event (or INTERPRETATION-COMPLETE, if INTERPRET
was the request) with the result of the recognition and a
request-state of COMPLETE.</t>
<t>Large grammars can take a long time for the server to compile. For
grammars which are used repeatedly, the client can improve server
performance by issuing a DEFINE-GRAMMAR request with the grammar ahead
of time. In such a case the client can issue the RECOGNIZE request and
reference the grammar through the <spanx style="verb">session:</spanx>
URI scheme (see <xref target="sec.sessionURIScheme"></xref>). This
also applies in general if the client wants to repeat recognition with
a previous inline grammar.</t>
<t>The RECOGNIZE method implementation MUST do a fetch of all external
URIs that are part of that operation. If caching is implemented, this
URI fetching MUST conform to the cache control hints and parameter
header fields associated with the method in deciding whether it should
be fetched from cache or from the external server. If these
hints/parameters are not specified in the method, the values set for
the session using SET-PARAMS/GET-PARAMS apply. If they were not
set for the session, their default values apply.</t>
<t>Note that since the audio and the messages are carried over
separate communication paths there may be a race condition between the
start of the flow of audio and the receipt of the RECOGNIZE method.
For example, if an audio flow is started by the client at the same
time as the RECOGNIZE method is sent, either the audio or the
RECOGNIZE can arrive at the recognizer first. As another example, the
client may choose to continuously send audio to the Server and signal
the Server to recognize using the RECOGNIZE method. Mechanisms to
resolve this condition are outside the scope of this specification.
The recognizer can expect the media to start flowing when it receives
the recognize request, but MUST NOT buffer anything it receives
beforehand in order to preserve the semantics that application authors
expect with respect to the input timers.</t>
<t>When a RECOGNIZE method has been received, recognition is
initiated on the stream. The No-Input-Timer MUST be started at
this time if the Start-Input-Timers header field is specified as
"true". If this header field is set to "false", the No-Input-Timer
MUST be started when the server receives the START-INPUT-TIMERS
method from the
client. The Recognition-Timer MUST be started when the recognition
resource detects speech or a DTMF digit in the media stream.</t>
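<t>The timer start conditions above can be sketched as follows. The
Timer class is a stand-in for illustration, not an MRCPv2
construct.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: when the No-Input-Timer and Recognition-Timer start.
class Timer:
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True

no_input_timer = Timer()
recognition_timer = Timer()

def on_recognize(start_input_timers):
    # Starts with RECOGNIZE only if Start-Input-Timers is "true".
    if start_input_timers == "true":
        no_input_timer.start()

def on_start_input_timers():
    # Otherwise it starts on the START-INPUT-TIMERS method.
    no_input_timer.start()

def on_speech_or_dtmf_detected():
    # Recognition-Timer starts on first speech or DTMF.
    recognition_timer.start()

on_recognize("false")
started_early = no_input_timer.running   # held back by the client
on_start_input_timers()
on_speech_or_dtmf_detected()
```
]]></artwork>
</figure>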
<t>Non-Hotword mode recognition:</t>
<t>When the recognition resource detects speech or a DTMF digit in the
media stream it MUST send the START-OF-INPUT event. When enough speech
has been collected for the server to process, the recognizer can try
to match the collected speech with the active grammars. If the speech
collected at this point fully matches with any of the active grammars,
the Speech-Complete-Timer is started. If it matches partially with one
or more of the active grammars, with more speech needed before a full
match is achieved, then the Speech-Incomplete-Timer is started.</t>
<t>1. When the No-Input-Timer expires, the recognizer MUST
complete with a Completion-Cause code of "no-input-timeout".</t>
<t>2. The recognizer MUST support detecting a no-match condition upon
detecting end of speech. The recognizer MAY support detecting a
no-match condition before waiting for end-of-speech. If this is
supported, this capability is enabled by setting the "Early-No-Match"
header field to "true". Upon detecting a no-match condition the
RECOGNIZE MUST return with "no-match".</t>
<t>3. When the Speech-Incomplete-Timer expires the recognizer SHOULD
complete with a Completion-Cause code of "partial-match", unless the
recognizer cannot differentiate a partial-match in which case it MUST
return a Completion-Cause code of "no-match". The recognizer MAY
return results for the partially matched grammar.</t>
<t>4. When the Speech-Complete-Timer expires the recognizer MUST
complete with a Completion-Cause code of "success".</t>
<t>5. When the Recognition-Timer expires one of the following MUST
happen:</t>
<t>5.1 If there was a partial-match the recognizer SHOULD complete
with a Completion-Cause code of "partial-match-maxtime", unless the
recognizer cannot differentiate a partial-match in which case it MUST
complete with a Completion-Cause code of "no-match-maxtime". The
recognizer MAY return results for the partially matched grammar.</t>
<t>5.2 If there was a full-match the recognizer MUST complete with a
Completion-Cause code of "success-maxtime".</t>
<t>5.3 If there was a no match the recognizer MUST complete with a
Completion-Cause code of "no-match-maxtime".</t>
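<t>The normal-mode rules above amount to a mapping from timer event
and match state to Completion-Cause token, sketched below. The event
and match-state names are illustrative labels, not protocol
elements; the SHOULD cases are shown in their preferred form.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: normal-mode (timer event, match state) -> Completion-Cause.
def completion_cause(event, match):
    """match is one of "none", "partial", "full"."""
    if event == "no-input-timer-expired":
        return "no-input-timeout"
    if event == "speech-incomplete-timer-expired":
        # "no-match" when partial matches cannot be differentiated.
        return "partial-match" if match == "partial" else "no-match"
    if event == "speech-complete-timer-expired":
        return "success"
    if event == "recognition-timer-expired":
        return {"partial": "partial-match-maxtime",
                "full": "success-maxtime",
                "none": "no-match-maxtime"}[match]
    raise ValueError(event)

cause = completion_cause("recognition-timer-expired", "full")
```
]]></artwork>
</figure>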
<t>For the Hotword mode recognition:</t>
<t>Note that for Hotword mode recognition the START-OF-INPUT event is
not generated when speech or a DTMF digit is detected.</t>
<t>1. When the No-Input-Timer expires, the recognizer MUST
complete with a Completion-Cause code of "no-input-timeout".</t>
<t>2. When there is a match at any time, the RECOGNIZE completes
with a Completion-Cause code of "success".</t>
<t>3. When the Recognition-Timer expires and there is not a match, the
RECOGNIZE MUST complete with a Completion-Cause code of
"hotword-maxtime".</t>
<t>4. When the Recognition-Timer expires and there is a match, the
RECOGNIZE MUST complete with a Completion-Cause code of
"success-maxtime".</t>
<t>5. When the Recognition-Timer is running but the detected
speech/DTMF has not resulted in a match, the Recognition-Timer MUST be
stopped and reset. It MUST then be restarted when speech/DTMF is again
detected.</t>
<figure title="RECOGNIZE Example">
<artwork><![CDATA[
C->S:MRCP/2.0 479 RECOGNIZE 543257
Channel-Identifier:32AECB23433801@speechrecog
Confidence-Threshold:0.9
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 48 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C:MRCP/2.0 49 START-OF-INPUT 543257 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C:MRCP/2.0 467 RECOGNITION-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Waveform-URI:<http://web.media.com/session123/audio.wav>;
size=424252;duration=2543
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
<figure title="Second RECOGNIZE Example">
<artwork><![CDATA[
C->S: MRCP/2.0 479 RECOGNIZE 543257
Channel-Identifier:32AECB23433801@speechrecog
Confidence-Threshold:0.9
Fetch-Timeout:20
Content-Type:application/srgs+xml
Content-Length:...
<?xml version="1.0"?>
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" mode="voice"
root="rule_list">
<rule id="rule_list" scope="public">
<one-of>
<item weight="10">
<ruleref uri=
"http://grammar.example.com/world-cities.grxml#canada"/>
</item>
<item weight="1.5">
<ruleref uri=
"http://grammar.example.com/world-cities.grxml#america"/>
</item>
<item weight="0.5">
<ruleref uri=
"http://grammar.example.com/world-cities.grxml#india"/>
</item>
</one-of>
</rule>
</grammar>
]]></artwork>
</figure>
</section>
<section title="STOP">
<t>The <spanx style="verb">STOP</spanx> method from the client to the
server tells the resource to stop recognition if a request is active.
If a RECOGNIZE request is active and the <spanx
style="verb">STOP</spanx> request successfully terminated it, then the
response header section contains an active-request-id-list header
field containing the request-id of the RECOGNIZE request that was
terminated. In this case, no RECOGNITION-COMPLETE event is sent for
the terminated request. If there was no recognition active, then the
response MUST NOT contain an active-request-id-list header field.
Either way the response MUST contain a status of 200 (Success).</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 573 RECOGNIZE 543257
Channel-Identifier:32AECB23433801@speechrecog
Confidence-Threshold:0.9
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 47 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
C->S: MRCP/2.0 28 STOP 543258
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 67 543258 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Active-Request-Id-List:543257
]]></artwork>
</figure>
</section>
<section title="GET-RESULT">
<t>The GET-RESULT method from the client to the server may be issued
when the recognizer resource is in the recognized state. This request
allows the client to retrieve results for a completed recognition.
This is useful if the client decides it wants more alternatives or
more information. When the server receives this request it re-computes
and returns the results according to the recognition constraints
provided in the GET-RESULT request.</t>
<t>The GET-RESULT request can specify constraints such as a different
confidence-threshold, or n-best-list-length. This capability is
optional for MRCPv2 servers and the automatic speech recognition
engine in the server MAY return a status of unsupported feature.</t>
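<t>Conceptually, the server re-applies the GET-RESULT constraints to
the retained recognition context. The sketch below shows the idea on
a stored n-best list; the dictionary shape is illustrative, not an
MRCPv2 data structure.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: re-apply a new Confidence-Threshold and
# N-Best-List-Length to stored hypotheses (assumed sorted
# best-first).
def recompute(nbest, confidence_threshold, n_best_list_length):
    kept = [h for h in nbest
            if h["confidence"] >= confidence_threshold]
    return kept[:n_best_list_length]

hypotheses = [
    {"text": "may I speak to Andre Roy", "confidence": 0.95},
    {"text": "may I speak to Andre Rouen", "confidence": 0.60},
]
result = recompute(hypotheses, confidence_threshold=0.9,
                   n_best_list_length=2)
```
]]></artwork>
</figure>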
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 73 GET-RESULT 543257
Channel-Identifier:32AECB23433801@speechrecog
Confidence-Threshold:0.9
S->C: MRCP/2.0 487 543257 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="START-OF-INPUT">
<t>This is an event from the server to the client indicating that the
recognition resource has detected speech or a DTMF digit in the media
stream. This event is useful in implementing kill-on-barge-in
scenarios when a synthesizer resource is in a different session from
the recognizer resource and hence is not aware of an incoming audio
source (see <xref target="sec.kill-on-barge-in"></xref>). In these
cases, it is up to the client to act as an intermediary and
respond to this event by issuing a BARGE-IN-OCCURRED method to
the synthesizer
resource. The recognizer resource also MUST send a Proxy-Sync-Id
header field with a unique value for this event.</t>
<t>This event MUST be generated by the server irrespective of whether
the synthesizer and recognizer are on the same server or not.</t>
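<t>The client-side proxying described above can be sketched as
follows: on START-OF-INPUT from the recognizer, the client echoes
the Proxy-Sync-Id value toward the synthesizer. The
send_to_synthesizer callable is a stand-in for the client's MRCPv2
transport.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: forward a recognizer START-OF-INPUT as a synthesizer
# BARGE-IN-OCCURRED, preserving Proxy-Sync-Id.
def on_start_of_input(event_headers, send_to_synthesizer):
    sync_id = event_headers["Proxy-Sync-Id"]
    send_to_synthesizer("BARGE-IN-OCCURRED",
                        {"Proxy-Sync-Id": sync_id})

sent = []
on_start_of_input(
    {"Proxy-Sync-Id": "987654321"},
    lambda method, headers: sent.append((method, headers)))
```
]]></artwork>
</figure>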
</section>
<section title="START-INPUT-TIMERS">
<t>This request is sent from the client to the recognition resource
when it knows that a kill-on-barge-in prompt has finished playing (see
<xref target="sec.kill-on-barge-in"></xref>). This is useful in the
scenario when the recognition and synthesizer engines are not in the
same session. When a kill-on-barge-in prompt is being played, the
client may want a RECOGNIZE request to be simultaneously active so
that it can detect and implement kill on barge-in. But at the same
time the client doesn't want the recognizer to start the no-input
timers until the prompt is finished. The Start-Input-Timers header
field in the RECOGNIZE request allows the client to say whether the
timers should be started immediately or not. If not, the recognizer
resource MUST NOT start the timers until the client sends a
START-INPUT-TIMERS method to the recognizer.</t>
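<t>The client-side sequence described above can be sketched as
follows; send_mrcp is a stand-in for issuing a request on the
recognizer channel.</t>
<figure>
<artwork><![CDATA[
```python
# Sketch: start recognition during a kill-on-barge-in prompt, then
# release the input timers once the prompt finishes.
def begin_recognition_during_prompt(send_mrcp):
    # Recognize now, but hold the no-input timer.
    send_mrcp("RECOGNIZE", {"Start-Input-Timers": "false"})

def on_prompt_finished(send_mrcp):
    # Prompt done: let the no-input timer run.
    send_mrcp("START-INPUT-TIMERS", {})

log = []
send = lambda method, headers: log.append(method)
begin_recognition_during_prompt(send)
on_prompt_finished(send)
```
]]></artwork>
</figure>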
</section>
<section title="RECOGNITION-COMPLETE">
<t>This is an event from the recognizer resource to the client
indicating that the recognition completed. The recognition result is
sent in the body of the MRCPv2 message. The request-state field MUST
be COMPLETE indicating that this is the last event with that
request-id, and that the request with that request-id is now complete.
The server MUST maintain the recognizer context containing the results
and the audio waveform input of that recognition until the next
RECOGNIZE request is issued for that resource or the session
terminates. A URI to the audio waveform MAY be returned to the client
in a Waveform-URI header field in the RECOGNITION-COMPLETE event. The
client can use this URI to retrieve or play back the audio.</t>
<t>Note that if an enrollment session is active, the RECOGNITION-COMPLETE
event can contain either recognition or enrollment results depending
on what was spoken. The following example shows a complete exchange
with a recognition result.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 487 RECOGNIZE 543257
Channel-Identifier:32AECB23433801@speechrecog
Confidence-Threshold:0.9
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 48 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 49 START-OF-INPUT 543257 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Waveform-URI:<http://web.media.com/session123/audio.wav>;
size=342456;duration=25435
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
<t>If the result were instead an enrollment result, the final message
from the server above could have instead been:</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 465 RECOGNITION-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="Personal-Grammar-URI">
<enrollment-result>
<num-clashes> 2 </num-clashes>
<num-good-repetitions> 1 </num-good-repetitions>
<num-repetitions-still-needed>
1
</num-repetitions-still-needed>
<consistency-status> consistent </consistency-status>
<clash-phrase-ids>
<item> Jeff </item> <item> Andre </item>
</clash-phrase-ids>
<transcriptions>
<item> m ay b r ow k er </item>
<item> m ax r aa k ah </item>
</transcriptions>
<confusable-phrases>
<item>
<phrase> call </phrase>
<confusion-level> 10 </confusion-level>
</item>
</confusable-phrases>
</enrollment-result>
</result>
]]></artwork>
</figure>
</section>
<section title="START-PHRASE-ENROLLMENT">
<t>The START-PHRASE-ENROLLMENT method from the client to the server
starts a new phrase enrollment session during which the client may
call RECOGNIZE multiple times to enroll a new utterance in a grammar.
An enrollment session consists of a set of calls to RECOGNIZE in which
the caller speaks a phrase several times so the system can "learn" it.
The phrase is then added to a personal grammar (speaker-trained
grammar), so that the system can recognize it later.</t>
<t>Only one phrase enrollment session may be active at a time for a
resource. The Personal-Grammar-URI identifies the grammar that is used
during enrollment to store the personal list of phrases. Once
RECOGNIZE is called, the result is returned in a RECOGNITION-COMPLETE
event and may contain either an enrollment result OR a recognition
result for a regular recognition.</t>
<t>Calling END-PHRASE-ENROLLMENT ends the ongoing phrase enrollment
session, which is typically done after a sequence of successful calls
to RECOGNIZE. This method can be called to commit the new phrase to
the personal grammar or to abort the phrase enrollment session.</t>
<t>The grammar to contain the new enrolled phrase, specified by
Personal-Grammar-URI, is created if it does not exist. Also, the
personal grammar may ONLY contain phrases added via a phrase
enrollment session.</t>
<t>The Phrase-ID passed to this method is used to identify this phrase
in the grammar and will be returned as the speech input when doing a
RECOGNIZE on the grammar. The Phrase-NL similarly is returned in a
RECOGNITION-COMPLETE event in the same manner as other NL in a
grammar. The tag-format of this NL is implementation specific.</t>
<t>If the client has specified Save-Best-Waveform as true, then the
response after ending the phrase enrollment session MUST contain the
location/URI of a recording of the best repetition of the learned
phrase.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 START-PHRASE-ENROLLMENT 543258
Channel-Identifier:32AECB23433801@speechrecog
Num-Min-Consistent-Pronunciations:2
Consistency-Threshold:30
Clash-Threshold:12
Personal-Grammar-URI:<personal grammar uri>
Phrase-Id:<phrase id>
Phrase-NL:<NL phrase>
Weight:1
Save-Best-Waveform:true
S->C: MRCP/2.0 49 543258 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
]]></artwork>
</figure>
</section>
<section title="ENROLLMENT-ROLLBACK">
<t>The ENROLLMENT-ROLLBACK method discards the last live utterance
from the RECOGNIZE operation. The client can invoke this method when
the caller provides undesirable input such as non-speech noises,
side-speech, commands, utterances from the RECOGNIZE grammar, etc. Note
that this method does not provide a stack of rollback states.
Executing ENROLLMENT-ROLLBACK twice in succession without an
intervening recognition operation has no effect the second time.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 ENROLLMENT-ROLLBACK 543261
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 49 543261 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
]]></artwork>
</figure>
</section>
<section title="END-PHRASE-ENROLLMENT">
<t>The END-PHRASE-ENROLLMENT method may be called ONLY during an
active phrase enrollment session. It MUST NOT be called during an
ongoing RECOGNIZE operation. To commit the new phrase in the grammar,
the client MAY call this method once successive calls to RECOGNIZE
have succeeded and Num-Repetitions-Still-Needed has been returned as 0
in the RECOGNITION-COMPLETE event. Alternatively, the client can abort
the phrase enrollment session by calling this method with the
Abort-Phrase-Enrollment header field.</t>
<t>If the client has specified Save-Best-Waveform as true in the
START-PHRASE-ENROLLMENT request, then the response MUST contain the
location/URI of a recording of the best repetition of the learned
phrase.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 END-PHRASE-ENROLLMENT 543262
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 123 543262 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Waveform-URI:<http://mediaserver.com/recordings/file1324.wav>;
size=242453;duration=25432
]]></artwork>
</figure>
</section>
<section title="MODIFY-PHRASE">
<t>The MODIFY-PHRASE method sent from the client to the server is used
to change the phrase ID, NL phrase and/or weight for a given phrase in
a personal grammar.</t>
<t>If no fields are supplied then calling this method has no
effect.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 MODIFY-PHRASE 543265
Channel-Identifier:32AECB23433801@speechrecog
Personal-Grammar-URI:<personal grammar uri>
Phrase-Id:<phrase id>
New-Phrase-Id:<new phrase id>
Phrase-NL:<NL phrase>
Weight:1
S->C: MRCP/2.0 49 543265 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog]]></artwork>
</figure>
</section>
<section title="DELETE-PHRASE">
<t>The DELETE-PHRASE method sent from the client to the server is used
to delete a phrase in a personal grammar added through voice enrollment
or text enrollment. If the specified phrase does not exist, this
method has no effect.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 DELETE-PHRASE 543266
Channel-Identifier:32AECB23433801@speechrecog
Personal-Grammar-URI:<personal grammar uri>
Phrase-Id:<phrase id>
S->C: MRCP/2.0 49 543266 200 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
]]></artwork>
</figure>
</section>
<section anchor="sec.interpret" title="INTERPRET">
<t>The INTERPRET method from the client to the server takes as input
an Interpret-Text header field containing the text for which a
semantic interpretation is desired, and returns, via the
INTERPRETATION-COMPLETE event, an interpretation result that is very
similar to the one returned from a RECOGNIZE method invocation. The
only portions excluded from the result are those relevant to acoustic
matching. The Interpret-Text header field MUST be included in the
INTERPRET request.</t>
<t>Recognizer grammar data is treated in the same way as it is when
issuing a RECOGNIZE method call.</t>
<t>If a RECOGNIZE, RECORD or another INTERPRET operation is already in
progress for the resource, the server MUST reject the request with a
response having a status code of 402, "Method not valid in this
state", and a COMPLETE request state.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 INTERPRET 543266
Channel-Identifier:32AECB23433801@speechrecog
Interpret-Text:may I speak to Andre Roy
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 49 543266 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 49 INTERPRETATION-COMPLETE 543266 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="INTERPRETATION-COMPLETE">
<t>This event from the recognition resource to the client indicates
that the INTERPRET operation is complete. The interpretation result is
sent in the body of the MRCPv2 message. The request state MUST be set to
COMPLETE.</t>
<t>The completion-cause header field MUST be included in this event
and MUST be set to an appropriate value from the list of cause
codes.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 INTERPRET 543266
Channel-Identifier:32AECB23433801@speechrecog
Interpret-Text:may I speak to Andre Roy
Content-Type:application/srgs+xml
Content-ID:<request1@form-level.store>
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to tokens -->
<rule id="yes">
<one-of>
<item xml:lang="fr-CA">oui</item>
<item xml:lang="en-US">yes</item>
</one-of>
</rule>
<!-- single language attachment to a rule expansion -->
<rule id="request">
may I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 49 543266 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
S->C: MRCP/2.0 49 INTERPRETATION-COMPLETE 543266 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="DTMF Detection">
<t>Digits received as DTMF tones are delivered to the recognition
resource in the MRCPv2 server in the RTP stream according to <xref
target="RFC4733">RFC4733</xref>. The automatic speech recognizer (ASR)
MUST support RFC4733 to recognize digits and it MAY support
recognizing <xref target="Q.23">DTMF tones</xref> in the audio.</t>
</section>
</section>
<section anchor="sec.recorderResource" title="Recorder Resource">
<t>This resource captures received audio and video and stores it as
content pointed to by a URI. The main uses of recorders are<list
style="numbers">
<t>to capture speech audio that may be submitted for recognition at
a later time, and</t>
<t>to record voice or video mail.</t>
</list>Both of these applications require functionality beyond that
specified by protocols such as RTSP, including audio endpointing
(i.e., detecting speech or silence). Support for video is optional
and is intended mainly for capturing video mail, which may require
the speech or audio processing mentioned above.</t>
<t>A recorder MUST provide some endpointing capabilities for suppressing
silence at the beginning and end of a recording, and MAY also suppress
silence in the middle of a recording. If such suppression is done, the
recorder MUST maintain timing metadata to indicate the actual time
stamps of the recorded media.</t>
<t> See the discussion on the sensitivity of saved waveforms in <xref
target="sec.securityConsiderations"></xref>.</t>
<section title="Recorder State Machine">
<figure title="Recorder State Machine">
<artwork><![CDATA[
Idle Recording
State State
| |
|---------RECORD------->|
| |
|<------STOP------------|
| |
|<--RECORD-COMPLETE-----|
| |
| |--------|
| START-OF-INPUT |
| |------->|
| |
| |--------|
| START-INPUT-TIMERS |
| |------->|
| |
]]></artwork>
</figure>
</section>
<section title="Recorder Methods">
<t>The recorder resource supports the following methods.</t>
<figure>
<artwork><![CDATA[
recorder-Method = "RECORD"
/ "STOP"
/ "START-INPUT-TIMERS"
]]></artwork>
</figure>
</section>
<section title="Recorder Events">
<t>The recorder resource may generate the following events.</t>
<figure>
<artwork><![CDATA[
recorder-Event = "START-OF-INPUT"
/ "RECORD-COMPLETE"
]]></artwork>
</figure>
</section>
<section title="Recorder Header Fields">
<t>Method invocations for the recorder resource may contain
resource-specific header fields containing request options and
information to augment the Method, Response or Event message it is
associated with.</t>
<figure>
<artwork><![CDATA[
recorder-header = sensitivity-level
/ no-input-timeout
/ completion-cause
/ completion-reason
/ failed-uri
/ failed-uri-cause
/ record-uri
/ media-type
/ max-time
/ trim-length
/ final-silence
/ capture-on-speech
/ ver-buffer-utterance
/ start-input-timers
/ new-audio-channel
]]></artwork>
</figure>
<section title="Sensitivity Level">
<t>To filter out background noise and not mistake it for speech, the
recorder may support a variable level of sound sensitivity. The
sensitivity-level header field is a float value between 0.0 and 1.0
and allows the client to set the sensitivity level for the recorder.
This header field MAY occur in RECORD, <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. A higher value for this header
field means higher sensitivity. The default value for this header
field is implementation specific.</t>
<figure>
<artwork><![CDATA[
sensitivity-level = "Sensitivity-Level" ":" FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="No Input Timeout">
<t>When recording is started and there is no speech detected for a
certain period of time, the recorder can send a RECORD-COMPLETE
event to the client and terminate the record operation. The
no-input-timeout header field can set this timeout value. The value
is in milliseconds. This header field MAY occur in RECORD, <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The value for this header field
ranges from 0 to an implementation specific maximum value. The
default value for this header field is implementation specific.</t>
<figure>
<artwork><![CDATA[
no-input-timeout = "No-Input-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Completion Cause">
<t>This header field MUST be part of a RECORD-COMPLETE event from
the recorder resource to the client. This indicates the reason
behind the RECORD method completion. This header field MUST be sent
in the RECORD responses if they return with a failure status and a
COMPLETE state.</t>
<figure>
<artwork><![CDATA[
completion-cause = "Completion-Cause" ":" 3DIGIT SP
1*VCHAR CRLF
]]></artwork>
</figure>
<texttable>
<ttcol width="10%">Cause-Code</ttcol>
<ttcol width="35%">Cause-Name</ttcol>
<ttcol>Description</ttcol>
<c>000</c>
<c>success-silence</c>
<c>RECORD completed with a silence at the end</c>
<c>001</c>
<c>success-maxtime</c>
<c>RECORD completed after reaching the maximum recording time
specified in the RECORD method</c>
<c>002</c>
<c>noinput-timeout</c>
<c>RECORD failed due to no input</c>
<c>003</c>
<c>uri-failure</c>
<c>Failure accessing the record URI.</c>
<c>004</c>
<c>error</c>
<c>RECORD request terminated prematurely due to a recorder
error.</c>
</texttable>
</section>
<section title="Completion Reason">
<t>This header field MAY be present in a RECORD-COMPLETE event
coming from the recorder resource to the client. It contains the
reason text behind the RECORD request completion.</t>
<t>The completion reason text is provided for client use in logs and
for debugging and instrumentation purposes. Clients MUST NOT
interpret the completion reason text.</t>
<figure>
<artwork><![CDATA[
completion-reason = "Completion-Reason" ":"
quoted-string CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI">
<t>When a recorder method needs to post the audio to a URI and
access to the URI fails, the server MUST provide the failed URI in
this header field in the method response.</t>
<figure>
<artwork><![CDATA[
failed-uri = "Failed-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Failed URI Cause">
<t>When a recorder method needs to post the audio to a URI and
access to the URI fails, the server MUST provide the URI-specific or
protocol-specific response code through this header field in the
method response. The value encoding is UTF-8 to accommodate any
access protocol, some of which might have a response string instead
of a numeric response code.</t>
<figure>
<artwork><![CDATA[
failed-uri-cause = "Failed-URI-Cause" ":" 1*UTFCHAR
CRLF
]]></artwork>
</figure>
</section>
<section title="Record URI">
<t>When a recorder method contains this header field, the server MUST
capture the audio and store it. If the header field is present but
specified with no value, the server MUST store the content locally
and generate a URI that points to it. This URI is then returned in
either the <spanx style="verb">STOP</spanx> response or the
RECORD-COMPLETE event. If the header field in the RECORD method
specifies a URI, the server MUST attempt to capture and store the
audio at that location. If this header field is not specified in the
RECORD request, the server MUST capture the audio and send it in the
<spanx style="verb">STOP</spanx> response or the RECORD-COMPLETE
event as a message body. In this case, the response carrying the
audio content would have this header field with a cid value pointing
to the Content-ID in the message body.</t>
<t>The server MUST also return the size in octets and the duration
in milliseconds of the recorded audio wave-form as parameters
associated with the header field.</t>
<figure>
<artwork><![CDATA[
record-uri = "Record-URI" ":" ["<" Uri ">"
";" "size" "=" 1*19DIGIT
";" "duration" "=" 1*19DIGIT] CRLF
]]></artwork>
</figure>
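<t>For illustration only (the URIs and values below are hypothetical),
this header field can carry either a URI pointing to stored content or
a cid value referencing audio carried in the message body:</t>
<figure>
<artwork><![CDATA[
Record-URI:<http://mediaserver.com/recordings/file1324.wav>;
size=242453;duration=25432

Record-URI:<cid:audio1@mediaserver.com>;
size=242453;duration=25432
]]></artwork>
</figure>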
</section>
<section title="Media Type">
<t>A RECORD method MUST contain this header field, which specifies
to the server the Media Type of the captured audio or video.</t>
<figure>
<artwork><![CDATA[
media-type = "Media-Type" ":" media-type-value
CRLF
]]></artwork>
</figure>
</section>
<section title="Max Time">
<t>When recording is started, this header field specifies the maximum
length of the recording in milliseconds, calculated from the time the
actual capture and store begins, which is not necessarily the time
the RECORD method is received. The duration is measured before any
silence suppression has been applied by the recorder resource.
After this time, the recording stops and the server MUST return a
RECORD-COMPLETE event to the client having a request-state of
"COMPLETE". This header field MAY occur in RECORD, <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The value for this header field
ranges from 0 to an implementation specific maximum value. A value
of zero means infinity and hence the recording continues until one
or more of the other stop conditions are met. The default value for
this header field is 0.</t>
<figure>
<artwork><![CDATA[
max-time = "Max-Time" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Trim-Length">
<t>This header field MAY be sent on a STOP method and specifies the
length of audio to be trimmed from the end of the recording after
the stop. The length is interpreted to be in milliseconds. The
default value for this header field is 0.</t>
<figure>
<artwork><![CDATA[
trim-length = "Trim-Length" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Final Silence">
<t>When the recorder is started and the actual capture begins, this
header field specifies the length of silence in the audio that is to
be interpreted as the end of the recording. This header field MAY
occur in RECORD, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The value for this header field
ranges from 0 to an implementation specific maximum value and is
interpreted to be in milliseconds. A value of zero means infinity,
and hence the recording will continue until one of the other stop
conditions is met. The default value for this header field is
implementation specific.</t>
<figure>
<artwork><![CDATA[
final-silence = "Final-Silence" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Capture On Speech">
<t>If false, the recorder MUST start capturing immediately when
started. If true, the recorder MUST wait for the endpointing
functionality to detect speech before it starts capturing. This
header field MAY occur in RECORD, <spanx
style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>. The value for this header field is
a Boolean. The default value for this header field is false.</t>
<figure>
<artwork><![CDATA[
capture-on-speech = "Capture-On-Speech" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Ver-Buffer-Utterance">
<t>This header field is the same as the one described for the
Verification resource (see <xref
target="sec.verBufferUtterance"></xref>). This tells the server to
buffer the utterance associated with this recording request into the
verification buffer. Sending this header field is permitted only if
a verification buffer exists for the session. This buffer is shared
across resources within a session. It gets instantiated when a
verification resource is added to this session and is released when
the verification resource is released from the session.</t>
</section>
<section title="Start Input Timers">
<t>This header field MAY be sent as part of the RECORD request. A
value of false tells the recorder resource to start the operation,
but not to start the no-input timer until the client sends a
START-INPUT-TIMERS request to the recorder resource. This is useful
in the scenario when the recorder and synthesizer resources are not
part of the same session. When a kill-on-barge-in prompt is being
played, the client may want the RECORD request to be simultaneously
active so that it can detect and implement kill-on-barge-in (see
<xref target="sec.kill-on-barge-in"></xref>). But at the same time
the client doesn't want the recorder resource to start the no-input
timers until the prompt is finished. The default value is
"true".</t>
<figure>
<artwork><![CDATA[
start-input-timers = "Start-Input-Timers" ":"
BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="New Audio Channel">
<t>This header field is the same as the one described for the
Recognizer resource (see <xref
target="sec.newAudioChannel"></xref>).</t>
</section>
</section>
<section title="Recorder Message Body">
<t>If the RECORD request did not have a Record-URI header field, the
<spanx style="verb">STOP</spanx> response or the RECORD-COMPLETE event
MUST contain a message body carrying the captured audio. In this case,
the message carrying the audio content has a Record-URI header field
with a cid value pointing to the message body entity that contains the
recorded audio.</t>
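<t>As an illustrative sketch (the request-id, message length, media
type, and Content-ID below are hypothetical), a STOP response carrying
the captured audio in its message body might look as follows:</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 123 543257 200 COMPLETE
Channel-Identifier:32AECB23433802@recorder
Active-Request-Id-List:543257
Record-URI:<cid:audio1@recorder.example.com>;
size=242453;duration=25432
Content-Type:audio/wav
Content-ID:<audio1@recorder.example.com>
Content-Length:...

[...binary audio content...]
]]></artwork>
</figure>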
</section>
<section title="RECORD">
<t>The RECORD request places the recorder resource in the Recording
state. Depending on the header fields specified in the RECORD method,
the resource may start recording the audio immediately or wait for the
endpointing functionality to detect speech in the audio. The audio is
then made available to the client either in the message body or as
specified by Record-URI.</t>
<t>The server MUST support the HTTPS URI scheme and MAY support other
schemes. Note that due to the sensitive nature of voice recordings,
any protocols used for dereferencing SHOULD employ integrity and
confidentiality, unless other means, such as physical security, are
employed.</t>
<t>If a RECORD operation is already in progress, invoking this method
causes the server to issue a response having a status code of 402,
"Method not valid in this state", and a COMPLETE request state.</t>
<t>If the Record-URI is not valid, a status code of 404, "Illegal
Value for Header Field", is returned in the response. If it is
impossible for the server to create the requested stored content, a
status code of 407, "Method or Operation Failed", is returned.</t>
<t>If the type specified in the Media-Type header field is not
supported, the server MUST respond with a status code of 409,
"Unsupported Header Field Value", with the Media-Type header field in
its response.</t>
<t>When the recording operation is initiated, the response indicates
an IN-PROGRESS request state. The server MAY generate a subsequent
START-OF-INPUT event when speech is detected. Upon completion of the
recording operation, the server generates a RECORD-COMPLETE event.</t>
<figure title="RECORD Example">
<artwork><![CDATA[
C->S: MRCP/2.0 386 RECORD 543257
Channel-Identifier:32AECB23433802@recorder
Record-URI:<file://mediaserver/recordings/myfile.wav>
Capture-On-Speech:true
Final-Silence:300
Max-Time:6000
S->C: MRCP/2.0 48 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 49 START-OF-INPUT 543257 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 54 RECORD-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433802@recorder
Completion-Cause:000 success-silence
Record-URI:<file://mediaserver/recordings/myfile.wav>;
size=242552;duration=25645
]]></artwork>
</figure>
</section>
<section title="STOP">
<t>The <spanx style="verb">STOP</spanx> method moves the recorder from
the recording state back to the idle state. If a RECORD request is
active and the <spanx style="verb">STOP</spanx> request successfully
terminated it, then the STOP response MUST contain an
active-request-id-list header field containing the <spanx
style="verb">RECORD</spanx> request-id that was terminated. In this
case, no RECORD-COMPLETE event is sent for the terminated request. If
there was no recording active, then the response MUST NOT contain an
active-request-id-list header field. If the recording was a success,
the <spanx style="verb">STOP</spanx> response MUST contain a
Record-URI header field pointing to the recorded audio content or to
a typed entity in the body of the <spanx style="verb">STOP</spanx>
response containing the recorded audio. The <spanx
style="verb">STOP</spanx> method may have a Trim-Length header field,
in which case the specified length of audio is trimmed from the end of
the recording after the stop. In any case, the response MUST contain a
status of 200 (Success).</t>
<figure title="STOP Example">
<artwork><![CDATA[
C->S: MRCP/2.0 386 RECORD 543257
Channel-Identifier:32AECB23433802@recorder
Record-URI:<file://mediaserver/recordings/myfile.wav>
Capture-On-Speech:true
Final-Silence:300
Max-Time:6000
S->C: MRCP/2.0 48 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 49 START-OF-INPUT 543257 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
C->S: MRCP/2.0 386 STOP 543258
Channel-Identifier:32AECB23433802@recorder
Trim-Length:200
S->C: MRCP/2.0 48 543258 200 COMPLETE
Channel-Identifier:32AECB23433802@recorder
Record-URI:<file://mediaserver/recordings/myfile.wav>;
size=324253;duration=24561
Active-Request-Id-List:543257
]]></artwork>
</figure>
</section>
<section title="RECORD-COMPLETE">
<t>If the recording completes due to no-input, silence after speech,
or max-time, the server MUST generate the RECORD-COMPLETE event to the
client with a request-state of "COMPLETE". If the recording was a
success, the RECORD-COMPLETE event contains a Record-URI header field
pointing to the recorded audio file on the server or to a typed entity
in the message body containing the recorded audio.</t>
<figure title="RECORD-COMPLETE Example">
<artwork><![CDATA[
C->S: MRCP/2.0 386 RECORD 543257
Channel-Identifier:32AECB23433802@recorder
Record-URI:<file://mediaserver/recordings/myfile.wav>
Capture-On-Speech:true
Final-Silence:300
Max-Time:6000
S->C: MRCP/2.0 48 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 49 START-OF-INPUT 543257 IN-PROGRESS
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 48 RECORD-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433802@recorder
Completion-Cause:000 success-silence
Record-URI:<file://mediaserver/recordings/myfile.wav>;
size=325325;duration=24652
]]></artwork>
</figure>
</section>
<section title="START-INPUT-TIMERS">
<t>This request is sent from the client to the recorder resource when
it discovers that a kill-on-barge-in prompt has finished playing (see
<xref target="sec.kill-on-barge-in"></xref>). This is useful in the
scenario when the recorder and synthesizer resources are not in the
same MRCPv2 session. When a kill-on-barge-in prompt is being played,
the client wants the RECORD request to be simultaneously active so
that it can detect and implement kill on barge-in. But at the same
time the client doesn't want the recorder resource to start the
no-input timers until the prompt is finished. The Start-Input-Timers
header field in the RECORD request allows the client to say whether
the timers should be started or not. If not, the recorder resource
does not start the timers until the client sends a
START-INPUT-TIMERS request to the recorder.</t>
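<t>An illustrative exchange (the request-id and message lengths are
hypothetical):</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 START-INPUT-TIMERS 543260
Channel-Identifier:32AECB23433802@recorder
S->C: MRCP/2.0 49 543260 200 COMPLETE
Channel-Identifier:32AECB23433802@recorder
]]></artwork>
</figure>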
</section>
<section title="START-OF-INPUT">
<t>The START-OF-INPUT event is returned from the server to the client
once the server has detected speech. This event is always returned by
the recorder resource when speech has been detected. The recorder
resource also MUST send a Proxy-Sync-Id header field with a unique
value for this event.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 49 START-OF-INPUT 543259 IN-PROGRESS
Channel-Identifier:32AECB23433801@recorder
Proxy-Sync-Id:987654321
]]></artwork>
</figure>
</section>
</section>
<section anchor="sec.verifierResource"
title="Speaker Verification and Identification">
<t>This section describes the methods, responses, and events employed
by MRCPv2 for speaker verification and identification.</t>
<t>Speaker verification is a voice authentication methodology that can
be used to identify the speaker in order to grant the user access to
sensitive information and transactions. Because speech is a biometric, a
number of essential security considerations related to biometric
authentication technologies apply to its implementation and usage.
Implementers should carefully read <xref
target="sec.securityConsiderations"></xref> in this document and the
corresponding section of <xref target="RFC4313">Speechsc
Requirements</xref>.</t>
<t>In speaker verification, a recorded utterance is compared to a
previously stored voiceprint which is in turn associated with a claimed
identity for that user. Verification typically consists of two phases: a
designation phase to establish the claimed identity of the caller and an
execution phase in which a voiceprint is either created (training) or
used to authenticate the claimed identity (verification).</t>
<t>Speaker identification is the process of associating an unknown
speaker with a member in a population. It does not employ a claim of
identity. When an individual claims to belong to a group (e.g., one of
the owners of a joint bank account) a group authentication is performed.
This is generally implemented as a kind of verification involving
comparison with more than one voice model. It is sometimes called
'multi-verification.' If the individual speaker can be identified from
the group, this may be useful for applications where multiple users
share the same access privileges to some data or application. Speaker
identification and group authentication are also done in two phases, a
designation phase and an execution phase. Note that from a functionality
standpoint identification can be thought of as a special case of group
authentication (if the individual is identified) where the group is the
entire population, although the implementation of speaker identification
may be different from the way group authentication is performed. To
accommodate single-voiceprint verification, verification against
multiple voiceprints, group authentication, and identification, this
specification provides a single set of methods that can take a list of
identifiers, called "voiceprint identifiers", and return a
list of identifiers, with a score for each representing how well the
input speech matched each identifier. The input and output lists of
identifiers do not have to match, allowing a vendor-specific group
identifier to be used as input to indicate that identification is to be
performed. In this specification, the terms "Identification"
and "Multi-verification" are used to indicate that the input
represents a group (potentially the entire population) and that results
for multiple voiceprints may be returned.</t>
<t>It is possible for a speaker verification resource to share the same
session with a recognizer resource or to operate independently. In order
to share the same session, the verification and recognizer resources
MUST be allocated from within the same SIP dialog. Otherwise, an
independent verification resource, running on the same physical server
or a separate one, will be set up. Note that in addition to allowing
both resources to be allocated in the same INVITE, it is possible to
allocate one initially and the other later via a re-INVITE.</t>
<t>Some of the speaker verification methods, described below, apply only
to a specific mode of operation.</t>
<t>The verification resource has a verification buffer associated with
it (see <xref target="sec.verBufferUtterance"></xref>). This allows the
storage of speech utterances for the purposes of verification,
identification or training from the buffered speech. This buffer is
owned by the verification resource but other input resources such as the
recognition resource or recorder resource may write to it. This allows
the speech received as part of a recognition or recording operation to
be later used for verification, identification or training. Access to
the buffer is limited to one operation at a time. Hence, while the
resource is performing a read, write, or delete operation, such as a
RECOGNIZE with Ver-Buffer-Utterance turned on, another operation
involving the buffer fails with a status of 402. The verification
buffer can be cleared by a
CLEAR-BUFFER request from the client and is freed when the verification
resource is deallocated or the session with the server terminates.</t>
<t>The verification buffer is different from collecting waveforms
and processing them using either the real-time audio stream or
stored audio, because this buffering mechanism does not simply
accumulate speech in a buffer. The verification buffer may contain
additional information
gathered by the recognition resource that serves to improve verification
performance.</t>
<section title="Speaker Verification State Machine">
<t>Speaker verification may operate in a training or a verification
session. Starting one of these sessions does not change the state of
the verification resource, i.e., it remains in the IDLE state. Once a
verification or training session has been started, utterances are
trained or verified by calling the VERIFY or VERIFY-FROM-BUFFER
method. The state of the verification resource goes from the IDLE
state to the VERIFYING state each time VERIFY or VERIFY-FROM-BUFFER
is called.</t>
<figure title="Verification Resource State Machine">
<artwork><![CDATA[
Idle Session Opened Verifying/Training
State State State
| | |
|--START-SESSION--->| |
| | |
| |----------| |
| | START-SESSION |
| |<---------| |
| | |
|<--END-SESSION-----| |
| | |
| |---------VERIFY--------->|
| | |
| |---VERIFY-FROM-BUFFER--->|
| | |
| |----------| |
| | VERIFY-ROLLBACK |
| |<---------| |
| | |
| | |--------|
| | GET-INTERMEDIATE-RESULT |
| | |------->|
| | |
| | |--------|
| | START-INPUT-TIMERS |
| | |------->|
| | |
| | |--------|
| | START-OF-INPUT |
| | |------->|
| | |
| |<-VERIFICATION-COMPLETE--|
| | |
| |<--------STOP------------|
| | |
| |----------| |
| | STOP |
| |<---------| |
| | |
|----------| | |
| STOP | |
|<---------| | |
| |----------| |
| | CLEAR-BUFFER |
| |<---------| |
| | |
|----------| | |
| CLEAR-BUFFER | |
|<---------| | |
| | |
| |----------| |
| | QUERY-VOICEPRINT |
| |<---------| |
| | |
|----------| | |
| QUERY-VOICEPRINT | |
|<---------| | |
| | |
| |----------| |
| | DELETE-VOICEPRINT |
| |<---------| |
| | |
|----------| | |
| DELETE-VOICEPRINT | |
|<---------| | |
]]></artwork>
</figure>
</section>
<section title="Speaker Verification Methods">
<t>The verification resource supports the following methods.</t>
<figure>
<artwork><![CDATA[
verification-method = "START-SESSION"
/ "END-SESSION"
/ "QUERY-VOICEPRINT"
/ "DELETE-VOICEPRINT"
/ "VERIFY"
/ "VERIFY-FROM-BUFFER"
/ "VERIFY-ROLLBACK"
/ "STOP"
/ "CLEAR-BUFFER"
/ "START-INPUT-TIMERS"
/ "GET-INTERMEDIATE-RESULT"
]]></artwork>
</figure>
<t>These methods allow the client to control the mode and target of
verification or identification operations within the context of a
session. All the verification input operations that occur within a
session may be used to create, update, or validate against the
voiceprint specified during the session. At the beginning of each
session the verification resource is reset to the state it had prior
to any previous verification session.</t>
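<t>The following exchange sketches how a client might open and later
close a verification session. The repository URI, voiceprint
identifier, request-ids and message lengths are illustrative values
only.</t>
<figure title="START-SESSION/END-SESSION Example">
<artwork><![CDATA[
C->S: MRCP/2.0 182 START-SESSION 314161
      Channel-Identifier:32AECB23433801@speakverify
      Verification-Mode:verify
      Repository-URI:http://www.example.com/voiceprints/
      Voiceprint-Identifier:johnsmith.voiceprint

S->C: MRCP/2.0 47 314161 200 COMPLETE
      Channel-Identifier:32AECB23433801@speakverify

      (one or more VERIFY operations take place here)

C->S: MRCP/2.0 74 END-SESSION 314164
      Channel-Identifier:32AECB23433801@speakverify
      Abort-Model:false

S->C: MRCP/2.0 47 314164 200 COMPLETE
      Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>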
<t>Verification/identification operations can be executed against live
or buffered audio. The verification resource provides methods for
collecting and evaluating live audio data, and methods for controlling
the verification resource and adjusting its configured behavior.</t>
<t>There are no dedicated methods for collecting buffered audio data.
This is accomplished by calling VERIFY, RECOGNIZE or RECORD as
appropriate for the resource, with the header field
Ver-Buffer-Utterance. Then, when the following method is called,
verification is performed using the set of buffered audio. <list
style="numbers">
<t>VERIFY-FROM-BUFFER</t>
</list></t>
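<t>The following sketch shows this buffering pattern when the
recognizer and verification resources share a session. Headers and
message bodies are abbreviated, and the request-ids, channel
identifiers and message lengths are illustrative values only.</t>
<figure title="VERIFY-FROM-BUFFER Sketch">
<artwork><![CDATA[
C->S: MRCP/2.0 123 RECOGNIZE 314162
      Channel-Identifier:32AECB23433801@speechrecog
      Ver-Buffer-Utterance:true
      (grammar headers and message body omitted)

      (recognition completes; the utterance is retained in the
       verification buffer)

C->S: MRCP/2.0 58 VERIFY-FROM-BUFFER 314163
      Channel-Identifier:32AECB23433801@speakverify

S->C: MRCP/2.0 48 314163 200 IN-PROGRESS
      Channel-Identifier:32AECB23433801@speakverify

S->C: MRCP/2.0 94 VERIFICATION-COMPLETE 314163 COMPLETE
      Channel-Identifier:32AECB23433801@speakverify
      Completion-Cause:000 success
]]></artwork>
</figure>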
<t>The following methods are used for verification of live audio
utterances: <list style="numbers">
<t>VERIFY</t>
<t>START-INPUT-TIMERS</t>
</list></t>
<t>The following methods are used for configuring the verification
resource and for establishing resource states: <list style="numbers">
<t>START-SESSION</t>
<t>END-SESSION</t>
<t>QUERY-VOICEPRINT</t>
<t>DELETE-VOICEPRINT</t>
<t>VERIFY-ROLLBACK</t>
<t><spanx style="verb">STOP</spanx></t>
<t>CLEAR-BUFFER</t>
</list></t>
<t>The following method allows the client to poll a verification
operation in progress for intermediate results. <list style="numbers">
<t>GET-INTERMEDIATE-RESULT</t>
</list></t>
</section>
<section title="Verification Events">
<t>The verification resource generates the following events.</t>
<figure>
<artwork><![CDATA[
verification-event = "VERIFICATION-COMPLETE"
/ "START-OF-INPUT"
]]></artwork>
</figure>
</section>
<section title="Verification Header Fields">
<t>A verification resource message may contain header fields
containing request options and information to augment the Request,
Response or Event message it is associated with.</t>
<figure>
<artwork><![CDATA[
verification-header = repository-uri
/ voiceprint-identifier
/ verification-mode
/ adapt-model
/ abort-model
/ min-verification-score
/ num-min-verification-phrases
/ num-max-verification-phrases
/ no-input-timeout
/ save-waveform
/ media-type
/ waveform-uri
/ voiceprint-exists
/ ver-buffer-utterance
/ input-waveform-uri
/ completion-cause
/ completion-reason
/ speech-complete-timeout
/ new-audio-channel
/ abort-verification
/ start-input-timers
]]></artwork>
</figure>
<section anchor="sec.repositoryURI" title="Repository-URI">
<t>This header field specifies the voiceprint repository to be used
or referenced during speaker verification or identification
operations. This header field is required in the START-SESSION,
QUERY-VOICEPRINT and DELETE-VOICEPRINT methods.</t>
<figure>
<artwork><![CDATA[
repository-uri = "Repository-URI" ":" Uri CRLF
]]></artwork>
</figure>
</section>
<section title="Voiceprint-Identifier">
<t>This header field specifies the claimed identity for verification
applications. The claimed identity may be used to specify an
existing voiceprint or to establish a new voiceprint. This header
field is required in the QUERY-VOICEPRINT and DELETE-VOICEPRINT
methods. The Voiceprint-Identifier is required in the START-SESSION
method for verification operations. For Identification or
Multi-Verification operations this header field may contain a list
of voiceprint identifiers separated by semi-colons. For
identification operations the client can also specify a voiceprint
group identifier instead of a list of voiceprint identifiers.</t>
<figure>
<artwork><![CDATA[
voiceprint-identifier = "Voiceprint-Identifier" ":"
1*VCHAR "." 1*VCHAR
*[";" 1*VCHAR "." 1*VCHAR] CRLF
]]></artwork>
</figure>
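<t>For example (with illustrative identifier values), a single
claimed identity and a multi-verification list might be conveyed as
follows.</t>
<figure title="Voiceprint-Identifier Examples">
<artwork><![CDATA[
Voiceprint-Identifier:johnsmith.voiceprint

Voiceprint-Identifier:johnsmith.voiceprint;marysmith.voiceprint
]]></artwork>
</figure>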
</section>
<section title="Verification-Mode">
<t>This header field specifies the mode of the verification resource
and is set by the START-SESSION method. Acceptable values indicate
whether the verification session will train a voiceprint ("train")
or verify/identify using an existing voiceprint ("verify").</t>
<t>Training and verification sessions both require the voiceprint
Repository-URI to be specified in the START-SESSION. In many usage
scenarios, however, the system does not know the speaker's claimed
identity until a recognition operation has, for example, recognized
an account number to which the user desires access. In order to
allow the first few utterances of a dialog to be both recognized and
verified, the verification resource on the MRCPv2 server retains a
buffer. In this buffer, the MRCPv2 server accumulates recognized
utterances. The client can later execute a verification method and
apply the buffered utterances to the current verification
session.</t>
<t>Some voice user interfaces may require additional user input that
should not be subject to verification. For example, the user's input
may have been recognized with low confidence and thus require a
confirmation cycle. In such cases, the client should not execute the
VERIFY or VERIFY-FROM-BUFFER methods to collect and analyze the
caller's input. A separate recognizer resource can analyze the
caller's response without any participation by the verification
resource.</t>
<t>Once the following conditions have been met: <list
style="numbers">
<t>Voiceprint identity has been successfully established through
the voiceprint identifier header fields of the START-SESSION
method, and</t>
<t>the verification mode has been set to one of "train" or
"verify",</t>
</list>the verification resource may begin providing verification
information during verification operations. If the verification
resource does not reach one of the two major states ("train" or
"verify") , it MUST report an error condition in the MRCPv2 status
code to indicate why the verification resource is not ready for the
corresponding usage.</t>
<t>The value of verification-mode is persistent within a
verification session. If the client attempts to change the mode
during a verification session, the verification resource reports an
error and the mode retains its current value.</t>
<figure>
<artwork><![CDATA[
verification-mode = "Verification-Mode" ":"
verification-mode-string
verification-mode-string = "train"
/ "verify"
]]></artwork>
</figure>
</section>
<section title="Adapt-Model">
<t>This header field indicates the desired behavior of the
verification resource after a successful verification operation. If
the value of this header field is "true", the server SHOULD use audio
collected during the verification session to update the voiceprint
to account for ongoing changes in a speaker's incoming speech
characteristics, unless local policy prohibits updating the
voiceprint. If the value is "false" (the default), the server MUST
NOT update the voiceprint. This header field MAY occur in the
START-SESSION method.</t>
<figure>
<artwork><![CDATA[
adapt-model = "Adapt-Model" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Abort-Model">
<t>The Abort-Model header field indicates the desired behavior of
the verification resource upon session termination. If the value of
this header field is "true", the server MUST discard any pending
changes to a voiceprint due to verification training or verification
adaptation. If the value is "false" (the default), the server MUST
commit any pending changes for a training session or a successful
verification session to the voiceprint repository. A value of "true"
for Abort-Model overrides a value of "true" for the Adapt-Model
header field. This header field MAY occur in the END-SESSION
method.</t>
<figure>
<artwork><![CDATA[
abort-model = "Abort-Model" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Min-Verification-Score">
<t>The Min-Verification-Score header field, when used with a
verification resource through a <spanx
style="verb">SET-PARAMS</spanx>, <spanx
style="verb">GET-PARAMS</spanx> or START-SESSION method, determines
the minimum verification score for which a verification decision of
"accepted" may be declared by the server. The value is a
floating-point number between -1.0 and 1.0. The default value for
this header field is implementation specific.</t>
<figure>
<artwork><![CDATA[
min-verification-score = "Min-Verification-Score" ":"
[ %x2D ] FLOAT CRLF
]]></artwork>
</figure>
</section>
<section title="Num-Min-Verification-Phrases">
<t>The Num-Min-Verification-Phrases header field is used to specify
the minimum number of valid utterances before a positive decision is
given for verification. The value for this header field is an
integer and the default value is 1. The verification resource MUST
NOT declare a verification 'accepted' unless
Num-Min-Verification-Phrases valid utterances have been received.
The minimum value is 1. This header field MAY occur in
START-SESSION, <spanx style="verb">SET-PARAMS</spanx> or <spanx
style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
num-min-verification-phrases = "Num-Min-Verification-Phrases" ":"
1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Num-Max-Verification-Phrases">
<t>The Num-Max-Verification-Phrases header field is used to specify
the number of valid utterances required before a decision is forced
for verification. The verification resource MUST NOT return a
decision of 'undecided' once Num-Max-Verification-Phrases have been
collected and used to determine a verification score. The value for
this header field is an integer and the minimum value is 1. The
default value is implementation-specific. This header field MAY
occur in START-SESSION, <spanx style="verb">SET-PARAMS</spanx> or
<spanx style="verb">GET-PARAMS</spanx>.</t>
<figure>
<artwork><![CDATA[
num-max-verification-phrases = "Num-Max-Verification-Phrases" ":"
1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="No-Input-Timeout">
<t>The No-Input-Timeout header field sets the length of time from
the start of the verification timers (see START-INPUT-TIMERS) until
the declaration of a no-input event in the VERIFICATION-COMPLETE
server event message. The value is in milliseconds. This header
field MAY occur in VERIFY, <spanx style="verb">SET-PARAMS</spanx> or
<spanx style="verb">GET-PARAMS</spanx>. The value for this header
field ranges from 0 to an implementation specific maximum value. The
default value for this header field is implementation specific.</t>
<figure>
<artwork><![CDATA[
no-input-timeout = "No-Input-Timeout" ":" 1*19DIGIT CRLF
]]></artwork>
</figure>
</section>
<section title="Save-Waveform">
<t>This header field allows the client to request the verification
resource to save the audio stream that was used for
verification/identification. The verification resource MUST attempt
to record the audio and make it available to the client in the form
of a URI returned in the Waveform-URI header field in the
VERIFICATION-COMPLETE event. If there was an error in recording the
stream or the audio content is otherwise not available, the
verification resource MUST return an empty Waveform-URI header
field. The default value for this header field is "false". This
header field MAY appear in the VERIFY method. Note that this header
field does not appear in the VERIFY-FROM-BUFFER method since it only
controls whether or not to save the waveform for live verification /
identification operations.</t>
<figure>
<artwork><![CDATA[
save-waveform = "Save-Waveform" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section title="Media-Type">
<t>This header field MAY be specified in the SET-PARAMS, GET-PARAMS
or the VERIFY methods and tells the server resource the media type
of the captured audio, such as the audio captured and returned via
the Waveform-URI header field.</t>
<figure>
<artwork><![CDATA[
media-type = "Media-Type" ":" media-type-value
CRLF
]]></artwork>
</figure>
</section>
<section title="Waveform-URI">
<t>If the Save-Waveform header field is set to true, the
verification resource MUST attempt to record the incoming audio
stream of the verification into a file and provide a URI for the
client to access it. This header field MUST be present in the
VERIFICATION-COMPLETE event if the Save-Waveform header field was
set to true by the client. The value of the header field MUST be
empty if there was some error condition preventing the server from
recording. Otherwise, the URI generated by the server MUST be
globally unique across the server and all its verification sessions.
The content MUST be available via the URI until the verification
session ends. Since the Save-Waveform header field applies only to
live verification / identification operations, the server can return
the Waveform-URI only in the VERIFICATION-COMPLETE event for live
verification / identification operations.</t>
<t>The server MUST also return the size in octets and the duration
in milliseconds of the recorded audio wave-form as parameters
associated with the header field.</t>
<figure>
<artwork><![CDATA[
waveform-uri = "Waveform-URI" ":" ["<" Uri ">"
";" "size" "=" 1*19DIGIT
";" "duration" "=" 1*19DIGIT] CRLF
]]></artwork>
</figure>
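<t>For example (with an illustrative URI, size and duration), a
VERIFICATION-COMPLETE event might carry:</t>
<figure title="Waveform-URI Example">
<artwork><![CDATA[
Waveform-URI:<http://mediaserver.example.com/utt-0001.wav>;
             size=84532;duration=2340
]]></artwork>
</figure>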
</section>
<section title="Voiceprint-Exists">
<t>This header field MUST be returned in QUERY-VOICEPRINT and
DELETE-VOICEPRINT responses. This is the status of the voiceprint
specified in the QUERY-VOICEPRINT method. For the DELETE-VOICEPRINT
method this header field indicates the status of the voiceprint at
the moment the method execution started.</t>
<figure>
<artwork><![CDATA[
voiceprint-exists = "Voiceprint-Exists" ":" BOOLEAN CRLF
]]></artwork>
</figure>
</section>
<section anchor="sec.verBufferUtterance" title="Ver-Buffer-Utterance">
<t>This header field is used to indicate that the utterance may
later be considered for speaker verification. This way, a client can
request the server to buffer utterances while performing regular
recognition or verification activities, and speaker verification can
later be requested on the buffered utterances. This header field is
OPTIONAL in the RECOGNIZE, VERIFY and RECORD methods. The default
value for this header field is "false".</t>
<figure>
<artwork><![CDATA[
ver-buffer-utterance = "Ver-Buffer-Utterance" ":" BOOLEAN
CRLF
]]></artwork>
</figure>
</section>
<section title="Input-Waveform-URI">
<t>This header field specifies stored audio content that the client
requests the server to fetch and process according to the current
verification mode, either to train the voiceprint or verify a
claimed identity. This header field enables the client to implement
the buffering use case where the recognizer and verification
resources are in different sessions and the verification buffer
technique cannot be used. It MAY be specified on the VERIFY
request.</t>
<figure>
<artwork><![CDATA[
input-waveform-uri = "Input-Waveform-URI" ":" Uri CRLF
]]></artwork>
</figure>
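<t>For example (with illustrative request-id, message length and
URI), a client might request verification of stored audio as
follows.</t>
<figure title="VERIFY with Input-Waveform-URI Example">
<artwork><![CDATA[
C->S: MRCP/2.0 109 VERIFY 314166
      Channel-Identifier:32AECB23433801@speakverify
      Input-Waveform-URI:http://www.example.com/utt-0001.wav
]]></artwork>
</figure>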
</section>
<section title="Completion-Cause">
<t>This header field MUST be part of a VERIFICATION-COMPLETE event
from the verification resource to the client. This indicates the
cause of VERIFY or VERIFY-FROM-BUFFER method completion. This header
field MUST be sent in the VERIFY, VERIFY-FROM-BUFFER, and
QUERY-VOICEPRINT responses, if they return with a failure status and
a COMPLETE state.</t>
<figure>
<artwork><![CDATA[
completion-cause = "Completion-Cause" ":" 3DIGIT SP
1*VCHAR CRLF
]]></artwork>
</figure>
<texttable>
<ttcol width="10%">Cause-Code</ttcol>
<ttcol width="40%">Cause-Name</ttcol>
<ttcol>Description</ttcol>
<c>000</c>
<c>success</c>
<c>VERIFY or VERIFY-FROM-BUFFER request completed successfully.
The verify decision can be "accepted", "rejected", or
"undecided".</c>
<c>001</c>
<c>error</c>
<c>VERIFY or VERIFY-FROM-BUFFER request terminated prematurely due
to a verification resource or system error.</c>
<c>002</c>
<c>no-input-timeout</c>
<c>VERIFY request completed with no result due to a
no-input-timeout.</c>
<c>003</c>
<c>too-much-speech-timeout</c>
<c>VERIFY request completed with no result due to too much
speech.</c>
<c>004</c>
<c>speech-too-early</c>
<c>VERIFY request completed with no result because the caller spoke
too soon.</c>
<c>005</c>
<c>buffer-empty</c>
<c>VERIFY-FROM-BUFFER request completed with no result due to
empty buffer.</c>
<c>006</c>
<c>out-of-sequence</c>
<c>Verification operation failed due to out-of-sequence method
invocations. For example calling VERIFY before
QUERY-VOICEPRINT.</c>
<c>007</c>
<c>repository-uri-failure</c>
<c>Failure accessing Repository URI.</c>
<c>008</c>
<c>repository-uri-missing</c>
<c>Repository-URI header field is not specified.</c>
<c>009</c>
<c>voiceprint-id-missing</c>
<c>Voiceprint-Identifier header field is not specified.</c>
<c>010</c>
<c>voiceprint-id-not-exist</c>
<c>The specified voiceprint does not exist in the voiceprint
repository.</c>
<c>011</c>
<c>speech-not-usable</c>
<c>VERIFY request completed with no result because the speech was
not usable (too noisy, too short, etc.)</c>
</texttable>
</section>
<section title="Completion-Reason">
<t>This header field MAY be specified in a VERIFICATION-COMPLETE
event coming from the verifier resource to the client. It contains
text describing the reason behind the VERIFY request completion,
such as the reason for a failure.</t>
<t>The completion reason text is provided for client use in logs and
for debugging and instrumentation purposes. Clients MUST NOT
interpret the completion reason text.</t>
<figure>
<artwork><![CDATA[
completion-reason = "Completion-Reason" ":"
quoted-string CRLF
]]></artwork>
</figure>
</section>
<section title="Speech-Complete-Timeout">
<t>This header field is the same as the one described for the
Recognizer resource. See <xref
target="sec.speechCompleteTimeout"></xref>. This header field MAY
occur in VERIFY, SET-PARAMS, or GET-PARAMS.</t>
</section>
<section title="New-Audio-Channel">
<t>This header field is the same as the one described for the
Recognizer resource. See <xref target="sec.newAudioChannel"></xref>.
This header field MAY be specified in a VERIFY request.</t>
</section>
<section title="Abort-Verification">
<t>This header field MUST be sent in a <spanx
style="verb">STOP</spanx> request to indicate whether or not to
abort a VERIFY method in progress. A value of "true" requests the
server to discard the results. A value of "false" requests the
server to return in the <spanx style="verb">STOP</spanx> response
the verification results obtained up to the point it received the
<spanx style="verb">STOP</spanx> request.</t>
<figure>
<artwork><![CDATA[
abort-verification = "Abort-Verification" ":" BOOLEAN CRLF
]]></artwork>
</figure>
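<t>The following sketch (with illustrative request-ids and message
lengths) shows a STOP request aborting a VERIFY in progress.</t>
<figure title="STOP with Abort-Verification Example">
<artwork><![CDATA[
C->S: MRCP/2.0 82 STOP 314168
      Channel-Identifier:32AECB23433801@speakverify
      Abort-Verification:true

S->C: MRCP/2.0 78 314168 200 COMPLETE
      Channel-Identifier:32AECB23433801@speakverify
      Active-Request-Id-List:314167
]]></artwork>
</figure>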
</section>
<section title="Start-Input-Timers">
<t>This header field MAY be sent as part of a VERIFY request. A
value of false tells the verification resource to start the VERIFY
operation, but not to start the no-input timer yet. The verification
resource MUST NOT start the timers until the client sends a
START-INPUT-TIMERS request to the resource. This is useful in the
scenario when the verifier and synthesizer resources are not part of
the same session. In this scenario, when a kill-on-barge-in prompt
is being played, the client may want the VERIFY request to be
simultaneously active so that it can detect and implement
kill-on-barge-in (see <xref target="sec.kill-on-barge-in"></xref>).
But at the same time the client doesn't want the verification
resource to start the no-input timers until the prompt is finished.
The default value is "true".</t>
<figure>
<artwork><![CDATA[
start-input-timers = "Start-Input-Timers" ":"
BOOLEAN CRLF
]]></artwork>
</figure>
</section>
</section>
<section title="Verification Message Body">
<t>A verification response or event message may carry additional data
as described in the following subsection.</t>
<section title="Verification Result Data">
<t>Verification results are returned to the client in the message
body of the VERIFICATION-COMPLETE event or the
GET-INTERMEDIATE-RESULT response message as described in <xref
target="sec.result"></xref>. Element and attribute descriptions for
the verification portion of the NLSML format are provided in <xref
target="sec.verificationResults"></xref> with a normative definition
of the schema in <xref
target="sec.verificationResultsSchema"></xref>.</t>
</section>
<section anchor="sec.verificationResults"
title="Verification Result Elements">
<t>All verification elements are contained within a single
<verification-result> element under <result>. The
elements are described below and have the schema defined in <xref
target="sec.verificationResultsSchema"></xref>. The following elements
are defined:</t>
<t><list style="numbers">
<t>Voiceprint</t>
<t>Incremental</t>
<t>Cumulative</t>
<t>Decision</t>
<t>Utterance-Length</t>
<t>Device</t>
<t>Gender</t>
<t>Adapted</t>
<t>Verification-Score</t>
<t>Vendor-Specific-Results</t>
</list></t>
<section title="Voiceprint">
<t>This element in the verification results provides information
on how the speech data matched a single voiceprint. The result
data returned may have more than one such entity in the case of
Identification or Multi-Verification. Each <spanx
style="verb"><voiceprint></spanx> element and the XML data
within the element describe verification result information for
how well the speech data matched that particular voiceprint. The
list of voiceprint elements is ordered according to the cumulative
verification match scores, with the highest score first.</t>
</section>
<section title="Cumulative">
<t>Within each <spanx style="verb"><voiceprint></spanx>
element there MUST be a <spanx
style="verb"><cumulative></spanx> element with the
cumulative scores of how well multiple utterances matched the
voiceprint.</t>
</section>
<section title="Incremental">
<t>The first <spanx style="verb"><voiceprint></spanx>
element MAY contain an <spanx
style="verb"><incremental></spanx> element with the
incremental scores of how well the last utterance matched the
voiceprint.</t>
</section>
<section title="Decision">
<t>This element is found within the <spanx
style="verb"><incremental></spanx> or <spanx
style="verb"><cumulative></spanx> element within the
verification results. Its value indicates the verification
decision. It can have the values of "accepted", "rejected" or
"undecided".</t>
</section>
<section title="Utterance-Length">
<t>This element MAY occur within either the <spanx
style="verb"><incremental></spanx> or <spanx
style="verb"><cumulative></spanx> elements within the first
<spanx style="verb"><voiceprint></spanx> element. Its value
indicates the length in milliseconds of, respectively, the last
utterance or the cumulative set of utterances.</t>
</section>
<section title="Device">
<t>This element is found within the incremental or cumulative
element within the verification results. Its value indicates the
apparent type of device used by the caller as determined by the
verification resource. It can have the values of "cellular-phone",
"electret-phone", "carbon-button-phone", or "unknown".</t>
</section>
<section title="Gender">
<t>This element is found within the incremental or cumulative
element within the verification results. Its value indicates the
apparent gender of the speaker as determined by the verification
resource. It can have the values of "male", "female" or
"unknown".</t>
</section>
<section title="Adapted">
<t>This element is found within the first <spanx
style="verb"><voiceprint></spanx> element within the
verification results. When verification is trying to confirm the
voiceprint, this indicates if the voiceprint has been adapted as a
consequence of analyzing the source utterances. It is not returned
during verification training. The value can be "true" or
"false".</t>
</section>
<section title="Verification-Score">
<t>This element is found within the incremental or cumulative
element within the verification results. Its value indicates the
score of the last utterance as determined by verification.</t>
<t>During verification, the higher the score the more likely it is
that the speaker is the same one as the one who spoke the
voiceprint utterances. During training, the higher the score the
more likely the speaker is to have spoken all of the analyzed
utterances. The value is a floating-point number between -1.0 and
1.0. If there are no such utterances, the score is -1.0. Note that
the verification score is not a probability value.</t>
</section>
<section title="Vendor-Specific-Results">
<t>Verification results may contain implementation-specific data
that augments the information provided by the MRCPv2-defined
elements. This data may be useful to clients that have private
knowledge of how to interpret these schema extensions.
Implementation-specific additions to the verification results
schema MUST belong to the vendor's own namespace. In the result
structure, they MUST either be indicated by a namespace prefix
declared within the result or be children of an element
identified as belonging to the respective namespace.</t>
<t>The following example shows the results of three voiceprints.
Note that the first one has crossed the verification score
threshold, and the speaker has been accepted. The voiceprint was
also adapted with the most recent utterance.</t>
<figure title="Verification Results Example 1">
<artwork><![CDATA[
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="johnsmith">
<adapted> true </adapted>
<incremental>
<utterance-length> 500 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<decision> accepted </decision>
<verification-score> 0.98514 </verification-score>
</incremental>
<cumulative>
<utterance-length> 10000 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<decision> accepted </decision>
<verification-score> 0.96725</verification-score>
</cumulative>
</voiceprint>
<voiceprint id="marysmith">
<cumulative>
<verification-score> 0.93410 </verification-score>
</cumulative>
</voiceprint>
<voiceprint id="juniorsmith">
<cumulative>
<verification-score> 0.74209 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
]]></artwork>
</figure>
<t>In this next example, the verifier has enough information to
decide to reject the speaker.</t>
<figure title="Verification Results Example 2">
<artwork><![CDATA[
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:xmpl="http://www.example.org/2003/12/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="johnsmith">
<incremental>
<utterance-length> 500 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<verification-score> 0.88514 </verification-score>
<xmpl:raspiness> high </xmpl:raspiness>
<xmpl:emotion> sadness </xmpl:emotion>
</incremental>
<cumulative>
<utterance-length> 10000 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<decision> rejected </decision>
<verification-score> 0.9345 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
]]></artwork>
</figure>
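<t>The namespace rules above can be exercised with any namespace-aware
XML parser. The following non-normative Python sketch separates
MRCPv2-defined elements from vendor extensions in a result fragment
modeled on the second example; the element values are illustrative.</t>
<figure>
<artwork><![CDATA[
# Non-normative sketch: split MRCPv2-defined result elements from
# vendor-specific extensions using namespace-qualified tags.
import xml.etree.ElementTree as ET

MRCP_NS = "http://www.ietf.org/xml/ns/mrcpv2"

RESULT = """<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
        xmlns:xmpl="http://www.example.org/2003/12/mrcpv2"
        grammar="What-Grammar-URI">
  <verification-result>
    <voiceprint id="johnsmith">
      <incremental>
        <verification-score> 0.88514 </verification-score>
        <xmpl:raspiness> high </xmpl:raspiness>
      </incremental>
    </voiceprint>
  </verification-result>
</result>"""

def split_elements(xml_text):
    root = ET.fromstring(xml_text)
    path = ("./{%(ns)s}verification-result/{%(ns)s}voiceprint/"
            "{%(ns)s}incremental" % {"ns": MRCP_NS})
    standard, vendor = {}, {}
    for child in root.find(path):
        # ElementTree renders a qualified tag as "{namespace}name".
        ns, _, name = child.tag.rpartition("}")
        bucket = standard if ns == "{" + MRCP_NS else vendor
        bucket[name] = child.text.strip()
    return standard, vendor
]]></artwork>
</figure>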
</section>
</section>
</section>
<section title="START-SESSION">
<t>The START-SESSION method starts a Speaker Verification or
Identification session. Execution of this method places the
verification resource into its initial state. If this method is called
during an ongoing verification session, the previous session is
implicitly aborted. If this method is invoked when VERIFY or
VERIFY-FROM-BUFFER is active, the method fails and the server returns
a status code of 402.</t>
<t>Upon completion of the START-SESSION method, the verification
resource MUST have terminated any ongoing verification session, and
cleared any voiceprint designation.</t>
<t>A verification session is associated with the voiceprint repository
to be used during the session. This is specified through the
"Repository-URI" header field (see <xref
target="sec.repositoryURI"></xref>).</t>
<t>The START-SESSION method also establishes, through the
Voiceprint-Identifier header field, which voiceprints are to be
matched or trained during the verification session. If this is an
Identification session or if the client wants to do
Multi-Verification, the Voiceprint-Identifier header field contains a
list of semi-colon separated voiceprint identifiers.</t>
<t>The header field "Adapt-Model" may also be present in the
START-SESSION request to indicate whether or not to adapt a voiceprint
based on data collected during the session (if the voiceprint
verification phase succeeds). By default, the voiceprint model MUST
NOT be adapted with data from a verification session.</t>
<t>The START-SESSION request also determines whether the session is
for training or for verification of a voiceprint. Hence, the
Verification-Mode header field MUST be sent in every START-SESSION
request. The value of the Verification-Mode header field MUST be
either "train" or "verify".</t>
<t>Before a verification/identification session is started, only
VERIFY-ROLLBACK and generic <spanx style="verb">SET-PARAMS</spanx> and
<spanx style="verb">GET-PARAMS</spanx> operations may be performed on
the verification resource. The server MUST return 402 "Method not
valid in this state" for all other verification operations.</t>
<t>A verification resource may only have a single session active at
one time.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 START-SESSION 314161
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprintdbase/
Verification-Mode:verify
Voiceprint-Identifier:johnsmith.voiceprint
Adapt-Model:true
S->C: MRCP/2.0 49 314161 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="END-SESSION">
<t>The END-SESSION method terminates an ongoing verification session
and releases the verification voiceprint resources. The session may
terminate in one of three ways: <list style="numbers">
<t>abort - the voiceprint adaptation or creation may be aborted so
that the voiceprint remains unchanged (or is not created).</t>
<t>commit - when terminating a voiceprint training session, the
new voiceprint is committed to the repository.</t>
<t>adapt - an existing voiceprint is modified using a successful
verification.</t>
</list></t>
<t>The header field "Abort-Model" MAY be included in the END-SESSION
to control whether or not to abort any pending changes to the
voiceprint. The default behavior is to commit (not abort) any pending
changes to the designated voiceprint.</t>
<t>The END-SESSION method may be safely executed multiple times,
even without first executing the START-SESSION method. Any additional
executions of this method without an intervening use of the
START-SESSION method have no effect on the verification resource.</t>
<t>The following example assumes there is either a training session or
a verification session in progress.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 END-SESSION 314174
Channel-Identifier:32AECB23433801@speakverify
Abort-Model:true
S->C: MRCP/2.0 49 314174 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="QUERY-VOICEPRINT">
<t>The QUERY-VOICEPRINT method is used to obtain status information
about a particular voiceprint. The client can use it to ascertain
whether a voiceprint or repository exists and whether the repository
contains trained voiceprints.</t>
<t>The response to the QUERY-VOICEPRINT request contains an indication
of the status of the designated voiceprint in the "Voiceprint-Exists"
header field, allowing the client to determine whether to use the
current voiceprint for verification, train a new voiceprint, or choose
a different voiceprint.</t>
<t>A voiceprint is completely specified by providing a repository
location and a voiceprint identifier. The particular voiceprint or
identity within the repository is specified by a string identifier
that is unique within the repository. The "Voiceprint-Identifier"
header field carries this unique voiceprint identifier within a given
repository.</t>
<t>The following example assumes a verification session is in progress
and the voiceprint exists in the voiceprint repository.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 QUERY-VOICEPRINT 314168
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprints/
Voiceprint-Identifier:johnsmith.voiceprint
S->C: MRCP/2.0 123 314168 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprints/
Voiceprint-Identifier:johnsmith.voiceprint
Voiceprint-Exists:true
]]></artwork>
</figure>
<t>The following example assumes that the URI provided in the
'Repository-URI' header field is a bad URI.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 QUERY-VOICEPRINT 314168
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/bad-uri/
Voiceprint-Identifier:johnsmith.voiceprint
S->C: MRCP/2.0 123 314168 405 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/bad-uri/
Voiceprint-Identifier:johnsmith.voiceprint
Completion-Cause:007 repository-uri-failure
]]></artwork>
</figure>
</section>
<section title="DELETE-VOICEPRINT">
<t>The DELETE-VOICEPRINT method removes a voiceprint from a
repository. This method MUST carry the Repository-URI and
Voiceprint-Identifier header fields.</t>
<t>If the corresponding voiceprint does not exist, the
DELETE-VOICEPRINT method MUST return a 200 status code.</t>
<t>The following example demonstrates a DELETE-VOICEPRINT operation to
remove a specific voiceprint.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 123 DELETE-VOICEPRINT 314168
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprints/
Voiceprint-Identifier:johnsmith.voiceprint
S->C: MRCP/2.0 49 314168 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="VERIFY">
<t>The VERIFY method is used to request the verification resource to
either train/adapt the voiceprint or to verify/identify a claimed
identity. If the voiceprint is new or was deleted by a previous
DELETE-VOICEPRINT method, the VERIFY method trains the voiceprint. If
the voiceprint already exists, it is adapted and not retrained by the
VERIFY command.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 VERIFY 543260
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 543260 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
<t>When the VERIFY request completes, the MRCPv2 server MUST send a
'VERIFICATION-COMPLETE' event to the client.</t>
</section>
<section title="VERIFY-FROM-BUFFER">
<t>The VERIFY-FROM-BUFFER method directs the verification resource to
verify buffered audio against a voiceprint. Only one VERIFY or
VERIFY-FROM-BUFFER method may be active for a verification resource at
a time.</t>
<t>The buffered audio is not consumed by this method and thus
VERIFY-FROM-BUFFER may be invoked multiple times by the client to
attempt verification against different voiceprints.</t>
<t>For the VERIFY-FROM-BUFFER method, the server MAY return
an "IN-PROGRESS" response before the "VERIFICATION-COMPLETE"
event.</t>
<t>When the VERIFY-FROM-BUFFER method is invoked and the verification
buffer is in use by another resource sharing it, the server MUST
return an IN-PROGRESS response and wait until the buffer is available
to it. The verification buffer is owned by the verification resource
but is shared with write access from other input resources on the same
session. Hence, it is considered to be in use if there is a read or
write operation such as a RECORD or RECOGNIZE with the
Ver-Buffer-Utterance header field set to "true" on a resource that
shares this buffer. Note that if a RECORD or RECOGNIZE method returns
with a failure cause code, the VERIFY-FROM-BUFFER request waiting to
process that buffer MUST also fail with a Completion-Cause of 005
(buffer-empty).</t>
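<t>The buffer-availability rules above amount to a simple outcome
function. The following non-normative sketch (names are illustrative)
shows the fate of a VERIFY-FROM-BUFFER request that has waited on a
shared buffer, given the Completion-Cause of the RECORD or RECOGNIZE
request that was using it.</t>
<figure>
<artwork><![CDATA[
# Non-normative sketch of the shared-buffer rule: a waiting
# VERIFY-FROM-BUFFER fails with 005 (buffer-empty) if the RECORD or
# RECOGNIZE filling the buffer itself fails.
def waiting_verify_outcome(writer_completion_cause):
    if writer_completion_cause != "000 success":
        return "005 buffer-empty"  # the waiting request MUST fail
    return "proceed"               # buffer available; verification runs
]]></artwork>
</figure>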
<t>The following example illustrates the use of some of the
buffering methods. In this scenario, the client first performs a
live verification, but the utterance is rejected. The utterance is
also saved to the audio buffer. Another voiceprint is then used to
verify against the audio buffer, and this time the utterance is
accepted. For the example, we assume both
Num-Min-Verification-Phrases and Num-Max-Verification-Phrases are
1.</t>
<figure title="VERIFY-FROM-BUFFER example">
<artwork><![CDATA[
C->S: MRCP/2.0 123 START-SESSION 314161
Channel-Identifier:32AECB23433801@speakverify
Verification-Mode:verify
Adapt-Model:true
Repository-URI:http://www.example.com/voiceprints
Voiceprint-Identifier:johnsmith.voiceprint
S->C: MRCP/2.0 49 314161 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
C->S: MRCP/2.0 123 VERIFY 314162
Channel-Identifier:32AECB23433801@speakverify
Ver-buffer-utterance:true
S->C: MRCP/2.0 49 314162 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 123 VERIFICATION-COMPLETE 314162 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="johnsmith">
<incremental>
<utterance-length> 500 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> rejected </decision>
<verification-score> 0.05465 </verification-score>
</incremental>
<cumulative>
<utterance-length> 500 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> rejected </decision>
<verification-score> 0.05465 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
C->S: MRCP/2.0 123 QUERY-VOICEPRINT 314163
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprints/
Voiceprint-Identifier:johnsmith.voiceprint
S->C: MRCP/2.0 123 314163 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Repository-URI:http://www.example.com/voiceprints/
Voiceprint-Identifier:johnsmith.voiceprint
Voiceprint-Exists:true
C->S: MRCP/2.0 123 START-SESSION 314164
Channel-Identifier:32AECB23433801@speakverify
Verification-Mode:verify
Adapt-Model:true
Repository-URI:http://www.example.com/voiceprints
Voiceprint-Identifier:marysmith.voiceprint
S->C: MRCP/2.0 49 314164 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
C->S: MRCP/2.0 123 VERIFY-FROM-BUFFER 314165
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 314165 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 123 VERIFICATION-COMPLETE 314165 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="marysmith">
<incremental>
<utterance-length> 1000 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> accepted </decision>
<verification-score> 0.98 </verification-score>
</incremental>
<cumulative>
<utterance-length> 1000 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> accepted </decision>
<verification-score> 0.98 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
C->S: MRCP/2.0 49 END-SESSION 314166
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 314166 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="VERIFY-ROLLBACK">
<t>The VERIFY-ROLLBACK method discards the last buffered utterance or
discards the last live utterances (when the mode is "train" or
"verify"). The client should invoke this method when the user provides
undesirable input such as non-speech noises, side-speech,
out-of-grammar utterances, commands, etc. Note that this method does
not provide a stack of rollback states. Executing VERIFY-ROLLBACK
twice in succession without an intervening verification operation
has no effect on the second attempt.</t>
<figure title="VERIFY-ROLLBACK Example">
<artwork><![CDATA[
C->S: MRCP/2.0 49 VERIFY-ROLLBACK 314165
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 314165 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="STOP">
<t>The <spanx style="verb">STOP</spanx> method from the client to the
server tells the verification resource to stop the VERIFY or
VERIFY-FROM-BUFFER request if one is active. If such a request is
active and the <spanx style="verb">STOP</spanx> request successfully
terminated it, then the response header section contains an
active-request-id-list header field containing the request-id of the
VERIFY or VERIFY-FROM-BUFFER request that was terminated. In this
case, no VERIFICATION-COMPLETE event is sent for the terminated
request. If there was no verify request active, then the response MUST
NOT contain an active-request-id-list header field. Either way the
response MUST contain a status of 200 (Success).</t>
<t>The <spanx style="verb">STOP</spanx> method can carry an
"Abort-Verification" header field, which specifies whether the
verification results collected up to that point should be discarded
or returned. If this header field is not present or if its value is
"true", the verification results are discarded and the <spanx
style="verb">STOP</spanx> response does not contain any result data.
If the header field is present and its value is "false", the <spanx
style="verb">STOP</spanx> response MUST contain a "Completion-Cause"
header field and carry the verification result data in its body.</t>
<t>An aborted VERIFY request does an automatic rollback and hence
does not affect the cumulative score. A VERIFY request that was
stopped with the "Abort-Verification" header field set to "false"
does affect cumulative scores and would need to be explicitly
rolled back if the client does not want the verification result
considered in the cumulative scores.</t>
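<t>The effect of the "Abort-Verification" header field can be
summarized in a small decision table. This non-normative sketch (the
function name is illustrative) returns whether the STOP response
carries result data and whether the cumulative scores are
affected.</t>
<figure>
<artwork><![CDATA[
# Non-normative sketch of STOP semantics: header absent or "true"
# means discard and automatic rollback; "false" means results are
# returned and count toward cumulative scores.
def stop_effects(abort_verification):
    """Return (results_in_response, cumulative_scores_affected)."""
    if abort_verification is None or abort_verification == "true":
        return (False, False)
    return (True, True)
]]></artwork>
</figure>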
<t>The following example assumes a voiceprint identity has already
been established.</t>
<figure title="STOP verification Example">
<artwork><![CDATA[
C->S: MRCP/2.0 123 VERIFY 314177
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 314177 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speakverify
C->S: MRCP/2.0 49 STOP 314178
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 123 314178 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Active-Request-Id-List:314177
]]></artwork>
</figure>
</section>
<section title="START-INPUT-TIMERS">
<t>This request is sent from the client to the verification resource
to start the no-input timer, usually once the client has ascertained
that any audio prompts to the user have played to completion.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 START-INPUT-TIMERS 543260
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 543260 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="VERIFICATION-COMPLETE">
<t>The VERIFICATION-COMPLETE event follows a call to VERIFY or
VERIFY-FROM-BUFFER and is used to communicate the verification results
to the client. The event message body contains only verification
results.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 123 VERIFICATION-COMPLETE 543259 COMPLETE
Completion-Cause:000 success
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="johnsmith">
<incremental>
<utterance-length> 500 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<decision> accepted </decision>
<verification-score> 0.85 </verification-score>
</incremental>
<cumulative>
<utterance-length> 1500 </utterance-length>
<device> cellular-phone </device>
<gender> male </gender>
<decision> accepted </decision>
<verification-score> 0.75 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
]]></artwork>
</figure>
</section>
<section title="START-OF-INPUT">
<t>The START-OF-INPUT event is returned from the server to the client
once the server has detected speech. This event is always returned by
the verification resource when speech has been detected, irrespective
of whether the recognizer and verification resources share the same
session or not.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 49 START-OF-INPUT 543259 IN-PROGRESS
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="CLEAR-BUFFER">
<t>The CLEAR-BUFFER method can be used to clear the verification
buffer. This buffer is used to buffer speech during recognition,
record, or verification operations; that speech may later be used by
VERIFY-FROM-BUFFER. As noted before, the buffer associated with the
verification resource is shared by other input resources like
recognizers and recorders. Hence, a CLEAR-BUFFER request fails if the
verification buffer is in use. This can happen when any one of the
input resources that shares this buffer has an active read or write
operation such as RECORD, RECOGNIZE or VERIFY with the
Ver-Buffer-Utterance header field set to "true".</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 CLEAR-BUFFER 543260
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 543260 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
]]></artwork>
</figure>
</section>
<section title="GET-INTERMEDIATE-RESULT">
<t>A client can use the GET-INTERMEDIATE-RESULT method to poll for
intermediate results of a verification request that is in progress.
Invoking this method does not change the state of the resource. The
verification resource collects the accumulated verification results
and returns the information in the method response. The message body
in the response to a GET-INTERMEDIATE-RESULT request contains only
verification results. The method response MUST NOT contain a
Completion-Cause header field, as the request is not yet complete. If
the resource does not have a verification in progress, the response
has a 402 failure code and no result in the body.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 49 GET-INTERMEDIATE-RESULT 543260
Channel-Identifier:32AECB23433801@speakverify
S->C: MRCP/2.0 49 543260 200 COMPLETE
Channel-Identifier:32AECB23433801@speakverify
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
grammar="What-Grammar-URI">
<verification-result>
<voiceprint id="marysmith">
<incremental>
<utterance-length> 50 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> undecided </decision>
<verification-score> 0.85 </verification-score>
</incremental>
<cumulative>
<utterance-length> 150 </utterance-length>
<device> cellular-phone </device>
<gender> female </gender>
<decision> undecided </decision>
<verification-score> 0.65 </verification-score>
</cumulative>
</voiceprint>
</verification-result>
</result>
]]></artwork>
</figure>
</section>
</section>
<section anchor="sec.securityConsiderations"
title="Security Considerations">
<t>MRCPv2 is designed to comply with the security-related requirements
documented in <xref target="RFC4313">SpeechSC Requirements</xref>.
Implementers and users of MRCPv2 are strongly encouraged to read the
Security Considerations section of <xref target="RFC4313"></xref>,
because that document discusses a number of important security
issues associated with the use of speech as a biometric
authentication technology and with the threats against systems that
store recorded speech, contain large corpora of voiceprints, or send
and receive sensitive information based on voice input to a recognizer
or speech output from a synthesizer. Specific security measures employed
by MRCPv2 are summarized in the following subsections. See the
corresponding sections of this specification for how the
security-related machinery is invoked by individual protocol
operations.</t>
<section title="Rendezvous and Session Establishment">
<t>MRCPv2 control sessions are established as media sessions described
by SDP within the context of a SIP dialog. In order to ensure secure
rendezvous between MRCPv2 clients and servers, the following are
required:</t>
<t><list style="numbers">
<t>The SIP implementation in MRCPv2 clients and servers MUST
support digest authentication.</t>
<t>The SIP implementation in MRCPv2 clients and servers SHOULD
employ SIPS URIs.</t>
<t>If media stream cryptographic keying is done through SDP (e.g.,
using <xref target="RFC4568"></xref>), the MRCPv2 clients and
servers MUST employ SIPS URIs.</t>
</list></t>
</section>
<section title="Control Channel Protection">
<t>Sensitive data is carried over the MRCPv2 control channel. This
includes things like the output of speech recognition operations,
speaker verification results, input to text-to-speech conversion,
personally identifying grammars, etc. For this reason, MRCPv2 servers
must be properly authenticated, and the control channel must permit
the use of both confidentiality and integrity for the data. To ensure
control channel protection, MRCPv2 clients and servers MUST support
TLS and SHOULD utilize it by default unless alternative control
channel protection is used. Alternative control channel protection MAY
be used if desired (e.g., IPsec).</t>
</section>
<section title="Media Session Protection">
<t>Sensitive data is also carried on media sessions terminating on
MRCPv2 servers (the other end of a media channel may or may not be on
the MRCPv2 client). This data includes the user's spoken utterances
and the output of text-to-speech operations. MRCPv2 servers MUST
support a security mechanism for protection of audio media sessions.
MRCPv2 clients that originate or consume audio similarly MUST support
a security mechanism for protection of the audio. If appropriate,
usage of the <xref target="RFC3711">Secure Real-time Transport
Protocol (SRTP)</xref> is recommended.</t>
</section>
<section title="Indirect Content Access">
<t>MRCPv2 employs content indirection extensively. Content may be
fetched and/or stored based on URI-addressing on systems other than
the MRCPv2 client or server. Not all of the stored content is
necessarily sensitive (e.g., XML schemas), but the majority generally
needs protection, and some indirect content, such as voice recordings
and voiceprints, is extremely sensitive and must always be protected.
MRCPv2 clients and servers MUST implement HTTPS for indirect content
access, and SHOULD employ secure access for all sensitive indirect
content. Other secure URI-schemes such as FTPS MAY also be used. See
<xref target="sec.SetCookie"></xref> for the header fields used to
transfer cookie information between the MRCPv2 client and server if
needed for authentication.</t>
<t>MRCPv2 makes no inherent assumptions about the lifetime and access
controls associated with a URI. For example, if neither authentication
nor scheme-specific access controls are used, a leak of the URI is
equivalent to a leak of the content. Moreover, MRCPv2 makes no
specific demands on the lifetime of a URI. If a server offers a URI
and the client takes a long time to access that URI, the server
may have removed the resource in the interim. MRCPv2 deals
with this case by using the URI access scheme's "resource not found"
error, such as 404 for HTTPS. How long a server should keep a dynamic
resource available is highly application and context dependent.
However, the server SHOULD keep the resource available for a
reasonable amount of time to make it likely the client will have the
resource available when the client needs the resource. Conversely, to
mitigate state exhaustion attacks, MRCPv2 servers are not obligated to
keep resources and resource state in perpetuity. The server SHOULD
delete dynamically-generated resources associated with an MRCPv2
session when the session ends.</t>
<t>One method to avoid resource leakage is for the server to use
one-time resource URIs. In this instance, there can be only a single
access to the underlying resource using the given URI. A downside to
this approach is that if an attacker uses the URI before the client
does, the client is denied the resource. Other methods would
be to adopt a mechanism similar to the <xref target="RFC4467">URLAUTH
IMAP extension</xref>, where the server sets cryptographic checks on
URI usage, as well as capabilities for expiration, revocation, and so
on. Specifying such a mechanism is beyond the scope of this
document.</t>
</section>
<section title="Protection of Stored Media">
<t>MRCPv2 applications often require the use of stored media. Voice
recordings are both stored (e.g. for diagnosis and system tuning), and
fetched (for replaying utterances into multiple MRCPv2 resources).
Voiceprints are fundamental to the speaker identification and
verification functions. This data can be extremely sensitive and can
present substantial privacy and impersonation risks if stolen. Systems
employing MRCPv2 should be deployed in ways that minimize these risks.
The <xref target="RFC4313">SpeechSC Requirements</xref> document
contains a more extensive discussion of these risks and ways they may
be mitigated.</t>
</section>
<section title="DTMF and Recognition Buffers">
<t>DTMF buffers and recognition buffers may grow large enough to
exceed the capabilities of a server, and the server MUST be prepared
to gracefully handle resource consumption. A server MAY respond with
an appropriate "recognition incomplete" status if the server is in
danger of running out of resources.</t>
</section>
</section>
<section anchor="sec.iana" title="IANA Considerations">
<section title="New registries">
<t>This section describes the name spaces (registries) for MRCPv2 that
IANA is requested to create and maintain. Assignment/registration
policies are described in <xref target="RFC5226">RFC5226</xref>.</t>
<section anchor="sec.registration.resources"
title="MRCPv2 resource types">
<t>IANA SHALL create a new name space of "MRCPv2 resource types".
All maintenance within and additions to the contents of this name
space MUST be according to the "Standards Action" registration
policy. The initial contents of the registry, defined in <xref
target="sec.resourceControl"></xref>, are given below:</t>
<figure>
<artwork><![CDATA[Resource type Resource description Reference
------------- -------------------- ---------
speechrecog Speech Recognizer [RFCXXXX]
dtmfrecog DTMF Recognizer [RFCXXXX]
speechsynth Speech Synthesizer [RFCXXXX]
basicsynth Basic Synthesizer [RFCXXXX]
speakverify Speaker Verification [RFCXXXX]
recorder Speech Recorder [RFCXXXX]]]></artwork>
</figure>
</section>
<section title="MRCPv2 methods and events">
<t>IANA SHALL create a new name space of "MRCPv2 methods and
events". All maintenance within and additions to the contents of
this name space MUST be according to the "Standards Action"
registration policy. The initial contents of the registry, defined
by the "method-name" BNF in <xref target="sec.request"></xref> and
the "event-name" BNF in <xref target="sec.events"></xref>, are given
below.</t>
<figure>
<artwork><![CDATA[Name Resource type Method/Event Reference
---- ------------- ------------ ---------
SET-PARAMS Synthesizer Method [RFCXXXX]
GET-PARAMS Synthesizer Method [RFCXXXX]
SPEAK Synthesizer Method [RFCXXXX]
STOP Synthesizer Method [RFCXXXX]
PAUSE Synthesizer Method [RFCXXXX]
RESUME Synthesizer Method [RFCXXXX]
BARGE-IN-OCCURRED Synthesizer Method [RFCXXXX]
CONTROL Synthesizer Method [RFCXXXX]
DEFINE-LEXICON Synthesizer Method [RFCXXXX]
DEFINE-GRAMMAR Recognizer Method [RFCXXXX]
RECOGNIZE Recognizer Method [RFCXXXX]
INTERPRET Recognizer Method [RFCXXXX]
GET-RESULT Recognizer Method [RFCXXXX]
START-INPUT-TIMERS Recognizer Method [RFCXXXX]
STOP Recognizer Method [RFCXXXX]
START-PHRASE-ENROLLMENT Recognizer Method [RFCXXXX]
ENROLLMENT-ROLLBACK Recognizer Method [RFCXXXX]
END-PHRASE-ENROLLMENT Recognizer Method [RFCXXXX]
MODIFY-PHRASE Recognizer Method [RFCXXXX]
DELETE-PHRASE Recognizer Method [RFCXXXX]
RECORD Recorder Method [RFCXXXX]
STOP Recorder Method [RFCXXXX]
START-SESSION Verifier Method [RFCXXXX]
END-SESSION Verifier Method [RFCXXXX]
QUERY-VOICEPRINT Verifier Method [RFCXXXX]
DELETE-VOICEPRINT Verifier Method [RFCXXXX]
VERIFY Verifier Method [RFCXXXX]
VERIFY-FROM-BUFFER Verifier Method [RFCXXXX]
VERIFY-ROLLBACK Verifier Method [RFCXXXX]
STOP Verifier Method [RFCXXXX]
START-INPUT-TIMERS Verifier Method [RFCXXXX]
GET-INTERMEDIATE-RESULT Verifier Method [RFCXXXX]
SPEECH-MARKER Synthesizer Event [RFCXXXX]
SPEAK-COMPLETE Synthesizer Event [RFCXXXX]
START-OF-INPUT Recognizer Event [RFCXXXX]
RECOGNITION-COMPLETE Recognizer Event [RFCXXXX]
INTERPRETATION-COMPLETE Recognizer Event [RFCXXXX]
START-OF-INPUT Recorder Event [RFCXXXX]
RECORD-COMPLETE Recorder Event [RFCXXXX]
VERIFICATION-COMPLETE Verifier Event [RFCXXXX]
START-OF-INPUT Verifier Event [RFCXXXX]]]></artwork>
</figure>
</section>
<section title="MRCPv2 header fields">
<t>IANA SHALL create a new name space of "MRCPv2 header fields". All
maintenance within and additions to the contents of this name space
MUST be according to the "Standards Action" registration policy. The
initial contents of the registry, defined by the "message-header"
BNF in <xref target="sec.common"></xref>, are given below. Note that
the values permitted for the "Vendor-Specific-Parameters" parameter
are managed according to a different policy. See <xref
target="sec.vendorSpecificRegistration"></xref>.</t>
<figure>
<artwork><![CDATA[Name Resource type Reference
---- ------------- ---------
channel-identifier Generic [RFCXXXX]
accept Generic [RFC2616]
active-request-id-list Generic [RFCXXXX]
proxy-sync-id Generic [RFCXXXX]
accept-charset Generic [RFC2616]
content-type Generic [RFCXXXX]
content-id Generic [RFC2392, RFC2046, and RFC5322]
content-base Generic [RFCXXXX]
content-encoding Generic [RFCXXXX]
content-location Generic [RFCXXXX]
content-length Generic [RFCXXXX]
fetch-timeout Generic [RFCXXXX]
cache-control Generic [RFCXXXX]
logging-tag Generic [RFCXXXX]
set-cookie Generic [RFCXXXX]
set-cookie2 Generic [RFCXXXX]
vendor-specific Generic [RFCXXXX]
jump-size Synthesizer [RFCXXXX]
kill-on-barge-in Synthesizer [RFCXXXX]
speaker-profile Synthesizer [RFCXXXX]
completion-cause Synthesizer [RFCXXXX]
completion-reason Synthesizer [RFCXXXX]
voice-parameter Synthesizer [RFCXXXX]
prosody-parameter Synthesizer [RFCXXXX]
speech-marker Synthesizer [RFCXXXX]
speech-language Synthesizer [RFCXXXX]
fetch-hint Synthesizer [RFCXXXX]
audio-fetch-hint Synthesizer [RFCXXXX]
failed-uri Synthesizer [RFCXXXX]
failed-uri-cause Synthesizer [RFCXXXX]
speak-restart Synthesizer [RFCXXXX]
speak-length Synthesizer [RFCXXXX]
load-lexicon Synthesizer [RFCXXXX]
lexicon-search-order Synthesizer [RFCXXXX]
confidence-threshold Recognizer [RFCXXXX]
sensitivity-level Recognizer [RFCXXXX]
speed-vs-accuracy Recognizer [RFCXXXX]
n-best-list-length Recognizer [RFCXXXX]
input-type Recognizer [RFCXXXX]
no-input-timeout Recognizer [RFCXXXX]
recognition-timeout Recognizer [RFCXXXX]
waveform-uri Recognizer [RFCXXXX]
input-waveform-uri Recognizer [RFCXXXX]
completion-cause Recognizer [RFCXXXX]
completion-reason Recognizer [RFCXXXX]
recognizer-context-block Recognizer [RFCXXXX]
start-input-timers Recognizer [RFCXXXX]
speech-complete-timeout Recognizer [RFCXXXX]
speech-incomplete-timeout Recognizer [RFCXXXX]
dtmf-interdigit-timeout Recognizer [RFCXXXX]
dtmf-term-timeout Recognizer [RFCXXXX]
dtmf-term-char Recognizer [RFCXXXX]
failed-uri Recognizer [RFCXXXX]
failed-uri-cause Recognizer [RFCXXXX]
save-waveform Recognizer [RFCXXXX]
media-type Recognizer [RFCXXXX]
new-audio-channel Recognizer [RFCXXXX]
speech-language Recognizer [RFCXXXX]
ver-buffer-utterance Recognizer [RFCXXXX]
recognition-mode Recognizer [RFCXXXX]
cancel-if-queue Recognizer [RFCXXXX]
hotword-max-duration Recognizer [RFCXXXX]
hotword-min-duration Recognizer [RFCXXXX]
interpret-text Recognizer [RFCXXXX]
dtmf-buffer-time Recognizer [RFCXXXX]
clear-dtmf-buffer Recognizer [RFCXXXX]
early-no-match Recognizer [RFCXXXX]
num-min-consistent-pronunciations Recognizer [RFCXXXX]
consistency-threshold             Recognizer    [RFCXXXX]
clash-threshold Recognizer [RFCXXXX]
personal-grammar-uri Recognizer [RFCXXXX]
enroll-utterance Recognizer [RFCXXXX]
phrase-id Recognizer [RFCXXXX]
phrase-nl Recognizer [RFCXXXX]
weight Recognizer [RFCXXXX]
save-best-waveform Recognizer [RFCXXXX]
new-phrase-id Recognizer [RFCXXXX]
confusable-phrases-uri            Recognizer    [RFCXXXX]
abort-phrase-enrollment           Recognizer    [RFCXXXX]
sensitivity-level Recorder [RFCXXXX]
no-input-timeout Recorder [RFCXXXX]
completion-cause Recorder [RFCXXXX]
failed-uri Recorder [RFCXXXX]
failed-uri-cause Recorder [RFCXXXX]
record-uri Recorder [RFCXXXX]
media-type Recorder [RFCXXXX]
max-time Recorder [RFCXXXX]
trim-length Recorder [RFCXXXX]
final-silence Recorder [RFCXXXX]
capture-on-speech Recorder [RFCXXXX]
new-audio-channel Recorder [RFCXXXX]
start-input-timers Recorder [RFCXXXX]
input-type Recorder [RFCXXXX]
repository-uri Verifier [RFCXXXX]
voiceprint-identifier Verifier [RFCXXXX]
verification-mode Verifier [RFCXXXX]
adapt-model Verifier [RFCXXXX]
abort-model Verifier [RFCXXXX]
min-verification-score Verifier [RFCXXXX]
num-min-verification-phrases Verifier [RFCXXXX]
num-max-verification-phrases Verifier [RFCXXXX]
no-input-timeout Verifier [RFCXXXX]
save-waveform Verifier [RFCXXXX]
media-type Verifier [RFCXXXX]
waveform-uri Verifier [RFCXXXX]
voiceprint-exists Verifier [RFCXXXX]
ver-buffer-utterance Verifier [RFCXXXX]
input-waveform-uri Verifier [RFCXXXX]
completion-cause Verifier [RFCXXXX]
completion-reason Verifier [RFCXXXX]
speech-complete-timeout Verifier [RFCXXXX]
new-audio-channel Verifier [RFCXXXX]
abort-verification Verifier [RFCXXXX]
start-input-timers Verifier [RFCXXXX]
input-type Verifier [RFCXXXX]]]></artwork>
</figure>
</section>
<section title="MRCPv2 status codes">
<t>IANA SHALL create a new name space of "MRCPv2 status codes" with
the initial values that are defined in <xref
target="sec.statusCodes"></xref>. All maintenance within and
additions to the contents of this name space MUST be according to
the "Specification Required with Expert Review" registration
policy.</t>
</section>
<section title="Grammar Reference List Parameters">
<t>IANA SHALL create a new name space of "Grammar Reference List
Parameters". All maintenance within and additions to the contents of
this name space MUST be according to the "Specification Required
with Expert Review" registration policy. There is only one initial
parameter, "weight", which is defined in <xref
target="sec.grammar-ref-list"></xref> and <xref
target="sec.methodRecognize"></xref>.</t>
</section>
<section anchor="sec.vendorSpecificRegistration"
title="MRCPv2 vendor-specific parameters">
<t>IANA SHALL create a new name space of "MRCPv2 vendor-specific
parameters". All maintenance within and additions to the contents of
this name space MUST be according to the "Hierarchical Allocation"
registration policy as follows. Each name (corresponding to the
"vendor-av-pair-name" ABNF production) MUST satisfy the syntax
requirements of Internet Domain Names as described in section 2.3.1
of <xref target="RFC1035">RFC1035</xref> (and as updated or
obsoleted by successive RFCs), with one exception: the order of the
domain labels is reversed. For example, a vendor-specific parameter
"foo" registered by example.com would have the form "com.example.foo". The
first, or top-level domain, is restricted to exactly the set of
Top-Level Internet Domains defined by IANA and will be updated by
IANA when and only when that set changes. The second-level and all
subdomains within the parameter name MUST be allocated according to
the "Expert Review" policy. The Designated Expert MAY advise IANA to
allow delegation of subdomains to the requester. As a general
guideline, the Designated Expert is encouraged to manage the
allocation of corporate, organizational, or institutional names and
delegate all subdomains accordingly. For example, the Designated
Expert MAY allocate "com.example" and delegate all subdomains of
that name to the organization represented by the Internet domain
name "example.com". For simplicity, the Designated Expert is
encouraged to perform allocations according to the existing
allocations of Internet domain names to organizations, institutions,
corporations, etc.</t>
<t>The registry contains a list of vendor-registered parameters,
where each defined parameter is associated with a reference to an
RFC defining it. The registry is initially empty.</t>
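<t>As a non-normative illustration (the function name below is
hypothetical, not part of this registry), the reversed-label
construction described above can be sketched as follows:</t>
<figure>
<artwork><![CDATA[
```python
def vendor_param_name(domain, param):
    """Form an MRCPv2 vendor-specific parameter name by reversing
    the labels of an Internet domain name and prepending them to
    the parameter name, e.g. example.com + "foo" -> "com.example.foo".
    Illustrative sketch only; not part of this specification."""
    labels = domain.lower().split(".")
    return ".".join(reversed(labels)) + "." + param
```
]]></artwork>
</figure>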
</section>
</section>
<section title="NLSML-related registrations">
<section title="application/nlsml+xml Media Type registration">
<t>IANA is requested to register the following Media Type according
to the process defined in <xref target="RFC4288">RFC4288</xref>.
<list style="hanging">
<t hangText="To:">ietf-types@iana.org</t>
<t hangText="Subject:">Registration of media type
application/nlsml+xml</t>
<t hangText="MIME media type name:">application</t>
<t hangText="MIME subtype name:">nlsml+xml</t>
<t hangText="Required parameters:">none</t>
<t hangText="Optional parameters:"><list style="hanging">
<t hangText="charset:">All of the considerations described
in RFC3023 also apply to the application/nlsml+xml media
type.</t>
</list></t>
<t hangText="Encoding considerations:">All of the considerations
described in RFC3023 also apply to the application/nlsml+xml
media type.</t>
<t hangText="Security considerations:">As with HTML, NLSML
documents contain links to other data stores (grammars,
verification resources, etc.). Unlike HTML, however, the data
stores are not treated as media to be rendered. Nevertheless,
linked files may themselves have security considerations, which
would be those of the individual registered types. Additionally,
this media type has all of the security considerations described
in RFC3023.</t>
<t hangText="Interoperability considerations:">Although an NLSML
document is itself a complete XML document, for a fuller
interpretation of the content a receiver of an NLSML document
may wish to access resources linked to by the document. The
inability of an NLSML processor to access or process such linked
resources could result in different behavior by the ultimate
consumer of the data.</t>
<t hangText="Published specification:">RFCXXXX</t>
<t hangText="Applications which use this media type:">MRCPv2
clients and servers</t>
<t hangText="Additional information:">none</t>
<t hangText="Magic number(s):">There is no single initial octet
sequence that is always present for NLSML files.</t>
<t
hangText="Person &amp; email address to contact for further information:">Sarvi
Shanmugham, sarvi@cisco.com</t>
<t hangText="Intended usage:">This media type is expected to be
used only in conjunction with MRCPv2.</t>
</list></t>
</section>
</section>
<section title="NLSML XML Schema registration">
<t>IANA is requested to register and maintain the following XML
Schema. Information provided follows the template in <xref
target="RFC3688">RFC3688</xref>. <list style="hanging">
<t hangText="XML element type:">schema</t>
<t hangText="URI:">http://www.ietf.org/xml/schema/mrcpv2</t>
<t hangText="Registrant Contact:">IESG</t>
<t hangText="XML:">See <xref
target="sec.schema.NLSML"></xref>.</t>
</list></t>
</section>
<section title="MRCPv2 XML Namespace registration">
<t>IANA is requested to register and maintain the following XML Name
space. Information provided follows the template in <xref
target="RFC3688">RFC3688</xref>. <list style="hanging">
<t hangText="XML element type:">ns</t>
<t hangText="URI:">http://www.ietf.org/xml/ns/mrcpv2</t>
<t hangText="Registrant Contact:">IESG</t>
<t hangText="XML:">RFCXXXX</t>
</list></t>
</section>
<section anchor="sec.text-media-registrations"
title="text Media Type Registrations">
<t>IANA is requested to register the following text Media Types
according to the process defined in <xref target="RFC4288">RFC
4288</xref>.</t>
<section anchor="sec.grammar-ref-list" title="text/grammar-ref-list">
<t><list style="hanging">
<t hangText="To:">ietf-types@iana.org</t>
<t hangText="Subject:">Registration of media type
text/grammar-ref-list</t>
<t hangText="MIME media type name:">text</t>
<t hangText="MIME subtype name:">grammar-ref-list</t>
<t hangText="Required parameters:">none</t>
<t hangText="Optional parameters:">none</t>
<t hangText="Encoding considerations:">Depending on the transfer
protocol, a transfer encoding may be necessary to deal with very
long lines.</t>
<t hangText="Security considerations:">This media type contains
URIs which may represent references to external resources. As
these resources are assumed to be speech recognition grammars,
similar considerations as for the media types "application/srgs"
and "application/srgs+xml" apply.</t>
<t hangText="Interoperability considerations:">'&gt;' must be
percent-encoded in URIs according to RFC3986.</t>
<t hangText="Published specification:">The RECOGNIZE method of
the MRCP protocol performs a recognition operation that matches
input against a set of grammars. When matching against more than
one grammar, it is sometimes necessary to use different weights
for the individual grammars. These weights are not a property of
the grammar resource itself but qualify the reference to that
grammar for the particular recognition operation initiated by
the RECOGNIZE method. The format of the proposed
text/grammar-ref-list media type is as follows: body =
*reference, where reference = "&lt;" uri "&gt;" [parameters] CRLF,
parameters = ";" parameter *(";" parameter), and parameter =
attribute "=" value. This specification currently defines only a
'weight' parameter, but new parameters may be added through the
"Grammar Reference List Parameters" IANA registry established
by this specification. Example:
&lt;http://example.com/grammars/field1.gram&gt;
&lt;http://example.com/grammars/field2.gram&gt;;weight="0.85"
&lt;session:field3@form-level.store&gt;;weight="0.9"
&lt;http://example.com/grammars/universals.gram&gt;;weight="0.75"</t>
<t hangText="Applications which use this media type:">MRCPv2
clients and servers</t>
<t hangText="Additional information:">none</t>
<t hangText="Magic number(s):">none</t>
<t
hangText="Person &amp; email address to contact for further information:">Sarvi
Shanmugham, sarvi@cisco.com</t>
<t hangText="Intended usage:">This media type is expected to be
used only in conjunction with MRCPv2.</t>
</list></t>
</section>
</section>
<section anchor="sec.sessionURIScheme"
title="session URL scheme registration">
<t>IANA is requested to register the following new URI scheme. The
information below follows the template given in <xref
target="RFC4395">RFC4395</xref>. <list style="hanging">
<t hangText="URL scheme name:">"session"</t>
<t hangText="URL scheme syntax:">The syntax of this scheme is
identical to that defined for the "cid" scheme in section 2 of
RFC2392.</t>
<t hangText="Character encoding considerations:">URI values are
limited to the US-ASCII character set.</t>
<t hangText="Intended usage:">The URI is intended to identify a
data resource previously given to the network computing resource.
The purpose of this scheme is to permit access to the specific
resource for the lifetime of the session with the entity storing
the resource. The media type of the resource CAN vary. There is no
explicit mechanism for communication of the media type. This
scheme is currently widely used internally by existing
implementations, and the registration is intended to provide
information in the rare (and unfortunate) case that the scheme is
used elsewhere. The scheme SHOULD NOT be used for open internet
protocols.</t>
<t
hangText="Applications and/or protocols which use this URL scheme name:">This
scheme name is used by MRCPv2 clients and servers.</t>
<t hangText="Interoperability considerations:"></t>
<t hangText="">The character set for URLs is restricted to
US-ASCII. Note that none of the resources are accessible after the
MRCPv2 session ends, hence the name of the scheme. For clients that
establish only one MRCPv2 session for the entire speech
application being implemented, this is sufficient, but clients that
create, terminate, and recreate MRCP sessions for performance or
scalability reasons will lose access to resources established in
the earlier session(s).</t>
<t hangText="Security considerations:">The URIs defined here
provide an identification mechanism only. Given that the
communication channel between client and server is secure, that
the server correctly accesses the resource associated with the
URI, and that the server ensures session-only lifetime and access
for each URI, the only remaining security issues are those of the
types of media referred to by the URI.</t>
<t hangText="Relevant publications:">This specification,
particularly sections <xref target="sec.Content-ID"></xref>, <xref
target="sec.lexiconData"></xref>, <xref
target="sec.grammarData"></xref>, and <xref
target="sec.methodRecognize"></xref>.</t>
<t hangText="Contact for further information:">Sarvi Shanmugham,
sarvi@cisco.com</t>
<t hangText="Author/Change controller:">IESG</t>
</list></t>
</section>
<section title="SDP parameter registrations">
<t>IANA is requested to register the following SDP parameter values.
The information for each follows the template given in <xref
target="RFC4566">RFC4566</xref>, Appendix B.</t>
<section title="sub-registry &quot;proto&quot;">
<t>"TCP/MRCPv2" value of the "proto" parameter<list style="hanging">
<t
hangText="Contact name, email address and telephone number:">Sarvi
Shanmugham, sarvi@cisco.com, +1.408.902.3875</t>
<t
hangText="Name being registered (as it will appear in SDP):">TCP/MRCPv2</t>
<t hangText="Long-form name in English:">MRCPv2 over TCP</t>
<t hangText="Type of name:">proto</t>
<t hangText="Explanation of name:">This name represents the
MRCPv2 protocol carried over TCP.</t>
<t hangText="Reference to specification of name:">RFCXXXX</t>
</list>"TCP/TLS/MRCPv2" value of the "proto" parameter<list
style="hanging">
<t
hangText="Contact name, email address and telephone number:">Sarvi
Shanmugham, sarvi@cisco.com, +1.408.902.3875</t>
<t
hangText="Name being registered (as it will appear in SDP):">TCP/TLS/MRCPv2</t>
<t hangText="Long-form name in English:">MRCPv2 over TLS over
TCP</t>
<t hangText="Type of name:">proto</t>
<t hangText="Explanation of name:">This name represents the
MRCPv2 protocol carried over TLS over TCP.</t>
<t hangText="Reference to specification of name:">RFCXXXX</t>
</list></t>
</section>
<section title="sub-registry &quot;att-field (session-level)&quot;">
<t>"resource" value of the "att-field" parameter <list
style="hanging">
<t
hangText="Contact name, email address and telephone number:">Sarvi
Shanmugham, sarvi@cisco.com, +1.408.902.3875</t>
<t
hangText="Attribute name (as it will appear in SDP):">resource</t>
<t hangText="Long-form attribute name in English:">MRCPv2
resource type</t>
<t hangText="Type of attribute:">media-level</t>
<t hangText="Subject to charset attribute?">no</t>
<t hangText="Explanation of attribute:">See <xref
target="sec.resourceControl"></xref> of RFCXXXX for description
and examples.</t>
<t hangText="Specification of appropriate attribute values:">See
section <xref target="sec.registration.resources"></xref> of
RFCXXXX.</t>
</list>"channel" value of the "att-field" parameter <list
style="hanging">
<t
hangText="Contact name, email address and telephone number:">Sarvi
Shanmugham, sarvi@cisco.com, +1.408.902.3875</t>
<t
hangText="Attribute name (as it will appear in SDP):">channel</t>
<t hangText="Long-form attribute name in English:">MRCPv2
resource channel identifier</t>
<t hangText="Type of attribute:">media-level</t>
<t hangText="Subject to charset attribute?">no</t>
<t hangText="Explanation of attribute:">See <xref
target="sec.resourceControl"></xref> of RFCXXXX for description
and examples.</t>
<t hangText="Specification of appropriate attribute values:">See
<xref target="sec.resourceControl"></xref> and the "channel-id"
ABNF production rules of RFCXXXX.</t>
</list></t>
</section>
<section anchor="cmid"
title="sub-registry &quot;att-field (media-level)&quot;">
<t>"cmid" value of the "att-field" parameter<list style="hanging">
<t
hangText="Contact name, email address and telephone number:">Sarvi
Shanmugham, sarvi@cisco.com, +1.408.902.3875</t>
<t
hangText="Attribute name (as it will appear in SDP):">cmid</t>
<t hangText="Long-form attribute name in English:">MRCPv2
resource channel media identifier</t>
<t hangText="Type of attribute:">media-level</t>
<t hangText="Subject to charset attribute?">no</t>
<t hangText="Explanation of attribute:">See <xref
target="sec.mediaStreams"></xref> of RFCXXXX for description and
examples.</t>
<t hangText="Specification of appropriate attribute values:">See
<xref target="sec.mediaStreams"></xref> and the "cmid-attribute"
ABNF production rules of RFCXXXX.</t>
</list></t>
</section>
</section>
</section>
<section title="Examples">
<section title="Message Flow">
<t>The following is an example of a typical MRCPv2 session of speech
synthesis and recognition between a client and a server.</t>
<t>The figure below illustrates opening a session to the MRCPv2
server. This exchange does not allocate a resource or set up media.
It simply establishes a SIP session with the MRCPv2 server.</t>
<figure>
<artwork><![CDATA[
C->S:
INVITE sip:mresources@example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314159 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842807 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Initial contact
c=IN IP4 192.0.2.12
S->C:
SIP/2.0 200 OK
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314159 INVITE
Contact:<sip:mresources@server.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842807 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Initial contact
c=IN IP4 192.0.2.11
C->S:
ACK sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314160 ACK
Content-Length:...
]]></artwork>
</figure>
<t>The client asks the server to create a synthesizer resource
control channel to do speech synthesis. This also adds a media stream
to send the generated speech. Note that in this example, the client
requests a new MRCPv2 TCP stream between the client and the server. In
the following requests, the client will ask to use the existing
connection.</t>
<figure>
<artwork><![CDATA[
C->S:
INVITE sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314161 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842808 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Add TCP channel, synthesizer and one-way audio
c=IN IP4 192.0.2.12
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:new
a=resource:speechsynth
a=cmid:1
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=recvonly
a=mid:1
S->C:
SIP/2.0 200 OK
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314161 INVITE
Contact:<sip:mresources@server.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842808 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Add TCP channel, synthesizer and one-way audio
c=IN IP4 192.0.2.11
m=application 32416 TCP/MRCPv2 1
a=setup:passive
a=connection:new
a=channel:32AECB23433801@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly
a=mid:1
C->S:
ACK sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314162 ACK
Content-Length:...
]]></artwork>
</figure>
<t>This exchange allocates an additional resource control channel for
a recognizer. Since a recognizer would need to receive an audio stream
for recognition, this interaction also updates the audio stream to
sendrecv, making it a two-way audio stream.</t>
<figure>
<artwork><![CDATA[
C->S:
INVITE sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:sarvi@client.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Add recognizer and duplex the audio
c=IN IP4 192.0.2.12
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:existing
a=resource:speechsynth
a=cmid:1
m=audio 49170 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=recvonly
a=mid:1
m=application 9 TCP/MRCPv2 1
a=setup:active
a=connection:existing
a=resource:speechrecog
a=cmid:2
m=audio 49180 RTP/AVP 0 96
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=sendonly
a=mid:2
S->C:
SIP/2.0 200 OK
To:MediaServer <sip:mresources@example.com>;tag=62784
From:sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314163 INVITE
Contact:<sip:mresources@server.example.com>
Content-Type:application/sdp
Content-Length:...
v=0
o=sarvi 2890844526 2890842809 IN IP4 192.0.2.4
s=Set up MRCPv2 control and audio
i=Add recognizer and duplex the audio
c=IN IP4 192.0.2.11
m=application 32416 TCP/MRCPv2 1
a=channel:32AECB23433801@speechsynth
a=cmid:1
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=sendonly
a=mid:1
m=application 32416 TCP/MRCPv2 1
a=channel:32AECB23433801@speechrecog
a=cmid:2
m=audio 48260 RTP/AVP 0
a=rtpmap:0 pcmu/8000
a=rtpmap:96 telephone-event/8000
a=fmtp:96 0-15
a=recvonly
a=mid:2
C->S:
ACK sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
To:MediaServer <sip:mresources@example.com>;tag=62784
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
Call-ID:a84b4c76e66710
CSeq:314164 ACK
Content-Length:...
]]></artwork>
</figure>
<t>An MRCPv2 <spanx style="verb">SPEAK</spanx> request initiates
speech synthesis.</t>
<figure>
<artwork><![CDATA[
C->S:
MRCP/2.0 386 SPEAK 543257
Channel-Identifier:32AECB23433801@speechsynth
Kill-On-Barge-In:false
Voice-gender:neutral
Voice-age:25
Prosody-volume:medium
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>You have 4 new messages.</s>
<s>The first is from Stephanie Williams
<mark name="Stephanie"/>
and arrived at <break/>
<say-as interpret-as="vxml:time">0345p</say-as>.</s>
<s>The subject is <prosody
rate="-20%">ski trip</prosody></s>
</p>
</speak>
S->C:
MRCP/2.0 49 543257 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechsynth
Speech-Marker:timestamp=857205015059
]]></artwork>
</figure>
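<t>The start-lines in the exchange above carry the MRCPv2 version,
the message length in octets, and either a method name followed by
the request-id (requests and events) or the request-id followed by a
status code and request state (responses). A non-normative sketch of
distinguishing the start-line forms:</t>
<figure>
<artwork><![CDATA[
```python
def parse_start_line(line):
    """Split an MRCPv2 start-line into its fields.  In a request or
    event the third token is a method or event name; in a response
    it is the (numeric) request-id, followed by the status code and
    request state.  Header parsing is out of scope for this sketch."""
    version, length, third, *rest = line.split()
    if third.isdigit():  # response: request-id comes third
        status_code, request_state = rest
        return {"kind": "response", "version": version,
                "message-length": int(length), "request-id": third,
                "status-code": int(status_code),
                "request-state": request_state}
    # request (4 tokens) or event (5 tokens, trailing request-state)
    fields = {"kind": "event" if len(rest) == 2 else "request",
              "version": version, "message-length": int(length),
              "name": third, "request-id": rest[0]}
    if len(rest) == 2:
        fields["request-state"] = rest[1]
    return fields
```
]]></artwork>
</figure>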
<t>The synthesizer reaches the marker in the text to be spoken
and informs the client of the event.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 46 SPEECH-MARKER 543257 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechsynth
Speech-Marker:timestamp=857206027059;Stephanie
]]></artwork>
</figure>
<t>The synthesizer finishes with the <spanx style="verb">SPEAK</spanx>
request.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 48 SPEAK-COMPLETE 543257 COMPLETE
Channel-Identifier:32AECB23433801@speechsynth
Speech-Marker:timestamp=857207685213;Stephanie
]]></artwork>
</figure>
<t>The recognizer is issued a request to listen for the customer's
choices.</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 343 RECOGNIZE 543258
Channel-Identifier:32AECB23433801@speechrecog
Content-Type:application/srgs+xml
Content-Length:...
<?xml version="1.0"?>
<!-- the default grammar language is US English -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
xml:lang="en-US" version="1.0" root="request">
<!-- single language attachment to a rule expansion -->
<rule id="request">
Can I speak to
<one-of xml:lang="fr-CA">
<item>Michel Tremblay</item>
<item>Andre Roy</item>
</one-of>
</rule>
</grammar>
S->C: MRCP/2.0 49 543258 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
]]></artwork>
</figure>
<t>The client issues the next MRCPv2 <spanx style="verb">SPEAK</spanx>
request. When playing a prompt to the user with kill-on-barge-in
enabled and asking for input, it is generally RECOMMENDED that the
client issue the RECOGNIZE request ahead of the <spanx
style="verb">SPEAK</spanx> request for optimum performance and user
experience. This guarantees that the recognizer is online before the
prompt starts playing, so that the beginning of the user's speech is
not truncated (especially for power users who barge in early).</t>
<figure>
<artwork><![CDATA[
C->S: MRCP/2.0 289 SPEAK 543259
Channel-Identifier:32AECB23433801@speechsynth
Kill-On-Barge-In:true
Content-Type:application/ssml+xml
Content-Length:...
<?xml version="1.0"?>
<speak version="1.0"
xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
http://www.w3.org/TR/speech-synthesis/synthesis.xsd"
xml:lang="en-US">
<p>
<s>Welcome to ABC corporation.</s>
<s>Who would you like to talk to?</s>
</p>
</speak>
S->C: MRCP/2.0 52 543259 200 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechsynth
Speech-Marker:timestamp=857207696314
]]></artwork>
</figure>
<t>Since the last <spanx style="verb">SPEAK</spanx> request had
Kill-On-Barge-In set to "true", the speech synthesizer is interrupted
when the user starts speaking, and the client is notified.</t>
<t>Since the recognizer and synthesizer resources are in the same
session, they may have cooperated with each other to deliver
kill-on-barge-in. Whether or not the synthesizer and recognizer are in
the same session, the recognizer MUST generate the START-OF-INPUT
event to the client.</t>
<t>The client MUST then issue a
BARGE-IN-OCCURRED method to the synthesizer resource (if a <spanx
style="verb">SPEAK</spanx> request was active). If kill-on-barge-in
was enabled on the current <spanx style="verb">SPEAK</spanx> request,
the synthesizer interrupts it and issues a <spanx
style="verb">SPEAK-COMPLETE</spanx> event to the client.</t>
<t>The completion-cause code differentiates whether this was a normal
completion or a kill-on-barge-in interruption.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 49 START-OF-INPUT 543258 IN-PROGRESS
Channel-Identifier:32AECB23433801@speechrecog
Proxy-Sync-Id:987654321
C->S: MRCP/2.0 69 BARGE-IN-OCCURRED 543259
Channel-Identifier:32AECB23433801@speechsynth
Proxy-Sync-Id:987654321
S->C: MRCP/2.0 72 543259 200 COMPLETE
Channel-Identifier:32AECB23433801@speechsynth
Active-Request-Id-List:543258
Speech-Marker:timestamp=857206096314
S->C: MRCP/2.0 73 SPEAK-COMPLETE 543259 COMPLETE
Channel-Identifier:32AECB23433801@speechsynth
Completion-Cause:001 barge-in
Speech-Marker:timestamp=857207685213
]]></artwork>
</figure>
<t>The recognition resource matched the spoken stream to a grammar and
generated results. The result of the recognition is returned by the
server as part of the RECOGNITION-COMPLETE event.</t>
<figure>
<artwork><![CDATA[
S->C: MRCP/2.0 412 RECOGNITION-COMPLETE 543258 COMPLETE
Channel-Identifier:32AECB23433801@speechrecog
Completion-Cause:000 success
Waveform-URI:<http://web.media.com/session123/audio.wav>;
size=423523;duration=25432
Content-Type:application/nlsml+xml
Content-Length:...
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="session:request1@form-level.store">
<interpretation>
<instance name="Person">
<ex:Person>
<ex:Name> Andre Roy </ex:Name>
</ex:Person>
</instance>
<input> may I speak to Andre Roy </input>
</interpretation>
</result>
]]></artwork>
</figure>
<t>When the client wants to tear down the whole session and all its
resources, it MUST issue a SIP BYE to close the SIP session. This will
de-allocate all the control channels and resources allocated under the
session.</t>
<figure>
<artwork><![CDATA[
C->S: BYE sip:mresources@server.example.com SIP/2.0
Max-Forwards:6
From:Sarvi <sip:sarvi@example.com>;tag=1928301774
To:MediaServer <sip:mresources@example.com>;tag=62784
Call-ID:a84b4c76e66710
CSeq:231 BYE
Content-Length:...
]]></artwork>
</figure>
</section>
<section title="Recognition Result Examples">
<section title="Simple ASR Ambiguity">
<figure>
<artwork><![CDATA[
System: To which city will you be traveling?
User: I want to go to Pittsburgh.
<?xml version="1.0"?>
<result xmlns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns:ex="http://www.example.com/example"
grammar="http://www.example.com/flight">
<interpretation confidence="0.6">
<instance>
<ex:airline>
<ex:to_city>Pittsburgh</ex:to_city>
</ex:airline>
</instance>
<input mode="speech">
I want to go to Pittsburgh
</input>
</interpretation>
<interpretation confidence="0.4">
<instance>
<ex:airline>
<ex:to_city>Stockholm</ex:to_city>
</ex:airline>
</instance>
<input>I want to go to Stockholm</input>
</interpretation>
</result>
]]></artwork>
</figure>
</section>
<section title="Mixed Initiative">
<figure>
<artwork><![CDATA[
System: What would you like?
User: I would like 2 pizzas, one with pepperoni and cheese,
one with sausage and a bottle of coke, to go.
]]></artwork>
</figure>
<t>This example includes an order object, which in turn contains
objects named "food_item", "drink_item", and "delivery_method". The
representation assumes there are no ambiguities in the speech or
natural language processing. Note that this representation also
assumes some level of intra-sentential anaphora resolution, i.e.,
resolving the two instances of "one" as "pizza".</t>
<figure>
<artwork><![CDATA[
<?xml version="1.0"?>
<nl:result xmlns:nl="http://www.ietf.org/xml/ns/mrcpv2"
xmlns="http://www.example.com/example"
grammar="http://www.example.com/foodorder">
<nl:interpretation confidence="1.0" >
<nl:instance>
<order>
<food_item confidence="1.0">
<pizza>
<ingredients confidence="1.0">
pepperoni
</ingredients>
<ingredients confidence="1.0">
cheese
</ingredients>
</pizza>
<pizza>
<ingredients>sausage</ingredients>
</pizza>
</food_item>
<drink_item confidence="1.0">
<size>2-liter</size>
</drink_item>
<delivery_method>to go</delivery_method>
</order>
</nl:instance>
<nl:input mode="speech">I would like 2 pizzas,
one with pepperoni and cheese, one with sausage
and a bottle of coke, to go.
</nl:input>
</nl:interpretation>
</nl:result>
]]></artwork>
</figure>
</section>
<section title="DTMF Input">
<t>A combination of DTMF input and speech is represented using
nested input elements. For example:</t>
<figure>
<artwork><![CDATA[User: My pin is (dtmf 1 2 3 4)]]></artwork>
</figure>
<figure>
<artwork><![CDATA[
<input>
<input mode="speech" confidence="1.0"
timestamp-start="2000-04-03T0:00:00"
timestamp-end="2000-04-03T0:00:01.5">My pin is
</input>
<input mode="dtmf" confidence="1.0"
timestamp-start="2000-04-03T0:00:01.5"
timestamp-end="2000-04-03T0:00:02.0">1 2 3 4
</input>
</input>
]]></artwork>
</figure>
<t>Note that grammars that recognize mixtures of speech and DTMF are
not currently possible in VoiceXML; however, this representation may
be needed for other applications of NLSML, and it may be introduced
in future versions of VoiceXML.</t>
</section>
<section title="Interpreting Meta-Dialog and Meta-Task Utterances">
<t>Natural language communication makes use of meta-dialog and
meta-task utterances. This specification is flexible enough so that
meta utterances can be represented on an application-specific basis
without requiring other standard markup.</t>
<t>Here are two examples of how meta-task and meta-dialog utterances
might be represented.</t>
<figure>
<artwork><![CDATA[
System: What toppings do you want on your pizza?
User: What toppings do you have?
<interpretation grammar="http://www.example.com/toppings">
<instance>
<question>
<questioned_item>toppings</questioned_item>
<questioned_property>
availability
</questioned_property>
</question>
</instance>
<input mode="speech">
what toppings do you have?
</input>
</interpretation>
User: slow down.
<interpretation grammar="http://www.example.com/generalCommandsGrammar">
<instance>
<command>
<action>reduce speech rate</action>
<doer>system</doer>
</command>
</instance>
<input mode="speech">slow down</input>
</interpretation>
]]></artwork>
</figure>
</section>
<section title="Anaphora and Deixis">
<t>This specification can be used on an application-specific basis
to represent utterances that contain unresolved anaphoric and
deictic references. Anaphoric references include pronouns and
definite noun phrases that refer to something mentioned in the
preceding linguistic context; deictic references refer to something
present in the non-linguistic context. Both present a similar
problem: there may not be sufficient unambiguous linguistic context
to determine what their exact role in the interpretation should be.
To represent unresolved anaphora and deixis using this
specification, one strategy is for the developer to define a more
surface-oriented representation that leaves the specific details of
the interpretation of the reference open. (This assumes that a
later component is responsible for actually resolving the
reference.)</t>
<figure>
<artwork><![CDATA[
Example: (ignoring the issue of representing the input from the
pointing gesture.)
System: What do you want to drink?
User: I want this (clicks on picture of large root beer.)
<?xml version="1.0"?>
<nl:result xmlns:nl="http://www.ietf.org/xml/ns/mrcpv2"
xmlns="http://www.example.com/example"
grammar="http://www.example.com/beverages.grxml">
<nl:interpretation>
<nl:instance>
<doer>I</doer>
<action>want</action>
<object>this</object>
</nl:instance>
<nl:input mode="speech">I want this</nl:input>
</nl:interpretation>
</nl:result>
]]></artwork>
</figure>
<t></t>
</section>
<section title="Distinguishing Individual Items from Sets with One Member">
<t>For programming convenience, it is useful to be able to
distinguish between individual items and sets containing one item in
the XML representation of semantic results. For example, a pizza
order might consist of exactly one pizza, but a pizza might contain
zero or more toppings. Since there is no standard way of marking
this distinction directly in XML, the current framework leaves the
developer free to adopt any convention that conveys this
information in the XML markup. One strategy is for the developer to
wrap the set of items in a grouping element, as in the
following example.</t>
<figure>
<artwork><![CDATA[
<order>
<pizza>
<topping-group>
<topping>mushrooms</topping>
</topping-group>
</pizza>
<drink>coke</drink>
</order>
]]></artwork>
</figure>
<t>In this example, the programmer can assume that there is supposed
to be exactly one pizza and one drink in the order, but the fact
that there is only one topping is an accident of this particular
pizza order.</t>
<t>Note that the client controls both the grammar and the semantics
to be returned upon grammar matches, so the user of the MRCP
protocol is fully empowered to cause results to be returned in NLSML
in such a way that the interpretation is clear to that user.</t>
</section>
<section title="Extensibility">
<t>Extensibility in NLSML is provided via result content
flexibility, as illustrated above for meta utterances and
anaphora. NLSML can easily be used in sophisticated systems to
convey application-specific information that more basic systems
would not make use of, for example, defining speech acts.</t>
</section>
</section>
</section>
<section anchor="S.abnf" title="ABNF Normative Definition">
<t>The following productions make use of the core rules defined in
Appendix B.1 of <xref target="RFC5234">RFC 5234</xref>.</t>
<figure>
<artwork><![CDATA[
LWS = [*WSP CRLF] 1*WSP ; linear whitespace
SWS = [LWS] ; sep whitespace
UTF8-NONASCII = %xC0-DF 1UTF8-CONT
/ %xE0-EF 2UTF8-CONT
/ %xF0-F7 3UTF8-CONT
/ %xF8-FB 4UTF8-CONT
/ %xFC-FD 5UTF8-CONT
UTF8-CONT = %x80-BF
UTFCHAR = %x21-7E
/ UTF8-NONASCII
param = *pchar
quoted-string = SWS DQUOTE *(qdtext / quoted-pair )
DQUOTE
qdtext = LWS / %x21 / %x23-5B / %x5D-7E
/ UTF8-NONASCII
quoted-pair = "\" (%x00-09 / %x0B-0C / %x0E-7F)
token = 1*(alphanum / "-" / "." / "!" / "%" / "*"
/ "_" / "+" / "`" / "'" / "~" )
reserved = ";" / "/" / "?" / ":" / "@" / "&" / "="
/ "+" / "$" / ","
mark = "-" / "_" / "." / "!" / "~" / "*" / "'"
/ "(" / ")"
unreserved = alphanum / mark
pchar = unreserved / escaped
/ ":" / "@" / "&" / "=" / "+" / "$" / ","
alphanum = ALPHA / DIGIT
BOOLEAN = "true" / "false"
FLOAT = *DIGIT ["." *DIGIT]
escaped = "%" HEXDIG HEXDIG
fragment = *uric
uri = [ absoluteURI / relativeURI ]
[ "#" fragment ]
absoluteURI = scheme ":" ( hier-part / opaque-part )
relativeURI = ( net-path / abs-path / rel-path )
[ "?" query ]
hier-part = ( net-path / abs-path ) [ "?" query ]
net-path = "//" authority [ abs-path ]
abs-path = "/" path-segments
rel-path = rel-segment [ abs-path ]
rel-segment = 1*( unreserved / escaped / ";" / "@"
/ "&" / "=" / "+" / "$" / "," )
opaque-part = uric-no-slash *uric
uric = reserved / unreserved / escaped
uric-no-slash = unreserved / escaped / ";" / "?" / ":"
/ "@" / "&" / "=" / "+" / "$" / ","
path-segments = segment *( "/" segment )
segment = *pchar *( ";" param )
scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
authority = srvr / reg-name
srvr = [ [ userinfo "@" ] hostport ]
reg-name = 1*( unreserved / escaped / "$" / ","
/ ";" / ":" / "@" / "&" / "=" / "+" )
query = *uric
userinfo = ( user ) [ ":" password ] "@"
user = 1*( unreserved / escaped
/ user-unreserved )
user-unreserved = "&" / "=" / "+" / "$" / "," / ";"
/ "?" / "/"
password = *( unreserved / escaped
/ "&" / "=" / "+" / "$" / "," )
hostport = host [ ":" port ]
host = hostname / IPv4address / IPv6reference
hostname = *( domainlabel "." ) toplabel [ "." ]
domainlabel = alphanum / alphanum *( alphanum / "-" )
alphanum
toplabel = ALPHA / ALPHA *( alphanum / "-" )
alphanum
IPv4address = 1*3DIGIT "." 1*3DIGIT "." 1*3DIGIT "."
1*3DIGIT
IPv6reference = "[" IPv6address "]"
IPv6address = hexpart [ ":" IPv4address ]
hexpart = hexseq / hexseq "::" [ hexseq ] / "::"
[ hexseq ]
hexseq = hex4 *( ":" hex4)
hex4 = 1*4HEXDIG
port = 1*19DIGIT
; generic-message is the top-level rule
generic-message = start-line message-header CRLF
[ message-body ]
message-body = *OCTET
start-line = request-line / status-line / event-line
request-line = mrcp-version SP message-length SP method-name
SP request-id CRLF
status-line = mrcp-version SP message-length SP request-id
SP status-code SP request-state CRLF
event-line = mrcp-version SP message-length SP event-name
SP request-id SP request-state CRLF
method-name = generic-method
/ synthesizer-method
/ recognizer-method
/ recorder-method
/ verifier-method
generic-method = "SET-PARAMS"
/ "GET-PARAMS"
request-state = "COMPLETE"
/ "IN-PROGRESS"
/ "PENDING"
event-name = synthesizer-event
/ recognizer-event
/ recorder-event
/ verifier-event
message-header = 1*(generic-header / resource-header)
resource-header = synthesizer-header
/ recognizer-header
/ recorder-header
/ verifier-header
generic-header = channel-identifier
/ accept
/ active-request-id-list
/ proxy-sync-id
/ accept-charset
/ content-type
/ content-id
/ content-base
/ content-encoding
/ content-location
/ content-length
/ fetch-timeout
/ cache-control
/ logging-tag
/ set-cookie
/ set-cookie2
/ vendor-specific
; -- content-id is as defined in RFC2392, RFC2046 and RFC5322
; -- accept and accept-charset are as defined in RFC2616
mrcp-version = "MRCP" "/" 1*2DIGIT "." 1*2DIGIT
message-length = 1*19DIGIT
request-id = 1*10DIGIT
status-code = 3DIGIT
channel-identifier = "Channel-Identifier" ":"
channel-id CRLF
channel-id = 1*alphanum "@" 1*alphanum
active-request-id-list = "Active-Request-Id-List" ":"
request-id *("," request-id) CRLF
proxy-sync-id = "Proxy-Sync-Id" ":" 1*VCHAR CRLF
content-length = "Content-Length" ":" 1*19DIGIT CRLF
content-base = "Content-Base" ":" absoluteURI CRLF
content-type = "Content-Type" ":" media-type-value CRLF
media-type-value = type "/" subtype *( ";" parameter )
type = token
subtype = token
parameter = attribute "=" value
attribute = token
value = token / quoted-string
content-encoding = "Content-Encoding" ":"
*WSP content-coding
*(*WSP "," *WSP content-coding *WSP )
CRLF
content-coding = token
content-location = "Content-Location" ":"
( absoluteURI / relativeURI ) CRLF
cache-control = "Cache-Control" ":"
[*WSP cache-directive
*( *WSP "," *WSP cache-directive *WSP )]
CRLF
fetch-timeout = "Fetch-Timeout" ":" 1*19DIGIT CRLF
cache-directive = "max-age" "=" delta-seconds
/ "max-stale" ["=" delta-seconds ]
/ "min-fresh" "=" delta-seconds
logging-tag = "Logging-Tag" ":" 1*UTFCHAR CRLF
vendor-specific = "Vendor-Specific-Parameters" ":"
[vendor-specific-av-pair
*(";" vendor-specific-av-pair)] CRLF
vendor-specific-av-pair = vendor-av-pair-name "="
value
vendor-av-pair-name = 1*UTFCHAR
set-cookie = "Set-Cookie:" cookies CRLF
cookies = cookie *("," *LWS cookie)
cookie = attribute "=" value *(";" cookie-av)
cookie-av = "Comment" "=" value
/ "Domain" "=" value
/ "Max-Age" "=" value
/ "Path" "=" value
/ "Secure"
/ "Version" "=" 1*19DIGIT
/ "Age" "=" delta-seconds
set-cookie2 = "Set-Cookie2:" cookies2 CRLF
cookies2 = cookie2 *("," *LWS cookie2)
cookie2 = attribute "=" value *(";" cookie-av2)
cookie-av2 = "Comment" "=" value
/ "CommentURL" "=" DQUOTE uri DQUOTE
/ "Discard"
/ "Domain" "=" value
/ "Max-Age" "=" value
/ "Path" "=" value
/ "Port" [ "=" DQUOTE portlist DQUOTE ]
/ "Secure"
/ "Version" "=" 1*19DIGIT
/ "Age" "=" delta-seconds
portlist = portnum *("," *LWS portnum)
portnum = 1*19DIGIT
; Synthesizer ABNF
synthesizer-method = "SPEAK"
/ "STOP"
/ "PAUSE"
/ "RESUME"
/ "BARGE-IN-OCCURRED"
/ "CONTROL"
/ "DEFINE-LEXICON"
synthesizer-event = "SPEECH-MARKER"
/ "SPEAK-COMPLETE"
synthesizer-header = jump-size
/ kill-on-barge-in
/ speaker-profile
/ completion-cause
/ completion-reason
/ voice-parameter
/ prosody-parameter
/ speech-marker
/ speech-language
/ fetch-hint
/ audio-fetch-hint
/ failed-uri
/ failed-uri-cause
/ speak-restart
/ speak-length
/ load-lexicon
/ lexicon-search-order
jump-size = "Jump-Size" ":" speech-length-value CRLF
speech-length-value = numeric-speech-length
/ text-speech-length
text-speech-length = 1*UTFCHAR SP "Tag"
numeric-speech-length = ("+" / "-") positive-speech-length
positive-speech-length = 1*19DIGIT SP numeric-speech-unit
numeric-speech-unit = "Second"
/ "Word"
/ "Sentence"
/ "Paragraph"
delta-seconds = 1*19DIGIT
kill-on-barge-in = "Kill-On-Barge-In" ":" BOOLEAN
CRLF
speaker-profile = "Speaker-Profile" ":" absoluteURI
CRLF
completion-cause = "Completion-Cause" ":" 3DIGIT SP
1*VCHAR CRLF
completion-reason = "Completion-Reason" ":"
quoted-string CRLF
voice-parameter = voice-gender
/ voice-age
/ voice-variant
/ voice-name
voice-gender = "Voice-Gender:" voice-gender-value CRLF
voice-gender-value = "male"
/ "female"
/ "neutral"
voice-age = "Voice-Age:" 1*3DIGIT CRLF
voice-variant = "Voice-Variant:" 1*19DIGIT CRLF
voice-name = "Voice-Name:"
1*UTFCHAR *(1*WSP 1*UTFCHAR) CRLF
prosody-parameter = "Prosody-" prosody-param-name ":"
[prosody-param-value] CRLF
prosody-param-name = 1*VCHAR
prosody-param-value = 1*VCHAR
timestamp = "timestamp" "=" time-stamp-value
time-stamp-value = 1*20DIGIT
speech-marker = "Speech-Marker" ":"
timestamp
[";" 1*(UTFCHAR / %x20)] CRLF
speech-language = "Speech-Language" ":" 1*VCHAR CRLF
fetch-hint = "Fetch-Hint" ":" ("prefetch" / "safe") CRLF
audio-fetch-hint = "Audio-Fetch-Hint" ":"
("prefetch" / "safe" / "stream") CRLF
failed-uri = "Failed-URI" ":" absoluteURI CRLF
failed-uri-cause = "Failed-URI-Cause" ":" 1*UTFCHAR CRLF
speak-restart = "Speak-Restart" ":" BOOLEAN CRLF
speak-length = "Speak-Length" ":" positive-length-value
CRLF
positive-length-value = positive-speech-length
/ text-speech-length
load-lexicon = "Load-Lexicon" ":" BOOLEAN CRLF
lexicon-search-order = "Lexicon-Search-Order" ":"
"<" absoluteURI ">" *(" " "<" absoluteURI ">") CRLF
; Recognizer ABNF
recognizer-method = recog-only-method
/ enrollment-method
recog-only-method = "DEFINE-GRAMMAR"
/ "RECOGNIZE"
/ "INTERPRET"
/ "GET-RESULT"
/ "START-INPUT-TIMERS"
/ "STOP"
enrollment-method = "START-PHRASE-ENROLLMENT"
/ "ENROLLMENT-ROLLBACK"
/ "END-PHRASE-ENROLLMENT"
/ "MODIFY-PHRASE"
/ "DELETE-PHRASE"
recognizer-event = "START-OF-INPUT"
/ "RECOGNITION-COMPLETE"
/ "INTERPRETATION-COMPLETE"
recognizer-header = recog-only-header
/ enrollment-header
recog-only-header = confidence-threshold
/ sensitivity-level
/ speed-vs-accuracy
/ n-best-list-length
/ input-type
/ no-input-timeout
/ recognition-timeout
/ waveform-uri
/ input-waveform-uri
/ completion-cause
/ completion-reason
/ recognizer-context-block
/ start-input-timers
/ speech-complete-timeout
/ speech-incomplete-timeout
/ dtmf-interdigit-timeout
/ dtmf-term-timeout
/ dtmf-term-char
/ failed-uri
/ failed-uri-cause
/ save-waveform
/ media-type
/ new-audio-channel
/ speech-language
/ ver-buffer-utterance
/ recognition-mode
/ cancel-if-queue
/ hotword-max-duration
/ hotword-min-duration
/ interpret-text
/ dtmf-buffer-time
/ clear-dtmf-buffer
/ early-no-match
enrollment-header = num-min-consistent-pronunciations
/ consistency-threshold
/ clash-threshold
/ personal-grammar-uri
/ enroll-utterance
/ phrase-id
/ phrase-nl
/ weight
/ save-best-waveform
/ new-phrase-id
/ confusable-phrases-uri
/ abort-phrase-enrollment
confidence-threshold = "Confidence-Threshold" ":"
FLOAT CRLF
sensitivity-level = "Sensitivity-Level" ":" FLOAT
CRLF
speed-vs-accuracy = "Speed-Vs-Accuracy" ":" FLOAT
CRLF
n-best-list-length = "N-Best-List-Length" ":" 1*19DIGIT
CRLF
input-type = "Input-Type" ":" [ "speech" / "dtmf" ] CRLF
no-input-timeout = "No-Input-Timeout" ":" 1*19DIGIT
CRLF
recognition-timeout = "Recognition-Timeout" ":" 1*19DIGIT
CRLF
waveform-uri = "Waveform-URI" ":" ["<" absoluteURI ">"
";" "size" "=" 1*19DIGIT
";" "duration" "=" 1*19DIGIT] CRLF
recognizer-context-block = "Recognizer-Context-Block" ":"
[1*VCHAR] CRLF
start-input-timers = "Start-Input-Timers" ":"
BOOLEAN CRLF
speech-complete-timeout = "Speech-Complete-Timeout" ":"
1*19DIGIT CRLF
speech-incomplete-timeout = "Speech-Incomplete-Timeout" ":"
1*19DIGIT CRLF
dtmf-interdigit-timeout = "DTMF-Interdigit-Timeout" ":"
1*19DIGIT CRLF
dtmf-term-timeout = "DTMF-Term-Timeout" ":" 1*19DIGIT
CRLF
dtmf-term-char = "DTMF-Term-Char" ":" VCHAR CRLF
save-waveform = "Save-Waveform" ":" BOOLEAN CRLF
new-audio-channel = "New-Audio-Channel" ":"
BOOLEAN CRLF
recognition-mode = "Recognition-Mode" ":" 1*ALPHA CRLF
cancel-if-queue = "Cancel-If-Queue" ":" BOOLEAN CRLF
hotword-max-duration = "Hotword-Max-Duration" ":"
1*19DIGIT CRLF
hotword-min-duration = "Hotword-Min-Duration" ":"
1*19DIGIT CRLF
interpret-text = "Interpret-Text" ":" 1*VCHAR CRLF
dtmf-buffer-time = "DTMF-Buffer-Time" ":" 1*19DIGIT CRLF
clear-dtmf-buffer = "Clear-DTMF-Buffer" ":" BOOLEAN CRLF
early-no-match = "Early-No-Match" ":" BOOLEAN CRLF
num-min-consistent-pronunciations =
"Num-Min-Consistent-Pronunciations" ":" 1*19DIGIT CRLF
consistency-threshold = "Consistency-Threshold" ":" FLOAT
CRLF
clash-threshold = "Clash-Threshold" ":" FLOAT CRLF
personal-grammar-uri = "Personal-Grammar-URI" ":" uri CRLF
enroll-utterance = "Enroll-Utterance" ":" BOOLEAN CRLF
phrase-id = "Phrase-ID" ":" 1*VCHAR CRLF
phrase-nl = "Phrase-NL" ":" 1*UTFCHAR CRLF
weight = "Weight" ":" weight-value CRLF
weight-value = FLOAT
save-best-waveform = "Save-Best-Waveform" ":"
BOOLEAN CRLF
new-phrase-id = "New-Phrase-ID" ":" 1*VCHAR CRLF
confusable-phrases-uri = "Confusable-Phrases-URI" ":"
uri CRLF
abort-phrase-enrollment = "Abort-Phrase-Enrollment" ":"
BOOLEAN CRLF
; Verifier ABNF
verifier-method = "START-SESSION"
/ "END-SESSION"
/ "QUERY-VOICEPRINT"
/ "DELETE-VOICEPRINT"
/ "VERIFY"
/ "VERIFY-FROM-BUFFER"
/ "VERIFY-ROLLBACK"
/ "STOP"
/ "START-INPUT-TIMERS"
/ "GET-INTERMEDIATE-RESULT"
verifier-event = "VERIFICATION-COMPLETE"
/ "START-OF-INPUT"
verifier-header = repository-uri
/ voiceprint-identifier
/ verification-mode
/ adapt-model
/ abort-model
/ min-verification-score
/ num-min-verification-phrases
/ num-max-verification-phrases
/ no-input-timeout
/ save-waveform
/ media-type
/ waveform-uri
/ voiceprint-exists
/ ver-buffer-utterance
/ input-waveform-uri
/ completion-cause
/ completion-reason
/ speech-complete-timeout
/ new-audio-channel
/ abort-verification
/ start-input-timers
/ input-type
repository-uri = "Repository-URI" ":" uri CRLF
voiceprint-identifier = "Voiceprint-Identifier" ":"
1*VCHAR "." 3VCHAR
[";" 1*VCHAR "." 3VCHAR] CRLF
verification-mode = "Verification-Mode" ":"
verification-mode-string CRLF
verification-mode-string = "train" / "verify"
adapt-model = "Adapt-Model" ":" BOOLEAN CRLF
abort-model = "Abort-Model" ":" BOOLEAN CRLF
min-verification-score = "Min-Verification-Score" ":"
[ %x2D ] FLOAT CRLF
num-min-verification-phrases = "Num-Min-Verification-Phrases"
":" 1*19DIGIT CRLF
num-max-verification-phrases = "Num-Max-Verification-Phrases"
":" 1*19DIGIT CRLF
voiceprint-exists = "Voiceprint-Exists" ":"
BOOLEAN CRLF
ver-buffer-utterance = "Ver-Buffer-Utterance" ":"
BOOLEAN CRLF
input-waveform-uri = "Input-Waveform-URI" ":" uri CRLF
abort-verification = "Abort-Verification" ":"
BOOLEAN CRLF
; Recorder ABNF
recorder-method = "RECORD"
/ "STOP"
recorder-event = "START-OF-INPUT"
/ "RECORD-COMPLETE"
recorder-header = sensitivity-level
/ no-input-timeout
/ completion-cause
/ completion-reason
/ failed-uri
/ failed-uri-cause
/ record-uri
/ media-type
/ max-time
/ trim-length
/ final-silence
/ capture-on-speech
/ new-audio-channel
/ start-input-timers
/ input-type
record-uri = "Record-URI" ":" [ "<" uri ">"
";" "size" "=" 1*19DIGIT
";" "duration" "=" 1*19DIGIT] CRLF
media-type = "Media-Type" ":" media-type-value CRLF
max-time = "Max-Time" ":" 1*19DIGIT CRLF
trim-length = "Trim-Length" ":" 1*19DIGIT CRLF
final-silence = "Final-Silence" ":" 1*19DIGIT CRLF
capture-on-speech = "Capture-On-Speech" ":"
BOOLEAN CRLF]]></artwork>
</figure>
<t></t>
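<t>For illustration, the three start-line forms defined above can be
instantiated as follows; the message-length values shown are
illustrative only:</t>
<figure>
<artwork><![CDATA[
request-line:  MRCP/2.0 267 SPEAK 543257
status-line:   MRCP/2.0 79 543257 200 IN-PROGRESS
event-line:    MRCP/2.0 111 SPEAK-COMPLETE 543257 COMPLETE
]]></artwork>
</figure>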
<t>The following productions add a new SDP media-level attribute. See
<xref target="cmid"></xref>.</t>
<figure>
<artwork><![CDATA[cmid-attribute = "a=cmid:" identification-tag
identification-tag = token
]]></artwork>
</figure>
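<t>For example, in a hypothetical SDP description the control
channel's media description could be associated with an audio stream
as follows, where the "cmid" value matches the "mid" value of the
audio stream it controls:</t>
<figure>
<artwork><![CDATA[
m=application 32416 TCP/MRCPv2 1
a=channel:32AECB23433801@speechrecog
a=cmid:1
m=audio 48260 RTP/AVP 0
a=mid:1
]]></artwork>
</figure>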
</section>
<section title="XML Schemas">
<section anchor="sec.schema.NLSML" title="NLSML Schema Definition">
<figure>
<artwork><![CDATA[
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.ietf.org/xml/ns/mrcpv2"
xmlns="http://www.ietf.org/xml/ns/mrcpv2"
elementFormDefault="qualified"
attributeFormDefault="unqualified" >
<xs:annotation>
<xs:documentation> Natural Language Semantic Markup Schema
</xs:documentation>
</xs:annotation>
<xs:include schemaLocation="enrollment-schema.rng"/>
<xs:include schemaLocation="verification-schema.rng"/>
<xs:element name="result">
<xs:complexType>
<xs:sequence>
<xs:element name="interpretation" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="instance" minOccurs="0">
<xs:complexType mixed="true">
<xs:sequence minOccurs="0">
<xs:any namespace="##other" processContents="lax"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="input">
<xs:complexType mixed="true">
<xs:choice>
<xs:element name="noinput" minOccurs="0"/>
<xs:element name="nomatch" minOccurs="0"/>
<xs:element name="input" minOccurs="0"/>
</xs:choice>
<xs:attribute name="mode"
type="xs:string"
default="speech"/>
<xs:attribute name="confidence"
type="confidenceinfo"
default="1.0"/>
<xs:attribute name="timestamp-start"
type="xs:string"/>
<xs:attribute name="timestamp-end"
type="xs:string"/>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute name="confidence" type="confidenceinfo"
default="1.0"/>
<xs:attribute name="grammar" type="xs:anyURI"
use="optional"/>
</xs:complexType>
</xs:element>
<xs:element name="enrollment-result"
type="enrollment-contents"/>
<xs:element name="verification-result"
type="verification-contents"/>
</xs:sequence>
<xs:attribute name="grammar" type="xs:anyURI"
use="optional"/>
</xs:complexType>
</xs:element>
<xs:simpleType name="confidenceinfo">
<xs:restriction base="xs:float">
<xs:minInclusive value="0.0"/>
<xs:maxInclusive value="1.0"/>
</xs:restriction>
</xs:simpleType>
</xs:schema>
]]></artwork>
</figure>
</section>
<section anchor="sec.enrollmentResultsSchema"
title="Enrollment Results Schema Definition">
<figure>
<artwork><![CDATA[
<!-- MRCP Enrollment Schema
(See http://www.oasis-open.org/committees/relax-ng/spec.html)
-->
<grammar datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes"
ns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns="http://relaxng.org/ns/structure/1.0">
<start>
<element name="enrollment-result">
<ref name="enrollment-content"/>
</element>
</start>
<define name="enrollment-content">
<interleave>
<element name="num-clashes">
<data type="nonNegativeInteger"/>
</element>
<element name="num-good-repetitions">
<data type="nonNegativeInteger"/>
</element>
<element name="num-repetitions-still-needed">
<data type="nonNegativeInteger"/>
</element>
<element name="consistency-status">
<choice>
<value>consistent</value>
<value>inconsistent</value>
<value>undecided</value>
</choice>
</element>
<optional>
<element name="clash-phrase-ids">
<oneOrMore>
<element name="item">
<data type="token"/>
</element>
</oneOrMore>
</element>
</optional>
<optional>
<element name="transcriptions">
<oneOrMore>
<element name="item">
<text/>
</element>
</oneOrMore>
</element>
</optional>
<optional>
<element name="confusable-phrases">
<oneOrMore>
<element name="item">
<text/>
</element>
</oneOrMore>
</element>
</optional>
</interleave>
</define>
</grammar>
]]></artwork>
</figure>
</section>
<section anchor="sec.verificationResultsSchema"
title="Verification Results Schema Definition">
<figure>
<artwork><![CDATA[
<!-- MRCP Verification Results Schema
(See http://www.oasis-open.org/committees/relax-ng/spec.html)
-->
<grammar datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes"
ns="http://www.ietf.org/xml/ns/mrcpv2"
xmlns="http://relaxng.org/ns/structure/1.0">
<start>
<element name="verification-result">
<ref name="verification-contents"/>
</element>
</start>
<define name="verification-contents">
<element name="voiceprint">
<ref name="firstVoiceprintContent"/>
</element>
<zeroOrMore>
<element name="voiceprint">
<ref name="restVoiceprintContent"/>
</element>
</zeroOrMore>
</define>
<define name="firstVoiceprintContent">
<attribute name="id">
<data type="string"/>
</attribute>
<interleave>
<optional>
<element name="adapted">
<data type="boolean"/>
</element>
</optional>
<optional>
<element name="needmoredata">
<ref name="needmoredataContent"/>
</element>
</optional>
<optional>
<element name="incremental">
<ref name="firstCommonContent"/>
</element>
</optional>
<element name="cumulative">
<ref name="firstCommonContent"/>
</element>
</interleave>
</define>
<define name="restVoiceprintContent">
<attribute name="id">
<data type="string"/>
</attribute>
<element name="cumulative">
<ref name="restCommonContent"/>
</element>
</define>
<define name="firstCommonContent">
<interleave>
<element name="decision">
<ref name="decisionContent"/>
</element>
<optional>
<element name="utterance-length">
<ref name="utterance-lengthContent"/>
</element>
</optional>
<optional>
<element name="device">
<ref name="deviceContent"/>
</element>
</optional>
<optional>
<element name="gender">
<ref name="genderContent"/>
</element>
</optional>
<zeroOrMore>
<element name="verification-score">
<ref name="verification-scoreContent"/>
</element>
</zeroOrMore>
</interleave>
</define>
<define name="restCommonContent">
<interleave>
<optional>
<element name="decision">
<ref name="decisionContent"/>
</element>
</optional>
<optional>
<element name="device">
<ref name="deviceContent"/>
</element>
</optional>
<optional>
<element name="gender">
<ref name="genderContent"/>
</element>
</optional>
<zeroOrMore>
<element name="verification-score">
<ref name="verification-scoreContent"/>
</element>
</zeroOrMore>
</interleave>
</define>
<define name="decisionContent">
<choice>
<value>accepted</value>
<value>rejected</value>
<value>undecided</value>
</choice>
</define>
<define name="needmoredataContent">
<data type="boolean"/>
</define>
<define name="utterance-lengthContent">
<data type="nonNegativeInteger"/>
</define>
<define name="deviceContent">
<choice>
<value>cellular-phone</value>
<value>electret-phone</value>
<value>carbon-button-phone</value>
<value>unknown</value>
</choice>
</define>
<define name="genderContent">
<choice>
<value>male</value>
<value>female</value>
<value>unknown</value>
</choice>
</define>
<define name="verification-scoreContent">
<data type="float">
<param name="minInclusive">-1</param>
<param name="maxInclusive">1</param>
</data>
</define>
</grammar>
]]></artwork>
</figure>
</section>
</section>
</middle>
<back>
<references title="Normative References">
<!--RTP-->
&rfc3550;
<!--SIP-->
&rfc3261;
<!--RTSP-->
&rfc2326;
<!--SDP: Session Description Protocol-->
&rfc4566;
<!--Key words for use in RFCs to Indicate Requirement Levels-->
&rfc2119;
<!--HTTP/1.1-->
&rfc2616;
<!--Offer/Answer Model with Session Description Protocol (SDP)-->
&rfc3264;
<!--UTF-8, a transformation format of Unicode and ISO 10646-->
&rfc3629;
<!--ABNF-->
&rfc5234;
<!--Connection-Oriented Media Transport in the Session Description Protocol (SDP)-->
&rfc4145;
<!-- TLS profile for Comedia -->
&rfc4572;
<!--Grouping of Media Lines in the Session Description Protocol -->
&rfc3388;
<!--Internet Message Format-->
&rfc5322;
<!--Content-ID and Message-ID Uniform Resource Locators-->
&rfc2392;
<!--HTTP State Management Mechanism-->
&rfc2109;
<!--HTTP State Management Mechanism-->
&rfc2965;
<!--Tags for the Identification of Languages-->
&rfc4646;
<!--Guidelines for Writing an IANA Considerations Section in RFCs-->
&rfc5226;
<!--Domain names - implementation and specification-->
&rfc1035;
<!--MIME Registration Procedures-->
&rfc4288;
<!--IETF XML Registry-->
&rfc3688;
<!--Registration Procedures for URL Scheme Names-->
&rfc4395;
<!--Security Descriptions for SDP-->
&rfc4568;
<!-- Speech Synthesis-->
&synth;
<!-- text/uri-list definition -->
&rfc2483;
<!-- SRTP -->
&rfc3711;
<!-- Grammar -->
&grxml;
<!-- Semantic Interpretation -->
<reference anchor="W3C.REC-semantic-interpretation-20070405"
target="http://www.w3.org/TR/2007/REC-semantic-interpretation-20070405">
<front>
<title>Semantic Interpretation for Speech Recognition (SISR) Version
1.0</title>
<author fullname="Luc Van Tichelen" initials="L." surname="Van Tichelen">
<organization>Nuance Communications</organization>
</author>
<author fullname="David Burke" initials="D." surname="Burke">
<organization>VoxPilot</organization>
</author>
<date day="5" month="April" year="2007" />
</front>
<seriesInfo name="World Wide Web Consortium REC"
value="REC-semantic-interpretation-20070405" />
<format target="http://www.w3.org/TR/2007/REC-semantic-interpretation-20070405"
type="HTML" />
</reference>
<!-- XML Name Spaces-->
&names;
</references>
<references title="Informative References">
<!--SPEECHSC Requirements-->
&rfc4313;
<reference anchor="Q.23">
<front>
<title>Technical Features of Push-Button Telephone Sets</title>
<author>
<organization>International Telecommunications
Union</organization>
</author>
<date year="1993" />
</front>
<seriesInfo name="ITU-T" value="Q.23" />
</reference>
<!--DTMF in RTP-->
&rfc4733;
<!-- VoiceXML 2.0 -->
&voicexml;
<!--MRCP V1-->
&rfc4463;
<!--ABNF-->
&rfc2234;
<reference anchor="refs.javaSpeechGrammarFormat">
<front>
<title>Java Speech Grammar Format Version 1.0</title>
<author>
<organization>Sun Microsystems</organization>
</author>
<date day="26" month="October" year="1998" />
</front>
<format target="http://java.sun.com/products/java-media/speech/forDevelopers/JSGF/"
type="HTML" />
</reference>
<reference anchor="W3C.REC-emma-20090210"
target="http://www.w3.org/TR/2009/REC-emma-20090210">
<front>
<title>EMMA: Extensible MultiModal Annotation markup
language</title>
<author fullname="Michael Johnston" initials="M." surname="Johnston">
<organization>AT&amp;T</organization>
</author>
<author fullname="Paolo Baggia" initials="P." surname="Baggia">
<organization>Loquendo</organization>
</author>
<author fullname="Daniel C. Burnett" initials="D." surname="Burnett">
<organization>Nuance</organization>
</author>
<author fullname="Jerry Carter" initials="J." surname="Carter">
<organization>Nuance</organization>
</author>
<author fullname="Deborah A. Dahl" initials="D." surname="Dahl">
<organization>Invited Expert</organization>
</author>
<author fullname="Gerry McCobb" initials="G." surname="McCobb">
<organization>IBM</organization>
</author>
<author fullname="Dave Raggett" initials="D." surname="Raggett">
<organization>W3C</organization>
</author>
<date day="10" month="February" year="2009" />
</front>
<seriesInfo name="World Wide Web Consortium Recommendation"
value="REC-emma-20090210" />
<format target="http://www.w3.org/TR/2009/REC-emma-20090210"
type="HTML" />
</reference>
<!--URLAUTH IMAP extension-->
&rfc4467;
<reference anchor="W3C.REC-pronunciation-lexicon-20081014"
target="http://www.w3.org/TR/2008/REC-pronunciation-lexicon-20081014">
<front>
<title>Pronunciation Lexicon Specification (PLS)</title>
<author fullname="Paolo Baggia" initials="P." surname="Baggia">
<organization>Loquendo</organization>
</author>
<author fullname="Paul Bagshaw" initials="P." surname="Bagshaw">
<organization>France Telecom</organization>
</author>
<author fullname="Daniel C. Burnett" initials="D." surname="Burnett">
<organization>Voxeo</organization>
</author>
<author fullname="Jerry Carter" initials="J." surname="Carter">
<organization>Nuance</organization>
</author>
<author fullname="Frank Scahill" initials="F." surname="Scahill">
<organization>BT</organization>
</author>
<date day="14" month="October" year="2008" />
</front>
<seriesInfo name="World Wide Web Consortium Recommendation"
value="REC-pronunciation-lexicon-20081014" />
<format target="http://www.w3.org/TR/2008/REC-pronunciation-lexicon-20081014"
type="HTML" />
</reference>
</references>
<section title="Contributors">
<figure>
<artwork><![CDATA[
Pierre Forgues
Nuance Communications Ltd.
1500 University Street
Suite 935
Montreal, Quebec
Canada H3A 3S7
Email: forgues@nuance.com

Charles Galles
Intervoice, Inc.
17811 Waterview Parkway
Dallas, Texas 75252
Email: charles.galles@intervoice.com

Klaus Reifenrath
Scansoft, Inc
Guldensporenpark 32
Building D
9820 Merelbeke
Belgium
Email: klaus.reifenrath@scansoft.com
]]></artwork>
</figure>
</section>
<section title="Acknowledgements">
<figure>
<artwork><![CDATA[
Andre Gillet (Nuance Communications)
Andrew Hunt (ScanSoft)
Andrew Wahbe (Genesys)
Aaron Kneiss (ScanSoft)
Brian Eberman (ScanSoft)
Corey Stohs (Cisco Systems Inc)
Dave Burke (VoxPilot)
Jeff Kusnitz (IBM Corp)
Ganesh N Ramaswamy (IBM Corp)
Klaus Reifenrath (ScanSoft)
Kristian Finlator (ScanSoft)
Magnus Westerlund (Ericsson)
Martin Dragomirecky (Cisco Systems Inc)
Paolo Baggia (Loquendo)
Peter Monaco (Nuance Communications)
Pierre Forgues (Nuance Communications)
Ran Zilca (IBM Corp)
Suresh Kaliannan (Cisco Systems Inc.)
Skip Cave (Intervoice Inc)
Thomas Gal (LumenVox)
]]></artwork>
</figure>
<t>The chairs of the SpeechSC working group are Eric Burger (Georgetown
University) and Dave Oran (Cisco Systems, Inc.).</t>
</section>
</back>
</rfc>