<?xml version="1.0" encoding="US-ASCII"?>
<!DOCTYPE rfc PUBLIC "-//IETF//DTD RFC 2629//EN"
  "http://xml.resource.org/authoring/rfc2629.dtd" [
<!ENTITY RFC3261 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3261.xml">
<!ENTITY RFC3365 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3365.xml">
<!ENTITY RFC3552 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3552.xml">
<!ENTITY RFC3704 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.3704.xml">
<!ENTITY RFC4322 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4322.xml">
<!ENTITY RFC6120 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6120.xml">
<!ENTITY RFC6817 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6817.xml">
<!ENTITY RFC6962 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.6962.xml">
<!ENTITY RFC7258 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7258.xml">
<!ENTITY RFC7435 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7435.xml">
<!ENTITY I-D.barnes-pervasive-problem SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.barnes-pervasive-problem.xml">
<!ENTITY I-D.kent-opportunistic-security SYSTEM "http://xml.resource.org/public/rfc/bibxml3/reference.I-D.kent-opportunistic-security.xml">
<!ENTITY RFC4252 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.4252.xml">
]>

<?xml-stylesheet type='text/xsl'
  href='http://xml.resource.org/authoring/rfc2629.xslt' ?>
<?xml-stylesheet type='text/css' href='rfc2629.css' ?>

<?rfc comments="yes" ?>
<?rfc strict="yes" ?>
<?rfc toc="yes"?>
<?rfc tocdepth="1" ?>
<?rfc symrefs="yes" ?>
<?rfc sortrefs="yes" ?>
<?rfc compact="yes" ?>
<?rfc subcompact="no" ?>
<?rfc inline="yes" ?>
<rfc category="info" ipr="trust200902" submissionType="IAB"
  docName="draft-iab-strint-report-03">
  <front>
    <title abbrev="STRINT Workshop Report">Report from the Strengthening the Internet (STRINT) workshop</title>
    <author initials="S." surname="Farrell" fullname="Stephen Farrell">
      <organization>Trinity College, Dublin</organization>
      <address>
	<email>stephen.farrell@cs.tcd.ie</email>
      </address>
    </author>
    <author initials="R." surname="Wenning" fullname="Rigo Wenning">
      <organization abbrev="W3C">World Wide Web Consortium</organization>
      <address>
	<postal>
	  <street>2004, route des Lucioles</street>
	  <street>B.P. 93</street>
	  <code>06902</code>
	  <city>Sophia-Antipolis</city>
	  <country>France</country>
	</postal>
	<email>rigo@w3.org</email>
      </address>
    </author>
    <author initials="B." surname="Bos" fullname="Bert Bos">
      <organization abbrev="W3C">World Wide Web Consortium</organization>
      <address>
	<postal>
	  <street>2004, route des Lucioles</street>
	  <street>B.P. 93</street>
	  <code>06902</code>
	  <city>Sophia-Antipolis</city>
	  <country>France</country>
	</postal>
	<email>bert@w3.org</email>
      </address>
    </author>

<author initials='M.' surname="Blanchet" fullname='Marc Blanchet'>
  <organization>Viagenie</organization>
  <address>
    <postal>
      <street>246 Aberdeen</street>
      <city>Quebec</city>
      <region>QC</region>
      <code>G1R 2E1</code>
      <country>Canada</country>
    </postal>
    <email>Marc.Blanchet@viagenie.ca</email>
    <uri>http://viagenie.ca</uri>
  </address>
</author>

    <author initials="H.T." surname="Tschofenig" fullname="Hannes Tschofenig">
      <organization>ARM Ltd.</organization>
      <address>
        <postal>
          <street>110 Fulbourn Rd</street>
          <city>Cambridge</city>
          <code>CB1 9NJ</code>
          <country>Great Britain</country>
        </postal>
        <email>Hannes.tschofenig@gmx.net</email>
        <uri>http://www.tschofenig.priv.at</uri>
      </address>
    </author>
    <date/> 
    <area>Security</area>
    <keyword>IAB</keyword>
    <keyword>W3C</keyword>
    <keyword>STREWS</keyword>
    <keyword>security</keyword>
    <keyword>pervasive monitoring</keyword>
    <keyword>London</keyword>
    <abstract>
      <t>The Strengthening the Internet (STRINT) workshop assembled one hundred participants in 
      London for two days in early 2014 to discuss how the technical
      community, and in particular the IETF and the W3C, should react
      to Pervasive Monitoring and more generally how to strengthen
	  the Internet in the face of such attacks. The discussions covered issues of
      terminology, the role of user interfaces, classes of mitigation,
      some specific use cases, transition strategies (including
      opportunistic encryption), and more. The workshop ended with a few
      high-level recommendations, which it is believed could be implemented
      and which could help strengthen the Internet. This is the report of that
      workshop.</t>
      <t>Note that this document is a report on the proceedings of the
workshop.  The views and positions documented in this report are
those of the workshop participants and do not necessarily reflect IAB
views and positions.</t>
    </abstract>
  </front>
  <middle>

    <section anchor="context" title="Context">

      <t>The Vancouver IETF plenary<xref target='vancouverplenary'/> concluded that Pervasive
      Monitoring (PM)
      represents an attack on
      the Internet, and the IETF has begun to carry out 
      the more obvious actions required to try to handle this attack.
      However, there are additional much more complex questions
      arising that need further consideration before any additional
      concrete plans can be made.</t>

      <t>The <eref target="http://www.w3.org/" >W3C</eref> and <eref
      target="https://www.iab.org/" >IAB</eref> therefore decided to
      host a <eref
      target="https://www.w3.org/2014/strint/Overview.html"
      >workshop</eref> on the topic of “Strengthening the Internet
      Against Pervasive Monitoring” before <eref
      target="https://www.ietf.org/meeting/89/index.html"
      >IETF 89</eref> in London in March 2014.
      The FP7-funded 
	<eref target="http://www.strews.eu/" >STREWS</eref> 
	project 
		organised the STRINT workshop on behalf of the IAB and W3C.
		</t>

		<t>
	The main workshop goal was to discuss what can be
      done, especially by the two standards organisations IETF and
      W3C, against PM, both for existing Internet
      protocols (HTTP/1, SMTP, etc.) and for new ones (WebRTC, HTTP/2,
      etc.).</t>

		<t>The starting point for the workshop was the existing
		IETF consensus that PM is an attack <xref target="RFC7258"/> (the text of which
had achieved IETF consensus at the time of the workshop, even though
the RFC had yet to be published).
		</t>

    </section>

    <section anchor="summary" title="Summary">

      <t>The workshop was well attended (registration closed when the
      maximum capacity of 100 was reached, but more than
      150 expressed a desire to register) and several people (about 165 at the
      maximum) listened to the streaming audio. The submitted papers
      (67 in total) were generally of good quality and all were
	  published, except for a few where authors who couldn't take part in
	  the workshop preferred not to publish. 
      </t>

      <t>The chairs of the workshop summarised the workshop in the
      final session in the form of the following recommendations:

	<list style="numbers">
	  <t>Well-implemented cryptography can be effective against PM and will benefit the Internet if used more, despite its
	  cost, which is steadily decreasing anyway.</t>

	  <t>Traffic analysis also needs to be considered, but is less well understood in
		the Internet community:
	  relevant research and protocol mitigations such
		as data minimisation need to
	  be better understood.</t>


	  <t>Work should continue on progressing the PM threat model
	draft<xref target="I-D.barnes-pervasive-problem"/> discussed in the workshop.  
	  </t>

	<t>
	Later, the IETF may be in a position
	to start to develop an update to
	  BCP 72 <xref target="RFC3552"/>, most likely as a new RFC enhancing
        that BCP and dealing with recommendations on how to
	mitigate PM and how to reflect that in IETF work.</t>

	  <t>The term "Opportunistic" has been widely used
		to refer to a possible mitigation strategy for PM. The community needs
		to document definition(s) for this term, as it is being
		used differently by different people and in different
		contexts. We may also be able to develop a
	  cookbook-like set of related protocol techniques for developers.
		Since the workshop, the IETF's security area has taken up
		this work, most recently favouring the generic term "Opportunistic
		Security" (OS) <xref target="I-D.kent-opportunistic-security"/>. Subsequent work on this topic resulted in the
publication of a definition of OS in <xref target="RFC7435"/>.
	  </t>

	  <t>The technical community could do better in explaining the
	  real technical downsides related to PM in terms that policy makers
		can understand.</t>

	  <t>Many User Interfaces (UIs) could be better in terms of how they
		present security state, though this is a genuinely hard
 		problem. There may be benefits if
	  certain dangerous choices were simply no longer
	  offered. But that could require significant co-ordination among
	  competing software makers; otherwise, some will be considered "broken"
	  by users. </t>

		<t>Further discussion is needed on ways to better integrate UI issues into the processes of
	  IETF and W3C.</t>

	  <t>Examples of good software configurations
	  that can be cut and pasted for popular
	  software, etc., can help. This is not necessarily standards
		work, but
	  maybe the standards organisations can help and can work
		with those developing such package-specific documentation.</t>

	  <t>The IETF and W3C can do more so that default
	  ("out-of-the-box") settings for protocols better protect
	  security and privacy.</t>

	  <t><eref
	  target="https://en.wikipedia.org/wiki/Captive_portal"
	  >Captive portals</eref> (and some firewalls, too) can and
	  should be distinguished from real man-in-the-middle
	  attacks. This might mean establishing common
	  conventions with makers of such middleboxes, but might also need
		new protocols. However, the incentives for deploying such
		new middlebox features might not align.</t>
	</list></t>

    </section>

    <section anchor="goals" title="Workshop goals">
      <t>As stated, the STRINT workshop started from the position
	<xref target="RFC7258"/> that PM is an attack.
      While some dissenting voices are expected and need to be heard, that was the baseline assumption
      for the workshop, and the high-level goal was to provide 
      more consideration of that and how it ought to affect future work
      within the IETF and W3C.</t>

      <t>At the next level down the goals of the STRINT workshop were
      to:
	<list style="symbols">
	  <t>Discuss and hopefully come to agreement among the
	  participants on concepts in PM for both threats and
	  mitigation, e.g., “opportunistic” as the term applies to
	  cryptography.</t>

	  <t>Discuss the PM threat model, and how that might be
	  usefully documented for the IETF at least, e.g., via an
	  update to <eref target="http://tools.ietf.org/html/bcp72"
	  >BCP 72</eref>.</t>

	  <t>Discuss and progress common understanding in the
	  trade-offs between mitigating and suffering PM.</t>

	  <t>Identify weak links in the chain of Web security
	  architecture with respect to PM.</t>

	  <t>Identify potential work items for the IETF, IAB, IRTF and
	  W3C that would help mitigate PM.</t>

	  <t>Discuss the kinds of action outside the IETF/W3C context
	  that might help those done within the IETF/W3C.</t>
	</list>

      </t>
    </section>

    <section anchor="structure" title="Workshop structure">

      <t>The workshop structure was designed to maximise discussion
      time. There were no direct presentations of submitted
      papers. Instead, the moderators of each session summarised
		topics that the Technical Programme Committee (TPC) had
 	  agreed based on the submitted
      papers. 
      These summary presentations took at most 50% of the session and
      usually less.</t>

      <t>Because the papers would not be presented during the
      workshop, participants were asked to read and discuss the
      papers beforehand, at least those relevant to their fields of
      interest. (To help people choose papers to read, 
      authors were asked to provide short abstracts.)</t>

      <t>Most of the sessions had two moderators, one
      to lead the discussion, while the other managed the queue of
      people who wanted to speak. This worked well: everybody got a
      chance to speak and each session still ended on time.</t>

      <t>The penultimate session consisted of
      break-outs (which turned out to be the most productive
		sessions of all, most likely simply due to the smaller
		numbers of people involved). The subjects for the break-outs were 
      agreed during the 
      earlier sessions and just before the break-out session the
      participants collectively determined who would attend which.</t>

    </section>


    <section anchor="topics" title="Topics">
      <t>The following sections contain summaries of the various
      sessions. See the minutes (see <xref target="agenda"/>) for more
      details.</t>
      <section anchor="opening" title="Opening session">
	<t>The first session discussed the goals of the
	workshop. Possible approaches to improving security in the
	light of pervasive monitoring include a critical look at what
	metadata is actually required, whether old (less secure)
	devices can be replaced with new ones, what are
	"low-hanging fruit" (issues that can be handled quickly and
	easily), and what level of security is “good enough”: a good
	solution may be one that is good for 90% of people or 90% of
	organisations.</t>

	<t>Some participants felt that standards are needed so that people can see if their
	systems conform to a certain level of security, and easy-to-remember names
	are needed for those standards, so that a buyer can immediately see
	that a product “conforms to the named standard.”</t>

      </section>

      <section anchor="threats" title="Threats">

	<t>One difference between “traditional” attacks and pervasive
	monitoring is the modus operandi of the attacker: typically, one
	determines what resources an attacker might want to target and
	at what cost, and then one defends against that threat. But a
	pervasive attacker has no specific targets, other than to
	collect everything he can. The calculation of the cost of
	losing resources vs. the cost of protecting them is thus
	different. And unlike someone motivated to make money,
	a PM attacker may not be concerned about the cost of the
	attack (or may even prefer a higher cost, for "empire building" reasons).</t>

	<t>The terminology used to talk about threats has to be chosen
	carefully (this was a common theme in several sessions),
	because we need to explain to people outside the technical
	community what they need to do or not do. For example, authentication
	of endpoints doesn't so much “protect against”
	man-in-the-middle (MITM) attacks as make them visible. The
	attacker can still attack, but cannot remain invisible while
	doing so. Somebody on either end of the conversation needs to
	react to the alert from the system: stop the conversation or
	find a different channel.</t>

	<t>Paradoxically, while larger sites such as Facebook, Yahoo, and
Google supervise the security of their respective services more closely than
smaller sites do, such large sites also offer a much more attractive target to
attack. Avoiding overuse of such repositories for private or sensitive
	information may be a useful measure that increases the
	cost of collection for a pervasive attacker. This is sometimes
	called the target-dispersal approach.</t>

	<t>Lack of interoperability between systems can lead to poorly-thought-out
work-arounds and compromises that may themselves introduce vulnerabilities.
	Improving interoperability therefore needs to be high on the list
	of priorities of standards makers, and even more so for implementers.
        Of course,
	testing, such as interop testing, is at some level part of the process of the IETF and W3C; and
	W3C is currently increasing its testing efforts.</t>

      </section>

      <section anchor="comsec-usage" title="Increase usage of security tools">

	<t>The first session on Communication Security (COMSEC) tools
	looked at the question why existing security tools aren't used
	more.</t>

	<t>The example of the public key infrastructure used to secure HTTP
is informative.  One problem is that certificate authorities (CAs) may issue a certificate for any domain.  Thus a single compromised CA can be used in combination with a MITM to impersonate any server.    Moreover, ongoing administration, including requesting, paying for and installing new certificates, has proven over time to be an insurmountable barrier for many web site administrators, leading them not to bother to secure their systems.</t>

	<t>Some ideas were discussed for improving the CA system, e.g., via
	cross-certification of CAs and by means of “certificate
	transparency”: a public, permanent log of who issued which
	certificate. <xref target="RFC6962"/></t>

	<t>Using other models than the hierarchical certificate model
	(as alternative or in combination) may also help. PGP demonstrates a model known as a "web of trust" where people verify the public key of the people they meet.  Because there is no innate transitive trust in PGP, it is appropriate only for small scale uses; an example being a team of people working on a project.</t>

	<t anchor="tofu">Yet another model is “trust on first use”
	(TOFU). This is used quite effectively by SSH <xref target='RFC4252'/>. On the first
	connection, one has no way to verify that the received public key
        belongs to the server one is contacting; therefore, the key is accepted without further verification. But on subsequent
	connections, one can verify that the received key is the same key as the
	first time. So a MITM has to be there on all connections,
	including the first; otherwise it will be detected by a key mismatch.</t>
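	<t>The TOFU behaviour described above can be sketched in a few
	lines. This is a hypothetical illustration (the key store
	location and function names are invented, and SSH itself works
	differently in detail):</t>
	<figure><artwork><![CDATA[
```python
# Hypothetical sketch of the TOFU model described above: accept an
# unknown key on first contact, then require the same key afterwards.
# The store path and helper names are illustrative, not from any real tool.
import hashlib
import json
import os

STORE = os.path.expanduser("~/.tofu_known_keys.json")  # assumed location

def _load():
    if os.path.exists(STORE):
        with open(STORE) as f:
            return json.load(f)
    return {}

def check_key(host, public_key):
    """Return 'first-use', 'match', or 'MISMATCH' for this host/key pair."""
    fingerprint = hashlib.sha256(public_key).hexdigest()
    known = _load()
    if host not in known:
        known[host] = fingerprint          # trust on first use
        with open(STORE, "w") as f:
            json.dump(known, f)
        return "first-use"
    # Subsequent connections: the key must not change.
    return "match" if known[host] == fingerprint else "MISMATCH"
```
]]></artwork></figure>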

	<t>This works well for SSH, because people typically use SSH
	to communicate with a small number of servers over and over
	again. And, if they want, they may find a separate
	channel to get the public key (or its fingerprint). It may
	also work for Web servers used by small groups (the server of
	a sports club, a department of a company, etc.), but probably
	works less well for public servers that are visited once or a few times
	or for large services where many servers may be used.</t>

	  <t>A similar proposal <xref
	  target="draft-ietf-websec-key-pinning"/> for an HTTP header introduces an aspect of
	  TOFU into HTTP: Key pinning tells HTTP clients that for a
	  certain time after receiving this certificate, they should
	  not expect the certificate to change. If it does, even if
	  the new certificate looks valid, the client should assume a
	  security breach. </t>
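	<t>The pinning idea can be illustrated with a small sketch.
	This is a hypothetical model only (the names and pin store are
	invented; the actual proposal carries base64 hashes of the
	server's public key plus a lifetime in an HTTP response
	header):</t>
	<figure><artwork><![CDATA[
```python
# Hypothetical sketch of key pinning: after first seeing a server's
# public key, remember a hash of it for a fixed period and treat any
# change within that period as a security breach, even if the new
# certificate would otherwise validate. Names are illustrative.
import hashlib
import time

PIN_LIFETIME = 30 * 24 * 3600  # e.g. 30 days; set by the server in practice

pins = {}  # host -> (key_hash, expiry_time)

def check_pin(host, spki, now=None):
    now = time.time() if now is None else now
    digest = hashlib.sha256(spki).hexdigest()
    pin = pins.get(host)
    if pin is None or pin[1] < now:          # no pin yet, or pin expired
        pins[host] = (digest, now + PIN_LIFETIME)
        return "pinned"
    if pin[0] == digest:
        return "ok"
    return "POSSIBLE-BREACH"                 # valid-looking but changed key
```
]]></artwork></figure>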

	<t>Session Initiation Protocol (SIP) <xref target="RFC3261"/> can require several different intermediaries in different stages of the communication to deal with NAT traversal and to handle policy.  While both hop-by-hop and end-to-end encryption are specified, in practice many SIP providers disable these functions.  The reasons for disabling end-to-end security here
	are understandable: to overcome lack of interoperability they
	often need to change protocol headers and modify protocol
	data. Some workshop participants argued that SIP would never have taken off
	if it hadn't been possible for providers to monitor and
	interfere in communications in this way. Of course, that means
	an attacker can listen in just as easily.</t>

	<t>A new protocol for peer-to-peer communication of video and
	audio (and potentially other data) is WebRTC. 
        WebRTC
	re-uses many of the same architectural concepts as SIP, but 
	there is a reasonable
	chance that it can do better in terms of protecting users: The
	people implementing the protocols and offering the service
	have different goals and interests. In particular, the first
	implementers are browser makers, who may have different business
	models from other more traditional Voice over IP providers.</t>

	<t>XMPP <xref target="RFC6120"/> suffers from yet another problem. It has encryption
	and authentication, and the OTR (“off the record”)
	extension even
	provides what is called Perfect Forward Secrecy (PFS: compromising
	the current communication never gives an attacker enough
	information to decrypt past communications that he may have
	recorded). But, in practice, many people don't use XMPP at
	all, but rather Skype, WhatsApp, or other instant-messaging
	tools with unknown or no security. The problem here seems to
	be one of user awareness. And though OTR does provide
	security, it is not well integrated with XMPP, nor is
	it available as a core feature of XMPP clients.</t>

	<t>To increase usage of existing solutions, some tasks can be identified,
	though how those map to actions for, e.g., the IETF/W3C is not clear:

	  <list style="symbols">

	    <t>Improvements to the certificate system, such as certificate transparency (CT).</t>

	    <t>Making it easier (cheaper, quicker) for system
	    administrators to deploy secure solutions.</t>

	    <t>Improve awareness of the risks. Identify which
	    communities influence which decisions and what is the
	    appropriate message for each.</t>

	    <t>Provide an upgrade path that doesn't break existing
	    systems or require that everybody upgrade at the same
	    time. Opportunistic Security may be one model for that.</t>

	  </list></t>

      </section>

      <section anchor="policy" title="Policy issues and non-technical actions">

	<t>Previous sessions already concluded that the problem isn't
	just technical, such as getting the right algorithms in the standards,
	fixing interoperability, or educating implementers and systems
	administrators. There are user interface issues and education
	issues too. And there are also legal issues 
	and policy issues for governments.</t>

	<t>It appears that the public in general demands more
	privacy and security (e.g., for their children) but is also
	pessimistic about getting it. People trust that somebody ensures
	that nothing bad happens to them, but they also expect to be
	spied on all the time.</t>

	<t>(Perceived) threats of terrorism gave governments a
	reason to allow widespread surveillance, far beyond what may
	previously have been considered dangerous for freedom.</t>

	<t>In this environment, the technical community will have a
	hard time developing and deploying technologies that fully
	counter PM, which means there has to be action in
	the social and political spheres, too.</t>

	<t>Technology isn't the only thing that can make life harder
	for attackers. Government-sponsored PM
	is indirectly affected by trade agreements and
	treaties and thus it makes sense to lobby for those to be as
	privacy-friendly as possible.</t>

	<t>Court cases on the grounds of human rights can also
	influence policy, especially if they reach, for example, the European Court
	of Human Rights.</t>

	<t>In medicine and law, it is common to have ethics committees,
	not so in software. Should standards bodies such as IETF and
	W3C have an ethics committee? Standards such as the
	Geolocation API <xref target="w3c-geo-api"/> have gotten scrutiny 
	from privacy experts, but
	only in an ad-hoc manner. (W3C has permanent groups to review standards for
	accessibility and internationalisation. It also has a Privacy
	group, but that currently doesn't do the same kind of systematic
	reviews.)</t>

	<t anchor="collaborator">As the Internet Draft
	draft-barnes-pervasive-problem-00 (included as <eref
	target="https://www.w3.org/2014/strint/papers/44.pdf"
	>paper 44</eref>) explains, PM doesn't just
	monitor the networks, but also attacks at the endpoints,
	turning organisations or people into (willing, unwilling, or
	unwitting) collaborators. One technical means of protection is
	thus to design protocols such that there are fewer potential
	collaborators, e.g., a provider of cloud storage cannot hand
	over plaintext for content that is encrypted with a key he doesn't have, and
	cannot hand over names if his client is anonymous.</t>

	<t>It is important to distinguish between PM and fighting
	crime. PM is an attack, but a judge ordering the surveillance
	of a suspected criminal is not. The latter, often abbreviated
	in this context as LI (for Lawful Intercept), is outside the
	scope of this workshop.</t>

      </section>

      <section anchor="comsec-improvements" title="Improving the tools">

	<t>An earlier session discussed why existing COMSEC tools
	weren't used more. 
	This second session on COMSEC therefore discussed
	what improvements and/or new tools were needed.</t>

	<t>Discussion at the workshop indicated that an important meta-tool for
improving existing security technology could be 
Opportunistic Security (OS) <xref target="I-D.kent-opportunistic-security"/>.
The idea is that software is enhanced
with a module that tries to encrypt communications when it detects that the
other end also has the same capability but otherwise lets the communication
continue in the old way. The detailed definition of OS was being
discussed by the IETF security area at the time of the workshop <xref target="saag"/>.
</t>

	<t>OS would protect against a passive eavesdropper but should
	also allow for endpoint
	authentication to protect against an active attacker (a MITM). As OS spreads,
	more and more communications would be encrypted (and hopefully
	authenticated) and thus there is less and less for an
	eavesdropper to collect.</t>

	<t>Of course, an implementation of OS could give a false sense of security as well:
	some connections are encrypted, some are not. A user might see
	something like a padlock icon in browsers, but there was
	agreement at the workshop that such user interface features
	ought not be changed because OS is being used.
	</t>

	<t>There is also the possibility that a MITM intercepts the
	reply from a server that says “yes, I can do encryption” and
	removes it, causing the client to fall back to an unencrypted
	protocol. Mitigations against this include having other
	channels for discovering a server's capabilities, and
	remembering that a server could do encryption previously.</t>
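	<t>The second mitigation (remembering past capabilities) can be
	sketched as follows. This is an illustrative model with invented
	names, not a real protocol implementation:</t>
	<figure><artwork><![CDATA[
```python
# Hypothetical sketch: a client remembers which servers have offered
# encryption before. If a server that previously offered encryption
# suddenly appears not to, a stripping MITM is suspected and the client
# refuses to silently fall back to cleartext.
seen_encrypting = set()  # servers that have offered encryption before

def negotiate(server, offers_encryption):
    if offers_encryption:
        seen_encrypting.add(server)
        return "encrypted"
    if server in seen_encrypting:
        # The offer vanished: possible downgrade attack; do not fall back.
        return "refuse"
    return "cleartext"  # opportunistic: no capability ever seen
```
]]></artwork></figure>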

	<t>There is also, again, a terminology problem. The technical
	descriptions of OS talk about “silent fail” when a connection
	couldn't be encrypted and has to fall back to the old,
	unencrypted protocol. Actually, it's not a fail; it's no worse
	than it was before. A successful encryption would rather be a
	“silent improvement.”</t>

	<t>That raises the question of the UI: How do you explain to a
	user what their security options are, and, in case an error
	occurs, how do you explain the implications of the various
	responses?</t>

	<t>The people working on encryption are mathematicians and
	engineers, and typically not the same people who know about
	UI. We need to involve the experts. We also need to
	distinguish between usability of the UI, user understanding,
	and user experience. For an e-commerce site, e.g., it is not
	just important that the user's data is technically safe, but
	also that he feels secure. Otherwise he still won't buy
	anything.</t>

	<t>When talking about users, we also need to distinguish the
	end user (who we typically think about when we talk about UI)
	from the server administrators and other technical people
	involved in enabling a connection. When something goes wrong
	(e.g., the user's software detects an invalid certificate),
	the message usually goes to the end user. But he isn't
	necessarily the person who can do something about it. E.g., if
	the problem is a certificate that expired yesterday, the
	options for the user are to break the connection (the safe
	choice, but it means he can't get his work done) or continue
	anyway (there could be a MITM…). The server administrator,
	on the other hand, could actually solve the problem.</t>

	<t>Encryption and authentication have a cost, in terms of
	setting them up, but also in terms of the time it takes for
	software to do the calculations. The set-up cost can be
	reduced with sensible defaults, predefined profiles and
	cut-and-paste configurations. 
	
	And for some
	connections, authentication without encryption could be
	enough, in the case that the data doesn't need to be kept
	secret, but it is important to know that it is the real
	data. Most mail user agents (UAs) already provide independent options for
	encryption and signing, but Web servers only support
	authentication if the connection is also encrypted.</t>

	<t>On the other hand, as e-mail also shows, it is difficult
	for users to understand what encryption and authentication do
	separately.</t>

	<t>And it also has to be kept in mind that encrypting only the
	“sensitive” data and not the rest decreases the cost for an
	attacker, too: It becomes easy to know which connections are
	worth attacking. Selective field confidentiality is also
	more prone to lead to developer error, as not all developers
	will know the provenance of values to be processed.</t>

	<t>One problem with the <xref pageno="true" format="none" target="tofu"
	>TOFU model</xref> as used by SSH (see explanation above) is
	that it lacks a solution for key continuity: When a key is
	changed (which can happen, e.g., when a server is replaced or
	the software upgraded), there is no way to inform the client.
	(In practice, people use other means, such as calling people
	on the phone or asking their colleagues in the office, but
	that doesn't scale and doesn't always happen either.) An
	improvement in the SSH protocol could thus be a way to
	transfer a new key to a client in a safe way.</t>

      </section>

      <section anchor="metadata" title="Hiding metadata">

	<t>Encryption and authentication help protect the content of
	messages. Correctly implemented encryption is very hard to
	crack. (To get the content, an attacker would rather attempt
	to steal the keys, corrupt the encoding software, or get the
	content via a <xref pageno="true" format="none" target="collaborator"
	>collaborator</xref>.) But encrypting the content doesn't hide
	the fact that you are communicating. This metadata (who talks
	to whom, when and for how long) is often as interesting as the
	content itself, and in some cases the size and timing of
	messages is even an accurate predictor of the content. So how
	to stop an attacker from collecting metadata, given that much
	of that data is actually needed by routers and other services
	to deliver the message to the right place?</t>

	<t>It is useful to distinguish different kinds of metadata:
	explicit (or metadata proper) and implicit (sometimes called
	traffic data). Implicit metadata comprises things that can be derived
	from a message or are necessary for its delivery, such as the
	destination address, the size, the time, or the frequency with
	which messages pass. Explicit metadata comprises things like quality
	ratings, provenance, or copyright data: data about the data,
	useful for an application but not required to deliver the
	data to its endpoint.</t>

	<t>A system such as Tor hides much of the metadata by passing
	through several servers, encrypting all the data except that
	which a particular server needs to see. Each server thus knows
	which server a message came from and where it has to send it,
	but cannot know where the previous server got it from or
	where the next server is instructed to send it. However,
	deliberately passing through multiple servers makes the
	communication slower than taking the most direct route and
	increases the amount of traffic the network as a whole has to
	process.</t>

	<t>There are three kinds of measures that can be taken to make
	metadata harder to get: aggregation, contraflow and multipath
	(see <eref
	target="https://www.w3.org/2014/strint/papers/04.pdf" >paper
	4</eref>). New protocols should be designed such that these
	measures are not inadvertently disallowed, e.g., because the
	design assumes that the whole of a conversation passes through
	the same route.</t>

	<t>"Aggregation" means collecting
	conversations from multiple sources into one stream. For
	example, if HTTP connections pass through a proxy, all the
	conversations appear to come from the proxy instead of from
	their original sources. (This assumes that telltale
	information in the headers is stripped by the proxy, or that
	the connection is encrypted.) It also works in the other
	direction: if multiple Web sites are hosted on the same
	server, an attacker cannot see which of those Web sites a user
	is reading. (This assumes that the name of the site is in the
	path of the URL and not in the domain name; otherwise,
	watching DNS queries can still reveal it.)</t>

	<t>"Contraflow" means routing a
	conversation via one or more servers other than those on the
	normal route, e.g., by using a tunnel (over SSH or a VPN) to
	another server. Tor is an example of this. An attacker must
	watch more routes and spend more effort to correlate
	conversations. (Again, this assumes that no telltale
	information is left in the messages that leave the tunnel.)</t>

	<t>"Multipath" splits up a single
	conversation (or a set of related conversations) and routes
	the parts in different ways. For example, a request could be
	sent via a satellite link and the response received via a
	landline, or a conversation could start on a cellular link and
	continue via Wi-Fi. This again increases the cost for an
	attacker, who has to monitor and correlate multiple
	networks.</t>
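
	<t>As a purely illustrative sketch (not something discussed at
	the workshop; the function and path names are hypothetical),
	the multipath idea amounts to assigning successive chunks of
	one conversation to different paths:</t>

	<figure><artwork>
```python
import itertools

def split_multipath(chunks, paths):
    # Assign successive chunks of one conversation to the available
    # paths in round-robin order, so that no single path carries the
    # whole exchange.
    assignment = {}
    cycle = itertools.cycle(paths)
    for chunk in chunks:
        assignment.setdefault(next(cycle), []).append(chunk)
    return assignment

routes = split_multipath(["req1", "req2", "req3", "req4"],
                         ["cellular", "wifi"])
```
	</artwork></figure>

	<t>An observer who monitors only the cellular path then sees
	only half of the exchange.</t>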

	<t>Protecting metadata automatically with technology at a
	lower layer than the application layer is difficult. The
	applications themselves need to pass less data, e.g., use
	anonymous temporary handles instead of permanent
	identifiers. There is often no real need for people to use the
	same identifier on different computers (smartphone, desktop,
	etc.) other than that the application they use was designed
	that way.</t>

	<t>One thing that can be done relatively easily in the short
	term is to go through existing protocols to check what data
	they send that isn't really necessary. One candidate mentioned for such
	a study was XMPP.</t>

	<t>"Fingerprinting" is the process
	of distinguishing different senders of messages based on
	metadata: Clients can be recognised (or at least grouped)
	because their messages always have a combination of features
	that other clients do not have. Reducing redundant metadata
	and reducing the number of optional features in a protocol
	reduces the variation between clients and thus makes
	fingerprinting harder.</t>

	<t>Traffic analysis is a research discipline that produces
	sometimes surprising findings, which are little known among
	protocol developers. Some collections of results are:
	  <list style="symbols">
	    <t>A selected <eref target="http://freehaven.net/anonbib/"
	    >bibliography on anonymity</eref> by the Free Haven
	    Project,</t>

	    <t>The yearly <eref
	    target="http://www.informatik.uni-trier.de/~Ley/db/conf/pet/index.html"
	    >Symposium on Privacy Enhancing Technologies
	    (PETS)</eref>, and</t>

	    <t>The yearly <eref
	    target="http://www.informatik.uni-trier.de/~Ley/db/conf/wpes/index.html"
	    >Workshop on Privacy in the Electronic Society
	    (WPES)</eref>.</t>
	  </list></t>

	<t>Techniques that deliberately change the timing or size of
	messages, such as padding, can also help reduce
	fingerprinting. Obviously, they make conversations slower
	and/or use more bandwidth, but in some cases that is not an
	issue, e.g., if the conversation is limited by the speed of a
	human user anyway. HTTP/2 has a built-in padding
	mechanism. However, using these techniques well is not easy:
	applied naively, they can make messages easier to recognise
	rather than harder.</t>
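
	<t>As a minimal sketch of the padding idea (the function name
	and bucket size below are illustrative, not a standardised
	scheme), length-bucketing can look like this:</t>

	<figure><artwork>
```python
def pad_to_bucket(message, bucket=256):
    # Pad a message up to the next multiple of `bucket` bytes, so that
    # many different plaintext lengths map onto the same wire length.
    # A real scheme must also encode the original length somewhere so
    # the padding can be removed by the receiver.
    remainder = len(message) % bucket
    return message + b"\x00" * ((bucket - remainder) % bucket)
```
	</artwork></figure>

	<t>All messages up to 256 bytes then look identical in length
	on the wire, at the cost of extra bandwidth.</t>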

	<t>Different users in different contexts may have different
	security needs, so perhaps the priority given to these
	protections can be a user choice (if that can be done without
	making high-security users stand out from other
	users). Although many people would not understand what their
	choices are, some, such as political activists or journalists,
	do.</t>

      </section>

      <section anchor="deployment"
	title="Deployment, intermediaries and middleboxes">

	<t>Secure protocols have often been designed in the past for
	end-to-end security: Intermediaries cannot read or modify the
	messages. This is the model behind TLS for example.</t>

	<t>In practice, however, people have more or less valid reasons to
	insist on intermediaries: companies filtering incoming and
	outgoing traffic for viruses or other reasons, giving priority
	to certain communications or caching to reduce bandwidth.</t>

	<t>In the presence of end-to-end encryption and
	authentication, these intermediaries have two choices: use
	fake certificates to impersonate the endpoints or have
	access to the private keys of the endpoints. The former is a
	MITM attack that is difficult to distinguish from a more
	malicious one, and the latter obviously decreases the security of
	the endpoints by copying supposedly protected data and
	concentrating such data in a single place.</t>

	<t>As mentioned in <xref pageno="true" target="threats"/> above,
	aggregation of data in a single place makes that place an
	attractive target. And in the case of PM even if the data is
	not concentrated physically in one place, it is under control
	of a single legal entity that can be made into a <xref pageno="true"
	format="none" target="collaborator" >collaborator</xref>.</t>

	<t>The way Web communication with TLS typically works is that
	the client authenticates the server, but the server does not
	authenticate the client at the TLS layer. (If the client needs
	to be identified, that is mainly done at the application layer
	via passwords or cookies.) Thus the presence of a MITM
	(middlebox) can be detected by the client (because of the
	incorrect certificate), but not by the server. If the client
	doesn't immediately close the connection (and in many cases it
	doesn't), the server may thus disclose information that the
	user would rather not have disclosed.</t>

	<t>One widespread example of middleboxes is captive portals,
	as found on the Wi-Fi hotspots in hotels, airports, etc. Even
	the hotspots offering free access often intercept communications
	to redirect the user to a login or policy page.</t>

	<t>When the communication they intercept is, e.g., the
	automatic update of a calendar program or a chat session, the
	redirect obviously doesn't work: these applications don't know
	how to display a Web page. With the increasing use of
	dedicated applications, it may be a while before the user
	actually opens a browser. The flood of error messages may also
	cause the user to stop reading errors at all, allowing an
	actual malicious attack to go unnoticed.</t>

	<t>Some operating systems now come
	with heuristics that try to recognise captive portals and
	either log in automatically or show the portal's login page in
	a separate application. (But some hotspot providers apparently
	don't want automatic logins and have actually
	reverse-engineered the heuristics to try and fool them.)</t>

	<t>It seems some protocol is missing here: captive portals
	shouldn't have to mount MITM attacks to be noticed.
	Something like an extension to DHCP that tells a connecting
	device about the login page may help, although that still
	doesn't solve the problem for devices that do not have a Web
	browser, such as game consoles or SIP phones. HTTP response
	code 511 (defined in <xref target="RFC6585"/>) is another
	attempt at a partial solution (partial because it only works
	at the moment the user connects to a Web site with a browser,
	and only if the site doesn't use HTTPS).</t>
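
	<t>A minimal sketch of such a 511 response (the URL and
	function name here are hypothetical; see RFC 6585 for the
	status code's actual definition, including its recommended
	HTML body) could be:</t>

	<figure><artwork>
```python
def portal_response(login_url):
    # Build a minimal HTTP 511 (Network Authentication Required,
    # RFC 6585) response that points the client at the portal's
    # login page.  A real portal would return an HTML body so that
    # browsers can render a link to the login page.
    body = "Network login required: " + login_url + "\r\n"
    return ("HTTP/1.1 511 Network Authentication Required\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: " + str(len(body)) + "\r\n"
            "\r\n" + body).encode("ascii")
```
	</artwork></figure>

	<t>A non-browser client seeing status 511 at least knows that
	the network, not the origin server, produced the response.</t>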

	<t>A practical problem with deployment of such a protocol may
	be that many captive portals are very old and never
	updated. The hotel staff only knows how to reboot the system,
	and as long as it works, the hotel has no incentive to buy a
	new one. As evidence of this: many such systems still require
	you to get a ticket with a password, even though the ticket
	shows a price of zero. This is typically because the owner
	doesn't know how to reconfigure the hotspot, but does know how
	to change the price in the cash register.</t>

      </section>


      <section anchor="research" title="Break-out 1 – research">

    <t>
      Despite some requests earlier in the workshop, the research 
      break-out did not discuss clean-slate approaches. The 
      challenge was rather that the relationship between security research
      and standardisation needs improvement. Research on linkability is 
      not yet well known in the IETF. But the other side of the coin 
      needs improvement too: during protocol design, 
      standardisation efforts should indicate which specific problems 
      need more research. 
    </t>
    <t>
      The break-out then made a non-exhaustive list of topics that 
      need further research: 
      <list style="symbols">
        <t>
          The interaction of compression and encryption, as demonstrated 
          by the <eref 
          target="https://community.qualys.com/blogs/securitylabs/2012/09/14/crime-information-leakage-attack-against-ssltls">CRIME 
          SSL/TLS vulnerability</eref>
        </t>
        <t>
          A more proactive deprecation of algorithms based 
          on research results
        </t>
        <t>
          Mitigation for return-oriented programming attacks
        </t>
        <t>
          How to better obfuscate so-called "metadata"
        </t>
        <t>How to make the 
          existence of traffic and its endpoints stealthy
        </t>  
      </list>
    </t>

  </section>

      <section anchor="client" title="Break-out 2 – clients">

	<t>Browsers are the first clients one thinks of when talking
	about encrypted connections, authentication and certificates,
	but there are many others.</t>

	<t>Other common causes of “false” MITM alarms (after
	captive portals) are expired and misconfigured
	certificates. These are quite common on intranets, where the
	sysadmin hasn't bothered updating a certificate and instead
	tells the handful of users to just “click continue.” The
	problem is, on the one hand, that users may not understand the
	difference between this case and the same error message when
	they connect to a server outside the company, and on the other
	hand that the incorrect certificate installed by the sysadmin
	is not easily distinguishable from an incorrect certificate
	presented by a MITM. The error message is almost the same, and
	the user may just click continue again.</t>

	<t>One way to get rid of such certificates is for client
	software to no longer offer the option to continue after a
	certificate error. That requires that all major clients (such
	as browsers) change their behaviour at the same time;
	otherwise the first one to do so will be considered broken by
	users, because the others still work. It also requires a
	period in which the software gives increasingly strong
	warnings about the cut-off date after which connections with
	such certificates will fail.</t>

	<t>Yet another source of error messages is self-signed
	certificates. Such certificates are actually only errors for
	sites that are not expected to have them. If a message about a
	self-signed certificate appears when connecting to Facebook or
	Google, you're clearly not connected to the real Facebook or
	Google. But for a personal Web site it shouldn't cause such
	scary warnings. There may be ways to improve the explanations
	in the error message and provide an easy way to verify the
	certificate (by e-mail, over the phone, or some other channel)
	and trust it.</t>

      </section>

      <section anchor="on-by-default" title="Break-out 3 – on by default">

	<t>One step in improving security is to require the relevant
	features, in particular encryption and authentication, to be
	implemented in compliant products: The features are labelled as
	MUST in the standard rather than MAY. This is sometimes
	referred to as Mandatory To Implement (MTI) and is 
	the current practice for IETF protocols <xref target="RFC3365"/>. </t> 
	<t>But that may not be enough to counter PM. It may be that the features are
	there, but not used, because only very knowledgeable users or
	sysadmins turn them on. Or it may be that implementations do not
	actually follow the MTI parts of specifications. Or it may be
	that some security features are implemented but interoperability
	for those doesn't really work. Or, even worse, it may be that
	protocol designers have only followed the letter of the MTI best 
	practice and not its spirit, with the result that security
	features are hard to use or make deployment harder.
	One can thus argue that such features should be defined 
	to be on by default.</t>

	<t>Going further one might argue that these features should not even be options,
	i.e., there should be no way to turn them off. This is
	sometimes called Mandatory To Use (MTU).</t>

	<t>The question raised at this session was for which protocols
	on-by-default is appropriate, and how one can explain to the
	developers of such protocols that it is needed.</t>

	<t>There would of course be resistance to MTU security from
	implementers and deployments that practice deep packet
	inspection (DPI), and also perhaps from some governments. On
	the other hand, there may also be governments that outlaw
	protocols without proper encryption.
	</t>

	<t>This break-out concluded that there could be value in
	attempting to document a new Best Current Practice for the
	IETF that moves from the current MTI position to one
	where security features are on-by-default. Some of the 
	workshop participants expressed interest in authoring
	a draft for such a new BCP and progressing that through
	the IETF consensus process (where it would no doubt be
	controversial). </t>
	

      </section>

      <section anchor="measure" title="Break-out 4 – measurement">

	<t>There was a small break-out on the idea of measurement as
	a way to encourage or gamify the increased use of security
	mechanisms. </t>

      </section>


      <section anchor="opportunistic" title="Break-out 5 – opportunistic">


	<t>This break-out considered the use of the term
	"opportunistic" as it applies to cryptographic security, and
	attempted to progress towards an agreed-upon definition of
	that term for IETF and W3C work.</t>

	<t>While various terms had been used, with many people talking
	about opportunistic encryption, that usage was felt to be 
	problematic both because it conflicted with the use of the
	same term in <xref target="RFC4322"/> and because it was
	being used differently in different parts of the community.</t>

	<t>At the session it was felt that the term "opportunistic 
	keying" was better but, as explained above, subsequent list
	discussion resulted in a move to the term "Opportunistic
	Security" (OS). </t>

	<t>Aside from terminology, discussion focused on the use
	of Diffie-Hellman (D-H) key exchange as the preferred
	mechanism for OS, with fallback to cleartext if D-H doesn't
	succeed, as a counter to passive attacks.</t>

	<t>There was also of course the desire to be able to
	easily escalate from countering passive attacks to
	also handling endpoint authentication and thereby 
	also countering MITM attacks.</t>

	<t>Making OS visible to users was again considered to
	be undesirable, as users could not be expected to 
	distinguish between cleartext, OS and (one-sided or
	mutual) endpoint authentication.</t>

	<t>Finally, it was noted that it may take some 
	effort to establish how middleboxes might affect OS at
	different layers, and that OS really is not suitable as the
	only mitigation for high-sensitivity sessions such as
	financial transactions.</t>

      </section>

	<section title="Unofficial Transport/Routing Break-out">

	
<t>
Some routing and transport area directors felt a little left
out by all the application-layer break-outs, so they held their own
brainstorm about what could be done at the transport and routing
layers; these notes resulted from that discussion.</t>

<t>
The LEDBAT <xref target="RFC6817"/> 
protocol was targeted towards a
bulk-transfer service that is reordering- and delay-insensitive.  Use of
LEDBAT could offer the following benefits for an application:
<list style="letters">
<t>
    Because it is reordering-insensitive, traffic can be sprayed across
a large number of forwarding paths.  Assuming such different paths exist,
this would make it more challenging to capture and analyze a full
interaction.
</t>
<t>
    The application can vary the paths by indicating per packet a
different flow.  In IPv6, this can be done via different IPv6 flow
labels.  For IPv4, this can be done by encapsulating the IP packet into UDP
and varying the UDP source port.
</t>
<t>
    Since LEDBAT is delay-insensitive and applications using it would
need to be as well, it would be possible to obfuscate the application
signatures by varying the packet lengths and frequency.
</t>
<t>
    This can also hide the transport header (for IP in UDP).
</t>
<t>
    If the Reverse Path Forwarding (RPF) <xref target="RFC3704"/> check
problem can be fixed, the source could perhaps be hidden; however, this
assumes the traffic stays within trusted perimeters.
</t>
<t>
    The use of LEDBAT is orthogonal to the use of encryption and provides
different benefits (it is harder to intercept the whole conversation, and
traffic analysis is frustrated), and also has different costs (longer
latency, use of a new transport protocol) for its users.
</t>
</list>
</t>
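
<t>As an illustrative sketch of point (b) above (the function name and
port range are assumptions, not part of LEDBAT itself), varying the
flow identifier per packet could be as simple as:</t>

<figure><artwork>
```python
import random

def per_packet_flows(n, lo=20000, hi=60000):
    # Pick a distinct UDP source port (or, for IPv6, a flow label)
    # for each of n packets, so that an on-path observer cannot group
    # the packets into one conversation by their 5-tuple alone.
    # The port range here is purely illustrative.
    return random.sample(range(lo, hi), n)
```
</artwork></figure>

<t>Each datagram would then be sent from its own source port, e.g.,
via a fresh UDP socket bound to that port.</t>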

<t>
The idea of encrypting traffic from customer edge (CE) to CE as part of an
L3VPN or similar was also discussed.  This could allow hiding of addresses,
including the source address, and headers.  According to Ron Bonica, some
customers already do this kind of encryption (though without hiding the
source address).  It is therefore unclear whether this would be practically
useful as an enhancement, beyond encouraging deployment and use.
</t>

<t>
Finally, it was discussed whether it would be useful to have a means of
communicating where, and at which layers, encryption is applied on an
application's traffic path.  The initial idea of augmenting ICMP has some
issues (it is not visible to the application, and ICMP packets are
frequently filtered) and would require further work (determining how to
trust a report of encryption).  It would be interesting to understand
whether such communication is actually needed and what the requirements
would be.
</t>

	</section>

    </section>

    <section anchor="after" title="After the workshop">
      <t>Holding the workshop just before the IETF had the intended
      effect: a number of people went to both the workshop and the
      IETF, and they took the opportunity of being together at the
      IETF to continue the discussions.</t>

      <t>IETF Working groups meeting in London took the
      recommendations from the workshop into account. It was even the
      first item in the report about the IETF meeting by the IETF
      chair, Jari Arkko:
	<list style="empty">
	  <t>“Strengthening the security and
	  privacy of the
	  Internet continued to draw a lot of attention. The STRINT
	  workshop organised by the IAB and W3C just before the IETF
	  attracted 100 participants and over 60 papers. Even more
	  people would have joined us, but there was no space. During
	  the IETF meeting, we continued discussing the topic at
	  various working groups. A while ago we created the first
	  working group specifically aimed at addressing some of the
	  issues surrounding pervasive monitoring. The Using TLS for
	  Applications (UTA) working group had its first meeting in
	  London. But many other working groups also address these
	  issues in their own work. The TCPM working group discussed a
	  proposal to add opportunistic keying mechanisms directly
	  onto the TCP protocol. And the DNSE BOF considered the
	  possibility of adding confidentiality support to DNS
	  queries. Finally, there is an ongoing effort to review old
	  specifications to search for areas that might benefit from
	  taking privacy and data minimisation better into
	  account.”<xref target="Arkko1"/></t>
	  </list></t>

      <t>Two papers that were written for the workshop, but not
      finished in time, are worth mentioning, too: One by the same
      Jari Arkko, titled “Privacy and Networking
      Functions” <xref target="Arkko2"/>; and one by Johan
      Pouwelse, “The Shadow Internet: liberation from
      Surveillance, Censorship and Servers” <xref
      target="draft-pouwelse-perpass-shadow-internet"/>.</t>
    </section>

    <section anchor="iana" title="IANA considerations">
	<t>There are none. We hope the RFC editor deletes this section.</t>
	</section>
    <section anchor="Security" title="Security considerations">
      <t>This document does not define a technology but is all about security and privacy.</t>
    </section>
  </middle>
  <back>
    <references title="Informative references">

	&RFC3261;
	&RFC3365;
	&RFC3552;
        &RFC3704;
        &RFC4252;
	&RFC4322;
	&RFC6120;
	&RFC6817;
	&RFC6962;
	&RFC7258;
	&RFC7435;
	&I-D.barnes-pervasive-problem;
	&I-D.kent-opportunistic-security;

      <reference anchor='vancouverplenary' target='http://www.ietf.org/proceedings/88/minutes/minutes-88-iab-techplenary'>
        <front>
         <title>IETF 88 Technical Plenary Minutes</title>
         <author surname='IETF'/>
         <date/>
        </front>
      </reference>
      <reference anchor='RFC6585'>
	<front>
	  <title>Additional HTTP Status Codes</title>
	  <author initials='M.' surname='Nottingham' fullname='M. Nottingham'/>
	  <author initials='R.' surname='Fielding' fullname='R. Fielding'/>
	  <date year='2012' month='April' />
	  <abstract>
	    <t>This document specifies additional HyperText Transfer
	    Protocol (HTTP) status codes for a variety of common
	    situations. [STANDARDS-TRACK]</t>
	  </abstract>
	</front>
	<seriesInfo name='RFC' value='6585' />
	<format type='TXT' octets='17164'
	  target='http://www.rfc-editor.org/rfc/rfc6585.txt' />
      </reference>
      <reference anchor="draft-ietf-websec-key-pinning">
	<front>
	  <title>Public Key Pinning Extension for HTTP</title>
	  <author initials="C." surname="Evans" fullname="Chris Evans">
	    <organization>Google, Inc.</organization>
	    <address>
	      <postal>
		<street>1600 Amphitheatre Pkwy</street>
		<city>Mountain View</city>
		<region>CA</region>
		<code>94043</code>
		<country>US</country>
	      </postal>
	      <email>cevans@google.com</email>
	    </address>
	  </author>
	  <author initials="C." surname="Palmer" fullname="Chris Palmer">
	    <organization>Google, Inc.</organization>
	    <address>
	      <postal>
		<street>1600 Amphitheatre Pkwy</street>
		<city>Mountain View</city>
		<region>CA</region>
		<code>94043</code>
		<country>US</country>
	      </postal>
	      <email>cevans@google.com</email>
	    </address>
	  </author>
	  <author initials="R." surname="Sleevi" fullname="Ryan Sleevi">
	    <organization>Google, Inc.</organization>
	    <address>
	      <postal>
		<street>1600 Amphitheatre Pkwy</street>
		<city>Mountain View</city>
		<region>CA</region>
		<code>94043</code>
		<country>US</country>
	      </postal>
	      <email>sleevi@google.com</email>
	    </address>
	  </author>
	  <date day="8" month="February" year="2014"/>
	  <area>Web Security</area>
	  <abstract>
	    <t>This memo describes an extension to the HTTP protocol
	    allowing web host operators to instruct user agents (UAs)
	    to remember ("pin") the hosts' cryptographic identities
	    for a given period of time.  During that time, UAs will
	    require that the host present a certificate chain
	    including at least one Subject Public Key Info structure
	    whose fingerprint matches one of the pinned fingerprints
	    for that host.  By effectively reducing the number of
	    authorities who can authenticate the domain during the
	    lifetime of the pin, pinning may reduce the incidence of
	    man-in-the-middle attacks due to compromised Certification
	    Authorities.</t>
	  </abstract>
	</front>
	<format target="http://tools.ietf.org/html/draft-ietf-websec-key-pinning-11" type="TXT"/>
	<annotation>(Work in progress.)</annotation>
      </reference>

	<reference anchor="w3c-geo-api"
		target="http://www.w3.org/TR/geolocation-API/">
	<front>
	  <title>Geolocation API Specification</title>
	  <author initials="A." surname="Popescu" fullname="Andrei Popescu">
	  </author>
	  <date day="24" month="October" year="2013"/>
	</front>
      </reference>
	

      <reference anchor="saag"
	target="https://www.ietf.org/mail-archive/web/saag/current/maillist.html">
	<front>
	  <title>IETF Security Area mailing list</title>
	  <author initials="S." surname="Area" fullname="Security Area">
	  </author>
	  <date day="10" month="March" year="2014"/>
	</front>
      </reference>

      <reference anchor="Arkko1"
	target="http://www.ietf.org/blog/2014/03/ietf-89-summary/">
	<front>
	  <title>IETF-89 Summary</title>
	  <author initials="J." surname="Arkko" fullname="Jari Arkko">
	  </author>
	  <date day="10" month="March" year="2014"/>
	</front>
      </reference>

      <reference anchor="draft-pouwelse-perpass-shadow-internet"
	target="https://datatracker.ietf.org/doc/draft-pouwelse-perpass-shadow-internet/">
	<front>
	  <title>The Shadow Internet: liberation from Surveillance,
	  Censorship and Servers</title>
	  <author initials="J." surname="Pouwelse" fullname="Johan Pouwelse" role="editor">
	    <organization>Delft University of
	    Technology</organization>
	    <address>
	      <postal>
		<street>Mekelweg 4</street>
		<city>Delft</city>
		<country>The Netherlands</country>
	      </postal>
	      <phone>+31 15 278 2539</phone>
	      <email>J.A.pouwelse@tudelft.nl</email>
	    </address>
	  </author>
	  <date day="14" month="February" year="2014"/>
	  <abstract>
	    <t>This IETF Perpass document describes some scenarios and
	    requirements for Internet hardening by creating what we
	    term a shadow Internet, defined as an infrastructure in
	    which the ability of governments to conduct indiscriminate
	    eavesdropping or censor media dissemination is reduced.
	    Internet-deployed code is available for most components of
	    this shadow Internet.</t>
	  </abstract>
	</front>
	<annotation>(Work in progress.)</annotation>
      </reference>
      <reference anchor="Arkko2"
	target="http://www.arkko.com/ietf/strint/draft-arkko-strint-networking-functions.txt">
	<front>
	  <title>Privacy and Networking Functions</title>
	  <author initials="J." surname="Arkko" fullname="Jari Arkko">
	    <organization>Ericsson</organization>
	    <address>
	      <postal>
		<street></street>
		<city>Jorvas</city>
		<code>02420</code>
		<country>Finland</country>
	      </postal>
	      <email>jari.arkko@piuha.net</email>
	    </address>
	  </author>
	  <date day="6" month="March" year="2014"/>
	  <abstract>
	    <t> This paper discusses the inherent tussle between
	    network functions and some aspects of privacy.  There is
	    clearly room for a much improved privacy in Internet
	    communications, but there are also interesting
	    interactions with network functions, e.g., what
	    information networks need to provide a service.  Exploring
	    these limits is useful to better understand potential
	    improvements.</t>
	  </abstract>
	</front>
	<annotation>(Work in progress.)</annotation>
      </reference>
    </references>

    <section anchor="logistics" title="Logistics">
      <t>The workshop was organised by the 
	<eref target="http://www.strews.eu/" >STREWS</eref> 
		project (a
      research project funded under the European Union's <eref
      target="http://cordis.europa.eu/fp7/ict/" >7th Framework
      Programme</eref>), as the first of two workshops in its work
      plan. The organisers were supported by the IAB and W3C, and, for
      the local organisation, by <eref
      target="http://blog.digital.telefonica.com/" >Telefonica
      Digital</eref>.</t>
      <t>One of the suggestions in the project description of the
      STREWS project was to attach the first workshop to an IETF
      meeting. The best opportunity was <eref
      target="https://www.ietf.org/meeting/89/index.html"
      >IETF 89</eref> in London, which would begin on Sunday
      March 2, 2014. Telefonica Digital offered meeting rooms at
      its offices in central London for the preceding Friday and
      Saturday, just minutes away from the IETF's location.</t>
      <t>The room held 100 people, which was thought to be
      sufficient. There turned out to be more interest than expected
      and we could have filled a larger room, but 100 people is
      probably an upper limit for good discussions anyway.</t>

      <t>Apart from the usual equipment in the room (projector, white
      boards, microphones, coffee…), we also set up some extra
      communication channels:
	<list style="symbols">
	  <t>A mailing list where participants could discuss the
	  agenda and the published papers about three weeks in advance
	  of the workshop itself. (Only participants were allowed to
	  write to the mailing list, but the <eref
	  target="http://lists.i1b.org/pipermail/strint-attendees-i1b.org/"
	  >archive</eref> is public.)</t>
	  <t>Publicly advertised streaming audio (one-way only). At
	  some point, no less than 165 people were listening.</t>
	  <t>An IRC channel for live minute taking, passing links and
	  other information, and as a help for remote participants to
	  follow the proceedings.</t>
	  <t>An Etherpad, where the authors of papers could provide an
	  abstract of their submissions, to help participants who
	  could not read all 66 papers in full in advance of the
	  workshop. The abstracts were also used on the workshop's
	  <eref target="https://www.w3.org/2014/strint/">Web site</eref>.</t>
	  <t>A “Twitter hashtag” (#strint). Four weeks
	  after the workshop, there were still a few new <eref
	  target="https://twitter.com/search?q=%23strint"
	  >messages</eref> about events related to workshop
	  topics.</t>
	</list>
      </t>
    </section>

    <section anchor="agenda" title="Agenda">
      <t>This was the final agenda of the workshop, as 
      determined by the TPC and participants on the mailing list prior to the
      workshop. The included links are to the slides that the
      moderators used to introduce each discussion topic and to the
      minutes.</t>

      <section anchor="friday" title="Friday 28 February">
	<t>
	  <list style="hanging">
<t><eref
	  target="http://www.w3.org/2014/02/28-strint-minutes.html"
	  >Minutes</eref></t>
	  <t>Workshop starts, welcome, logistics,
	  opening/overview <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s0-welcome.pdf"
	  >[slides]</eref>
	    <list style="symbols">
	      <t>Goal is to plan how we respond to PM threats</t>
	      <t>Specific questions to be discussed in sessions</t>
	      <t>Outcomes are actions for IETF, W3C, IRTF, etc.</t>
	    </list></t>
	  <t>I. Threats – What problem are we trying
	  to solve?  (Presenter: Richard Barnes; Moderator: Cullen
	  Jennings) <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s1-threat.pdf"
	  >[slides]</eref>
	    <list style="symbols">
	      <t>What attacks have been described? (Attack taxonomy)</t>
	      <t>What should we assume the attackers' capabilities are?</t>
	      <t>When is it really “pervasive monitoring” and when is
	      it not?</t>
	      <t>Scoping – what's in and what's out? (for IETF/W3C)</t>
	    </list></t>
	  <t>II. COMSEC 1 – How can we increase
	  usage of current COMSEC tools? (Presenter: Hannes
	  Tschofenig; Moderator: Leif Johansson) <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s2-comsec.pdf"
	  >[slides]</eref>
	    <list style="symbols">
	      <t>Whirlwind catalog of current tools</t>
	      <t>Why aren't people using them?  In what situations
	      are / aren't they used?</t>
	      <t>Securing AAA and management protocols – why not?</t>
	      <t>How can we (IETF/W3C/community) encourage more/better use?</t>
	    </list></t>
	  <t>III. Policy – What policy / legal/
	  other issues need to be taken into account? (Presenter:
	  Christine Runnegar; Moderator: Rigo Wenning) <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s3-policy.pdf"
	  >[slides]</eref>
	    <list style="symbols">
              <t>What non-technical activities do we need to be aware of?</t>
	      <t>How might such non-technical activities impact on IETF/W3C?</t>
	      <t>How might IETF/W3C activities impact on those
	      non-technical activities?</t>
	    </list></t>
	  <t>Session IV – Saturday plan,
	  open-mic, wrap up day</t>
	  </list></t>
      </section>
      <section anchor="saturday" title="Saturday 1 March">
	<t>
	  <list style="hanging">
        <t><eref
	target="http://www.w3.org/2014/03/01-strint-minutes.html"
	>Minutes</eref></t>
	    <t>IV. COMSEC 2 – What improvements
	    to COMSEC tools are needed? (Presenter: Mark Nottingham;
	    Moderator: Steve Bellovin) <eref
	    target="http://down.dsg.cs.tcd.ie/strint-slides/s4-opportunistic.pdf"
	    >[slides]</eref>
	      <list style="symbols">
		<t>Opportunistic encryption – what it is and where it
		might apply</t>
		<t>Mitigations aiming to block PM vs. detect PM – when
		to try which?</t>
	      </list></t>
	  <t>V. Metadata – How can we reduce
	  the metadata that protocols expose? (Presenter: Alfredo
	  Pironti <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s5-1metadata-pironti.pdf"
	  >[slides]</eref> / Ted Hardie <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s5-2metadata-hardie.pdf"
	  >[slides]</eref>; Moderator: Alissa Cooper <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s5-3metadata-cooper.pdf"
	  >[slides]</eref>)
	    <list style="symbols">
	      <t>Metadata, fingerprinting, minimisation</t>
	      <t>What's out there?</t>
	      <t>How can we do better?</t>
	    </list></t>
	  <t>VI. Deployment – How can we
	  address PM in deployment / operations? (Presenter: Eliot
	  Lear; Moderator: Barry Leiba) <eref
	  target="http://down.dsg.cs.tcd.ie/strint-slides/s6-deploy.pdf"
	  >[slides]</eref>
	    <list style="symbols">
	      <t>“Mega”-commercial services (clouds, large
	      scale email &amp; SN, SIP, WebRTC…)</t>
	      <t>Target dispersal – good goal or wishful thinking?</t>
	      <t>Middleboxes: when a help and when a hindrance?</t>
	    </list></t>
	  <t>VII. 3 x Break-out Sessions / Bar-Camp
	  style (Hannes Tschofenig)
	    <list style="symbols">
	      <t>Content to be defined during meeting, as topics come up</t>
	      <t>Sum up at the end to gather conclusions for report</t>
	    </list></t>
	  <t>Break-outs:
	    <list style="numbers">
	      <t>Research Questions (Moderator:
	      Kenny Paterson)
		<list style="symbols">
		  <t>Do we need more/different crypto tools?</t>
		  <t>How can applications make better use of COMSEC tools?</t>
		  <t>What research topics could be handled in IRTF?</t>
		  <t>What other research would help?</t>
		</list></t>
	      <t>Clients</t>
	      <t>On by default</t>
	      <t>Measuring</t>
	      <t>Opportunistic</t>
	    </list></t>
	  <t>VIII. Break-out reports, Open mic &amp;
	  Conclusions – What are we going to do to address PM?
	  <eref
	  target="https://www.w3.org/2014/strint/slides/summary.pdf"
	  >[slides]</eref>
	    <list style="symbols">
	      <t>Gather conclusions / recommendations / goals from
	      earlier sessions</t>
	    </list></t>
	</list></t>
      </section>
    </section>
    <section anchor="committee" title="Workshop chairs &amp; program committee">
      <t>There were three workshop chairs: <eref
      target="https://www.cs.tcd.ie/Stephen.Farrell/" >Stephen
      Farrell</eref> (TCD) and <eref
      target="http://www.w3.org/People/Rigo/">Rigo Wenning</eref>
      (W3C) from the STREWS project, and <eref
      target="http://www.tschofenig.priv.at/wp/?page_id=5">Hannes
      Tschofenig</eref> (ARM) from the STREWS Interest Group.</t>

      <t>A program committee (PC) was charged with evaluating the
      submitted papers. It was made up of the members of the STREWS
      project, the members of the STREWS Interest Group, plus invited
      experts: Bernard Aboba (Microsoft), Dan Appelquist
      (Telefónica &amp; W3C TAG), Richard Barnes (Mozilla),
      Bert Bos (W3C), Lieven Desmet (KU Leuven), Karen O'Donoghue
      (ISOC), Russ Housley (Vigil Security), Martin Johns (SAP), Ben
      Laurie (Google), Eliot Lear (Cisco), Kenny Paterson (Royal
      Holloway), Eric Rescorla (RTFM), Wendy Seltzer (W3C), Dave
      Thaler (Microsoft) and Sean Turner (IECA).</t>
    </section>
    <section anchor="participants" title="Participants">
      <t>The participants in the workshop were: <list style="symbols">
	  <t>Bernard Aboba (Microsoft Corporation)</t>
	  <t>Thijs Alkemade (Adium)</t>
	  <t>Daniel Appelquist (Telefónica Digital)</t>
	  <t>Jari Arkko (Ericsson)</t>
	  <t>Alia Atlas (Juniper Networks)</t>
	  <t>Emmanuel Baccelli (INRIA)</t>
	  <t>Mary Barnes</t>
	  <t>Richard Barnes (Mozilla)</t>
	  <t>Steve Bellovin (Columbia University)</t>
	  <t>Andrea Bittau (Stanford University)</t>
	  <t>Marc Blanchet (Viagenie)</t>
	  <t>Carsten Bormann (Uni Bremen TZI)</t>
	  <t>Bert Bos (W3C)</t>
	  <t>Ian Brown (Oxford University)</t>
	  <t>Stewart Bryant (Cisco Systems)</t>
	  <t>Randy Bush (IIJ / Dragon Research Labs)</t>
	  <t>Kelsey Cairns (Washington State University)</t>
	  <t>Stuart Cheshire (Apple)</t>
	  <t>Vincent Cheval (University of Birmingham)</t>
	  <t>Benoit Claise (Cisco)</t>
	  <t>Alissa Cooper (Cisco)</t>
	  <t>Dave Crocker (Brandenburg InternetWorking)</t>
	  <t>Leslie Daigle (Internet Society)</t>
	  <t>George Danezis (University College London)</t>
	  <t>Spencer Dawkins (Huawei)</t>
	  <t>Mark Donnelly (Painless Security)</t>
	  <t>Nick Doty (W3C)</t>
	  <t>Dan Druta (AT&amp;T)</t>
	  <t>Peter Eckersley (Electronic Frontier Foundation)</t>
	  <t>Lars Eggert (NetApp)</t>
	  <t>Kai Engert (Red Hat)</t>
	  <t>Monika Ermert</t>
	  <t>Stephen Farrell (Trinity College Dublin)</t>
	  <t>Barbara Fraser (Cisco)</t>
	  <t>Virginie Galindo (gemalto)</t>
	  <t>Stefanie Gerdes (Uni Bremen TZI)</t>
	  <t>Daniel Kahn Gillmor (ACLU)</t>
	  <t>Wendy M. Grossman</t>
	  <t>Christian Grothoff (The GNUnet Project)</t>
	  <t>Oliver Hahm (INRIA)</t>
	  <t>Joseph Lorenzo Hall (Center for Democracy &amp; Technology)</t>
	  <t>Phillip Hallam-Baker</t>
	  <t>Harry Halpin (W3C/MIT and IRI)</t>
	  <t>Ted Hardie (Google)</t>
	  <t>Joe Hildebrand (Cisco Systems)</t>
	  <t>Russ Housley (Vigil Security, LLC)</t>
	  <t>Cullen Jennings (Cisco)</t>
	  <t>Leif Johansson (SUNET)</t>
	  <t>Harold Johnson (Irdeto)</t>
	  <t>Alan Johnston (Avaya)</t>
	  <t>L. Aaron Kaplan (CERT.at)</t>
	  <t>Steve Kent (BBN Technologies)</t>
	  <t>Achim Klabunde (European Data Protection Supervisor)</t>
	  <t>Hans Kuhn (NOC)</t>
	  <t>Christian de Larrinaga</t>
	  <t>Ben Laurie (Google)</t>
	  <t>Eliot Lear (Cisco Systems)</t>
	  <t>Barry Leiba (Huawei Technologies)</t>
	  <t>Sebastian Lekies (SAP AG)</t>
	  <t>Orit Levin (Microsoft Corporation)</t>
	  <t>Carlo Von LynX (#youbroketheinternet)</t>
	  <t>Xavier Marjou (Orange)</t>
	  <t>Larry Masinter (Adobe)</t>
	  <t>John Mattsson (Ericsson)</t>
	  <t>Patrick McManus (Mozilla)</t>
	  <t>Doug Montgomery (NIST)</t>
	  <t>Kathleen Moriarty (EMC)</t>
	  <t>Alec Muffett (Facebook)</t>
	  <t>Suhas Nandakumar (Cisco Systems)</t>
	  <t>Linh Nguyen (ERCIM/W3C)</t>
	  <t>Linus Nordberg (NORDUnet)</t>
	  <t>Mark Nottingham</t>
	  <t>Karen O'Donoghue (Internet Society)</t>
	  <t>Piers O'Hanlon (Oxford Internet Institute)</t>
	  <t>Kenny Paterson (Royal Holloway, University of London)</t>
	  <t>Jon Peterson (Neustar)</t>
	  <t>Joshua Phillips (University of Birmingham)</t>
	  <t>Alfredo Pironti (INRIA)</t>
	  <t>Dana Polatin-Reuben (University of Oxford)</t>
	  <t>Prof. Johan Pouwelse (Delft University of Technology)</t>
	  <t>Max Pritikin (Cisco)</t>
	  <t>Eric Rescorla (Mozilla)</t>
	  <t>Pete Resnick (Qualcomm Technologies, Inc.)</t>
	  <t>Tom Ristenpart (University of Wisconsin)</t>
	  <t>Andrei Robachevsky (Internet Society)</t>
	  <t>David Rogers (Copper Horse)</t>
	  <t>Scott Rose (NIST)</t>
	  <t>Christine Runnegar (Internet Society)</t>
	  <t>Philippe De Ryck (DistriNet - KU Leuven)</t>
	  <t>Peter Saint-Andre (&amp;yet)</t>
	  <t>Runa A. Sandvik (Center for Democracy and Technology)</t>
	  <t>Jakob Schlyter</t>
	  <t>Dr. Jan Seedorf (NEC Laboratories Europe)</t>
	  <t>Wendy Seltzer (W3C)</t>
	  <t>Melinda Shore (No Mountain Software)</t>
	  <t>Dave Thaler (Microsoft)</t>
	  <t>Brian Trammell (ETH Zurich)</t>
	  <t>Hannes Tschofenig (ARM Limited)</t>
	  <t>Sean Turner (IECA, Inc.)</t>
	  <t>Matthias Wählisch (Freie Universität Berlin)</t>
	  <t>Greg Walton (Oxford University)</t>
	  <t>Rigo Wenning (W3C)</t>
	  <t>Tara Whalen (Apple Inc.)</t>
	  <t>Greg Wood (Internet Society)</t>
	  <t>Jiangshan Yu (University of Birmingham)</t>
	  <t>Aaron Zauner</t>
	  <t>Dacheng Zhang (Huawei)</t>
	  <t>Phil Zimmermann (Silent Circle LLC)</t>
	  <t>Juan-Carlos Zuniga (InterDigital)</t>
	</list></t>
    </section> 
  </back>
</rfc>
