In some ways, it reads like a bad novel: “Every Step You Fake” (https://openeffect.ca/reports/Every_Step_You_Fake.pdf), a Canadian study of privacy and security in personal fitness devices. The report outlines two key areas in which these devices have significant security and privacy shortcomings — but just as you feel sympathy for the devices’ wearers, you learn they may be the “bad actor” in other cases. We can spot adversaries in every direction, but who’s the hero of this drama? And, frankly, does it need to be a drama?
The two shortcomings outlined in the report are that:

  • the devices’ radio-based transmissions can “leak” your presence and make you trackable (anonymously) through shopping malls that do that sort of thing (see the sketch after this list); and
  • it’s possible to fake out some of the website collection servers so that you can “adjust” your results.
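
For the curious, here is what the tracking side can look like in practice: a minimal sketch (mine, not from the report) of passively logging Bluetooth LE advertisements, roughly what a retail-analytics scanner does.  It assumes the third-party Python “bleak” library; the point is that a device which never randomizes its Bluetooth MAC address shows up with the same address every time it passes a scanner.

    # Hypothetical sketch: passively logging BLE advertisements.
    # Assumes the third-party "bleak" library (pip install bleak).
    import asyncio
    from bleak import BleakScanner

    async def main():
        seen = {}

        def on_advert(device, adv_data):
            # A device that never randomizes its MAC address appears with
            # the same device.address on every pass, which is what makes
            # its wearer trackable across scanner locations.
            seen.setdefault(device.address, adv_data.local_name)
            print(device.address, adv_data.local_name, adv_data.rssi)

        scanner = BleakScanner(detection_callback=on_advert)
        await scanner.start()
        await asyncio.sleep(30.0)  # listen for 30 seconds
        await scanner.stop()
        print(f"{len(seen)} distinct advertising addresses observed")

    asyncio.run(main())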

Well, wait. Why so much drama around a device you electively wear on your person? What are the actual problems that need solving here?

I had a fun time on  ISOC-DC’s #5in5in5 panel — talking about five things that will be different about the future of the Internet, in 2020.  There’s video of the panel available on ISOC-DC’s livestream TV site:  http://www.isoc-dc.org/isoc-dc-tv/ .

My 5 points, delivered within 5 minutes, covered:

  1. The Internet continues to try to “get out of the box” — increasingly, we don’t see “the Internet” as separate from our tools or tasks.  They merge.  Sometimes it’s seamless — you might carry out a text message conversation with someone from your phone, then answer from your computer as you walk by, and pick it up again on your iPad.  Each device has all the context.  You’re not thinking about which network you’re using to send the messages.  The downside is the loss of individual control over your Internet experience — having your car hacked over the net while you’re driving it is the price of all that invisible interconnectedness.  Also, the Internet becomes opaque, another packaged commodity, and a lot less likely to be something we can all hook into, climb onto, and understand.
  2. Various forces — policy makers, big business, whatever — are trying to put the Internet into a box, or some structure.  For example, regulations requiring that the Internet not serve particular information beyond geographic boundaries are essentially implemented by aligning the actual Internet network with those geopolitical limits.  I have had a lot to say about the challenges of that approach, but suffice it to say that the Internet was not built to pay attention to political lines, and imposing those structures reduces its resiliency, efficiency and effectiveness.
  3. New approaches are needed!  If these approaches to regulation and restriction are not going to work (because they reduce the Internet to something unusable), then we need a different way to talk about the Internet, the services that run over it, and how we articulate and enact policies that relate to them.  Don’t try to curtail web page access by making laws requiring ISPs to delete entries in the DNS; instead, figure out a better way to get international policy to common ground on what constitutes inappropriate use of the Internet.  (Make the action illegal, not the tool.)
  4. And yet, everything old is new again.
    1. The Internet was developed as an inter-network — making a whole out of disparate parts collaborating.
    2. We could not have seen the kind of IPv6 deployment we have today if large, competitive web companies hadn’t stood up to do World IPv6 Day and World IPv6 Launch.
    3. The future is better if we can regain and foster more of that sense of cross-industry collaboration to find solutions that are best for the Internet as a whole.
  5. As I recall saying at an ISOC-DC “Future Internet” panel some years ago, if I could tell you what the Internet was going to be in 2020, it wouldn’t be the Internet, now would it?!  The beauty and power of the Internet is that it is a platform that supports creativity, communication and development for everyone, and we have no means to size the depth and breadth of that much creative energy.  So much of what we now consider “normal” on the Internet — say, Facebook — was unthinkable until someone thought of it and built it.  If the Internet loses its ability to support that kind of novel development, then it’s not the Internet anymore; it’s just another network.

It was a fun panel — good discussion with the attendees, too.   From the comments that came up during the session, it’s clear that people have very real concerns about where we’re going with the Internet as a platform for “permissionless innovation”, while ensuring that we retain some level of privacy and management of our personal information.

As Mike Nelson said, moderating the session — there’s enough meat in each of the topics we brought up to fuel a semester-long university course!

Today, I want to share with you something that I’ve been working on for the last several months — a concrete vision and proposal for supporting the Internet’s development.

For some, “Internet development” is about building out more networks in under-served parts of the world. For others, myself included, it has always included a component of evolving the technology itself, finding answers to age-old or just-discovered limitations and improving the state of the art of the functioning, deployed Internet.  In either case, development means getting beyond the status quo.  And, for the Internet, the status quo means stagnation, and stagnation means death.

Twenty-odd years ago, when I first got involved in Internet technology development, it was clear that the technology was evolving dynamically.  Engineers got together regularly to work out next steps large and small — incremental improvements were important, but people were not afraid to think of and tackle the larger questions of the Internet’s future.  And the engineers who got together were the ones who would go home to their respective companies and implement the agreed-on changes within their products and networks.

Time passes, things change.  As an important underlay to the world’s day-to-day activities, the common view of the best “future Internet” is: hopefully as good as today’s, but maybe faster.  And many of the engineers have gone on to better things, or management positions.  Companies are typically larger, shareholders a little more keen on stability, and engineers are less able to go home to their companies and just implement new things.

If we want something other than “current course and speed” for the Internet’s development, I believe we need to put some thoughtful, active effort into rebuilding that sense of collaborative empowerment for the exploration of solutions to old problems and development of new directions — but taking into account and working with the business drivers of today’s Internet.

Clearly, it can be done, at least for specific issues — I give you World IPv6 Launch.

Apart from that, what kinds of issues need tackling?  Well, near-term issues include routing security, as well as fostering measurement and analysis of the currently deployed network.  Longer-term issues include things like dealing with rights — in handling personal information (privacy) as well as created content.

I don’t think it requires magic.   It might involve more than one plan — since there never is a single right answer or one size that fits all for the Internet.  But, mostly, I think it involves careful fostering, technical leadership, and general facilitation of collaboration and cooperation on real live Internet-touching activities.

I’m not just waving my hands around and writing pretty words in a blog post.  Earlier this year, I invited a number of operators to come talk about an Unwedging Routing Security Activity, and in April, we had a meeting to discuss possibilities and particulars.  You can find out more about the activity, including a report from the meeting, here.

That was a proof point for the more general idea of this “coordination” function I described above — for now, let’s call it the Centre for the Creative Development of the Internet, and you can read more about that here:  http://ccdi.thinkingcat.com/ .

In brief, I believe it’s possible to put together concrete activities that will move the Internet forward, that can be sustained by support from individual companies that have an interest in finding a collaborative solution to a problem that faces them.  The URSA work is a first step and a proof point.

Now the hard part:  this is not a launch, because while the idea is there, it’s not funded yet.  I am actively pursuing ways to get it kick-started, to be able to make longer-term commitments to needed resources and get the idea out of the lab and working with Internet actors.

If you have thoughts or suggestions, I’m happy to hear them — ldaigle@thinkingcat.com .   Even if it’s just a suggestion for a better name :^)  .

And, if we’re lucky, the future of Internet development will mirror some of its past, embracing new challenges with creative, collaborative solutions.

Last week I had the privilege of participating in the Norwich University College of Graduate and Continuing Studies 2015 Residency Conference.  NU runs several online graduate programs, which require students to spend one week (at the end of the program) on campus in Northfield, VT.

As a member of the advisory board for the highly-rated Master of Science, Information Security and Assurance (MSISA) program, I find it interesting to meet the students and see the breadth of backgrounds, interests and future plans that they have.  Online university programs are real, concrete, and provide access to education that would otherwise be very difficult for working professionals to accommodate in their busy lives.

The Residency Conference is the icing on the cake as students have the opportunity to meet each other and some of the professors they’ve been working with throughout their program.

And, really, who could resist the opportunity to spend a week in Vermont?

Not I, clearly — this was my second opportunity to participate in the annual conference.  Last year, I gave an introduction to Internet governance, among other things.

This time, in addition to helping facilitate case study discussions at the conference, I had the opportunity to hood graduates in the Academic Recognition Ceremony for the MSISA and MPA programs — a fine opportunity to wear academic regalia!  The school’s photographer caught some pics, so there is evidence!

I’ve long been convinced of the importance of continued learning — always keep stretching, doing, learning.  Last week’s Residency Conference energy was a reminder that formal education, whether in person or online, can be an even more intense and rewarding experience than self-directed learning.

A visible product of my “self-funded sabbatical” is now published!

On the Nature of the Internet, by Leslie Daigle

My aim and hope is that it will provide some further insight into what not to do to the Internet, intentionally or inadvertently, so that collectively we can agree on the need to find better ways of dealing with the very real policy issues that need solutions.

The Internet has proven itself highly accommodating of change over the decades — today’s Internet looks nothing like the network of networks that existed 25 years ago, when commercial traffic was still prohibited from traversing it.  But, most of the changes that it has faced have come from technological or direct usage issues.  In today’s reality, many of the forces at play on the Internet are direct or indirect outcomes of (government and regulatory) policy choices.

If we want to continue to have a healthy and evolving Internet, we need to learn how to make policies that are consistent with, or at least not antithetical to, what makes the Internet work.

So, when I was asked last year to write a paper on the nature of the Internet for the Global Commission on Internet Governance, I turned first to the work we’d done at the Internet Society on the “Invariant Properties” that are true of the healthy Internet.  In the paper I wrote for the GCIG, now published as the Commission’s seventh paper, I tackled the questions of policy choices that are driving us towards national networks and localized abuse of Internet infrastructure, through the lens of those eight invariant properties of Internet health.

Here’s the executive summary:

This paper examines three aspects of the nature of the Internet: the Internet’s technology, general properties that make the Internet successful and current pressures for change. Current policy choices can, literally, make or break the Internet’s future. By understanding the Internet — primarily in terms of its key properties for success, which have been unchanged since its inception — policy makers will be empowered to make thoughtful choices in response to the pressures outlined here, as well as new matters arising.

Have a read of the paper, and let me know what you think — other examples of policy driving us in the wrong direction?  New approaches to policy-making that will help us solve problems and have a healthy Internet?  I’d love to hear your perspective, and — more importantly — see a broader discussion develop around different perspectives.

Those of you who track the announcements of IETF Internet-Draft publications may have noticed a “draft-daigle-” document pop out in the flurry last-minute pre-IETF92 documents.  (ICYMI, the document is:  draft-daigle-AppIdArch-00.txt).

Related to the work I’ve been doing in bolstering the content of “The Attic” of applications identifier technology history, I started to think about a general framework to describe applications identifiers.  So many times we’ve been through the same design discussions — it would be nice to capture the state of the art in tradeoffs and design considerations and simply move forward.
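
To make “describing an applications identifier” concrete, here is a toy illustration (mine, not taken from the draft) that decomposes a URI-style identifier into the structural roles any such framework has to account for, using only Python’s standard library:

    # Toy example: the structural roles inside a URI-style application
    # identifier.  The example URI is made up for illustration.
    from urllib.parse import urlsplit

    parts = urlsplit("https://example.com:8080/reports/2015?rev=3#summary")
    print(parts.scheme)    # resolution mechanism: "https"
    print(parts.netloc)    # naming authority: "example.com:8080"
    print(parts.path)      # authority-local name: "/reports/2015"
    print(parts.query)     # instance qualifiers: "rev=3"
    print(parts.fragment)  # sub-resource reference: "summary"

Each of those slots embodies a design tradeoff (who controls the namespace, what has to be resolved, what counts as “the same” resource), which is the kind of consideration the draft aims to capture.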

Future versions of this or other documents are intended to delve more deeply into questions of design choices, as well as the broader question of applications architectures (which are uniquely tied to identifiers, content, and resolution).

That’s the theory behind the draft.  It is an “-00” version, with all the draftiness that implies.  My hope is that it will stimulate some discussion and feedback.  I’d love to hear your thoughts — comment here, send me an e-mail, or catch me in Dallas at IETF92.

The long-standing and generally-held belief of the Internet community has been that the Internet’s governance should be based on a “multistakeholder” model.  Whatever you may surmise is the proper definition of that word, we should readily agree it doesn’t mean that a single government, any single government, should have override control of major swaths of the Internet or its support functions.

This year, there have been constructive community steps towards reducing the Internet’s dependency on a single nation, as well as a variety of reminders why it is important to make that effort successful.

Hence the global satisfaction at the NTIA’s announcement in March 2014 that it would seek a multistakeholder-model-supporting proposal to transition the NTIA (and the US government) out of its oversight role for the Internet Assigned Numbers Authority (IANA).    For many, this has been a long time coming — certainly, the Internet Architecture Board, on behalf of the IETF, has been signalling (to the NTIA, publicly) its concerns about the IETF’s lack of control over its own standards’ parameter assignment since at least the days when I was the IAB Chair.

The communities that have actual responsibility for managing the names, numbers, and other protocol parameters have, since March, stepped up to engage in developing the pieces of the requested proposal.   These are not random strawperson proposals to define a new Internet or governance system:  the communities involved are dependent on the IANA function for getting their own work done, and the focus has been on ensuring that the Internet’s naming, numbering and protocol development functions will continue to work reliably, responsibly and without undue interference in a post-NTIA-transition world.

For the protocol parameters part of IANA, the IETF’s IANAPLAN working group was chartered “to produce an IETF consensus document that describes the expected interaction between the IETF and the operator of IETF protocol parameters registries.”  From my vantage point as co-chair of the WG, I have seen the WG’s extensive discussion of the issues at hand, and watched the document editors do a sterling job of producing a document that will be the basis of the IETF’s contribution to the proposal.  With the WG’s document in last call across the IETF until the 15th of December (err, today!), the IETF is on track to have its contribution done by the January 15th deadline set by the inter-community coordinating committee.  (See IETF Chair Jari Arkko’s blog post for more details.)

Just in case anyone’s energy was flagging before we finish the final details, there are timely reminders of why it is important to keep pressing on with defining (and realizing) the IANA in a post-NTIA reality.  As noted in Paul Rosenzweig’s article on Lawfare, “Congress Tries To Stop the IANA Transition — But Does It?”, a different part of the US government (the US Congress) is trying to stop the NTIA’s actions:

“Now Congress has intervened.  In the Omnibus spending bill that looks to be going through Congress this week the following language appears:

SEC. 540. (a) None of the funds made available by this Act may be used to relinquish the responsibility of the National Telecommunications and Information Administration during fiscal year 2015 with respect to Internet domain name system functions, including responsibility with respect to the authoritative root zone file and the Internet Assigned Numbers Authority functions.

(b) Subsection (a) of this section shall expire on September 30, 2015.”

Rosenzweig goes on to observe that the provision may well not have the expected impact, and might have more deleterious effects for the US.  Perhaps this is Congress attempting to use the budget process to stop the NTIA’s actions in their tracks; perhaps it’s just budget-jockeying on a scale not comprehended outside the limits of Washington, D.C.  But — It.Doesn’t.Matter.

Most of the Internet’s users do not live in the country in question, let alone have a voice in those discussions.  Nevertheless, they are impacted by the outcome.   Which is why the Internet community, which is global, and has solicited input broadly, is stepping up to create a future for the IANA that will:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectation of the global customers and partners of the IANA services; and,
  • Maintain the openness of the Internet.

Clearly, those criteria cannot be satisfied under the control of any single government, as the US Congress’s actions remind us now!  The question is not whether the US government retains its historical role as contract-holder for the IANA functions.  The question is how best to meet the criteria thoughtfully laid out by the NTIA.

Today is the official launch of a new ThinkingCat Enterprises project — InternetImpossible.  The purpose of the project is to capture, share, and raise awareness of the many and varied wonders of the Internet, ranging from its technology to its reach and its impact on people, cultures, and ways of doing things.

It’s a storybook.  And, like all good storybooks, it has lessons, or at least valuable learnings that should be remembered and shared.  The Internet is, in some ways, being taken for granted.  Along with that ease and familiarity comes an increase in efforts to apply existing norms, processes and problem-solving approaches to it.  So take a moment to review the stories.  Come back to read new ones.  And, if you’ve got a great story about how the Internet is impossible, or has enabled you to do something impossible, please share!  (Send an e-mail to “editor” at “internetimpossible.org”.)

That’s it.  Why are you still here? 😉  Go check out http://www.internetimpossible.org .

Yesterday, I re-tweeted Cloudflare’s announcement that they are providing universal SSL for their customers. [1]   I believe the announcement is a valuable one for the state of the open Internet for a couple of reasons:

First, there is the obvious — they are doubling the number of websites on the Internet that support encrypted connections.    And, hopefully, that will prompt even more sites/hosting providers/CDNs to get serious about supporting encryption, too.    Web encryption — it’s not just for e-commerce, anymore.

Second, and no less important, is the way that the announcement articulates and shares their organizational thought processes.  They are pretty clear that this is not a decision made to immediately and positively impact their bottom line of business.  It’s about better browsing, and a better Internet in the long run is better business.  And, they are also pretty open about the challenges they face, operationally, to achieve this.    That’s another thing that can be helpful to other organizations contemplating the plunge to support SSL.

So, go ahead and have a read of their detailed announcement — and please forget to come back and check whether this website supports encrypted connections.  It does not :-/  (yet).  I’ve added it to my IT to-do list — right after dealing with some issues in my e-mail infrastructure.  I asked the head of IT for a timeline on that, and she just gave me a tail-flick and a paw-wash in response.  Life as a micro-enterprise.

More substantially, I could easily become a Cloudflare customer and thus enable encryption up to the Cloudflare servers.  But proper end-to-end encryption requires my site to have its own certificate, which means a unique IP address for this website, and the going rate for that, given where my site is hosted, is $6/mo.  That adds substantially to the cost of supporting a website, especially when you might have several of them kicking around for different purposes.
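
As an aside, checking whether a site presents a certificate is easy to script.  Here is a minimal sketch using only Python’s standard library (the hostname is a placeholder); the server_hostname argument triggers SNI, the TLS extension that lets many certificates share a single IP address.

    # Minimal TLS certificate check using only the standard library.
    # "example.com" is a placeholder hostname.
    import socket
    import ssl

    host = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        # server_hostname enables SNI, so one IP can serve many certificates
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("Subject:", dict(item[0] for item in cert["subject"]))
            print("Expires:", cert["notAfter"])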

There’s work to be done yet in the whole security system (economics) model, it seems to me.  Open discussion of practical issues and eventual workarounds does seem like a good starting place, though.

 

[1] http://blog.cloudflare.com/introducing-universal-ssl/

Time to introduce a new feature on the ThinkingCat site:  The Attic.

It is with chagrin that I acknowledge that I am an old enough <fill in appropriate but not-too-abusive-please epithet> that many hot new technology standards discussions are ringing in resonance with the long, hard exercises I recall from years past.  In particular, many of the discussions around “information centric networking”, “named data networking”, and new ways to handle intellectual property rights intended for digital media are working through similar problem spaces.  When is a resource “the same” enough to be the same?  Et cetera.

From my perspective, there was a vibrant community discussion of those issues in the heyday of standardization of Uniform Resource Identifiers at the IETF in the 1990s and early 2000s.  There was a small core of that community that really wanted to push URIs to be more than just “web addresses”, and saw an application infrastructure standards roadmap.  That roadmap never got implemented — at some point we acknowledged that the implementing community was not as keen, and there’s no fun in defining standards that never get used.

I would like to believe that the ICN and other groups have the implementors with them, and enough interest in the outcome to solve some of these issues that are being revisited.  It would also be useful if we could somehow short-circuit the learning curve, and not tread through all the same sequences.

Perhaps that is a vain hope, but it is the spirit with which I offer “The Attic” — a place where I intend to post up various remnants of those discussions, as culled from my spotty archives (driven by my even spottier recollection).

Today’s  inaugural contribution is on “Contextualized (URI) Resolution — C15N” (C15N because there are 15 letters between “c” and “n” in “contextualization”… get it?  Hey, I didn’t say the humour aged well).  That work never got beyond the BoF stage at the IETF, but the same questions arise when we look at any kind of advanced information resolution.
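
If you want to check the arithmetic of the joke, a numeronym is easy enough to compute; a quick Python sketch, just for fun:

    # A numeronym: first letter, count of letters in between, last letter.
    def numeronym(word: str) -> str:
        return f"{word[0]}{len(word) - 2}{word[-1]}"

    print(numeronym("contextualization"))     # c15n
    print(numeronym("internationalization"))  # i18n, the classic one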

This is an experiment.  If nothing else, I’ll have a somewhat organized version of my own archive when I’m done 😉    But, if you find this useful, let me know — I’ll be more motivated to add to it.  If you have suggestions — of content or format, I’d also be happy to know.  Feel free to leave a comment here, or email me (I’m “ldaigle” at this site’s domain name).

P.S.:  Apologies to Twitter followers for the double-tweet of the last posting.   I had set up an app to auto-tweet my blog posts here, because automation is So!Cool! and then decided I’d really rather handcraft my tweets — authenticity is important to me.  Apparently, I failed to stomp adequately on the auto-tweeter app.  More stomping has been applied — let’s see if this works better.   My Twitter account is, after all, my1regret …