During his keynote speech at CES 2020, Samsung consumer electronics division CEO H.S. Kim introduced “Ballie”, a ball-shaped robot that tracked him around the stage. Subsequently, Samsung released a cute video outlining their aspirations for Ballie — a “BB-8”-like robot to help with pets and house chores. (See https://www.youtube.com/watch?v=c7N5UDZX7TQ).
CES is all about excitement in consumer electronics, so it’s a good time to stop and ask: what are the hardest parts of making Ballie a success for consumers?

Continue Reading

Years ago, techies had different phrases to describe the explosive growth of computing software and networking. The “Unix development model” became “the Internet development model”, as software was deployed first and made to work in successive patches and iterations. Those were exciting times — a “killer app” could launch a technology, especially if it benefited from the “network effect” (useful even when only a few people were using it, and increasingly useful as more people took it up).

It seems to me that those phrases fail to do justice to the effort and ecosystem that allowed technologies like the Internet to take off and continue on a growth curve for decades.

As Open Source Software (OSS) becomes increasingly important to commercial endeavours, and companies worry about becoming heavily dependent on something that turns into abandonware, I’d like to offer three things to look for in a technology or system if you want to see it take off and last beyond its initial specs.

Continue Reading

Last week I learned something about Bitcoin that entirely changed my view on the urgency of implementing uniform security in the global routing system. While I value and appreciate that the Internet is made up of a network of independently owned and operated networks, I think there is a compelling reason for network operators to peer over the parapet of their network borders and treat routing security as a contribution beyond their own realm.

For most of the history of the Internet, being in the routing business meant delivering packets on a “best effort” basis. In practical terms, as the Internet has become more commercially important, that has meant that individual network operators have focused on improving the efficiency and effectiveness of traffic within their own networks. For handoffs between networks (routing to the rest of the world), the emphasis has been on ensuring connectivity to well-positioned neighbour networks.
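
To make “routing security” a little more concrete, here is a minimal sketch of one mechanism in that space: RPKI-style route origin validation (the idea standardized in RFC 6811), in which a received BGP announcement is checked against published Route Origin Authorizations before it is accepted. The prefixes, ASNs, and ROA entries below are invented purely for illustration.

    import ipaddress

    # Hypothetical ROAs: each authorizes an origin ASN to announce a prefix,
    # down to a given maximum prefix length. Real ROAs come from the RPKI.
    ROAS = [
        {"prefix": ipaddress.ip_network("203.0.113.0/24"), "max_length": 24, "asn": 64500},
        {"prefix": ipaddress.ip_network("198.51.100.0/22"), "max_length": 24, "asn": 64501},
    ]

    def origin_validation(announced_prefix: str, origin_asn: int) -> str:
        """Classify a BGP announcement as 'valid', 'invalid', or 'not-found'."""
        prefix = ipaddress.ip_network(announced_prefix)
        covering = [r for r in ROAS if prefix.subnet_of(r["prefix"])]
        if not covering:
            return "not-found"   # no ROA covers this prefix at all
        for roa in covering:
            if roa["asn"] == origin_asn and prefix.prefixlen <= roa["max_length"]:
                return "valid"
        return "invalid"         # covered, but wrong origin ASN or too specific

    # A legitimate announcement passes; a hijack from the wrong ASN is flagged.
    print(origin_validation("203.0.113.0/24", 64500))   # valid
    print(origin_validation("203.0.113.0/24", 64666))   # invalid

The point of the sketch is that the check happens at the network border: an operator who drops “invalid” routes is protecting everyone’s traffic, not just their own.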

Continue Reading

Over twenty years ago, we said it was a bad idea. Then the tables were turned, in the name of making the Internet commercially viable, and we’ve been living with the consequences ever since. The current “information economy” (aka, software and services spying on users) is “mobile agents” in reverse.

A quarter of a century ago, when the Internet was just blooming in the world and technology innovation was everywhere, there was discussion of software agents. These were typically described as bits of code that would “act on your behalf” by transporting themselves to servers or other computing devices to do some computation, and then bring the results back to your device. Even then, there was enough security awareness to see that remote systems were not going to be interested in hosting these foreign code objects, no matter how “sandboxed”. They would consume resources, and could potentially access sensitive data or take down the remote system, inadvertently or otherwise.

I know, right? The idea of shipping code snippets around to other machines sounds completely daft, even as I type it! For those reasons, among others, systems like General Magic’s “Magic Cap” never got off the ground.

And here is the irony: in the end, we wound up inviting agents (literally) into our homes. Browser plugins like Ghostery will show you how many suspicious bits of code are executing on your computer when you load different webpages. Those bits of code are among the chief actors in the great exposure of private data in today’s web usage. You’re looking at cute cat pictures while that code is busily shipping your browser history off to some random server in another country. Browsers like Firefox do attempt to sandbox some of the worst offenders (e.g., Facebook), but the problems are exactly the same as with the old mobile-agent idea: the code is consuming resources on your machine, possibly accessing data it shouldn’t be, and generally undermining your system in ways that have nothing to do with your interests.

With the growing sense of unease over this sort of invasive behaviour, the trend is already being slowed. Here are two of the current countervailing trends:

  • Crypto, crypto everywhere — blockchain your transactions and encrypt your transmissions. That may be necessary, but it’s really not getting at the heart of the problem, which is that there is no respect for the information shared in transactions. Take your pick of analogy — highway robbers, thumbs on the scale at the bazaar, smash-and-grab for your browser history, whatever.
  • Visiting increasingly specific, extra-territorial regulation on the Internet, without regard for feasibility of implementation (GDPR, I’m looking at you…). Even if some limited application of this approach helps address a current problem, it’s not an approach that scales: more such regulation will lead to conflicting, impossible-to-implement requirements that will ultimately favour only the largest players, and generally pare the Internet and its services down to a limited shadow of what we’ve known.

A different approach is to take a page from the old URA (“Uniform Resource Agent”) approach — not the actual technology proposal, but the idea that computation should happen (only) on the computing resources of the interested party, and everything else is an explicit transaction. Combined with the work done on federated identity management, those transactions can include appropriate permissions and access control. And, while the argument is made that it is hard to come up with the specifics of interesting transactions, the amount of effort that has gone into creating existing systems demonstrates a level of cleverness in the industry that is certainly up to the challenge.
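
To make the idea less abstract, here is a minimal, hypothetical sketch of the “explicit transaction” model: the data holder never runs the requester’s code, it only answers narrowly scoped queries, and all computation happens on the requester’s own machine. The token format, scope names, and data below are invented for illustration, not a proposal for a specific protocol.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AccessToken:
        subject: str        # who is asking (e.g., a federated identity)
        scopes: frozenset   # exactly which fields they may read

    class DataHolder:
        """Answers explicit, scoped queries; hosts no foreign code."""
        def __init__(self, records):
            self._records = records

        def query(self, token: AccessToken, field: str):
            if field not in token.scopes:
                raise PermissionError(f"{token.subject} lacks scope '{field}'")
            return [r[field] for r in self._records]

    # The requester pulls only what it is permitted to see, then computes locally.
    store = DataHolder([{"item": "tea", "price": 4.0}, {"item": "scone", "price": 3.0}])
    token = AccessToken(subject="alice@example.net", scopes=frozenset({"price"}))
    prices = store.query(token, "price")
    print(sum(prices) / len(prices))    # local computation: average price

The inversion relative to today’s web is the point: the permission check and the data handoff are explicit, and nothing executes on the other party’s machine.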

Who’s up for that challenge?

What is the news:  publication of TLS 1.3, the updated version of the Transport Layer Security standard that protects Internet communications.

Why it matters:  TLS provides the basis for pretty much all Internet communication privacy and encryption.  The big deal with version 1.3 is that it strips out features with previously detected vulnerabilities and extends the protocol’s security and encryption.  TLS 1.3 should be more robust, and even less vulnerable, than TLS 1.2.

Who benefits:  TLS 1.3 only benefits people using applications and devices that implement it.  The good news is that, apparently, major browsers have already implemented and deployed it.  Additionally, the hope is that the lighter-weight, more straightforward nature of TLS 1.3 (as compared to previous versions) will be attractive to application and device developers who have been reluctant to implement TLS in the past.
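
If you want to see whether a given server will negotiate TLS 1.3, here is a small sketch using Python’s standard-library ssl module (it assumes Python 3.7+ built against OpenSSL 1.1.1 or newer; the hostname is just an example).

    import socket
    import ssl

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older

    host = "www.ietf.org"
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())   # prints "TLSv1.3" if negotiated
            print(tls.cipher())    # the negotiated cipher suite

If the connection fails with the minimum version pinned to TLS 1.3, the server (or a middlebox in the path) is still limited to an older version.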

More info:  https://www.ietf.org/blog/tls13/

Last week, I had the opportunity to attend and speak at Interop ITX 2018, in Las Vegas.  It was my first Interop — and an interesting opportunity to see more of the enterprise networking side of things.  That’s a space that is growing, and increasing in complexity, even as cloud and software as a service are notionally taking on the heavy lifting of corporate IT.

The refrain I overheard several times was that attendees really enjoyed having a vendor-neutral conference at which to talk about a variety of practical topics.  Indeed, it felt a bit like a network operators’ group (NOG) meeting with an enterprise focus.

Continue Reading

“Permissionless innovation” is one of the invariant properties of the Internet — without it, we would not have the World Wide Web.   Sometimes, however, this basic concept is misunderstood, or the expression is used as an excuse for bad behaviour.

Consider the Guardian’s article on the use of “smart” technology, such as Wi-Fi trackers that tail people through their phones’ MAC addresses, to monitor activities in Utrecht and Eindhoven (https://www.theguardian.com/cities/2018/mar/01/smart-cities-data-privacy-eindhoven-utrecht):

“Companies are getting away with it in part because it involves new applications of data. In Silicon Valley, they call it “permissionless innovation”, they believe technological progress should not be stifled by public regulations.”

Continue Reading

“Cooperation”, “Consensus” and “Collaboration” are three C-words that get thrown around in the context of Internet (technology and policy) development.  Given that consensus is at the heart of the Internet Engineering Task Force’s organizing principles, I was a little surprised to see it treated as a poor discussion framework in Peter J. Denning and Robert Dunham’s “The Innovator’s Way – Essential Practices for Successful Innovation”.

While I still don’t entirely buy the authors’ view of consensus as a force that stifles creativity, with a little more reflection I could see their argument that consensus aims to narrow a discussion in order to reach an outcome.  In an engineering context, when a complex problem is well understood and an answer has to be selected, that’s a good thing.

However, for many of the challenges facing the Internet, there isn’t even necessarily agreement that there is a problem, let alone a rough notion of what to do about the challenge.    These are wicked problems, requiring more collaboration across diverse groups of people and interests.   The heartening thing  is that we’ve actually solved some of these wicked problems in the past — the existence and continued functioning of the Internet is testimony to that.

Continue Reading


At any given moment, the Internet as we know it is poised on the edge of extinction.

There are at least two ways to understand that sentence. One is pretty gloomy, suggesting a fragile Internet that must continually be rescued from doom. The other way to look at it is that the Internet is always in a state of evolution, and it’s our understanding of it that is challenged to keep up. I tend to prefer the latter perspective, and think it’s important to keep the Internet open for innovation.

At the same time, change can be scary — for example, if it leads to an outcome that harms us and from which we cannot recover. That’s at least one reason why discussions of policy requirements and of changing the Internet can be pretty tense.

If we want a dialogue about the Internet that is as global as the network itself, we need to know how to talk about change:

  • what are the useful points of reference (hint: they aren’t particular technologies), and
  • how can we frame a successful dialogue?

Continue Reading