
I’m using this space to share some of the collected artifacts found in digging through archives of past work on Internet technology development.  Several pieces are available publicly elsewhere, if you know they exist and where to look for them.  This is the sort of thing you might find when digging around in “the attic” of an old farmhouse, for instance.

Here are some of the topics I have/plan to cover (topics already covered are linked):

1990’s:

  • IAFA Templates
  • Uniform Resource Characteristics (URC)
  • RESource CAPabilities (RESCAP)

2000’s:

  • Contextualized (URI) Resolution (C15N)
  • DNS Search

2010’s:

  • URI in DNS RR (Internet-Draft)
  • LINK in DNS (Internet-Draft)


IAFA Templates

IAFA stands for “Internet Anonymous FTP Archive”, and IAFA templates were being developed at the IETF in the early 1990’s, as a first effort in machine-parsable “metadata” for Internet resources, in the then-current publication platform of the Internet (Anonymous FTP).

You can see a snapshot of the definition of the intended standard in draft-ietf-iiir-publishing-03.txt.  The abstract of that document reads:

Anonymous FTP Archives are a popular method of making material available to the Internet user community. This document specifies a range of indexing information that can be used to describe the contents and services provided by such archives. This information can be used directly by the user community when visiting parts of the archive. Furthermore, automatic indexing tools can gather and index this information, thus making it easier for users to find and access it.

A couple of interesting historical points:

(National) librarians were heavily involved in figuring out indexing and archiving of Internet material at that time — seeing the Internet as a publication medium and wanting to address its publications.  Further efforts with IAFA templates included mappings to US Library of Congress MARC codes.  (MARC is “MAchine Readable Cataloging”).

The IAFA Templates work probably represents the first serious Internet-standards effort in metadata systems.
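
To make the “machine-parsable” idea concrete: an IAFA template was a plain-text record of attribute/value pairs describing an archived resource.  Here is a minimal sketch of reading one; the record and its field names are illustrative, not the draft’s normative set.

  # Minimal sketch of parsing an IAFA-style attribute/value template.
  # The record and field names are illustrative, not the draft's set.
  IAFA_RECORD = """
  Template-Type: DOCUMENT
  Title:         An Example Paper
  URI:           ftp://ftp.example.org/pub/papers/example.txt
  Description:   A short description of the archived file.
  """

  def parse_template(text):
      """Parse 'Attribute: value' lines into a dictionary."""
      record = {}
      for line in text.splitlines():
          if ":" in line:
              key, _, value = line.partition(":")
              record[key.strip()] = value.strip()
      return record

  print(parse_template(IAFA_RECORD))
  # {'Template-Type': 'DOCUMENT', 'Title': 'An Example Paper', ...}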


Uniform Resource Characteristics (URC)

URCs have their own Wikipedia page!  This sums it up pretty well:

“A URC binds a URI’s associated URN (uniform resource name, a unique name for a Web resource) to its URL (uniform resource locator, the location at which a Web resource can be found). URCs were proposed as a specification in the mid-1990s, but were never adopted.”

According to the IETF website, there was a URC Working Group, which concluded in January 1997.  I don’t recall it actually getting off the ground, but that may be a fault of my recollection.

UKOLN has a good summary of the way things were, including:

“It is important to note that there is (currently) no URC per se. The term URC has generally been used to identify:

· long term cataloguing information pertaining primarily to on-line resources

· a standardised means of associating so-called metadata, or describing information, with objects – not necessarily for cataloguing purposes

· information used as part of the process of resolving a Uniform Resource Name (URN) to a URL or URLs

· information used by applications when selecting a particular instance of a resource from a number of possibilities, not necessarily as part of a URN lookup.

URCs started off life as the responsibility of the Internet Engineering Task Force’s Uniform Resource Identifiers working group, which was chartered to investigate both URCs and Uniform Resource Names (URNs) – persistent location independent naming. In an unusual step for the IETF, the URI group was disbanded due to what was felt to be a lack of progress.

At the time of writing, an effort was under way to form a new IETF working group specifically addressing URC issues, and with a more focussed remit than the old URI group. Specifically: the new group would focus on developing a common carrier architecture which could be used to package various resource description formats, rather than attempting to standardise upon one particular preferred format.”

Looking back on it, I would say that there was an inherent tussle between the cataloguers (honest-to-goodness librarians, who knew about creating indexes of materials) and the application engineers (who wanted a way to capture application-relevant information, such as media type).  The problem was that the latter never really closed on a fixed set of uses for this “metadata”.


RESource CAPabilities — RESCAP

In the late 1990’s, there was interest in being able to determine the “capabilities” of an Internet-connected resource.  This is sort of like URCs, but in some ways the focus was more on services as a resource, whereas URCs were inherently tied to the notion of resources being “pages” or “documents”.

So, for example, if you wanted to know whether a given e-mail address would accept MIME-encapsulated content (as opposed to plaintext only), you might look up the resource’s capabilities to find out.  Today, that might seem like a bit of a contrived example; we don’t live in nearly as technology-constrained an environment as existed 15 years ago.  But the concept does generalize to things we’re interested in today: for example, does this phone number accept text messages?

My notes from c. 2000 say:

“RESCAP
. all about finding ‘capabilities’ of a ‘resource’
. currently based on ‘the hostname of the url’
. uses DNS to find the rescap server”
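
To illustrate what “uses DNS to find the rescap server” might have looked like in practice, here is a sketch that discovers a per-domain service endpoint with a DNS SRV lookup, using the dnspython library.  The “_rescap._tcp” service label is hypothetical; RESCAP never got far enough to register one.

  # Hypothetical sketch: discover a per-domain "rescap" server via a DNS
  # SRV lookup. The "_rescap._tcp" label is invented for illustration;
  # RESCAP never standardized a discovery mechanism. Needs dnspython.
  import dns.resolver

  def find_rescap_server(domain):
      """Return (host, port) for the best SRV target, or None."""
      try:
          answers = dns.resolver.resolve("_rescap._tcp." + domain, "SRV")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return None  # the domain advertises no rescap service
      # Lowest priority wins; higher weight breaks ties.
      best = min(answers, key=lambda rr: (rr.priority, -rr.weight))
      return (best.target.to_text().rstrip("."), best.port)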

An IETF WG was formed (you can see its charter here), but the work was never finished.  An IESG note on one of the WG’s documents reads:

“Open issues not resolved after 18 months. WG concluded. Marking dead.”

That describes what happened, but I think it obscures some of why it happened.  Indeed, there were two basic proposals on the table, and the two driving technology minds did not come to agreement on how to bring them together; the working group didn’t pick one.  But, fundamentally, there was a lack of energy and focus.  Without a significant drive toward adoption of the result, the WG consisted of a couple of interested idea editors and some interested bystanders (myself included).  Eventually, attention waned and we all wandered away.

Some of the documents worth reviewing are:


DNS Search

Historically, two pressures on the DNS have been the desire to control recognizable strings stored in it, and the habit of sticking all manner of things in it (because it is a universally accessible, federated, distributed system that actually works).  Along with the hysteria over “recognizable” strings, there has long been an impression that the DNS should somehow support “search”, which is not actually what it was built for.

To address that desire (searching for the labels of Internet things, and tying them back to the DNS), John Klensin spearheaded work on a layer above the DNS itself.

Last updated 2004.  See https://datatracker.ietf.org/doc/draft-klensin-dns-search/ for the Internet-Draft and its history.  The abstract reads:

This memo discusses strategies for supporting ‘DNS searching’ — finding of names in the DNS, or references that will ultimately point to DNS names, by a mechanism layered above the DNS itself that permits fuzzy matching, selection that uses attributes or facets, and use of descriptive terms. Demand for these facilities appear to be increasing with growth in the Internet (and especially the web) and with requirements to move beyond the restricted subset of ASCII names that have been the traditional contents of DNS ‘Class=IN’. This document proposes a three-level system for access to DNS names in which the upper two levels involve search, rather than lookup (exactly known target), functions. It also discusses some of the issues and challenges in completing the design of, and deploying, such a system.
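
As a toy illustration of the draft’s distinction between search and lookup, here is a sketch that fuzzy-matches a user’s term against a catalog of known names before doing an ordinary, exact DNS lookup on the winner.  The catalog is invented, and the draft’s actual design is far richer (attributes, facets, and multiple levels).

  # Toy sketch of "search above lookup": fuzzy-match the user's term
  # against a hypothetical catalog of known names, then do an exact DNS
  # lookup on the best match. The real proposal is far richer than this.
  import difflib
  import socket

  KNOWN_NAMES = ["example.com", "example.net", "exemplar.org"]  # invented

  def search_then_lookup(term):
      matches = difflib.get_close_matches(term, KNOWN_NAMES, n=1, cutoff=0.6)
      if not matches:
          return None            # the search level found nothing
      name = matches[0]          # the lookup level: exact, known target
      return (name, socket.gethostbyname(name))

  # search_then_lookup("exampel.com") matches "example.com", then resolves it.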


Contextualized (URI) Resolution — C15N

Circa 2000.

From memory, this was only ever an IETF BoF.  The data tracker lists it as a “concluded working group”: http://datatracker.ietf.org/wg/c15n/charter/

The URN WG’s Dynamic Delegation Discovery System (DDDS) describes a generalized architecture for ‘top down’ resolution of identifiers such as URIs. This works well when a (software) client wants or needs to dynamically determine the explicit authoritative delegation of resolution. However, there are times when it is desirable to incorporate other elements of contextual control information in determining, for example, the “appropriate copy” of a resource: preferentially finding a “local” copy of a journal rather than (re)purchasing one from the authoritative publisher. This is generally applicable to all URI resolution, but it is more specific than “web caching”. Software systems being built to solve this in today’s deployed systems are using specialized, non-interoperable, non-scalable approaches.
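
For a flavor of what DDDS resolution actually touches at the DNS level, here is a sketch that fetches and sorts a domain’s NAPTR records, the rewrite rules that DDDS (RFCs 3401-3404) is built on; a contextualized (c15n) resolver would consult local policy before, or instead of, following this authoritative chain.  The sketch uses the dnspython library.

  # Sketch of the DNS side of DDDS: fetch a domain's NAPTR records, which
  # carry the order/preference/flags/service/regexp rewrite rules that
  # DDDS resolution is built on. Needs dnspython.
  import dns.resolver

  def fetch_naptr_rules(domain):
      """Return the domain's NAPTR rules, sorted as DDDS requires."""
      try:
          answers = dns.resolver.resolve(domain, "NAPTR")
      except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
          return []
      rules = [
          {
              "order": rr.order,
              "preference": rr.preference,
              "flags": rr.flags.decode(),
              "service": rr.service.decode(),
              "regexp": rr.regexp.decode(),
              "replacement": rr.replacement.to_text(),
          }
          for rr in answers
      ]
      return sorted(rules, key=lambda r: (r["order"], r["preference"]))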

The BoF (December 2000) had an agenda (http://www.ietf.org/ietf-ftp/00dec/c15n-agenda.txt), and I had some slides: LLD-c15n.  None of these made it into the official proceedings from that IETF meeting, for reasons unclear this many years later.

There were other presentations from other people, but I can’t find any official minutes or drafts thereof.  I have an empty “c15n” folder in my mailbox hierarchy; I don’t know if it was lost to random IMAP errors over the years, or if it was an unfulfilled expectation.

My rough notes (which I won’t reproduce here 🙂 ) suggest the meeting concluded with mild support for progressing toward a working group, but not a whole lot of energy or volunteers to carry it forward.
