Thursday, November 15, 2018

An American Europeana

This blog is usually reserved for updates or technical explanations of archival/authority software development at the American Numismatic Society, or for experimentation with new modes of archival data publication (mainly Linked Open Data).

However, since I have long been a proponent of open, community-oriented efforts to publish cultural heritage aggregations, like Europeana and DPLA, I wanted to take a bit of time to hash out some thoughts in the form of a blog post instead of starting a series of disjointed Twitter threads [1, 2].

Most of you have likely heard that DPLA laid off six employees, and that John S. Bracken went online to speak about his vision and answer some questions. This vision seems to revolve primarily around ebook deals, with cultural heritage aggregation as a secondary function of DPLA. However, DPLA laid off the people who actually know how to do that stuff, so the aggregation aspect of the organization (which is its real and lasting value to the American people) no longer seems viable.

I believe the ultimate solution for an American version of Europeana is to tie it into the institutional function of a federally funded organization like the Library of Congress or the Smithsonian, with the backing of Congressional support for the benefit of the American people (which is years away, at least). However, I do think there are some shorter-term solutions that can be undertaken to bootstrap an aggregation system administered by one organization or a small body of institutions working collaboratively. There doesn't need to be a non-profit organization in the middle to manage this system, at least at this phase.

There are a few things to point out regarding the system's political and technical organization:

  1. The real heavy lifting is done by the service/content hubs. It takes more time/money/professional expertise to harvest and normalize the data than it does to build the UI on top of good quality data.
  2. Much of the aggregation software has been written already, but hasn't been shared broadly with the community.
  3. There seems to be wide variation in the granularity and quality of data provided to DPLA. I wrote a harvester for Orbis Cascade that provided them with RDF compliant with the DPLA Metadata Application Profile, with strings extracted from Dublin Core normalized to Getty AAT and VIAF URIs and modeled properly as SKOS Concepts or EDM Agents. But DPLA couldn't actually ingest data conforming to its own application profile.
  4. Europeana has already written a ton of tools that can be repurposed. 
  5. There are other off-the-shelf tools that scale and could be adopted for either the UI or the underlying architecture (Blacklight; various open source triplestores, like Apache Fuseki, which I have heard will scale to at least a billion triples).
  6. On a non-technical level, the name "Digital Public Library of America" itself is problematic, because the project has been overwhelmingly driven by R1 research libraries. Cultural Heritage is more than what you find in a Special Collections Library, and museums are notably absent from this picture (in contrast to Europeana).

Without knowing more of the details, I had heard that DPLA had scaling issues with their SPARQL endpoint software. I don't know if this is still an issue with that particular software, but I do believe the data were a problem. Aside from what was produced by the Orbis Cascade member institutions that opted to reconcile their strings to things (sadly, most did not choose to take this additional step), how much data ingested by DPLA is actual, honest to God Linked Open Data--with, you know, links? A giant triplestore that's nothing but literals is not very useful, and it's impossible to build UIs for the public that live up to the potential of the data and the architectural principles of LOD.
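To make that concrete, here is a minimal SPARQL sketch--not DPLA's actual data model; the predicates and the Geonames URI for Austin, Texas are my own choices for illustration--of what reconciled data makes possible compared to bare strings:


PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX wgs84:   <http://www.w3.org/2003/01/geo/wgs84_pos#>

# Sketch: when dcterms:spatial points to a Geonames URI rather than a free-text
# string, items can be joined to coordinates for maps and grouped for facets.
SELECT ?item ?title ?lat ?long WHERE {
  ?item dcterms:spatial <http://sws.geonames.org/4671654/> ;
        dcterms:title ?title .
  OPTIONAL {
    <http://sws.geonames.org/4671654/> wgs84:lat ?lat ;
                                       wgs84:long ?long .
  }
}
# With nothing but literals, the equivalent is brittle string matching, e.g.
#   ?item dcterms:spatial ?place . FILTER(CONTAINS(LCASE(STR(?place)), "austin"))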

At some point, there needs to be a minimum data quality barrier to entry into DPLA, and part of this is implementing a required layer of reconciliation of entities to authoritative URIs. I understand this does create more work for individual organizations that wish to participate, but the payoffs are immense:

  1. Reconciliation is a two-way street: it enables you to extract data from external sources to enhance your own public-facing user interface (biographies about people--that sort of thing; see the sketch after this list).
  2. Social Networks and Archival Context (SNAC) should play a vital role in the reconciliation of people, families, and corporate bodies. There should be greater emphasis in the LibTech community on interoperating with SNAC to create entities that currently exist only in local authority files, which would then enable all CPF entities to be normalized to SNAC URIs upon DPLA ingestion.
    • Furthermore, SNAC itself can interact with DPLA APIs in order to populate a more complete listing of cultural heritage objects related to that entity. Therefore, there is an immediate benefit to contributors to DPLA, as their content will simultaneously become available in SNAC to a wide range of researchers and genealogists via LOD methodologies.
    • SNAC is beginning to aggregate content about entities, so it frankly doesn't make sense for there to be two architecturally dissimilar systems that have the same function. DPLA and SNAC should be brought closer together. They need each other in order for both projects to maximize their potential. I strongly believe these projects are inseparable.
  3. With regard to the first two points, content hubs should put greater emphasis on building reconciliation services for non-technical librarians, archivists, curators, etc. to use, with intuitive user interfaces that allow for efficient clean-up. Many people (including myself) have already built systems that look up entities in Geonames, VIAF, SNAC, the Getty AAT/ULAN, Wikidata, etc. This work doesn't need to be done from scratch.
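As a sketch of the two-way street mentioned in the first point--assuming entities have been reconciled to Wikidata URIs and that the local data use Dublin Core terms--a federated SPARQL query can pull a description back from Wikidata for display alongside local records:


PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX schema:  <http://schema.org/>

# Sketch: for local items about a person reconciled to a Wikidata URI,
# fetch an English description from Wikidata's public endpoint.
# Q23 (George Washington) is used here purely as an example entity.
SELECT ?item ?title ?description WHERE {
  ?item dcterms:subject <http://www.wikidata.org/entity/Q23> ;
        dcterms:title ?title .
  SERVICE <https://query.wikidata.org/sparql> {
    <http://www.wikidata.org/entity/Q23> schema:description ?description .
    FILTER(LANG(?description) = "en")
  }
}
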
Because DPLA's data are so simple and unrefined, many of the lowest-hanging fruits of a digital collection interface have not been achieved, such as basic geographic visualization. Furthermore, facet fields are basically useless because there is no controlled vocabulary behind them.

After expanding the location facet for a basic text search of Austin, I am seeing lists that appear to be Library of Congress-formatted geographic subject headings. The most common heading is "United States - Texas - Travis County - Austin", mainly from the Austin History Center, Austin Public Library. However, there are many more variations of the place name contributed by other organizations.

The many Austins

This is really a problem that needs to be addressed further down the chain from DPLA at the hub level. If you want to build a national aggregation system that reaches its full potential, more emphasis needs to be placed on data normalization.
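A sketch of what that normalization could look like as a SPARQL update follows (the predicates, and all but the first of the variant strings, are invented for illustration):


PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX edm:     <http://www.europeana.eu/schemas/edm/>
PREFIX skos:    <http://www.w3.org/2004/02/skos/core#>

# Sketch: collapse known variant headings for Austin to a single Geonames URI,
# keeping the original strings as alternate labels for provenance.
INSERT {
  ?item dcterms:spatial <http://sws.geonames.org/4671654/> .
  <http://sws.geonames.org/4671654/> a edm:Place ;
      skos:altLabel ?heading .
}
WHERE {
  VALUES ?heading {
    "United States - Texas - Travis County - Austin"
    "Austin (Tex.)"
    "Austin, Texas"
  }
  ?item dcterms:coverage ?heading .
}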




DPLA decided to go large-scale, low-quality. I am much more of a small-scale, good-quality person, because it is easier to scale up later once you have the workflows to produce good-quality data than it is to go back and clean up a pile of poor data. And I don't think that the current form of the DPLA interface is powerful enough to demonstrate the value of entity reconciliation to the librarians, curators, etc. who are making the most substantial investment of time. You can't get buy-in from that specialist community without demonstrating a powerful user interface that capitalizes on the effort they have made. I know this from experience. Nomisma.org struggled to get buy-in until we built Online Coins of the Roman Empire, and now Nomisma is considered one of the most successful LOD projects out there.

My recommendation is to go back to the drawing board with a small number of data contributors to develop the workflows that are necessary to build a better aggregation system. This process should be completely transparent and can be replicated within the other content hubs. The burden of cleaning data shouldn't fall on the shoulders of DPLA (or whoever comes next).

There are obvious funding issues here, but contributions of staff time and expertise can be more valuable than monetary contributions in this case.

Wednesday, July 11, 2018

Creating and Updating SNAC constellations directly in xEAC

After 2-3 weeks of work, I have made some very significant updates to xEAC, one of which paves the way toward making archival materials at the American Numismatic Society (and at other potential users of our open source software frameworks) broadly accessible to other researchers. This is especially important for us, since we are a small archive with unique materials that don't reach a general historical audience, and we are now able to fulfill one of the potentialities we outlined in our Mellon-NEH Open Humanities Book project: that we would be able to make 200+ open ebooks available through Social Networks and Archival Context (SNAC).

I have introduced a new feature that interacts with the SNAC JSON API within the XForms backend of xEAC (note that you need to use an XForms 2.0 compliant processor for xEAC in order to make use of JSON data). The feature will create a new constellation if none exists, or supplement an existing constellation with data from the local EAC-CPF record. While the full range of EAC-CPF components is supported by the SNAC API, I have focused primarily on the integration of the stable URI for the entity in the local authority system (e.g., http://numismatics.org/authority/newell), existDates (if they are not already in the constellation), and the biogHist. Importantly, if xEAC users have opted to connect to a SPARQL endpoint that also contains archival or library materials, these related resources will be created in SNAC and linked to the constellation.

It should be noted that this system is still in beta and has only been tested with the SNAC development server. There is still work to do with improving the authentication handshake between xEAC and SNAC.

The process

 

Step 1: Reviewing an existing constellation for content


The first step of the process is executed when the user loads the form. If the EAC-CPF record already contains an entityId that conforms to the permanent, stable SNAC ARK URI, a "read" query will be issued to the SNAC API in order to determine what content already exists in the constellation, including what resources are already available in the constellation vs. the resources extracted from the local archival information system via SPARQL.

The SPARQL query for extracted resources from the endpoint is as follows:


PREFIX rdf:      <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms:  <http://purl.org/dc/terms/>
PREFIX foaf:  <http://xmlns.com/foaf/0.1/>
 
SELECT ?uri ?role ?title ?type ?genre ?abstract ?extent WHERE {
?uri ?role <http://numismatics.org/authority/newell> ;
     dcterms:title ?title ;
     rdf:type ?type ;
     dcterms:type ?genre .
  OPTIONAL {?uri dcterms:abstract ?abstract}
  OPTIONAL {?uri dcterms:extent ?extent}
} ORDER BY ASC(?role)


I recently made an update to our Digital Library and Archival software so that every type of resource (ebooks and notebooks in TEI, photographs in MODS, finding aids in EAD) will include a dcterms:type linking to a Getty AAT URI in the RDF serialization. This AAT URI, in conjunction with the rdf:type of the archival or library object (often a schema.org Class), helps determine the type of resource according to SNAC's own parameters (BibliographicResource, ArchivalResource, DigitalArchivalResource). Additionally, the role of the entity with respect to the resource (dcterms:creator, dcterms:subject) informs the role within the SNAC resource-constellation connection: creatorOf, referencedIn. Abstracts and extents are inserted, if available.
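A rough sketch of how such a mapping might be expressed in SPARQL follows (the pairings below are illustrative assumptions, not the actual lookup table in xEAC):


PREFIX rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX schema: <http://schema.org/>

# Illustrative only: derive a SNAC document type from the rdf:type of the
# resource; the real logic also consults the Getty AAT URI in dcterms:type.
SELECT ?uri ?snacType WHERE {
  ?uri rdf:type ?type .
  VALUES (?type ?snacType) {
    (schema:Book         "BibliographicResource")
    (schema:Photograph   "DigitalArchivalResource")
    (schema:CreativeWork "ArchivalResource")
  }
}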

Step 2: Validate authentication


SNAC uses Google user tokens for validation within its own system. There is currently no handshake between xEAC and SNAC that would allow multiple xEAC users to each have their own credentials in SNAC. At the moment, the "user" information is stored in the xEAC config file. A user has to enter their Google credentials from the SNAC API Key page into the web form and click the "Confirm User Data" button. xEAC then submits an "edit" to a random constellation to verify the validity of the authentication information. If it is successful, the credentials are stored back into the config (although the token only lasts about 24 hours) and the constellation is immediately unlocked. The user can then proceed to the create/update constellation interface.

Authenticating through xEAC

Step 3: Creating or updating a constellation


The user will now see several checkboxes to add information into the constellation. Eventually, it will be possible to remove data as well. Below is a synopsis of options:

  1. Same As URI: The URI of the entity in the local authority system will be added into the constellation. This is especially important for establishing concordances between different vocabulary systems.
  2. Exist dates can be added into the constellation if they are not already present.
  3. If there isn't already a biogHist in the constellation and there is one present in the EAC-CPF record, the biogHist will be escaped and published to SNAC. A source will also be created in the constellation in order to link the new biogHist to SNAC control metadata, tying the new biogHist directly to the local URI for the authority. This makes it possible to update or delete only the biogHist associated with your own entity without overwriting other biogHist information that might already be present within the constellation. While SNAC does support multiple biogHists, only the most recently added biogHist will appear in the HTML view of the entity. For this reason (at present), xEAC will only insert a biogHist if there isn't one in the constellation already. In step 1, if the constellation already contains a biogHist associated with the source URI for your authority, it will hash encode the constellation's biogHist and compare it to the hash-encoded biogHist currently in the EAC-CPF record. If there is a difference between these hashes, the constellation will be updated with the current version of the biogHist in the EAC-CPF record.
  4. A list of resource relations derived from SPARQL will be displayed. All will be checked by default in order to first create the resource with the "insert_resource" API command, and second to connect the constellation to that newly created resource with "update_constellation". Each resource entry will display some basic metadata and whether or not it already exists in the constellation, and what action will be taken. It is possible to uncheck the box for a resource that exists in the constellation to remove it from the constellation.
The interface for creating and updating SNAC constellations

Step 4: Saving the ARK back to the EAC-CPF record, if applicable

After the successful issuing of "publish_constellation" to the SNAC API, an entityId with the new SNAC ARK URI will be inserted into the EAC-CPF record, if the constellation is newly created (updates presume the ARK already exists in the EAC record). Saving the EAC record will trigger a re-indexing of the document to Solr and a SPARQL/Update that will insert the ARK as a skos:exactMatch into the concept object for the entity.

PREFIX rdf:      <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX skos:  <http://www.w3.org/2004/02/skos/core#>
PREFIX foaf:  <http://xmlns.com/foaf/0.1/>

INSERT { ?concept skos:exactMatch <ARK>  }
WHERE { ?concept foaf:focus <URI> }


The data above are those I consider most vital to SNAC integration--essential historical or biographical context and related archival or library resources that can be made more broadly accessible. I am not sure how many other authority systems are able to interact with SNAC with this degree of granularity yet, but I am hopeful that these features will propel more unique research materials into the public sphere.

I will briefly touch on these new features when I present our comprehensive LOD-oriented numismatic research platform at SAA next month (I will upload the slideshow soon).

Thursday, June 7, 2018

SNAC Lookups Updated in xEAC and EADitor

Since Social Networks and Archival Context (SNAC) migrated to a new platform, it has published a well-documented JSON-based REST API. Although EADitor and xEAC have had lookup mechanisms to link personal, corporate, and family entities from SNAC to EAD and EAC-CPF records since 2014 (see here), the lookup mechanisms in the XForms-based backends to these platforms interacted with an unpublicized web service that provided an XML response for simple queries.

With the advent of these new SNAC APIs and JSON processing within the XForms 2.0 spec (present in Orbeon since 2016), I have finally gotten around to overhauling the lookups in both EADitor and xEAC. Following documentation for the Search API, the XForms Submission process now submits (via PUT) an instance that conforms to the required JSON model. The @serialization attribute is set to "application/json" in the submission, and the JSON response from SNAC is serialized back into XML following the XForms 2.0 specification. Side note: the JSON->XML serialization differs between XForms 2.0 and XSLT/XPath 3.0, and so there should be more communication between these groups to standardize JSON->XML across all XML technologies.

The following XML instance is transformed into API-compliant JSON upon submission.


<xforms:instance id="query-json" exclude-result-prefixed="#all">
 <json type="object" xmlns="">
  <command>search</command>
  <term/>
  <entity_type/>
  <start>0</start>
  <count>10</count>
 </json>
</xforms:instance>


The submission is as follows:


<xforms:submission id="query-snac" ref="instance('query-json')" 
    action="http://api.snaccooperative.org" method="put" replace="instance" 
    instance="snac-response" serialization="application/json">
 <xforms:header>
  <xforms:name>User-Agent</xforms:name>
  <xforms:value>XForms/xEAC</xforms:value>
 </xforms:header>
 <xforms:message ev:event="xforms-submit-error" level="modal">Error transforming
into JSON and/or interacting with the SNAC API.</xforms:message>
</xforms:submission> 

The SNAC URIs are placed into the entityIds within the cpfDescription/identity in EAC-CPF or as the @authfilenumber for a persname, corpname, or famname in EAD.

The next task is to build APIs into xEAC for pushing data (biographical data, skos:exactMatch URIs, and related archival resources) directly into SNAC. By tomorrow, all (or nearly all) of the authorities in the ANS Archives will be linked to SNAC URIs.

Friday, May 18, 2018

Three new Edward Newell research notebooks added to Archer

Three research notebooks of Edward T. Newell have been added to Archer, the archives of the American Numismatic Society. These had been scanned as part of the larger Newell digitization project, which was migrated into IIIF for display in Mirador (with annotations) in late 2017.

These three notebooks had been scanned, but TEI files had not been generated due to a minor oversight. Generating the TEI files was fairly straightforward--a small PHP script extracts MODS from our Koha-based library catalog, and these MODS files are then run through an XSLT 3.0 stylesheet to generate TEI with a facsimile listing of all image files associated with the notebook, linking to the IIIF service URI. XSLT 3.0 comes into play to parse the info.json for each image in order to insert the height and width of the source image directly into the TEI; these dimensions are used for the TEI->IIIF Manifest JSON transformation (the canvas and image portions of the manifest), which is now inherent to TEI files published in the EADitor platform.

The notebooks all share the same general theme: they are Newell's notes on the coins in the Berlin Münzkabinett, which we aim to annotate in Mirador over the course of the NEH-funded Hellenistic Royal Coinages project.

A fourth notebook was found to have not yet been scanned, and so it will be published online soon.

Friday, April 6, 2018

117 ANS ebooks published to Digital Library

I have finally put the finishing touches on 117 ANS out-of-print publications that have been digitized into TEI (and made available as EPUB and PDF) as part of the NEH and Mellon-funded Open Humanities Book project. This is the "end" (more details on what an end entails later) of the project, in which about 200 American Numismatic Society monographs were digitized and made freely and openly available to the public.

All of these, plus a selection of numismatic electronic theses and dissertations as well as two other ebooks not funded by the NEH-Mellon project, are available in the ANS Digital Library. The details of this project have been outlined in previous blog posts, but to summarize, the TEI files have been annotated with thousands of links to people, places, and other types of entities defined in a variety of information systems--particularly Nomisma.org (for ancient entities), Wikidata, and Geonames (for modern ones).

Additionally:
  • Books have been linked to 153 coins (so far) in the ANS collection identified by accession number. Earlier books cite Newell's personal collection, bequeathed to the ANS and accessioned in 1944. A specialist will have to identify these.
  • 173 total references to coin hoards defined in the Inventory of Greek Coin Hoards, plus several from Kris Lockyear's Coin Hoards of the Roman Republic.
  • 166 references to Roman imperial coin types defined in the NEH-funded Online Coins of the Roman Empire.
  • A small handful of Islamic glass weights in The Metropolitan Museum of Art.
  • One book by Wolfgang Fischer-Bossert, Athenian Decadrachm, has a DOI, connected to his ORCID.
Since each of these annotations is serialized into RDF and published in the ANS archival SPARQL endpoint, various other information systems (MANTIS, IGCH, OCRE, etc.) query the endpoint for related archival or library materials.
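As a sketch of the kind of query such a system might issue (the dcterms:references predicate and the OCRE coin type URI are assumptions here, modeled loosely on the resource query in the SNAC post above):


PREFIX dcterms: <http://purl.org/dc/terms/>

# Sketch: find library or archival materials in the ANS endpoint that
# reference a given coin type URI (an OCRE URI used as an example).
SELECT ?resource ?title ?abstract WHERE {
  ?resource dcterms:references <http://numismatics.org/ocre/id/ric.1(2).aug.1A> ;
            dcterms:title ?title .
  OPTIONAL { ?resource dcterms:abstract ?abstract }
}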

For example, the clipped shilling, 1942.50.1, was minted in Boston, but the note says it was found among a mass of other clippings in London. The findspot is not geographically encoded in our database (and therefore doesn't appear on the map), but this coin is cited in "Part III Finds of American Coins Outside the Americas" in Numismatic finds of the Americas.


Using OpenRefine for Entity Reconciliation

Unlike in the first phase of the project, the people and places tagged in these books were extracted into two enormous lists (20,000 total lines) that were reconciled against the Wikidata, VIAF, and Nomisma OpenRefine reconciliation APIs. Nomisma was particularly useful because of the high degree of accuracy in matching people and places. Wikidata and VIAF were useful for modern people and places, but these were more challenging in that there might be dozens of American towns with the same name or numerous examples of Charles IV or other regents. I had to evaluate each name within the context of the passage in which it occurred, a tedious process that took nearly two months to complete. The end result, however, has significantly broader and more accurate coverage than the 85 books in the first iteration of the grant. After painstakingly matching entities to their appropriate identifiers, it only took about a day to write the scripts to incorporate the URIs back into the TEI files, and a few more days of manual or regex-based linking for IGCH, ANS coins, etc.

As a result of this effort, and through the concordance between Nomisma identifiers and Pleiades places, there are a total of 3,602 distinct book sections containing 4,304 Pleiades URIs, which can now be made available to scholars through the Pelagios project.
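Figures like these can be derived from the endpoint with a query along the following lines (a sketch only; the predicate linking a book section to a Pleiades place is an assumption):


PREFIX dcterms: <http://purl.org/dc/terms/>

# Sketch: count distinct book sections that reference a Pleiades place, and
# the distinct Pleiades places they reference.
SELECT (COUNT(DISTINCT ?section) AS ?sections)
       (COUNT(DISTINCT ?place) AS ?places)
WHERE {
  ?section dcterms:coverage ?place .
  FILTER(STRSTARTS(STR(?place), "http://pleiades.stoa.org/places/"))
}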


What's Next for ANS Publications?

So while the project concludes in its official capacity, there is room for improvement and further integration. Now that the corpus has been digitized, it will be possible to export all of the references into OpenRefine in an attempt to restructure the TEI and link to URIs defined by Worldcat. We will want to link to other DOIs if possible, and make the references for each book available in Crossref. Some of this relies on the expansion of Crossref itself to support entity identifiers beyond ORCID (e.g., ISNI) and citations for Worldcat. Presently, DOI citation mechanisms allow us to build a network graph of citations for works produced in the last few years, but the extension of this graph to include older journals and monographs will allow us to chart the evolution of scientific and humanistic thought over the course of centuries.

As we know, there is never an "end" to Digital Humanities projects. Only constant improvement. And I believe that the work we have done will open the door to a sort of born-digital approach to future ANS publications.