Thursday, June 7, 2018

SNAC Lookups Updated in xEAC and EADitor

Since the Social Networks and Archival Context (SNAC) has migrated to a new platform, it has published a well-documented, JSON-based REST API. Although EADitor and xEAC have had lookup mechanisms to link personal, corporate, and family entities from SNAC into EAD and EAC-CPF records since 2014 (see here), the lookups in the XForms-based backends of these platforms interacted with an unpublicized web service that returned an XML response for simple queries.

With the advent of these new SNAC APIs and of JSON processing in the XForms 2.0 spec (present in Orbeon since 2016), I have finally gotten around to overhauling the lookups in both EADitor and xEAC. Following the documentation for the Search API, the XForms submission process now submits (via PUT) an instance that conforms to the required JSON model. The @serialization attribute is set to "application/json" on the submission, and the JSON response from SNAC is serialized back into XML following the XForms 2.0 specification. Side note: the JSON->XML serialization differs between XForms 2.0 and XSLT/XPath 3.0, so there should be more communication between these groups to standardize JSON->XML conversion across all XML technologies.

The following XML instance is transformed into API-compliant JSON upon submission.

<xforms:instance id="query-json" exclude-result-prefixes="#all">
 <json type="object" xmlns="">
  ...
 </json>
</xforms:instance>
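For reference, an instance like this serializes (per the XForms 2.0 JSON serialization rules) into a request body along the following lines. The field names here ("command", "term", "entity_type", etc.) reflect my reading of the SNAC Search API documentation and should be verified against it; a minimal sketch in Python:

```python
import json

# Hypothetical SNAC search query, mirroring the "query-json" XForms instance.
# Field names are assumptions based on the published Search API docs.
query = {
    "command": "search",
    "term": "Newell, Edward T.",
    "entity_type": "person",
    "start": 0,
    "count": 10,
}

# This JSON string is what the XForms submission PUTs to the API.
body = json.dumps(query)
print(body)
```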

The submission is as follows:

<xforms:submission id="query-snac" ref="instance('query-json')" 
    action="" method="put" replace="instance" 
    instance="snac-response" serialization="application/json">
 <xforms:message ev:event="xforms-submit-error" level="modal">Error transforming 
into JSON and/or interacting with the SNAC</xforms:message>
</xforms:submission>

The SNAC URIs are placed into the entityIds within the cpfDescription/identity in EAC-CPF or as the @authfilenumber for a persname, corpname, or famname in EAD.
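To illustrate that last step, here is a minimal sketch of pulling ARK URIs out of a search response. The response shape below is a hypothetical, heavily trimmed reduction of what the API actually returns (only the "results"/"ark" fields are assumed); the real payload carries many more fields per result.

```python
import json

# Hypothetical, trimmed-down SNAC search response for illustration only.
raw = """{
  "results": [
    {"ark": "http://n2t.net/ark:/99166/w6vx0fmd",
     "nameEntries": [{"original": "Newell, Edward Theodore, 1886-1941"}]}
  ]
}"""

response = json.loads(raw)

# These URIs would become entityId values within cpfDescription/identity in
# EAC-CPF, or @authfilenumber values on persname/corpname/famname in EAD.
arks = [result["ark"] for result in response.get("results", [])]
print(arks)
```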

The next task is to build APIs into xEAC for pushing data (biographical data, skos:exactMatch URIs, and related archival resources) directly into SNAC. By tomorrow, all (or nearly all) of the authorities in the ANS Archives will be linked to SNAC URIs.

Friday, May 18, 2018

Three new Edward Newell research notebooks added to Archer

Three research notebooks of Edward T. Newell have been added to Archer, the archives of the American Numismatic Society. These had been scanned as part of the larger Newell digitization project, which was migrated into IIIF for display in Mirador (with annotations) in late 2017.

These three notebooks had been scanned, but TEI files had not been generated due to a minor oversight. Generating the TEI files was fairly straightforward--there's a small PHP script that will extract MODS from our Koha-based library catalog. These MODS files are subsequently run through an XSLT 3.0 stylesheet to generate TEI with a facsimile listing of all image files associated with the notebook, linking to the IIIF service URI. XSLT 3.0 comes into play in parsing the info.json for each image in order to insert the height and width of the source image directly into the TEI. These dimensions are used in the TEI->IIIF Manifest JSON transformation (the canvas and image portions of the manifest), which is now inherent to TEI files published in the EADitor platform.
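The height/width step can be sketched outside of XSLT as well. Below is a minimal Python version using a hard-coded stand-in for an image's info.json (the @id and dimensions are invented for illustration; a live implementation would fetch the document from the IIIF service URI):

```python
import json

# Stand-in for a IIIF Image API info.json document; in production this would
# be fetched from "<service URI>/info.json". Values here are illustrative.
raw_info = """{
  "@context": "http://iiif.io/api/image/2/context.json",
  "@id": "https://images.example.org/iiif/notebook-page-1",
  "width": 4000,
  "height": 6000
}"""

info = json.loads(raw_info)

# The source image's dimensions feed both the TEI facsimile and, downstream,
# the IIIF Presentation manifest: each canvas mirrors its image's size.
canvas = {
    "@type": "sc:Canvas",
    "width": info["width"],
    "height": info["height"],
}
print(canvas)
```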

The notebooks all share the same general theme: they are Newell's notes on the coins in the Berlin Münzkabinett, which we aim to annotate in Mirador over the course of the NEH-funded Hellenistic Royal Coinages project.

A fourth notebook was found to have not yet been scanned, and so it will be published online soon.

Friday, April 6, 2018

117 ANS ebooks published to Digital Library

I have finally put the finishing touches on 117 ANS out-of-print publications that have been digitized into TEI (and made available as EPUB and PDF) as part of the NEH and Mellon-funded Open Humanities Book project. This is the "end" (more details on what an end entails later) of the project, in which about 200 American Numismatic Society monographs were digitized and made freely and openly available to the public.

All of these, plus a selection of numismatic electronic theses and dissertations, as well as two other ebooks not funded by the NEH-Mellon project, are available in the ANS Digital Library. The details of this project have been outlined in previous blog posts, but to summarize: the TEI files have been annotated with thousands of links to people, places, and other types of entities defined in a variety of information systems--particularly Wikidata (for ancient entities) and Geonames (for modern ones).

  • Books have been linked to 153 coins (so far) in the ANS collection identified by accession number. Earlier books cite Newell's personal collection, bequeathed to the ANS and accessioned in 1944. A specialist will have to identify these.
  • 173 total references to coin hoards defined in the Inventory of Greek Coin Hoards, plus several from Kris Lockyear's Coin Hoards of the Roman Republic.
  • 166 references to Roman imperial coin types defined in the NEH-funded Online Coins of the Roman Empire.
  • A small handful of Islamic glass weights in The Metropolitan Museum of Art.
  • One book by Wolfgang Fischer-Bossert, Athenian Decadrachm, has a DOI, which is connected to his ORCID.
Since each of these annotations is serialized into RDF and published in the ANS archival SPARQL endpoint, the various other information systems (MANTIS, IGCH, OCRE, etc.) can query the endpoint for related archival or library materials.

For example, the clipped shilling, 1942.50.1, was minted in Boston, but the note says it was found among a mass of other clippings in London. The findspot is not geographically encoded in our database (and therefore doesn't appear on the map), but this coin is cited in "Part III Finds of American Coins Outside the Americas" in Numismatic finds of the Americas.
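A lookup of that sort amounts to a SPARQL query along these lines. To be clear, the dcterms:relation predicate and the coin URI pattern below are my assumptions for illustration, not the endpoint's documented schema:

```python
# Sketch of the kind of SPARQL query another system (e.g., MANTIS) might send
# to the archival endpoint to find materials related to a given coin.
# The predicate and URI pattern are illustrative assumptions.
coin_uri = "http://numismatics.org/collection/1942.50.1"

query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?resource ?title WHERE {
  ?resource dcterms:relation <%s> ;
            dcterms:title ?title .
}
""" % coin_uri

print(query)
```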

Using OpenRefine for Entity Reconciliation

Unlike the first phase of the project, the people and places tagged in these books were extracted into two enormous lists (20,000 total lines) that were reconciled against the Wikidata, VIAF, and Nomisma OpenRefine reconciliation APIs. Nomisma was particularly useful because of its high degree of accuracy in matching people and places. Wikidata and VIAF were useful for modern people and places, but these were more challenging in that there might be dozens of American towns with the same name, or numerous regents named Charles IV. I had to evaluate each name within the context of the passage in which it occurred, a tedious process that took nearly two months to complete. The end result, however, has significantly broader and more accurate coverage than the 85 books in the first iteration of the grant. After painstakingly matching entities to their appropriate identifiers, it took only about a day to write the scripts to incorporate the URIs back into the TEI files, and a few more days of manual or regex-based linking for IGCH, ANS coins, etc.
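Under the hood, an OpenRefine reconciliation call is just a batch of keyed JSON queries in the shape defined by the Reconciliation Service API. A minimal sketch (the query strings and "type" values below are illustrative; each service — Wikidata, VIAF, Nomisma — defines its own type vocabulary):

```python
import json

# A batch of reconciliation queries keyed q0, q1, ... as OpenRefine sends
# them. The names and "type" values here are illustrative assumptions.
queries = {
    "q0": {"query": "Charles IV", "type": "person", "limit": 5},
    "q1": {"query": "Springfield", "type": "place", "limit": 5},
}

# OpenRefine POSTs this JSON as the "queries" form parameter; the service
# answers with candidate matches and match scores for each key.
payload = json.dumps(queries)
print(payload)
```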

As a result of this effort, and through the concordance between Nomisma identifiers and Pleiades places, there are a total of 3,602 distinct book sections containing 4,304 Pleiades URIs, which can now be made available to scholars through the Pelagios project.

What's Next for ANS Publications?

While the project has concluded in its official capacity, there is room for improvement and further integration. Now that the corpus has been digitized, it will be possible to export all of the references into OpenRefine in an attempt to restructure the TEI and link to URIs defined by Worldcat. We will want to link to other DOIs where possible and make the references for each book available in Crossref. Some of this relies on the expansion of Crossref itself to support entity identifiers beyond ORCID (e.g., ISNI) and citations for Worldcat. Presently, DOI citation mechanisms allow us to build a network graph of citations for works produced in the last few years, but extending this graph to include older journals and monographs will allow us to chart the evolution of scientific and humanistic thought over the course of centuries.

As we know, there is never an "end" to Digital Humanities projects. Only constant improvement. And I believe that the work we have done will open the door to a sort of born-digital approach to future ANS publications.