Wednesday, July 11, 2018

Creating and Updating SNAC constellations directly in xEAC

After 2-3 weeks of work, I have made some very significant updates to xEAC, one of which paves the way to making archival materials at the American Numismatic Society (and at other potential users of our open source software frameworks) broadly accessible to other researchers. This is especially important for us: we are a small archive with unique materials that don't reach a general historical audience, and we are now able to fulfill one of the possibilities we outlined in our Mellon-NEH Open Humanities Book project--that we would be able to make 200+ open ebooks available through Social Networks and Archival Context (SNAC).

I have introduced a new feature that interacts with the SNAC JSON API within the XForms backend of xEAC (note that xEAC requires an XForms 2.0 compliant processor in order to make use of JSON data). The feature will create a new constellation if none exists, or supplement an existing constellation with data from the local EAC-CPF record. While the SNAC API supports the full range of EAC-CPF components, I have focused primarily on integrating the stable URI for the entity in the local authority system, existDates (if they are not already in the constellation), and the biogHist. Importantly, if xEAC users have opted to connect to a SPARQL endpoint that also contains archival or library materials, these related resources will be created in SNAC and linked to the constellation.

It should be noted that this system is still in beta and has only been tested with the SNAC development server. There is still work to do with improving the authentication handshake between xEAC and SNAC.

The process


Step 1: Reviewing an existing constellation for content

The first step of the process is executed when the user loads the form. If the EAC-CPF record already contains an entityId that conforms to the permanent, stable SNAC ARK URI, a "read" query will be issued to the SNAC API in order to determine what content already exists in the constellation, including what resources are already available in the constellation vs. the resources extracted from the local archival information system via SPARQL.
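As a sketch of what this "read" query might send, here is a minimal Python helper that builds the request body. The field names ("command", "arkid") are assumptions inferred from the command names in this post, not the documented SNAC schema, and the ARK value is a placeholder:

```python
import json

def build_read_query(ark_uri):
    # Hypothetical SNAC "read" request body; the key names are assumptions,
    # not the documented SNAC JSON schema.
    return {"command": "read", "arkid": ark_uri}

payload = build_read_query("")
print(json.dumps(payload))
```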

The SPARQL query for extracted resources from the endpoint is as follows:

PREFIX rdf:      <>
PREFIX dcterms:  <>
PREFIX foaf:     <>

SELECT ?uri ?role ?title ?type ?genre ?abstract ?extent WHERE {
  # the object of ?role is the URI of the entity in the local authority system
  ?uri ?role <> ;
       dcterms:title ?title ;
       rdf:type ?type ;
       dcterms:type ?genre .
  OPTIONAL {?uri dcterms:abstract ?abstract}
  OPTIONAL {?uri dcterms:extent ?extent}
} ORDER BY ASC(?role)

I recently made an update to our Digital Library and Archival software so that every different type of resource (ebooks and notebooks in TEI, photographs in MODS, finding aids in EAD) will include a dcterms:type linking to a Getty AAT URI in the RDF serialization. This AAT URI, in conjunction with the rdf:type of the archival or library object (often a Class), will help determine the type of resource according to SNAC's own parameters (BibliographicResource, ArchivalResource, DigitalArchivalResource). Additionally, the role of the entity with respect to the resource (dcterms:creator, dcterms:subject) informs the role within the SNAC resource-constellation connection: creatorOf, referencedIn. Abstracts and extents are inserted, if available.
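The type and role mapping described above might be sketched as follows. The keys and the fallback logic are illustrative assumptions, not xEAC's actual lookup tables:

```python
# Illustrative mapping from dcterms roles to SNAC resource-relation roles.
ROLE_MAP = {
    "dcterms:creator": "creatorOf",
    "dcterms:subject": "referencedIn",
}

def snac_resource_type(rdf_type):
    # Crude stand-in for the rdf:type + AAT genre lookup; the string tests
    # are assumptions for illustration only.
    if "Book" in rdf_type:
        return "BibliographicResource"
    if "Photograph" in rdf_type or "Image" in rdf_type:
        return "DigitalArchivalResource"
    return "ArchivalResource"

def snac_role(dcterms_role):
    # Default to referencedIn when the role is not a known creator relation.
    return ROLE_MAP.get(dcterms_role, "referencedIn")
```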

Step 2: Validate authentication

SNAC uses Google user tokens for validation within its own system. There is currently no handshake between xEAC and SNAC that would allow multiple xEAC users to each have their own SNAC credentials; at the moment, the "user" information is stored in the xEAC config file. A user must enter their Google credentials from the SNAC API Key page into the web form and click the "Confirm User Data" button. xEAC will then submit an "edit" for a random constellation to verify that the authentication information is valid. If it is, the credentials are stored back into the config (although the token only lasts about 24 hours) and the constellation is immediately unlocked. The user can then proceed to the create/update constellation interface.

Authenticating through xEAC
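The credential check in step 2 could be sketched like this; the field names and the success test are assumptions about the SNAC JSON API, not its documented schema:

```python
def build_auth_check(api_credentials, constellation_id):
    # Hypothetical "edit" request used only to test that the stored
    # credentials are accepted; key names are assumptions.
    return {"command": "edit",
            "constellationid": constellation_id,
            "apikey": api_credentials}

def credentials_valid(response):
    # Assumed success check on the JSON response; if the test edit
    # succeeded, the constellation must be immediately unlocked again.
    return response.get("result") == "success"
```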

Step 3: Creating or updating a constellation

The user will now see several checkboxes to add information into the constellation. Eventually, it will be possible to remove data as well. Below is a synopsis of options:

  1. Same As URI: The URI of the entity in the local authority system will be added into the constellation. This is especially important for establishing concordances between different vocabulary systems.
  2. Exist dates can be added into the constellation if they are not already present.
  3. If there isn't already a biogHist in the constellation and one is present in the EAC-CPF record, the biogHist will be escaped and published to SNAC. A source will also be created in the constellation in order to link the new biogHist to SNAC control metadata, tying the new biogHist directly to the local URI for the authority. This makes it possible to update or delete only the biogHist associated with your own entity without overwriting other biogHist information that might already be present within the constellation. While SNAC does support multiple biogHists, only the most recently added biogHist appears in the HTML view of the entity; for this reason, xEAC will (at present) only insert a biogHist if there isn't one in the constellation already. In step 1, if the constellation already contains a biogHist associated with the source URI for your authority, xEAC will hash-encode the constellation's biogHist and compare it to the hash of the biogHist currently in the EAC-CPF record. If the hashes differ, the constellation will be updated with the current version of the biogHist from the EAC-CPF record.
  4. A list of resource relations derived from SPARQL will be displayed. All will be checked by default in order to first create the resource with the "insert_resource" API command, and second to connect the constellation to that newly created resource with "update_constellation". Each resource entry will display some basic metadata and whether or not it already exists in the constellation, and what action will be taken. It is possible to uncheck the box for a resource that exists in the constellation to remove it from the constellation.
The interface for creating and updating SNAC constellations
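The biogHist hash comparison in option 3 above can be sketched as follows. The post does not say which hash algorithm xEAC uses, so SHA-1 here is an assumption:

```python
import hashlib

def digest(bioghist_xml):
    # Hash-encode a biogHist string; SHA-1 is an assumption, as the post
    # does not name the algorithm xEAC actually uses.
    return hashlib.sha1(bioghist_xml.encode("utf-8")).hexdigest()

def bioghist_changed(local, remote):
    # True when the EAC-CPF biogHist differs from the constellation's copy,
    # which triggers an update of the constellation.
    return digest(local) != digest(remote)
```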

Step 4: Saving the ARK back to the EAC-CPF record, if applicable

After the successful issuing of "publish_constellation" to the SNAC API, an entityId with the new SNAC ARK URI will be inserted into the EAC-CPF record, if the constellation is newly created (updates presume the ARK already exists in the EAC record). Saving the EAC record will trigger a re-indexing of the document to Solr and a SPARQL/Update that will insert the ARK as a skos:exactMatch into the concept object for the entity.

PREFIX skos:  <>
PREFIX foaf:  <>

INSERT { ?concept skos:exactMatch <ARK> }
WHERE { ?concept foaf:focus <URI> }
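A minimal helper to fill that SPARQL/Update template with real values might look like this, where <ARK> is the new SNAC ARK and <URI> is the foaf:focus URI of the local entity:

```python
def build_exact_match_update(ark, entity_uri):
    # Fill the SPARQL/Update template above with the SNAC ARK and the
    # foaf:focus URI of the local entity.
    return (
        "PREFIX skos: <>\n"
        "PREFIX foaf: <>\n\n"
        "INSERT { ?concept skos:exactMatch <" + ark + "> }\n"
        "WHERE { ?concept foaf:focus <" + entity_uri + "> }"
    )
```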

The data above are those I consider most vital to SNAC integration--essential historical or biographical context and related archival or library resources that can be made more broadly accessible. I am not sure how many other authority systems can yet interact with SNAC with this degree of granularity, but I am hopeful that these features will propel more unique research materials into the public sphere.

I will briefly touch on these new features when I present our comprehensive LOD-oriented numismatic research platform at SAA next month (I will upload the slideshow soon).

Thursday, June 7, 2018

SNAC Lookups Updated in xEAC and EADitor

Since Social Networks and Archival Context (SNAC) migrated to a new platform, it has published a well-documented, JSON-based REST API. Although EADitor and xEAC have had lookup mechanisms to link personal, corporate, and family entities from SNAC to EAD and EAC-CPF records since 2014 (see here), the lookup mechanisms in the XForms-based backends of these platforms interacted with an unpublicized web service that provided an XML response for simple queries.

With the advent of these new SNAC APIs and JSON processing within the XForms 2.0 spec (present in Orbeon since 2016), I have finally gotten around to overhauling the lookups in both EADitor and xEAC. Following documentation for the Search API, the XForms Submission process now submits (via PUT) an instance that conforms to the required JSON model. The @serialization attribute is set to "application/json" in the submission, and the JSON response from SNAC is serialized back into XML following the XForms 2.0 specification. Side note: the JSON->XML serialization differs between XForms 2.0 and XSLT/XPath 3.0, and so there should be more communication between these groups to standardize JSON->XML across all XML technologies.

The following XML instance is transformed into API-compliant JSON upon submission.

<xforms:instance id="query-json" exclude-result-prefixes="#all">
 <json type="object" xmlns="">
  ...
 </json>
</xforms:instance>

The submission is as follows:

<xforms:submission id="query-snac" ref="instance('query-json')" 
    action="" method="put" replace="instance" 
    instance="snac-response" serialization="application/json">
 <xforms:message ev:event="xforms-submit-error" level="modal">Error transforming 
into JSON and/or interacting with the SNAC API</xforms:message>
</xforms:submission>

The SNAC URIs are placed into the entityIds within the cpfDescription/identity in EAC-CPF or as the @authfilenumber for a persname, corpname, or famname in EAD.
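A rough Python equivalent of the search request that the XForms instance above serializes might look like this; the field names are assumptions drawn from the style of the Search API documentation, not verified against it:

```python
def build_search_query(term, start=0, count=10):
    # Hypothetical body for the SNAC Search API matching the JSON model
    # the XForms instance is serialized into; key names are assumptions.
    return {"command": "search", "term": term, "start": start, "count": count}
```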

The next task is to build APIs into xEAC for pushing data (biographical data, skos:exactMatch URIs, and related archival resources) directly into SNAC. By tomorrow, all (or nearly all) of the authorities in the ANS Archives will be linked to SNAC URIs.

Friday, May 18, 2018

Three new Edward Newell research notebooks added to Archer

Three research notebooks of Edward T. Newell have been added to Archer, the archives of the American Numismatic Society. These had been scanned as part of the larger Newell digitization project, which was migrated into IIIF for display in Mirador (with annotations) in late 2017.

These three notebooks had been scanned, but TEI files had not been generated due to some minor oversight. Generating the TEI files was fairly straightforward--there's a small PHP script that will extract MODS from our Koha-based library catalog. These MODS files are subsequently run through an XSLT 3.0 stylesheet to generate TEI with a facsimile listing of all image files associated with the notebook, linking to the IIIF service URI. XSLT 3.0 comes into play to parse the info.json for each image in order to insert the height and width of the source image directly into the TEI, which is used for the TEI->IIIF Manifest JSON transformation (the canvas and image portions of the manifest), which is now inherent to TEI files published in the EADitor platform.
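The info.json parsing step can be illustrated in Python; "height" and "width" are standard top-level keys in a IIIF Image API info.json response, mirroring what the XSLT 3.0 stylesheet extracts for each facsimile image:

```python
import json

def canvas_dimensions(info_json_text):
    # Pull the pixel height and width from a IIIF Image API info.json
    # document, as the XSLT 3.0 step described above does.
    info = json.loads(info_json_text)
    return info["height"], info["width"]

sample = '{"width": 4000, "height": 3000}'
print(canvas_dimensions(sample))  # → (3000, 4000)
```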

The notebooks all share the same general theme: they are Newell's notes on the coins in the Berlin Münzkabinett, which we aim to annotate in Mirador over the course of the NEH-funded Hellenistic Royal Coinages project.

A fourth notebook was found to have not yet been scanned, and so it will be published online soon.

Friday, April 6, 2018

117 ANS ebooks published to Digital Library

I have finally put the finishing touches on 117 ANS out-of-print publications that have been digitized into TEI (and made available as EPUB and PDF) as part of the NEH and Mellon-funded Open Humanities Book project. This is the "end" (more details on what an end entails later) of the project, in which about 200 American Numismatic Society monographs were digitized and made freely and openly available to the public.

All of these, plus a selection of numismatic electronic theses and dissertations as well as two other ebooks not funded by the NEH-Mellon project, are available in the ANS Digital Library. The details of this project have been outlined in previous blog posts, but to summarize: the TEI files have been annotated with thousands of links to people, places, and other types of entities defined in a variety of information systems--particularly Nomisma (for ancient entities) and Wikidata and Geonames (for modern ones).

  • Books have been linked to 153 coins (so far) in the ANS collection identified by accession number. Earlier books cite Newell's personal collection, bequeathed to the ANS and accessioned in 1944. A specialist will have to identify these.
  • 173 total references to coin hoards defined in the Inventory of Greek Coin Hoards, plus several from Kris Lockyear's Coin Hoards of the Roman Republic.
  • 166 references to Roman imperial coin types defined in the NEH-funded Online Coins of the Roman Empire.
  • A small handful of Islamic glass weights in The Metropolitan Museum of Art.
  • One book by Wolfgang Fischer-Bossert, Athenian Decadrachm, has a DOI, connected to his ORCID.
Since each of these annotations is serialized into RDF and published in the ANS archival SPARQL endpoint, the various other information systems (MANTIS, IGCH, OCRE, etc.) can query the endpoint for related archival or library materials.

For example, the clipped shilling, 1942.50.1, was minted in Boston, but the note says it was found among a mass of other clippings in London. The findspot is not geographically encoded in our database (and therefore doesn't appear on the map), but this coin is cited in "Part III Finds of American Coins Outside the Americas" in Numismatic finds of the Americas.

Using OpenRefine for Entity Reconciliation

Unlike the first phase of the project, the people and places tagged in these books were extracted into two enormous lists (20,000 total lines) that were reconciled against the Wikidata, VIAF, and Nomisma OpenRefine reconciliation APIs. Nomisma was particularly useful because of its high degree of accuracy in matching people and places. Wikidata and VIAF were useful for modern people and places, but these were more challenging in that there might be dozens of American towns with the same name, or numerous examples of Charles IV and other regents. I had to evaluate each name within the context of the passage in which it occurred, a tedious process that took nearly two months to complete. The end result, however, has significantly broader and more accurate coverage than the 85 books in the first iteration of the grant. After painstakingly matching entities to their appropriate identifiers, it only took about a day to write the scripts to incorporate the URIs back into the TEI files, and a few more days of manual or regex-based linking for IGCH, ANS coins, etc.
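A minimal sketch of the query batch that reconciliation services of this kind accept (q0, q1, ... keys), under the assumption of the standard OpenRefine Reconciliation API query format:

```python
def recon_batch(names, type_uri=None):
    # Build a reconciliation-API query batch; the optional type narrows
    # candidate matches. A minimal sketch of the batch format, not the
    # full reconciliation spec.
    batch = {}
    for i, name in enumerate(names):
        query = {"query": name}
        if type_uri:
            query["type"] = type_uri
        batch["q%d" % i] = query
    return batch
```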

As a result of this effort, and through the concordance between Nomisma identifiers and Pleiades places, there are a total of 3,602 distinct book sections containing 4,304 Pleiades URIs, which can now be made available to scholars through the Pelagios project.

What's Next for ANS Publications?

So while the project concludes in its official capacity, there is room for improvement and further integration. Now that the corpus has been digitized, it will be possible to export all of the references into OpenRefine in an attempt to restructure the TEI and link to URIs defined by Worldcat. We will want to link to other DOIs if possible, and make the references for each book available in Crossref. Some of this relies on the expansion of Crossref itself to support entities identifiers beyond ORCID (e.g., ISNI) and citations for Worldcat. Presently, DOI citation mechanisms allow us to build a network graph of citations for works produced in the last few years, but the extension of this graph to include older journals and monographs will allow us to chart the evolution of scientific and humanistic thought over the course of centuries.

As we know, there is never an "end" to Digital Humanities projects. Only constant improvement. And I believe that the work we have done will open the door to a sort of born-digital approach to future ANS publications.

Tuesday, October 31, 2017

EADitor now supports EAD and MODS to IIIF manifest generation

After migrating the Newell TEI notebooks to support serialization of facsimiles into IIIF manifests and the rendering of these manifests in an embedded Mirador viewer, I implemented a transformation of EAD finding aid image collections and MODS records for photographs into manifests.

EAD updates

The EAD finding aids were updated to replace the daogrp elements linking to Flickr images with links to thumbnail, reference, and IIIF service URLs (dao[@xlink:role='IIIFService']). An XSLT transformation of the EAD into manifest JSON occurs, with an intermediate process that iterates through the IIIFService info.json files with the Orbeon XForms processor in XPL to extract the height and width needed to generate canvases for each image.

The Brett finding aid now includes clickable thumbnails that will launch the zoomable Leaflet viewer in a fancybox popup window. At the top of the page, the user can download the manifest, and there's also a link to view the manifest in our internal Mirador viewer. You can view the EAD XML (link at top) for more details.

MODS updates

The updates to the MODS were twofold. First, in the previous version of Archer, all photographs were suppressed from the public regardless of copyright concerns. We have re-evaluated these concerns by applying one of several Rights Statements. Two of these rights statements are the most permissive, and therefore we display the high resolution image when we have every right to do so. In any case, thumbnails are Fair Use, and they are therefore always visible in the record page and the search results pages.

Where copyright allows us to do so, the MODS file includes a URL for the reference image and a URL[@access='raw object' and @note='IIIFService']. When a IIIFService URL is present in the MODS record, the XSLT transformation will include a Leaflet div and initiate the display of the image. See A Portrait Photograph of Margaret Thompson, for example. Like the finding aid, a manifest is dynamically generated from MODS, but only one XForms processor is called to extract the height and width from the info.json for the single image linked in the MODS file.

Pelagios Updates

Since the Brett collection links many photographs to ancient places defined in the Pleiades Gazetteer of Ancient Places, I have updated the EADitor RDF output for Pelagios. The output now includes IIIF service metadata conforming to the Europeana Data Model specification. Rainer Simon has imported these photographs into Peripleo.

Friday, October 6, 2017

Newell notebooks migrated to IIIF

As part of our transition to IIIF for high resolution photographs of the numismatic collection in MANTIS (see for example), I have begun to migrate our archival images into IIIF as well. These new features will be available on our new dedicated server as soon as the migration of WordPress from one server to another is complete, which I expect in the next few weeks. The implementation of IIIF for our archival resources entails overhauls of three current metadata models and their HTML/IIIF manifest serializations: TEI (for Newell notebooks of facsimile images), Encoded Archival Description (EAD) finding aids, and MODS. The transformation of the TEI notebooks into IIIF compliance is complete, and the functionality for EAD and MODS has been built, but the XML data have not been fully updated to link to IIIF services (mainly because the high resolution images haven't been uploaded to the server yet).

Annotated Newell notebook IIIF manifest displayed in Mirador

TEI to IIIF Manifest

The first Newell notebook was published to Archer (built on EADitor) more than three years ago. There are now about 50 notebooks published, but only a handful have been annotated to link to people, IGCH hoards, and coins in our collection (we will complete the annotation as part of the Hellenistic Royal Coinages project). To summarize the technical underpinnings, each notebook is a TEI file with facsimile elements for each page. The facsimile contains a link to the image and 0-n surface elements representing annotations. These surface elements were created by roundtripping the Annotorious/OpenLayers annotation JSON <-> TEI. The @ulx, @uly, @lrx, and @lry attributes represent the coordinates of the upper left and lower right hand corners of the annotations, and the coordinates were relative ratios based on OpenLayers bounds.

 For IIIF compliance, I ran the TEI through an XSLT 3 transformation that loads the info.json metadata from our IIIF image server to extract the height and width of each image, and then recalculates the coordinates to be more in line with Web Annotation segments. The lower right coordinates are still stored in the TEI, but when the annotation lists for the manifest are generated, the left coordinates are subtracted from the right to derive the annotation width and height.

      <surface lrx="1540" lry="155" ulx="1182" uly="54" xml:id="aho40v9vbhq7">
            <ref target="">IGCH 1516</ref>
      </surface>

The tei:facsimile to annotation list transformation outputs the media fragment #xywh=1182,54,358,101 for this surface.
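The coordinate arithmetic is simple enough to sketch directly:

```python
def xywh(ulx, uly, lrx, lry):
    # Width and height are derived by subtracting the upper-left corner
    # from the lower-right, yielding a Web Annotation media fragment.
    return "#xywh=%d,%d,%d,%d" % (ulx, uly, lrx - ulx, lry - uly)

print(xywh(1182, 54, 1540, 155))  # → #xywh=1182,54,358,101
```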

The tei:graphic was replaced with tei:media[@type='IIIFService'], with the @url pointing to the IIIF service URI instead of an image location. XSLT transformations for the manifest, HTML, RDF, and Solr outputs do the rest.

The Javascript has been updated so that clicking on a page under the index of annotations will force Mirador to change to the correct canvas.

You can see an example here:

I will post another update on EAD and MODS -> IIIF next week. 

Thursday, August 10, 2017

First DOIs minted for ANS Digital Library items

Several weeks ago, we migrated an older, circa 2002 TEI ebook on the Taranto 1911 hoard, authored by John Kroll and Sebastian Heath, into our Digital Library. The original TEI file and subsequent updates have been loaded into our TEI Github repository. The updates follow transcription precedents that we set in older ANS-published printed monographs as part of the Mellon-funded Open Humanities Book Program: relevant places, objects, people, etc. have been linked to entities in LOD systems. All of the objects within this hoard (itself linked to IGCH 1864) are in the British Museum and linked to their URIs. Upon publication into the ANS Digital Library, the document parts are now accessible from the IGCH 1864 record and (eventually) in Pelagios, connected to relevant ancient places.

Since Sebastian is an active scholar with an ORCID, this document served as a proof of concept for the next iteration of ANS digital publication: that our current and future monographs and journal articles, once issued openly online, should be connected to their authors' ORCIDs, and publication metadata should be submitted to Crossref to mint a DOI and enhance accessibility. Furthermore, since there's a direct connection between ORCID and Crossref submissions, this new digital publication workflow would automatically populate an author's scholarly profile with ANS publications--a vast improvement over services that require manual submission of each work. The broad vision is this:

Regardless of whether an author submits works through the American Numismatic Society Digital Library, Humanities Commons, their own institutional repository, or an Open Access journal system, their ORCID profile is the central, canonical aggregation of the entirety of their intellectual output (which includes datasets, software, etc.).

This aggregation system between DOIs and ORCIDs, following Linked Open Data principles, is the future of academic publication. Ideally, it should be expanded beyond citations of modern works with DOIs and ORCIDs to include more historic works defined in Worldcat and linked to historic scholars with ISNI identifiers. It would take a tremendous amount of work, but in theory it would be possible to create a network graph of citations across all disciplines, going back to the advent of the printed book, charting the evolution of how knowledge is generated and disseminated. Crossref, ISNI, and ORCID would then play a greater role than providing simple (and superficial) citation metrics: they would enable us to develop a broader historiography and analysis of scholarship itself. We plan to mint DOIs for our historical publications eventually, if Crossref extends its XML schema to support ISNI identifiers.

Under the Hood

Some extensions were implemented in ETDPub, the TEI/MODS publication framework that underlies the ANS Digital Library. First, I authored XSLT stylesheets that crosswalk TEI or MODS into the appropriate Crossref XML model according to their schema, version 4.4.0. You can see an example of my MA thesis here:

If the author/editor URI matches an ORCID URI in the TEI, then the Admin panel in ETDPub will enable the publication of the metadata to Crossref. Similarly, within the MODS ETD editing interface (in XForms), a user can insert a mods:nameIdentifier[@type='orcid'] under the mods:name for an author/editor in order to capture the ORCID. So far, only TEI or MODS records with ORCIDs attached to people are available for submission into Crossref to mint a DOI.

Submission Workflow

In the admin panel, if a document is eligible for submission to Crossref, a checkbox is available. Clicking on this will fire off a series of actions in the XForms engine:
  1. The TEI/MODS-to-Crossref XML transformation is executed and loaded into an XForms instance
  2. The Crossref XML is serialized to /tmp because it must be attached via multipart/form-data
  3. Because multipart/form-data requests still do not execute correctly in the XForms engine, the engine instead interacts with a PHP script via CGI
  4. After the PHP script responds with a successful HTTP code, the MODS/TEI document is loaded in the XForms engine in order to insert the DOI in the proper location within the document
  5. The TEI/MODS file is saved back to eXist, and the standard publication workflow is executed (a chain of XForms submissions), updating the Solr search index and the triplestore/SPARQL endpoint
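The PHP workaround in step 3 posts the serialized XML to Crossref as multipart/form-data. The form fields might look like the following; treat the parameter names as assumptions and verify them against current Crossref deposit documentation:

```python
def crossref_deposit_fields(login_id, password, filename):
    # Sketch of the form fields for a Crossref HTTPS metadata deposit;
    # parameter names follow Crossref's deposit conventions but should be
    # treated as assumptions, not a verified implementation.
    return {
        "operation": "doMDUpload",
        "login_id": login_id,
        "login_passwd": password,
        "fname": filename,  # the Crossref XML serialized to /tmp in step 2
    }
```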
So far two documents in the Digital Library have DOIs connected to ORCIDs:

Taranto 1911:
My thesis (Recent Advancements in Roman Numismatics):