Tuesday, September 1, 2020

First pass mapping EAC-CPF to Linked Art JSON-LD

After pushing updates that map people and organization concepts in Nomisma.org into Linked Art-compliant JSON, I have implemented a similar serialization in xEAC, the open source, EAC-CPF-based authority management framework that I have been developing on and off since 2012.

As in the Nomisma and Numishare projects, an HTTP request for an authority URI includes Link headers advertising alternate RDF/XML, Turtle, and JSON-LD serializations for that resource, including a JSON-LD serialization following the Linked Art profile. A capable JSON-LD parser can convert this profile into other serializations of RDF (XML, Turtle, etc.) according to the CIDOC-CRM ontology, which other semantic web developers might more readily recognize.

It is therefore possible to request the Linked Art JSON-LD via content negotiation from xEAC, for example:

curl -H "Accept: application/ld+json;profile=\"https://linked.art/ns/v1/linked-art.json\"" 
     http://numismatics.org/authority/adams_edgar
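
The alternates advertised in the Link headers can be discovered generically. As a rough sketch (the header value below is invented for the example, and this is not xEAC's actual code), a few lines of Python can pick out the alternate matching the Linked Art profile:

```python
import re

def find_alternate(link_header, profile):
    """Scan an HTTP Link header for an entry whose profile parameter
    matches `profile` and return its target URI, or None if absent."""
    # Each entry looks like: <URI>; rel="alternate"; type="..."; profile="..."
    for target, params in re.findall(r'<([^>]+)>([^<]*)', link_header):
        if f'profile="{profile}"' in params:
            return target
    return None

# A hypothetical Link header of the kind xEAC might emit:
header = ('<http://numismatics.org/authority/adams_edgar.jsonld>; rel="alternate"; '
          'type="application/ld+json"; profile="https://linked.art/ns/v1/linked-art.json", '
          '<http://numismatics.org/authority/adams_edgar.rdf>; rel="alternate"; '
          'type="application/rdf+xml"')

print(find_alternate(header, "https://linked.art/ns/v1/linked-art.json"))
# → http://numismatics.org/authority/adams_edgar.jsonld
```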

The Accept header is then parsed by the XProc pipeline in a controller that reads the content-type and profile in order to choose which serialization to enact. In this case, the EAC-CPF document is transformed via an XSLT stylesheet into an intermediate XML document that represents a JSON structure of objects and arrays, which is subsequently transformed by a secondary XSLT stylesheet into a text output, to which the XProc pipeline attaches an `application/ld+json` content-type in the HTTP header. This JSON metamodel approach has been applied throughout many of my frameworks, including Nomisma.org and Numishare, in order to consistently transform various XML schemas into different JSON profiles, from Linked Art to GeoJSON to the model required by d3js for data visualization.
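
The metamodel approach can be sketched in a few lines of Python. The element names here (`object`, `array`, `string`, `@key`) are invented for illustration and may differ from the actual intermediate vocabulary: an XML tree whose elements declare themselves as JSON objects, arrays, or literals is walked recursively and emitted as JSON text.

```python
import json
import xml.etree.ElementTree as ET

def to_json(node):
    """Recursively convert a metamodel element into a Python value."""
    if node.tag == "object":
        return {child.get("key"): to_json(child) for child in node}
    if node.tag == "array":
        return [to_json(child) for child in node]
    return node.text  # a literal value

# A tiny intermediate document of the kind the first XSLT step might produce:
xml = """<object>
  <string key="id">http://numismatics.org/authority/adams_edgar</string>
  <string key="type">Person</string>
  <array key="exact_match">
    <string>http://viaf.org/viaf/92956241</string>
  </array>
</object>"""

print(json.dumps(to_json(ET.fromstring(xml)), indent=2))
```

The virtue of the intermediate layer is that every XML schema (EAD, TEI, MODS, EAC-CPF) only needs a mapping to the metamodel; the metamodel-to-text step is written once and shared.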

The mapping of EAC-CPF for people, corporate bodies, and families follows the specifications for people and organizations drafted by the Linked Art community at https://linked.art/model/actor/. A representation of a person in the ANS archival authority system, Edgar H. Adams, includes the preferred name and biographical statement (eac:biogHist/eac:abstract); URIs for matching concepts (an xEAC-specific implementation of eac:identity/eac:entityId[@localType = 'skos:exactMatch']); birth/death dates (for people) and formed_by/dissolved_by dates (for families and corporate bodies) from eac:existDates; and member/member_of links to URIs taken from eac:cpfRelation elements whose @xlink:arcrole carries the relevant W3C Org ontology property (also an xEAC-specific implementation that aligns EAC-CPF more directly with Linked Open Data principles).

{
  "@context": "https://linked.art/ns/v1/linked-art.json",
  "id": "http://numismatics.org/authority/adams_edgar",
  "type": "Person",
  "_label": "Adams, Edgar H. (Edgar Holmes), 1868-1940",
  "identified_by": [
    {
      "type": "Name",
      "content": "Adams, Edgar H. (Edgar Holmes), 1868-1940",
      "classified_as": [
        {
          "id": "http://vocab.getty.edu/aat/300404670",
          "type": "Type",
          "_label": "Primary Name"
        }
      ]
    }
  ],
  "exact_match": [
    "http://viaf.org/viaf/92956241",
    "http://d-nb.info/gnd/101883196",
    "http://dbpedia.org/resource/Edgar_Adams",
    "http://www.wikidata.org/entity/Q3719031",
    "http://id.loc.gov/authorities/names/n81061401",
    "http://n2t.net/ark:/99166/w6n03w0m"
  ],
  "born": {
    "type": "Birth",
    "_label": "Start Date",
    "timespan": {
      "type": "TimeSpan",
      "begin_of_the_begin": "1868-04-07",
      "end_of_the_end": "1868-04-07"
    }
  },
  "died": {
    "type": "Death",
    "_label": "End Date",
    "timespan": {
      "type": "TimeSpan",
      "begin_of_the_begin": "1940-05-05",
      "end_of_the_end": "1940-05-05"
    }
  },
  "referred_to_by": [
    {
      "type": "LinguisticObject",
      "content": "Edgar H. Adams (1868-1940) of Bayville, Oyster Bay, and Brooklyn, 
      New York, was a numismatic scholar, author, and collector who produced, among 
      other works, reference guides to territorial and private gold coins. He also 
      coauthored, with William H. Woodin, the book United States Pattern, Trial, 
      and Experimental Pieces, a standard reference work on pattern coins. He served 
      as editor of The Numismatist, the monthly journal of the American Numismatic 
      Association, wrote a numismatic column for the New York Sun newspaper, and 
      was a co-founder of the New York Numismatic Club (1908).",
      "classified_as": [
        {
          "type": "Type",
          "id": "http://vocab.getty.edu/aat/300435422",
          "_label": "Biography Statement",
          "classified_as": [
            {
              "id": "http://vocab.getty.edu/aat/300418049",
              "type": "Type",
              "_label": "Brief Text"
            }
          ]
        }
      ]
    }
  ],
  "member_of": [
    {
      "type": "Group",
      "id": "http://numismatics.org/authority/new_york_numismatic_club",
      "_label": "New York Numismatic Club"
    },
    {
      "type": "Group",
      "id": "http://viaf.org/viaf/157729460",
      "_label": "American Numismatic Association"
    }
  ]
}

Ideally, we would want to be able to include links to geographic resources for places of birth or death, occupations, and other events as machine-readable data, with machine-actionable xs:date values and references to controlled vocabulary URIs. Some of this is already possible within xEAC, because it was built from the ground up to interact with LOD resources, but projects like Social Networks and Archival Context (SNAC) aren't yet well-integrated with external resources.

Wednesday, March 4, 2020

270 hoard documents and 60 authorities added to the ANS Archives

In a major digital archival publication today, 270 documents pertaining to Greek coin hoards have been added into the ANS Digital Archives, Archer, and 60 new archival authorities have been added into the ANS Biographies (EAC-CPF records published in xEAC). These authorities include numerous prominent numismatists, archaeologists, dealers, and collectors, as well as some individuals who are not prominent--people attested only through our archives and scant provenance records from other museums. Each of these authorities will be created or updated in the Social Networks and Archival Context (SNAC) project, along with links back to our archival records.

A nice example is Sir Arthur Evans, the famous archaeologist of Knossos. He is mentioned in several letters between Sidney Noe and other scholars. Although Evans is not a prominent scholar in our own archives, his papers are held in other institutions. We are able to make our few letters more broadly available to researchers interested in Arthur Evans through SNAC.

The record for Arthur Evans, with links to hoard documents.


The archival documents themselves represent the first portion of a larger collection of scanned letters, invoices, inventories, notes, hoard photographs, and other research materials related to The Inventory of Greek Coin Hoards and subsequent Coin Hoards volumes. Coin Hoards will be published online in the near future, after we migrate the old IGCH platform into a completely new database system that operates more like Coin Hoards of the Roman Republic.

The display of IGCH 140, with new archival documents

Under the hood, these archival records are TEI documents generated from spreadsheet metadata entered by Peter van Alfen. The images are IIIF-compliant and follow the procedures we have already established with Edward T. Newell's research notebooks. The Archer framework, EADitor, was updated to accommodate other types of archival materials represented as TEI (manuscripts, etc.), and EADitor is capable of serializing these files directly into RDF for Archer's SPARQL endpoint (that drives the interconnectivity between the authority records and archival items, as well as the display of archival items in MANTIS and IGCH). Additionally, the TEI files, and TEI-encoded annotations, are serialized dynamically into IIIF manifests.

Because all TEI files use the same annotation system in the back-end of EADitor (Masahide Kanzaki's Image Annotator: https://www.kanzaki.com/works/2016/pub/image-annotator), these new archival documents can be annotated with URIs from Nomisma.org, coins in our collection, coin types or monograms in PELLA or other corpora. As a proof of concept, I annotated the names of Mithradates VI and Lysimachus with their respective Nomisma URIs on the notes of Wayte Raymond about IGCH 973: http://numismatics.org/archives/ark:/53695/igch973.001. These annotations, stored natively in TEI surface elements within a facsimile, are serialized into JSON-LD according to the IIIF spec in real time, and displayed at the link above in Mirador. The names are also listed in the index below the Mirador viewer.
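
The shape of that real-time serialization can be sketched as follows. This is a simplified illustration rather than EADitor's exact output: the canvas URI is hypothetical, and only the Open Annotation basics are shown. A TEI surface with pixel coordinates and a Nomisma URI becomes an annotation targeting a canvas region via a #xywh fragment.

```python
def surface_to_annotation(canvas_uri, ulx, uly, lrx, lry, body_uri, label):
    """Build an Open Annotation dict targeting a #xywh region of a IIIF canvas
    from TEI surface pixel coordinates."""
    x, y = ulx, uly
    w, h = lrx - ulx, lry - uly
    return {
        "@type": "oa:Annotation",
        "motivation": "oa:commenting",
        "resource": {"@id": body_uri, "label": label},
        "on": f"{canvas_uri}#xywh={x},{y},{w},{h}",
    }

# Hypothetical canvas URI; the body URI is the real Nomisma concept.
anno = surface_to_annotation(
    "http://numismatics.org/archives/manifest/igch973.001/canvas/1",
    120, 80, 320, 140,
    "http://nomisma.org/id/mithradates_vi", "Mithradates VI")
print(anno["on"])
# → http://numismatics.org/archives/manifest/igch973.001/canvas/1#xywh=120,80,200,60
```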

While we still have more metadata to enter for more archival documents, the data-entry workflow and processing scripts are fully established at this stage. This is the next step in transforming the IGCH database into a more comprehensive research platform for Greek coin hoards.

Tuesday, July 9, 2019

135 ANS authority records merged into SNAC

Finally, after fine-tuning the xEAC-to-SNAC publication workflow over the last few months after initially building this functionality into xEAC last summer, I have switched over to the SNAC production API. We have integrated authority data from 135 EAC-CPF records in the American Numismatic Society Biographies into the Social Networks and Archival Context project. Among these authority records are dozens of new ones inserted into SNAC, complete with biographical information and references to digital archival and library holdings at the ANS. One of the more notable additions to SNAC is Margaret Thompson, one of the most prominent Greek numismatists of the latter half of the 20th century and a long-time curator at the ANS.

Not only have we provided a comprehensive biography of Margaret Thompson, but also URIs in other systems, such as VIAF and Wikidata.  The Bibliographic Resources for Thompson include numerous archival photographs (which link back to the ANS Archives--many of these are available in IIIF) and four ebooks in our Open Access Digital Library. These ebooks were digitized as part of the NEH-Mellon Foundation Open Humanities Book program.

SNAC record for Edward T. Newell, with biography from the ANS.

In fact, since many of the ~200 books digitized as part of this NEH-Mellon project were authored by prominent numismatists represented in the ANS archival authorities, 74 of these books have been made accessible to scholars through SNAC. This was the aim of our initial application to this grant program--finally realized by much work in extending xEAC to be able to interact with SNAC's JSON APIs. We not only wanted to create a large corpus of TEI ebooks that linked to URIs in our numismatic collection or research databases like Online Coins of the Roman Empire and the Inventory of Greek Coin Hoards (and similar systems), but to integrate these books into the larger cloud of cultural heritage data by linking the authors to large-scale authority systems like SNAC that could be leveraged to point researchers back to our own services.

SNAC was funded not only by Mellon (like our ebooks project), but also initially by the IMLS and the NEH. In this way, we are providing value to funders by building upon projects in which they have already invested: creating a whole that is greater than the sum of its parts. I hope that other institutions will look at xEAC and our broader archival LOD strategy (see Linked Open Data and Hellenistic Numismatics and Linked Open Data for Numismatic Library, Archive, and Museum Integration for further information about this architecture) as a means by which they too can enhance SNAC while simultaneously broadening access to their own materials.

By incorporating our archival authorities and digital archives and library into SNAC, we are providing pathways through broader, more generalized aggregators for non-numismatic researchers who may otherwise never think to query our archives directly. A great example of this is the record for the prominent sculptor, Augustus Saint-Gaudens. This record links to more than 160 finding aids published by dozens of institutions, including museum archives, and so art historians may find correspondences in our archives as well as the Smithsonian Archives of American Art or the New York Public Library. Furthermore, since we have already used the Wikidata API look-up inherent to xEAC to embed related authority URIs in our own EAC-CPF record, we inserted the Getty ULAN URI for Saint-Gaudens into SNAC. This would, in theory, make it possible for SNAC to interact with art historical aggregators built on the Getty vocabularies to extract other works of cultural heritage, such as medals held at the American Numismatic Society or sculptures held in other art museums both in the United States and abroad.

I think we are only seeing the tip of the iceberg of what will be possible interacting with SNAC.

Thursday, January 10, 2019

Updates to IIIF image annotation in the EADitor back-end

The American Numismatic Society's archival images were migrated into IIIF in the fall of 2017, including the extension of EADitor to facilitate the creation of manifests from TEI files that represent the Newell notebooks. While the front end was updated to use Leaflet for single photographs (MODS records) or Mirador for image collections, like the notebooks or the Agnes Baldwin Brett papers, the back-end had not been updated to enable the editing or creation of new annotations.

After the back-to-back releases of the full Seleucid Coins Online and the first phase of Ptolemaic Coins Online in December, I have been able to pivot completely from coin type corpora and data cleaning to working on our digital archives for a brief period. After fixing some bugs, I turned my attention to piecing the image annotation back together in the XForms engine for TEI editing/publication within Archer. The original system was developed in 2014. This blog post covers most of the technical underpinnings, but to summarize: Rainer Simon's Annotorious was hooked into OpenLayers to facilitate image annotation. The create/remove/update handlers in Annotorious were used to round-trip the annotations between TEI surface elements within tei:facsimiles and Annotorious' JSON model in the XForms engine (using the client-side Javascript hooks in Orbeon). There have been significant updates to Orbeon since 2014 that left my original code somewhat broken, so I needed to explore alternative solutions.

My first attempt was loading a manifest for a Newell notebook into Mirador in the XForms web form. Although Mirador did load the manifest, due to some unforeseen conflicts between the Javascript in Orbeon and Mirador, the annotation popups (with the TinyMCE library) didn't function correctly. I then began to explore Masahide Kanzaki's Image Annotator. This was appealing, as I had tested this application's ability to show two images on the same canvas in dynamically SPARQL-generated IIIF manifests from Numishare-based type corpora (see this example of RRC 15/1a that combines IIIF images from three different museums into one manifest--one canvas per coin and two images per canvas). The Image Annotator not only loads IIIF manifests into OpenSeaDragon, but was extended to support Annotorious for creating and viewing annotations.

After several days of work, I have been able to fully reactivate image annotation in the EADitor back-end with the Image Annotator. It took a little bit of reverse engineering to find the functions for the handlers, with some slight modifications to my original code to hook the Annotorious handlers into the XForms engine. This included some changes in the mathematical calculations for converting the ratio-based coordinates to pixels for the TEI surface's upper-left x,y and lower-right x,y attributes. These TEI attributes are serialized into proper #xywh fragments in the Web Annotations in the manifest.
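
That coordinate conversion amounts to scaling the ratio-based geometry by the image dimensions. A minimal sketch in Python (the attribute names mirror TEI's @ulx/@uly/@lrx/@lry; the real logic lives in the XForms engine's Javascript):

```python
def ratio_to_surface(x, y, w, h, img_width, img_height):
    """Convert ratio-based annotation coordinates (0-1, as Annotorious
    reports them) to TEI surface pixel attributes and the equivalent
    #xywh media fragment."""
    ulx = round(x * img_width)
    uly = round(y * img_height)
    lrx = round((x + w) * img_width)
    lry = round((y + h) * img_height)
    attrs = {"ulx": ulx, "uly": uly, "lrx": lrx, "lry": lry}
    xywh = f"#xywh={ulx},{uly},{lrx - ulx},{lry - uly}"
    return attrs, xywh

attrs, xywh = ratio_to_surface(0.25, 0.1, 0.5, 0.2, 4000, 3000)
print(attrs, xywh)
# → {'ulx': 1000, 'uly': 300, 'lrx': 3000, 'lry': 900} #xywh=1000,300,2000,600
```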


Fig. 1: Image Annotator in the XForms engine

I also had to track down and comment out some components of the UI (like the document metadata and links) and tweak the CSS so that the OpenSeaDragon window fit within the parameters of my existing Bootstrap 3.x template.

URIs in certain namespaces are still parsed to extract human-readable labels (see Fig. 1 and 2), for example, from the ANS collection. My intention is to extend the range of parseable URIs to include Wikidata, other URIs in the ANS digital library or archives, Social Networks and Archival Context, Worldcat Works, and, eventually, URIs for Hellenistic monograms. I might even extend the parsing to extract thumbnail images for coins and store those in the tei:desc within the TEI document (in addition to simple mixed content w/ tei:ref elements as external links).

Fig. 2: After clicking 'Save', the URI is replaced with an HTML link

After the reworking of the IGCH data over the next several months, we will turn our attention to annotating more of Edward T. Newell's notebooks as part of the NEH-funded Hellenistic Royal Coinages (HRC) project. The UI provided by the Image Annotator is much easier to work with than the one I had developed more directly within XForms nearly five years ago, and so we should see some significant progress toward annotating these notebooks to link to coins in our (or other) numismatic collections, coin types in HRC, Greek coin hoards, and our yet-to-be-published database of Greek monograms. And these annotations will enhance research context in our other platforms by pointing users back to individual notebook pages in Archer from Mantis or IGCH (for example, from http://coinhoards.org/id/igch1664 or http://numismatics.org/collection/1944.100.26870).

SPARQL-generated list of Open Annotations related to IGCH 1664

Thursday, November 15, 2018

An American Europeana

The blog is often reserved for updates or technical explanations of archival/authority software development at the American Numismatic Society, or experimentation in new modes of archival data publication (mainly Linked Open Data).

However, since I have long been a proponent of open, community-oriented efforts to publish cultural heritage aggregations, like Europeana and DPLA, I wanted to take a bit of time to hash out some thoughts in the form of a blog post instead of starting a series of disjointed Twitter threads [1, 2].

Most of you have likely heard that DPLA laid off six employees, and John S. Bracken went online to speak of his vision and answer some questions. This vision seems to revolve around ebook deals primarily, with cultural heritage aggregation as a secondary function of DPLA. However, DPLA laid off the people that actually know how to do that stuff, so the aggregation aspect of the organization (which is its real and lasting value to the American people) no longer seems viable.

I believe the ultimate solution for an American version of Europeana is tying it into the institutional function of a federally-funded organization like the Library of Congress or Smithsonian, with the backing of Congressional support for the benefit of the American people (which is years away, at least). However, I do think there are some shorter-term solutions that can be undertaken to bootstrap an aggregation system and administered by one organization or a small body of institutions working collaboratively. There doesn't need to be a non-profit organization in the middle to manage this system, at least at this phase.

There are a few things to point out regarding the system's political and technical organization:

  1. The real heavy lifting is done by the service/content hubs. It takes more time/money/professional expertise to harvest and normalize the data than it does to build the UI on top of good quality data.
  2. Much of the aggregation software has been written already, but hasn't been shared broadly with the community.
  3. There seems to be a wide variation in the granularity and quality of data provided to DPLA. I wrote a harvester for Orbis Cascade that provided them with DPLA Metadata Application Profile-compliant RDF that had some normalization of strings extracted from Dublin Core to Getty AAT and VIAF URIs, which were modeled properly into SKOS Concepts or EDM Agents. But DPLA couldn't actually ingest their own data model.
  4. Europeana has already written a ton of tools that can be repurposed. 
  5. There are other off the shelf tools that scale that could be appropriated for either the UI or underlying architecture (Blacklight, various open source triplestores, like Apache Fuseki, which I have heard will scale at least to a billion triples).
  6. On a non-technical level, the name "Digital Public Library of America" itself is problematic, because the project has been overwhelmingly driven by R1 research libraries. Cultural Heritage is more than what you find in a Special Collections Library, and museums are notably absent from this picture (in contrast to Europeana).

Without knowing more of the details, I had heard that DPLA had scaling issues with their SPARQL endpoint software. I don't know if this is still an issue with this particular software, but I do believe the data were a problem. Aside from what was produced by those organizations that are part of Orbis Cascade that opted to reconcile their strings to things (sadly, most did not choose to take this additional step), how much data ingested by DPLA is actual, honest to God Linked Open Data--with, you know, links? A giant triplestore that's nothing but literals is not very useful, and it's impossible to build UIs for the public that can live up to the potential of the data and the architectural principles of LOD.

At some point, there needs to be a minimum data quality barrier to entry into DPLA, and part of this is implementing a required layer of reconciliation of entities to authoritative URIs. I understand this does create more work for individual organizations that wish to participate, but the payoffs are immense:

  1. Reconciliation is a two way street: it enables you to extract data from external sources to enhance your own public-facing user interface (biographies about people--that sort of thing).
  2. Social Networks and Archival Context should play a vital role in the reconciliation of people, families, and corporate bodies. There should be greater emphasis in the LibTech community to interoperate with SNAC in order to create entities that only exist in local authority files, which will then enable all CPF entities to be normalized to SNAC URIs upon DPLA ingestion.
    • Furthermore, SNAC itself can interact with DPLA APIs in order to populate a more complete listing of cultural heritage objects related to that entity. Therefore, there is an immediate benefit to contributors to DPLA, as their content will simultaneously become available in SNAC to a wide range of researchers and genealogists via LOD methodologies.
    • SNAC is beginning to aggregate content about entities, so it frankly doesn't make sense for there to be two architecturally dissimilar systems that have the same function. DPLA and SNAC should be brought closer together. They need each other in order for both projects to maximize their potential. I strongly believe these projects are inseparable.
  3. With regard to the first two points, content hubs should put greater emphasis on building the reconciliation services for non-technical libraries, archivists, curators, etc. to use, with intuitive user interfaces that allow for efficient clean-up. Many people (including myself) have already built systems that look up entities in Geonames, VIAF, SNAC, the Getty AAT/ULAN, Wikidata, etc. This work doesn't need to be done from scratch.
Because DPLA's data are so simple and unrefined, many of the lowest-hanging fruits in digital collection interfaces have not been achieved, such as basic geographic visualization. Furthermore, facet fields are basically useless because there is no controlled vocabulary.

After expanding the location facet for a basic text search of Austin, I am seeing lists that appear to be Library of Congress-formatted geographic subject headings. The most common heading is "United States - Texas - Travis County - Austin", mainly from the Austin History Center, Austin Public Library. However, there are many more variations of the place name contributed by other organizations.

The many Austins

This is really a problem that needs to be addressed further down the chain from DPLA at the hub level. If you want to build a national aggregation system that reaches its full potential, more emphasis needs to be placed on data normalization.




DPLA decided to go large scale, low quality. I am much more of a small scale, good quality person, because it is easier to scale up later once you have the workflows to produce good quality data than it is to go back and clean up a pile of poor data. And I don't think that the current form of the DPLA interface is powerful enough to demonstrate the value of entity reconciliation to the librarians, curators, etc. making the most substantial investment of time. You can't get the buy-in from that specialist community without demonstrating a powerful user interface that capitalizes on the effort they have made. I know this from experience. Nomisma.org struggled to get buy-in until we built Online Coins of the Roman Empire, and now Nomisma is considered one of the most successful LOD projects out there.

My recommendation is to go back to the drawing board with a small number of data contributors to develop the workflows that are necessary to build a better aggregation system. This process should be completely transparent and can be replicated within the other content hubs. The burden of cleaning data shouldn't fall on the shoulders of DPLA (or whoever comes next).

There are obvious funding issues here, but contributions of staff time and expertise can be more valuable than monetary contributions in this case.

Wednesday, July 11, 2018

Creating and Updating SNAC constellations directly in xEAC

After 2-3 weeks of work, I have made some very significant updates to xEAC, one of which paves the way to making archival materials at the American Numismatic Society (and other potential users of our open source software frameworks) broadly accessible to other researchers. This is especially important for us, since we are a small archive with unique materials that don't reach a general historical audience, and we are now able to fulfill one of the potentialities we outlined in our Mellon-NEH Open Humanities Book project: that we would be able to make 200+ open ebooks available through Social Networks and Archival Context (SNAC).

I have introduced a new feature that interacts with the SNAC JSON API within the XForms backend of xEAC (note that you need to use an XForms 2.0 compliant processor for xEAC in order to make use of JSON data). The feature will create a new constellation if none exists or supplement existing constellations with data from the local EAC-CPF record. While the full range of EAC-CPF components is supported by the SNAC API, I have focused primarily on the integration of the stable URI for the entity in the local authority system (e.g., http://numismatics.org/authority/newell), existDates (if they are not already in the constellation), and the biogHist. Importantly, if xEAC users have opted to connect to a SPARQL endpoint that also contains archival or libraries materials, these related resources will be created in SNAC and linked to the constellation.

It should be noted that this system is still in beta and has only been tested with the SNAC development server. There is still work to do with improving the authentication handshake between xEAC and SNAC.

The process

 

Step 1: Reviewing an existing constellation for content


The first step of the process is executed when the user loads the form. If the EAC-CPF record already contains an entityId that conforms to the permanent, stable SNAC ARK URI, a "read" query will be issued to the SNAC API in order to determine what content already exists in the constellation, including what resources are already available in the constellation vs. the resources extracted from the local archival information system via SPARQL.
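
A minimal sketch of what such a "read" query looks like. The endpoint URL and the `arkid` parameter name are assumptions based on SNAC's public API examples, not a transcription of xEAC's code:

```python
import json

# Assumed production endpoint of the SNAC REST API.
SNAC_API = "https://api.snaccooperative.org"

def read_constellation_request(ark):
    """Build the JSON body for a SNAC "read" query by ARK URI.
    The `arkid` key is an assumption; consult the SNAC API docs."""
    return json.dumps({"command": "read", "arkid": ark})

body = read_constellation_request("http://n2t.net/ark:/99166/w6n03w0m")
print(body)
# An actual call would POST `body` to SNAC_API (e.g., with urllib.request)
# and inspect the returned constellation for existing biogHists and resources.
```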

The SPARQL query for extracted resources from the endpoint is as follows:


PREFIX rdf:      <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms:  <http://purl.org/dc/terms/>
PREFIX foaf:  <http://xmlns.com/foaf/0.1/>
 
SELECT ?uri ?role ?title ?type ?genre ?abstract ?extent WHERE {
?uri ?role <http://numismatics.org/authority/newell> ;
     dcterms:title ?title ;
     rdf:type ?type ;
     dcterms:type ?genre .
  OPTIONAL {?uri dcterms:abstract ?abstract}
  OPTIONAL {?uri dcterms:extent ?extent}
} ORDER BY ASC(?role)


I recently made an update to our Digital Library and Archival software so that every different type of resource (ebooks and notebooks in TEI, photographs in MODS, finding aids in EAD) will include a dcterms:type linking to a Getty AAT URI in the RDF serialization. This AAT URI, in conjunction with the rdf:type of the archival or library object (often a schema.org Class), will help determine the type of resource according to SNAC's own parameters (BibliographicResource, ArchivalResource, DigitalArchivalResource). Additionally, the role of the entity with respect to the resource (dcterms:creator, dcterms:subject) informs the role within the SNAC resource-constellation connection: creatorOf, referencedIn. Abstracts and extents are inserted, if available.
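
That crosswalk boils down to a pair of small lookups. A sketch in Python, with the role mapping taken from the description above; the resource-type keys are illustrative assumptions, since the real logic combines the rdf:type with the AAT URI:

```python
# Role of the entity relative to the resource, as described above.
ROLE_MAP = {
    "http://purl.org/dc/terms/creator": "creatorOf",
    "http://purl.org/dc/terms/subject": "referencedIn",
}

# Illustrative only: the real lookup also consults the dcterms:type AAT URI.
RESOURCE_TYPE_MAP = {
    "http://schema.org/Book": "BibliographicResource",
    "http://schema.org/Photograph": "DigitalArchivalResource",
}

def snac_role(dcterms_role):
    """Map a Dublin Core role property to a SNAC resource-relation role,
    defaulting to referencedIn for anything unrecognized."""
    return ROLE_MAP.get(dcterms_role, "referencedIn")

print(snac_role("http://purl.org/dc/terms/creator"))
# → creatorOf
```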

Step 2: Validate authentication


SNAC uses Google user tokens for validation within its own system. There is currently no handshake available between xEAC and SNAC which will facilitate multiple users in xEAC to each have their own credentials in SNAC. At the moment, the "user" information is stored in the xEAC config file. A user will have to enter their Google credentials from the SNAC API Key page into the web form and click the "Confirm User Data" button. xEAC will submit an "edit" to a random constellation to verify the validity of the authentication information. If it is successful, the credentials are then stored back into the config (although the token only lasts about 24 hours) and the constellation is immediately unlocked. The user will then proceed to the create/update constellation interface.

Authenticating through xEAC

Step 3: Creating or updating a constellation


The user will now see several checkboxes to add information into the constellation. Eventually, it will be possible to remove data as well. Below is a synopsis of options:

  1. Same As URI: The URI of the entity in the local authority system will be added into the constellation. This is especially important for establishing concordances between different vocabulary systems.
  2. Exist dates can be added into the constellation if they are not already present.
  3. If there isn't already a biogHist in the constellation and there is one present in the EAC-CPF record, the biogHist will be escaped and published to SNAC. A source will also be created in the constellation in order to link the new biogHist to SNAC control metadata, tying the new biogHist directly to the local URI for the authority. This makes it possible to update or delete only the biogHist associated with your own entity without overwriting other biogHist information that might already be present within the constellation. While SNAC does support multiple biogHists, only the most recently added biogHist will appear in the HTML view of the entity. For this reason (at present), xEAC will only insert a biogHist if there isn't one in the constellation already. In step 1, if the constellation already contains a biogHist associated with the source URI for your authority, it will hash encode the constellation's biogHist and compare it to the hash-encoded biogHist currently in the EAC-CPF record. If there is a difference between these hashes, the constellation will be updated with the current version of the biogHist in the EAC-CPF record.
  4. A list of resource relations derived from SPARQL will be displayed. All are checked by default: first the resource is created with the "insert_resource" API command, and then the constellation is connected to that newly created resource with "update_constellation". Each resource entry displays some basic metadata, whether or not it already exists in the constellation, and what action will be taken. It is possible to uncheck the box for a resource that exists in the constellation in order to remove it from the constellation.
The interface for creating and updating SNAC constellations
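The hash comparison in option 3 can be sketched as follows. This is a minimal illustration only: the digest algorithm (SHA-256) and the whitespace normalization are assumptions, not xEAC's actual implementation; the only requirement is that identical biogHist text yields identical digests.

```python
import hashlib

def biog_hist_digest(text: str) -> str:
    """Hash-encode a biogHist string so two versions can be compared cheaply."""
    normalized = " ".join(text.split())  # collapse insignificant whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def needs_update(constellation_biog: str, eac_biog: str) -> bool:
    # Update the constellation only when the digests differ.
    return biog_hist_digest(constellation_biog) != biog_hist_digest(eac_biog)
```

Comparing fixed-length digests rather than the full escaped text keeps the check simple regardless of how large the biographical statement is.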

Step 4: Saving the ARK back to the EAC-CPF record, if applicable

After the successful issuing of "publish_constellation" to the SNAC API, an entityId with the new SNAC ARK URI will be inserted into the EAC-CPF record, if the constellation is newly created (updates presume the ARK already exists in the EAC record). Saving the EAC record will trigger a re-indexing of the document to Solr and a SPARQL/Update that will insert the ARK as a skos:exactMatch into the concept object for the entity.

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

INSERT { ?concept skos:exactMatch <ARK> }
WHERE  { ?concept foaf:focus <URI> }


The data above are those I consider most vital to SNAC integration: essential historical or biographical context and related archival or library resources that can be made more broadly accessible. I am not sure how many other authority systems are able to interact with SNAC with this degree of granularity yet, but I am hopeful that these features will propel more unique research materials into the public sphere.

I will briefly touch on these new features when I present our comprehensive LOD-oriented numismatic research platform at SAA next month (I will upload the slideshow soon).

Thursday, June 7, 2018

SNAC Lookups Updated in xEAC and EADitor

Since Social Networks and Archival Context (SNAC) migrated to a new platform, it has published a well-documented JSON-based REST API. Although EADitor and xEAC have had lookup mechanisms to link personal, corporate, and family entities from SNAC to EAD and EAC-CPF records since 2014 (see here), the lookup mechanisms in the XForms-based backends to these platforms interacted with an unpublicized web service that provided an XML response for simple queries.

With the advent of these new SNAC APIs and JSON processing within the XForms 2.0 spec (present in Orbeon since 2016), I have finally gotten around to overhauling the lookups in both EADitor and xEAC. Following documentation for the Search API, the XForms Submission process now submits (via PUT) an instance that conforms to the required JSON model. The @serialization attribute is set to "application/json" in the submission, and the JSON response from SNAC is serialized back into XML following the XForms 2.0 specification. Side note: the JSON->XML serialization differs between XForms 2.0 and XSLT/XPath 3.0, and so there should be more communication between these groups to standardize JSON->XML across all XML technologies.

The following XML instance is transformed into API-compliant JSON upon submission.


<xforms:instance id="query-json" exclude-result-prefixes="#all">
 <json type="object" xmlns="">
  <command>search</command>
  <term/>
  <entity_type/>
  <start>0</start>
  <count>10</count>
 </json>
</xforms:instance>


The submission is as follows:


<xforms:submission id="query-snac" ref="instance('query-json')" 
    action="http://api.snaccooperative.org" method="put" replace="instance" 
    instance="snac-response" serialization="application/json">
 <xforms:header>
  <xforms:name>User-Agent</xforms:name>
  <xforms:value>XForms/xEAC</xforms:value>
 </xforms:header>
 <xforms:message ev:event="xforms-submit-error" level="modal">Error transforming 
into JSON and/or interacting with the SNAC
  API.</xforms:message>
</xforms:submission> 
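For reference, the JSON that the instance above serializes to can be reproduced outside XForms. This hedged Python sketch builds the same Search API payload (the field names mirror the instance above; the endpoint constant comes from the submission, and no request is actually sent here):

```python
import json

# PUT target from the xforms:submission above
SNAC_API = "http://api.snaccooperative.org"

def search_payload(term: str, entity_type: str = "",
                   start: int = 0, count: int = 10) -> str:
    """Serialize a SNAC Search API command, mirroring the XForms instance."""
    return json.dumps({
        "command": "search",
        "term": term,
        "entity_type": entity_type,
        "start": start,
        "count": count,
    })

payload = search_payload("Edgar Adams", "person")
```

The XForms engine performs this serialization automatically when @serialization is set to "application/json"; the sketch only makes the resulting wire format explicit.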

The SNAC URIs are placed into the entityIds within the cpfDescription/identity in EAC-CPF or as the @authfilenumber for a persname, corpname, or famname in EAD.

The next task is to build APIs into xEAC for pushing data (biographical data, skos:exactMatch URIs, and related archival resources) directly into SNAC. By tomorrow, all (or nearly all) of the authorities in the ANS Archives will be linked to SNAC URIs.

Friday, May 18, 2018

Three new Edward Newell research notebooks added to Archer

Three research notebooks of Edward T. Newell have been added to Archer, the archives of the American Numismatic Society. These had been scanned as part of the larger Newell digitization project, which was migrated into IIIF for display in Mirador (with annotations) in late 2017.

These three notebooks had been scanned, but TEI files had not been generated due to some minor oversight. Generating the TEI files was fairly straightforward--there's a small PHP script that will extract MODS from our Koha-based library catalog. These MODS files are subsequently run through an XSLT 3.0 stylesheet to generate TEI with a facsimile listing of all image files associated with the notebook, linking to the IIIF service URI. XSLT 3.0 comes into play to parse the info.json for each image in order to insert the height and width of the source image directly into the TEI, which is used for the TEI->IIIF Manifest JSON transformation (the canvas and image portions of the manifest), which is now inherent to TEI files published in the EADitor platform.
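The info.json step can be illustrated with a short sketch. This is a hypothetical helper, not the XSLT 3.0 code itself; it assumes only the top-level `width` and `height` keys of a IIIF Image API information response:

```python
import json

def image_dimensions(info_json: str) -> tuple:
    """Extract the source image's width and height from a IIIF info.json document."""
    info = json.loads(info_json)
    return info["width"], info["height"]

# A minimal example of a IIIF Image API information response:
sample = ('{"@context": "http://iiif.io/api/image/2/context.json", '
          '"width": 4800, "height": 3600}')
```

These dimensions are what get written into the TEI facsimile and later drive the canvas and image sections of the IIIF manifest.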

The notebooks all share the same general theme: they are Newell's notes on the coins in the Berlin Münzkabinett, which we aim to annotate in Mirador over the course of the NEH-funded Hellenistic Royal Coinages project.

A fourth notebook was found to have not yet been scanned, and so it will be published online soon.

Friday, April 6, 2018

117 ANS ebooks published to Digital Library

I have finally put the finishing touches on 117 ANS out-of-print publications that have been digitized into TEI (and made available as EPUB and PDF) as part of the NEH and Mellon-funded Open Humanities Book project. This is the "end" (more details on what an end entails later) of the project, in which about 200 American Numismatic Society monographs were digitized and made freely and openly available to the public.

All of these, plus a selection of numismatic electronic theses and dissertations as well as two other ebooks not funded by the NEH-Mellon project, are available in the ANS Digital Library. The details of this project have been outlined in previous blog posts, but to summarize, the TEI files have been annotated with thousands of links to people, places, and other types of entities defined in a variety of information systems--particularly Nomisma.org (for ancient entities), Wikidata, and Geonames (for modern ones).

Additionally:
  • Books have been linked to 153 coins (so far) in the ANS collection identified by accession number. Earlier books cite Newell's personal collection, bequeathed to the ANS and accessioned in 1944. A specialist will have to identify these.
  • 173 total references to coin hoards defined in the Inventory of Greek Coin Hoards, plus several from Kris Lockyear's Coin Hoards of the Roman Republic.
  • 166 references to Roman imperial coin types defined in the NEH-funded Online Coins of the Roman Empire.
  • A small handful of Islamic glass weights in The Metropolitan Museum of Art 
  • One book by Wolfgang Fischer-Bossert, Athenian Decadrachm, has a DOI, connected to his ORCID.
Since each of these annotations is serialized into RDF and published in the ANS archival SPARQL endpoint, the other various information systems (MANTIS, IGCH, OCRE, etc.) query the endpoint for related archival or library materials.

For example, the clipped shilling, 1942.50.1, was minted in Boston, but the note says it was found among a mass of other clippings in London. The findspot is not geographically encoded in our database (and therefore doesn't appear on the map), but this coin is cited in "Part III Finds of American Coins Outside the Americas" in Numismatic finds of the Americas.


Using OpenRefine for Entity Reconciliation

Unlike the first phase of the project, the people and places tagged in these books were extracted into two enormous lists (20,000 total lines) that were reconciled against the Wikidata, VIAF, and Nomisma OpenRefine reconciliation APIs. Nomisma was particularly useful because of the high degree of accuracy in matching people and places. Wikidata and VIAF were useful for modern people and places, but these were more challenging in that there might be dozens of American towns with the same name or numerous examples of Charles IV or other regents. I had to evaluate each name within the context of the passage in which it occurred, a tedious process that took nearly two months to complete. The end result, however, has significantly broader and more accurate coverage than the 85 books in the first iteration of the grant. After painstakingly matching entities to their appropriate identifiers, it only took about a day to write the scripts to incorporate the URIs back into the TEI files, and a few more days of manual or regex-based linking for IGCH, ANS coins, etc.

As a result of this effort, and through the concordance between Nomisma identifiers and Pleiades places, there are a total of 3,602 distinct book sections containing 4,304 Pleiades URIs, which can now be made available to scholars through the Pelagios project.


What's Next for ANS Publications?

So while the project concludes in its official capacity, there is room for improvement and further integration. Now that the corpus has been digitized, it will be possible to export all of the references into OpenRefine in an attempt to restructure the TEI and link to URIs defined by Worldcat. We will want to link to other DOIs if possible, and make the references for each book available in Crossref. Some of this relies on the expansion of Crossref itself to support entity identifiers beyond ORCID (e.g., ISNI) and citations for Worldcat. Presently, DOI citation mechanisms allow us to build a network graph of citations for works produced in the last few years, but the extension of this graph to include older journals and monographs will allow us to chart the evolution of scientific and humanistic thought over the course of centuries.

As we know, there is never an "end" to Digital Humanities projects. Only constant improvement. And I believe that the work we have done will open the door to a sort of born-digital approach to future ANS publications.

Tuesday, October 31, 2017

EADitor now supports EAD and MODS to IIIF manifest generation

After migrating the Newell TEI notebooks to support serialization of facsimiles into IIIF manifests and the rendering of these manifests in an embedded Mirador viewer, I implemented a transformation of EAD finding aid image collections and MODS records for photographs into manifests.

EAD updates

The EAD finding aids were updated to replace the daogrp elements linking to Flickr images with links to thumbnail, reference, and IIIF service URLs (dao[@xlink:role='IIIFService']). An XSLT transformation of the EAD into manifest JSON occurs, with an intermediate process of iterating through the IIIFService info.json files with the Orbeon XForms processor in XPL to extract the height and width needed to generate canvases for each image.

The Brett finding aid now includes clickable thumbnails that will launch the zoomable Leaflet viewer in a fancybox popup window. At the top of the page, the user can download the manifest, and there's also a link to view the manifest in our internal Mirador viewer. You can view the EAD XML (link at top) for more details.

MODS updates

The updates to the MODS were twofold. First, in the previous version of Archer, all photographs were suppressed from the public regardless of copyright concerns. We have re-evaluated these concerns by applying one of several Rights Statements. Two of these rights statements are the most permissive, and therefore we will display the high resolution image when we have every right to do so. In any case, thumbnails are Fair Use, and therefore they are always visible on the record page and in the search results pages.

Where copyright allows us to do so, the MODS file includes a URL for the reference image and a URL[@access='raw object' and @note='IIIFService']. When a IIIFService URL is present in the MODS record, the XSLT transformation will include a Leaflet div and initiate the display of the image. See A Portrait Photograph of Margaret Thompson, for example. Like the finding aid, a manifest is dynamically generated from MODS, but only one XForms processor is called to extract the height and width from the info.json for the single image linked in the MODS file.

Pelagios Updates

Since the Brett collection links many photographs to ancient places defined in the Pleiades Gazetteer of Ancient Places, I have updated the EADitor RDF output for Pelagios. The output now includes IIIF service metadata conforming to the Europeana Data Model specification. Rainer Simon has imported these photographs into Peripleo.

Friday, October 6, 2017

Newell notebooks migrated to IIIF

As part of our transition to IIIF for high resolution photographs for the numismatic collection in MANTIS (see http://numismatics.org/collection/1944.100.45250 for example), I have begun to migrate our archival images into IIIF as well. These new features will be available on our new dedicated server as soon as the migration of WordPress from one server to another is complete, which I expect in the next few weeks. The implementation of IIIF for our archival resources entails three overhauls of the current metadata model and HTML/IIIF Manifest serialization: TEI (for Newell notebooks of facsimile images), Encoded Archival Description (EAD) finding aids, and MODS. The transformation of the TEI notebooks into IIIF compliance is completed, and the functionality for EAD and MODS has been built, but the XML data have not been fully updated to link to IIIF services (mainly because the high resolution images haven't been uploaded to the server yet).

Annotated Newell notebook IIIF manifest displayed in Mirador


TEI to IIIF Manifest

The first Newell notebook was published to Archer (built on EADitor) more than three years ago. There are now about 50 notebooks published, but only a handful have been annotated to link to people, IGCH hoards, and coins in our collection (we will complete the annotation as part of the Hellenistic Royal Coinages project). To summarize the technical underpinnings, each notebook is a TEI file with facsimile elements for each page. The facsimile contains a link to the image and 0-n surface elements representing annotations. These surface elements were created by roundtripping the Annotorious/OpenLayers annotation JSON <-> TEI. The @ulx, @uly, @lrx, and @lry attributes represent the coordinates of the upper left and lower right hand corners of the annotations, and the coordinates were relative ratios based on OpenLayers bounds.

For IIIF compliance, I ran the TEI through an XSLT 3 transformation that loads the info.json metadata from our IIIF image server to extract the height and width of each image, and then recalculates the coordinates to be more in line with Web Annotation (xywh) segments. The lower right coordinates are still stored in the TEI, but when the annotation lists for the manifest are generated, the upper-left coordinates are subtracted from the lower-right to derive the annotation width and height.

      <surface lrx="1540" lry="155" ulx="1182" uly="54" xml:id="aho40v9vbhq7">
         <desc>
            <ref target="http://coinhoards.org/id/igch1516">IGCH 1516</ref>
         </desc>
     </surface>
      

The tei:facsimile to annotation list transformation outputs:

http://numismatics.org/archives/manifest/nnan187715/canvas/nnan0-187715_X006#xywh=1182,54,358,101
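The recalculation is simple arithmetic, sketched below (the function name is hypothetical; the corner attributes come from the surface element above):

```python
def surface_to_xywh(ulx: int, uly: int, lrx: int, lry: int) -> str:
    """Convert TEI surface corner coordinates (upper-left, lower-right)
    into a Web Annotation xywh fragment: x, y, width, height."""
    return f"{ulx},{uly},{lrx - ulx},{lry - uly}"

# The surface element above (ulx=1182, uly=54, lrx=1540, lry=155) yields:
fragment = surface_to_xywh(1182, 54, 1540, 155)  # "1182,54,358,101"
```

The result matches the #xywh= fragment in the canvas URI above: width 1540 - 1182 = 358, height 155 - 54 = 101.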


The tei:graphic was replaced with tei:media[@type='IIIFService'], with the @url pointing to the IIIF service URI instead of an image location. XSLT transformations for the manifest, HTML, RDF, and Solr outputs do the rest.

The Javascript has been updated so that clicking on a page under the index of annotations will force Mirador to switch to the correct canvas.

You can see an example here: http://numismatics.org/archives/id/nnan187715

I will post another update on EAD and MODS -> IIIF next week. 

Thursday, August 10, 2017

First DOIs minted for ANS Digital Library items

Several weeks ago, we migrated an older, circa 2002 TEI ebook on the Taranto 1911 hoard, authored by John Kroll and Sebastian Heath, into our Digital Library. The original TEI file and subsequent updates have been loaded into our TEI Github repository. The updates follow transcription precedents that we have set in older ANS-published printed monographs as part of the Mellon-funded Open Humanities Book Program: relevant places, objects, people, etc. have been linked to entities in LOD systems, such as Nomisma.org. All of the objects within this hoard (itself linked to IGCH 1864) are in the British Museum and linked to their URIs. Upon publication into the ANS Digital Library, the document parts are now accessible from the IGCH 1864 record and (eventually) in Pelagios, connected to relevant ancient places.

Since Sebastian is an active scholar, with an ORCID, this document served as a proof of concept for the next iteration of ANS digital publication: that our current and future monographs and journal articles, once issued openly online, should be connected to ORCIDs for their authors, and publication metadata should be submitted to Crossref to mint a DOI and enhance accessibility. Furthermore, since there's a direct connection between ORCID and Crossref submissions, this new digital publication workflow would automatically populate an author's scholarly profile with ANS publications. This is a vast improvement over the likes of Academia.edu, which requires manual submission. The broad vision is this:

Regardless of whether an author submits works through the American Numismatic Society Digital Library, Zenodo.org, Humanities Commons, their own institutional repository, or an Open Access journal system, their ORCID profile is the central, canonical aggregation of the entirety of their intellectual output (which includes datasets, software, etc.).

This aggregation system between DOIs and ORCIDs, following Linked Open Data principles, is the future of academic publication. Ideally, it should be expanded beyond citations to modern works with DOIs and ORCIDs to include more historic works defined by Worldcat and linked to historic scholars with ISNI identifiers. It would take a tremendous amount of work, but in theory, it would be possible to create a network graph of citations across all disciplines, going back in history to the advent of the printed book, charting the evolution of how knowledge is generated and disseminated. Therefore, Crossref, ISNI, and ORCID would perhaps play a greater role than providing simple (and superficial) citation metrics in enabling us to develop a broader historiography and analysis of scholarship itself. We plan to mint DOIs for our historical publications eventually, if Crossref extends its XML schema to support ISNI identifiers.

Under the Hood

Some extensions were implemented in ETDPub, the TEI/MODS publication framework that underlies the ANS Digital Library. First, I authored XSLT stylesheets that crosswalk TEI or MODS into the appropriate Crossref XML model according to their schema version 4.4.0. You can see an example of my MA thesis here: http://numismatics.org/digitallibrary/ark:/53695/gruber_roman_numismatics.xref.

If the author/editor URI matches an ORCID URI in the TEI, then the Admin panel in ETDPub will enable the publication of the metadata to Crossref. Similarly, within the MODS ETD editing interface (in XForms), a user can insert a mods:nameIdentifier[@type='orcid'] under the mods:name for an author/editor in order to capture the ORCID. So far, only TEI or MODS records with ORCIDs attached to people are available for submission into Crossref to mint a DOI.
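The ORCID check can be sketched as follows. The regex reflects the published ORCID iD format (16 digits in four hyphenated groups, final character possibly X), but the helper itself is hypothetical, not ETDPub's actual code:

```python
import re

# ORCID iDs are four hyphenated groups of four characters; the checksum
# character at the end may be a digit or X.
ORCID_URI = re.compile(r"^https?://orcid\.org/\d{4}-\d{4}-\d{4}-\d{3}[\dX]$")

def is_orcid_uri(uri: str) -> bool:
    """Return True when an author/editor URI points at an ORCID profile."""
    return ORCID_URI.match(uri) is not None
```

Only records whose author/editor URIs pass a check like this are eligible for Crossref submission, since the DOI deposit ties the work to the author's ORCID profile.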

Submission Workflow

In the admin panel, if a document is eligible for submission to Crossref, a checkbox is available. Clicking on this will fire off a series of actions in the XForms engine:
  1. The TEI/MODS-to-Crossref XML transformation is executed and loaded into an XForms instance
  2. The Crossref XML is serialized to /tmp because it must be attached via multipart/form-data
  3. Because I still have difficulty getting multipart/form-data to execute correctly in the XForms engine, it instead interacts with a PHP script via CGI
  4. After the PHP script responds with a successful HTTP code, the MODS/TEI document is loaded in the XForms engine in order to insert the DOI in the proper location within the document
  5. The TEI/MODS file is saved back to eXist, and the standard publication workflow is executed (a chain of XForms submissions), updating the Solr search index and the triplestore/SPARQL endpoint
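Step 3 exists because the XForms engine struggles with multipart/form-data. For reference, the shape of such a request body (what the PHP script ultimately sends to Crossref) can be sketched with the standard library alone; the field name and content type here are assumptions for illustration, not Crossref's documented requirements:

```python
import uuid

def multipart_body(field_name: str, filename: str, xml_bytes: bytes):
    """Build a minimal multipart/form-data body carrying one XML file attachment.

    Returns (body, content_type); the boundary is random per request.
    """
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/xml\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + xml_bytes + tail, f"multipart/form-data; boundary={boundary}"
```

In XForms, building and POSTing a body like this reliably proved difficult, which is why the framework delegates the upload to PHP.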
So far two documents in the Digital Library have DOIs connected to ORCIDs:

Taranto 1911: http://dx.doi.org/10.26608/taranto1911
My thesis (Recent Advancements in Roman Numismatics): http://dx.doi.org/10.26608/gruber_roman_numismatics

Friday, July 14, 2017

Improved mapping in EADitor - Brett archaeology photos as a test

At long last, I have migrated from OpenLayers to Leaflet in EADitor. This required modifications in two areas: the HTML pages for rendering EAD finding aids and the map interface. As a result, I introduced two new serializations:

  • The map interface now renders Solr search results as GeoJSON (instead of OpenLayers displaying Solr->KML, as before)
  • A transformation of an EAD finding aid into GeoJSON. A GeoJSON point is created for each unique mappable place from Geonames or Pleiades, with coordinates extracted in real time by reading the Geonames APIs or Pleiades RDF. The GeoJSON features include references to all uniquely addressable components that include that place in the controlaccess element. You can append the extension '.geojson' to get the GeoJSON response. Content negotiation will be implemented eventually. See http://numismatics.org/archives/ark:/53695/nnan0037.geojson for example.
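The shape of the finding aid GeoJSON can be sketched as below. This is illustrative only: the property names ("name", "components") and the example coordinates are assumptions, not EADitor's exact output.

```python
import json

def place_feature(label: str, lon: float, lat: float, component_urls: list) -> dict:
    """Build one GeoJSON point feature for a mappable place, listing the
    uniquely addressable finding aid components that reference it."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"name": label, "components": component_urls},
    }

collection = {
    "type": "FeatureCollection",
    "features": [
        place_feature("Athens", 23.7275, 37.9838,
                      ["http://numismatics.org/archives/ark:/53695/nnan0037#d1e131"]),
    ],
}
geojson = json.dumps(collection)
```

Because each place feature carries the URLs of the components that cite it, the Leaflet map can link every point back into the finding aid.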

 

 Restructuring the Agnes Baldwin Brett finding aid

Agnes Baldwin Brett was a curator at the ANS from 1909 to 1912 and a prominent scholar of Greek numismatics. Our archives hold a variety of interesting materials, including photographs from her travels around Greece, Italy, and Turkey in the early 1900s. Numerous photos were digitized, uploaded to Flickr Commons, and linked to the Brett EAD finding aid. Some photographs were identified and described (with brief text snippets) by ANS archivist David Hill, but all photographs were placed in a single series-level component. All identifiable places were linked via EADitor's Geonames lookup mechanism in a top-level controlaccess element. There was no direct correlation between individual photographs and the people, places, and things depicted.

In order to demonstrate the full functionality of the new mapping interface, I finally took the time to restructure the finding aid so that each photograph would appear in its own item-level component with a controlaccess element enabling individual identification of the place depicted in the photo. Furthermore, while many finding aids have been linked to modern places defined in Geonames, the Brett collection of archaeological photographs provided an opportunity to link photos to ancient places in Pleiades, which would, in turn, open the door to the integration of these valuable materials into the wider Linked Ancient World Data cloud via Pelagios. The photos feature Mycenaean tombs, Greek temples, and even the Grave Stele of Hegeso.

Identifying individual monuments within Athens


Not only that, some photographs feature other students from the American School of Classical Studies at Athens who went on to become prominent scholars later in life. Since many of these scholars have produced published works and archival materials held at other institutions, they have URIs in the Social Networks and Archival Context project. EADitor has had SNAC lookups for quite some time, and so I was able to link photos to these URIs when applicable. I hope that we can make these photos available to researchers even beyond the ancient world.

Linking people to SNAC
In addition to the tagging of places and people, many photographs feature known archaeological monuments that are notable enough to warrant their own Wikipedia articles, and therefore Wikidata entity URIs. I extended the subject lookup mechanism in EADitor beyond the standard Library of Congress Subject Headings to query the Wikidata API, embedding entity IDs directly into the EAD finding aid, which are then transformed into dcterms:subject URIs upon RDF serialization.

 

EAD to RDF

Since each individual component has an ID in EADitor, each component is uniquely addressable by fragment identifiers, e.g., http://numismatics.org/archives/ark:/53695/nnan0037#d1e131. After making some minor modifications to the RDF output to conform with the emerging schema.org archival extension, these Wikidata, SNAC, Pleiades, and Geonames URIs are exposed in the RDF for each component, which are hierarchically linked together.

@prefix arch: <http://purl.org/archival/vocab/arch#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix schema: <http://schema.org/> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://numismatics.org/archives/ark:/53695/nnan0037#d1e131> a schema:ArchiveItem ;
    dcterms:coverage <http://www.geonames.org/264371> ;
    dcterms:date "1900-12-07"^^xsd:date ;
    dcterms:identifier "06-00242" ;
    dcterms:isPartOf <http://numismatics.org/archives/ark:/53695/nnan0037#c_92f631e3f903281a8cdedbfebfca0654> ;
    dcterms:subject <http://socialarchive.iath.virginia.edu/ark:/99166/w61c5qjp> ;
    dcterms:title "American School students wearing bug bags" ;
    dcterms:type <http://vocab.getty.edu/aat/300046300> ;
    foaf:depiction <http://farm9.staticflickr.com/8320/8003385533_c83827b679_o.jpg> ;
    foaf:thumbnail <http://farm9.staticflickr.com/8320/8003385533_55f1f093b1_t.jpg> .

This RDF is posted into Archer's SPARQL endpoint.

Archer RDF → SPARQL → Pelagios RDF

Now that we have numerous uniquely addressable photographs linked to Pleiades URIs published in our SPARQL endpoint, it was a breeze to create an RDF export for Pelagios. It is essentially a DESCRIBE query, and our model of RDF is run through XSLT into the Pelagios data model.

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dcterms: <http://purl.org/dc/terms/>
DESCRIBE ?s WHERE {
 ?s dcterms:coverage ?place FILTER (strStarts(str(?place), 'https://pleiades.stoa.org'))  
}

The link to the Pelagios VoID is available on the front page of Archer. It is generated by an ASK query similar to above to see whether there are any objects in the SPARQL endpoint with Pleiades places expressed by the dcterms:coverage property.

Summary

The Brett collection is incredibly interesting, and I hope that we will be able to digitize more photographs and the corresponding travel diary at some point in the future. There are still many photographs that haven't been identified, and so perhaps we might be able to accomplish this through crowdsourcing. We will implement a IIIF server by the end of summer and begin the transition of our archival materials into IIIF--not only photographs, but also the Newell diaries. Perhaps one day we will be able to annotate the people, places, and things from the Brett diary and photographs with Mirador or a similar IIIF viewer. While Pelagios integration is somewhat imminent, the aggregation of disparate archival holdings through shared SNAC identifiers is still further along the horizon.

Tuesday, February 28, 2017

Final four Mellon-funded TEI ebooks published

The final four of a group of 86 American Numismatic Society-published books have been checked and uploaded to our Digital Library. Here are some stats I was able to produce from various SPARQL queries of the TEI->Open Annotation RDF:

  • 349 mentions of 164 different Greek coin hoards published in IGCH in 193 sections in 14 books.
  • 266 unique references to Nomisma URIs. 146 are mints or regions, and 87 of these identifiers are matches with Pleiades places. These mint references appear in 600 sections in 51 books. Including direct Pleiades references (and not only those which are implicit by means of Nomisma concordances), there are 621 sections in these 51 books which will be accessible through the Pelagios Project.
  • 97 of the 266 references are to people, most of whom are linked to Wikidata and VIAF entities that are, in turn, linked to other systems, such as Social Networks and Archival Context
  • More than 1,400 coins in the ANS collection are referenced
  • 139 Roman Imperial coin types in OCRE
  • 4 Roman Republican coin types in CRRO 
These four are the last of the 86 books digitized as part of the NEH-Mellon Open Humanities Book program. Many thanks to both the National Endowment for the Humanities and the Mellon Foundation for making this possible. The framework and methodologies implemented in this project will be applied to further digitization here at the ANS as we move toward making our entire collection of monographs freely and openly accessible, and I hope that other academic publishers and learned societies will follow in our footsteps in this endeavor.

These books go beyond simple transcription and publication as EPUB files. With links to our own research databases internally and externally to Linked Open Data information systems, we hope that these works will be transformed into research portals offering further context about the people, places, events, etc. mentioned in the text. On the other side of the coin, so to speak, researchers interested in the entities, objects, coin hoards, etc. will have access to a wealth of historical information about these things and will gain access to our monographs not only from our own Library, Archive, and Museum systems, but through projects like Pelagios, Digital Public Library of America, and other large scale aggregators of cultural heritage materials.

Friday, January 13, 2017

More than 80 LOD-enhanced ebooks published to the ANS Digital Library

The American Numismatic Society has nearly completed its Mellon Foundation-funded Humanities Open Book program. Eighty-two of 86 books have been enhanced by Whitney Christopher, a TEI specialist from the King's College London DH program, with links to people and places defined in Nomisma.org, Pleiades (either directly linked or by means of Nomisma's internal concordance system), VIAF, Wikidata, and the ANS's own archival authority control system. The final four books will go online soon. They are all available in the ANS Digital Library.

The number of people and places mentioned in these texts is a staggering figure, and it should be noted that we have focused on linking those entities that are most relevant to the texts, but we will continue to refine the linking over time, especially when it comes to Nomisma concepts and bibliographic references to Worldcat Works (links to which have not yet been incorporated). As Nomisma expands further into the Greek world and other domains of numismatics (after the ancient period), we will return to these ebooks to insert or replace links to Nomisma mints, people, and political entities.

Beyond relevant people and places, we have inserted hundreds of links to IGCH records (about 170 different coin hoards are cited in 400 locations in a handful of books), to the ANS collection, and to coin types defined in OCRE or CRRO. So far, more than 100 coins in the ANS and 6 in the Smithsonian American Art Museum have been identified by their accession numbers, and one of the four remaining books to be published will soon add nearly 70 more links to ANS coins. There are many more coins referenced in these books that may now belong to the ANS but had not been accessioned at the time of publication. A curator with more specific knowledge will need to identify these in the future.

One of the most often cited hoards is the Demanhur Hoard (IGCH 1664), which is mentioned in four books and on various pages of two of Edward Newell's notebooks. By linking archival authorities mentioned in these texts, we have greatly enhanced access to the works by and about Edward Newell and other prominent numismatic figures associated with the Society. A user of the ANS's authority portal (built on EAC-CPF) will have access to books written by Newell in our digital library, as well as his archival materials. Furthermore, mentions of Newell in books written by other scholars will appear under annotations; he is mentioned in 18 other books, sometimes in multiple sections.

Like Mantis, the OCRE and CRRO config files have been updated to link to our archival SPARQL endpoint, and therefore annotations about specific types are accessible directly through the types defined in these systems. Nearly 50 types in OCRE are linked from Roman Medallions, and a researcher can drill down into a specific section of the book from RIC 5 Gallienus and Salonina 1.
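Retrieving those annotations from the archival SPARQL endpoint can be sketched as a query over the Open Annotation model. This is an illustrative assumption about how the data is shaped, not the production query; the OCRE type URI below is a placeholder, and the body/target directions follow the common Open Annotation tagging pattern.

```sparql
PREFIX oa: <http://www.w3.org/ns/oa#>

# Find book sections annotated with a given OCRE type URI.
# In the tagging pattern assumed here, the body is the entity URI and
# the target is the section of text that mentions it; the production
# data may model the relationship differently.
SELECT ?annotation ?section
WHERE {
  ?annotation a oa:Annotation ;
              oa:hasBody <http://numismatics.org/ocre/id/example-type> ;
              oa:hasTarget ?section .
}
```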

Finally, through the links to Pleiades, each section in each book that mentions an ancient place will be accessible in Pelagios.

Monday, September 26, 2016

Publication of the NEH/Mellon Open Humanities ebooks

About a month ago, we pushed about 85 TEI files into production in the ANS Digital Library. These ebooks were transcribed from HathiTrust scans as part of the NEH/Mellon Open Humanities Book Program. Not all of the books have value-added tagging yet. We hired a TEI specialist several weeks ago to begin the process of linking coins, coin types, hoards, people, places, and other subject matter in the body of these books to URIs in our own and other information systems.

So far three of these books are complete:
  1. The Fifth Dura Hoard
  2. The Earliest Coins of Norway
  3. The Medallic Work of A.A. Weinman
Like the first book published into our Digital Library (Noe's Coin Hoards), the TEI links have been transformed into RDF conforming to Open Annotation, and these annotations are available in our other systems. For example, J. Sanford Saltus is referenced in The Medallic Work of A.A. Weinman, and so this annotation is available in the biography of Saltus in our EAC-CPF-driven authority system.
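As a rough sketch, an annotation of this kind might look like the following Turtle, assuming the common Open Annotation tagging pattern; the annotation and document URIs are hypothetical identifiers invented for illustration.

```turtle
@prefix oa: <http://www.w3.org/ns/oa#> .

# Hypothetical annotation linking a section of The Medallic Work of
# A. A. Weinman (the target) to the authority record for J. Sanford
# Saltus (the body). URIs are illustrative, not actual identifiers.
<http://numismatics.org/digitallibrary/annotation/example-1>
    a oa:Annotation ;
    oa:motivatedBy oa:tagging ;
    oa:hasBody <http://numismatics.org/authority/saltus> ;
    oa:hasTarget <http://numismatics.org/digitallibrary/weinman-example#section-3> .
```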

Most of the remaining books should have completed value-added TEI markup by the end of the year.

Thursday, March 17, 2016

First EBook published as part of Mellon/NEH Humanities Open Book Project

This is a follow-up to some major feature additions in MANTIS and IGCH detailed on the Numishare blog.

Today, we have published our first out of print, open access EBook for the NEH/Mellon Foundation Humanities Open Book Program. It is Sydney Noe's 1920 Coin Hoards, the first issue of Numismatic Notes and Monographs. As we discussed in our grant application, we had a vendor transcribe the scanned page images we received from HathiTrust into TEI. The TEI is run through a normalization XSLT stylesheet to correct some issues and pull bibliographic metadata from various sources, and then value-added tagging is applied to link to coins in our collection, hoards on coinhoards.org, and entities in various geographic gazetteers or linked open data vocabulary systems.
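The value-added tagging stage can be illustrated with a minimal TEI fragment. The element choices follow the TEI Guidelines, but the place name, URIs, and accession numbers below are invented placeholders rather than identifiers taken from Coin Hoards.

```xml
<!-- Hypothetical value-added markup in the body of a transcription.
     @ref/@target point to LOD URIs: a Pleiades place, an IGCH hoard
     record, and a coin in the ANS collection (all placeholders). -->
<p>The hoard was discovered near
  <placeName ref="https://pleiades.stoa.org/places/000000">Example Town</placeName>
  and is recorded as
  <ref target="http://coinhoards.org/id/igch0000">IGCH 0000</ref>; a
  tetradrachm from the find is now
  <ref target="http://numismatics.org/collection/0000.999.1">ANS 0000.999.1</ref>.</p>
```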

As a result, we not only have a digital text that you can view in your browser as HTML5 or download as an EPUB 3.0.1, but a richly-tagged document that is exposed as RDF conforming to Open Annotation, which is then published into our archival SPARQL endpoint (and soon published into Pelagios). Many of the technical features of this publication process have already been discussed in this blog or in the post linked above.

This framework is part of a broader effort to integrate all of our Library, Archive, and Museum holdings into a central hub for numismatic research. It is therefore possible to gain further insight about the people, places, and things mentioned in these digital publications through Linked Open Data methodologies, but also to provide greater context to our data-driven numismatic research projects like IGCH, OCRE, etc.

We now have not only a rich set of interlinked numismatic projects focusing on hoards, coins, and coin types, but also links between these things and numismatic monographs and journals, archival research notebooks, finding aids, and authority records. Not only is it possible to read biographical information about Sydney Noe in Archer; you can also view a map and timeline of his life and his social network graph, and gain access to a list of materials written by or about him.

This is the topic of my CAA presentation in Oslo in a few weeks.

Friday, March 11, 2016

Toward a more thoroughly integrated numismatic research system

I am making updates to our systems in preparation for the initial publication of NEH/Mellon EBooks. Part of the project is to thoroughly integrate these EBooks with our collection, archives, IGCH, and related project databases. I still have some work to do, but should have the first EBooks ready next week.
I updated the RDF model for our digitized Newell notebooks to conform to the Open Annotation model used for our EBooks (one book has been published so far: Miller's ANS Medals book). This means that mentions of IGCH hoards, other scholars represented in our biographies site, and [soon] individual coins in Newell's notebooks will be made available through those other interfaces.

See http://coinhoards.org/id/igch1399

  • You can click on individual pages where Newell notes IGCH1399, and the page will load in Archer.
  • You can see a list of coin types from this hoard, and you can download the list of coin types or a full list of coins from the hoard (note that Greek coins not connected to coin type URIs are not published to nomisma.org's SPARQL endpoint).
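A query along these lines, run against nomisma.org's SPARQL endpoint, could produce such a coin list. nmo:hasTypeSeriesItem is the Nomisma ontology property that links a coin to its type, but the type URI here is a placeholder and the overall shape is a sketch, not the production query.

```sparql
PREFIX nmo: <http://nomisma.org/ontology#>

# Sketch: list coins connected to a given coin type URI.
# Only coins linked to type URIs appear in the endpoint, which is why
# unconnected Greek coins are absent from the downloadable lists.
SELECT ?coin
WHERE {
  ?coin nmo:hasTypeSeriesItem <http://numismatics.org/ocre/id/example-type> .
}
```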

On http://numismatics.org/authority/id/newell (an EAC-CPF authority record)

These features already functioned before this update:
  • See a list of archival materials about Edward Newell
  • (Fairly new) Several annotations in Miller's Medallic Arts of the ANS where he mentions Newell. You can click a link to go directly to a section.
  • A social network graph showing Newell and his relations (also driven by SPARQL, detailed here).

On http://numismatics.org/authority/id/noe
  • As before, you can get a list of archival materials about Noe
  • Newell mentions Noe on two pages of a notebook

Next steps:
  1. Update the code for Mantis to display annotations about specific coins referenced in Newell's notebooks or our EBooks.
  2. Update the Pelagios exports for the Digital Library and Archer to make our EBooks and archival materials more broadly accessible to the ancient world community
  3. Build widgets into our Digital Library to pull data from our other systems

This interlinking will be inherent to the publication mechanism for our EBooks. When we publish the first several next week, the annotations will be available in Mantis, the Archer Biographies, IGCH, etc.

I will be discussing these things and more in my presentation at CAA in Oslo at the end of the month.