Reflections On The 2018 Digital Asset Symposium


This article was contributed by DAM Guru member, Jeffrey Marino.


Digital Asset Symposium
DAS: New York

Hosted by The Association of Moving Image Archivists
June 6, 2018
Museum of Modern Art
New York, NY

“Who lives, who dies? Who tells the story?” sums up how history gets written – by the survivor. Last month in NYC we did not get to see the musical Hamilton (that’s a line from the show), but we did get to the Digital Asset Symposium for a lineup of thought-provoking presentations by media asset management leaders from non-profits, music entertainment, sports, documentary filmmaking and marketing technology. Interspersed among the expert sessions were sponsor presentations from the marketing technology, big data, big storage and AI industries.

All provided interesting insights on digital asset management processes, the life and survival of the digital asset, and its purpose. As kickoff speaker Nick Gold, Program Director from The Association of Moving Image Archivists, said: “A media asset…becomes part of the human story and crucial in the hands of the storyteller.”

The core value of DAM platforms, vendor marketing often points out, is the efficiency and efficacy of maintaining ‘a single source of truth’ for digital assets. So when I saw the title of the keynote talk – “The Truth is a Lie” – I thought we might be entering a topical discussion of facts vs. alt-facts. Instead we were guided into the arena of quantum physics by Chris Welty, a professor of Cognitive Computing and Sr. Research Scientist at Google. Peeling back the onion on what he called ‘the super-positioning of reality,’ he refreshed us on how photons coexist as both particles and waves – in two different realities – until observed.

Photo credit: Zachary Zahos

In another example, Professor Lora Aroyo, Chief Scientist at Tagasauris, displayed a landscape image: is it Sunday Mountain, New Zealand, or is it Minas Tirith, Gondor? The image is of course both – its reality depends on the context of the viewer and the descriptive bias of whoever describes it.

Their point: because of this super-positioning of reality, inconsistency in digital asset metadata is inevitable. Welty cited studies showing that people cannot agree on simple judgments (such as the color of a flower) or even simpler ones (such as whether something is a flower at all). Accuracy in metadata, he posited, requires defining not only what something is (e.g., blue) but also what it is not (e.g., not monochrome). That means more metadata. To take on the extra tagging, and to even out those inevitable inconsistencies, Aroyo described how groups of people who are not subject matter experts can derive metadata for images better than, well, professionals. Tagasauris packages this as a service called CrowdTruth.
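To make the aggregation idea concrete, here is a minimal sketch – hypothetical data and function names, not Tagasauris code – of how redundant crowd tags can be evened out with an agreement threshold. (CrowdTruth itself treats disagreement as signal rather than noise, so a simple threshold vote is only a first approximation.)

```python
from collections import Counter

def aggregate_tags(annotations, min_agreement=0.6):
    """Keep a tag only if at least min_agreement of the workers applied it.

    annotations: one set of tags per crowd worker, all for the same image.
    """
    counts = Counter(tag for tags in annotations for tag in tags)
    n_workers = len(annotations)
    return {tag for tag, c in counts.items() if c / n_workers >= min_agreement}

# Five workers tag the same landscape photo, each with their own bias.
workers = [
    {"mountain", "landscape", "new zealand"},
    {"mountain", "landscape", "minas tirith"},
    {"mountain", "landscape", "snow"},
    {"landscape", "new zealand"},
    {"mountain", "landscape"},
]
print(aggregate_tags(workers))  # -> {'mountain', 'landscape'}
```

The contested tags (“new zealand,” “minas tirith”) fall below the threshold, while the tags everyone can see survive – which is exactly the trade-off Welty and Aroyo were describing: consensus smooths inconsistency, but it can also smooth away legitimate context.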

In the next session, “Archiving Human Rights Video: Planting Seeds of Preservation Throughout Production,” Nicole Martin of Human Rights Watch countered the previous discussion by espousing the value of ‘fixity’ for digital assets. HRW’s standpoint is that original, unchanged data are primary legal evidence relevant to real people in the context of their harm or disadvantage. Its processes mandate preserving each original asset in its exact original data form, even ensuring that the drives used for cloning are write-protected. Only after such preservation (‘fixity’) is in place do the additional tagging and transcoding of assets and the creative production processes begin.
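In practice, fixity is usually enforced with checksums: hash the master at ingest, store the digest, and re-verify it before anything touches the file. A minimal sketch of that pattern (the file path and workflow here are hypothetical – HRW’s actual tooling wasn’t specified):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large video masters fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At ingest: record the hash alongside the asset's catalog entry.
original_hash = sha256_of("masters/interview_0042.mov")

# Later, before any tagging or transcoding: confirm the master is unchanged.
if sha256_of("masters/interview_0042.mov") != original_hash:
    raise RuntimeError("Fixity check failed: master file has changed")
```

Write-protecting the cloning drives, as Martin described, complements the checksum: the hash can only detect a change after the fact, while the hardware block prevents it.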

On the commercial side, we next heard from Randa Minkarah in “Bridge the Gap: Unite Content and Customer Intelligence for Audience Intelligence and Growth.” Minkarah described how her company, Transform, mines the engagement activity of OTT consumers (aka cord-cutters, the streaming broadcast audience). Transform seeks to provide metrics that influence the development, or even the story arc, of creative programming. Perhaps such data mining will improve the accuracy (or at least reduce the gross misdirection) of the targeted ads I get. Fingers crossed!

Sally Hubbard of PBS led the “Smart Stacking of Data and Information Services” session, shedding light on the differences between ‘Big Info’ and ‘Big Data.’ Information Science, she explained, is the internal process of storing and transferring content with precision and fixity. Data Science, on the other hand, is the external process of discovery and analysis, seeking linkages that are (or might be) actionable. The symbiosis of the two is that while the library process of adding information increases the basic value of the assets, the analytics process increases the market value of the system through predictions based on probability. And we should be mindful, as Gian Klobusicky, Sr. Data Scientist at HBO, put it, that “probability is logic with uncertainty.”

“Smart Stacking” is also how managers yoke the yin of information to the yang of data, leveraging not just technology but the human factor. People have an innate ability to process information and perceive context better than algorithms, and, most importantly, they are the ethical backbone of the ‘stack.’ As Dalia Levine, Ontologist at HBO, pointed out: “As librarians we are trained explicitly for the presentation and management of data in as factual and unbiased a manner as possible.” Bottom line: ethics is a personal responsibility for every employee in the organization. “Bias,” added Hubbard, “is present in all levels and needs to be monitored and corrected as it occurs.”

Dan Piro, Director of the Digital Asset Archive at the National Hockey League, recapped a big project implementation: capturing and cataloguing 100 years of hockey images, film reels and video from all kinds of formats. Because this was the NHL, he was able to throw a lot of resources at it. Without revealing the budget, he mentioned that the first vendor contracted for digitization was overwhelmed by the scope of the job and had to renegotiate terms. The NHL not only agreed but added a second vendor to keep the project on track. What drove the big spend was the high value the league placed on its Centennial. Piro cheerfully said, “clearly the DAM would have high value once in place, but in terms of actual ROI – who knows?” For many of us, budget and ROI are painful sticking points in getting an implementation off the ground, but Piro and his team seized the opportunity and rushed the open goal (so to speak).

In 1967, the Montreux Jazz Festival was founded with a combined mission: to stage world-class music performances and to document them all in photos and video for archive, research, education and innovation. Dr. Alain Dufaux, Head of Operations and Development at the Metamedia Center of EPFL (École Polytechnique Fédérale de Lausanne, the Swiss Federal Institute of Technology), described how this created a huge store of assets in many formats, noting that the Festival was a very early (1991) adopter of HD video. Preserving the assets has produced a digitized archive of several petabytes. With 14,000 master audio recordings, 11,000 hours of video, and over 100,000 photos, it is no wonder they are experimenting with new options: recordings of Miles Davis and Deep Purple are already stored biochemically on DNA. Innovations driven by the Center include automated defect detection and correction for video; sound ‘bubbles’ that improve the audio experience in the open environments of the library; and interactive capabilities that let casual visitors virtually remix and ‘open-mic’ Montreux performances.

The closing keynote featured production team members of the Netflix documentary series “Bobby Kennedy for President,” which streamed this year, the 50th anniversary of his assassination. This kind of film, they explained, involves a vast process of asset discovery, requiring diligent detective work, a prodigious amount of time, and the tried-and-true method of talking to people and following lead after lead. The team focused on finding people who were actually present during the political campaign, creating new assets (interviews) to play against old assets (footage often never before archived). The film includes, for example, a clip of a doubtful Bobby Kennedy that aired only once, in 1968, adding a darker color to the myth of Camelot. “It’s the golden age of retrieval,” remarked Archive Producer Rich Remsburg, who researched and delivered assets from sources ranging from ProQuest to local TV stations to eBay. Series producer Elizabeth Wolff wryly noted: “The story gets told only from what’s digitized.” The story of this film illustrated that finding truth in the data is driven above all by digital asset management’s core value: discoverability.

At this Digital Asset Symposium, the presenters generously shared their own best practices for sourcing, managing and standardizing metadata. We peeked under the hood at naming conventions and schemas, and got bird’s-eye views of building and staffing an asset management system with the tools and automation available today. And we looked to the future, up “the AI Ladder”: from a data foundation (Big Info), through analytics for insights (Big Data), to machine learning and, ultimately, true AI.

But the future is now. Logan Ketchum from Veritone (one of the conference sponsors) reminded us that “Artificial Narrow Intelligence” is already at work in every use-case-based DAM platform on the market. And as we yoke multiple engines together – including the processes and stacking discussed at this conference – we are not only solving for our digital asset needs; we are also injecting life into an AI operating system…

And, as we learned from Jurassic Park, life will find a way.

Jeffrey Marino is a Digital Asset and Project Manager at WordCityStudio, Inc. He has worked in broadcast news, documentary, advertising technology and DAM. He recently received his MS in Media Management at The New School and is an active member of DAM Guru.

This post originally appeared on the DAM Guru Blog.
