Critical AI


[The Ethics of Data Curation is the first in a two-part series of AY 2021-22 workshops organized through a Rutgers Global and NEH-supported collaboration between Critical AI@Rutgers and the Australian National University. Below is the sixth in a series of blogs about each workshop meeting. Click here for the workshop video and the discussion that followed.]

by Maddie Hepner (Photography and Media Arts Honours Graduate ‘21, ANU School of Art and Design)

In recent years, Flickr, the photographic sharing platform (a favorite with amateur shutterbugs as well as professionals) has become a vast data mine for machine learning and image recognition. Whether the photoblogger approves or even knows, #beautiful images of #cats #dailylife and #sunsets have become tapped veins for large datasets. With the aid of this unconsented data, and labelling supplied by an army of (poorly paid) human annotators, machine learning systems “learn” to recognize cats, kangaroos, people, and much else.

In the sixth Ethics of Data Curation workshop, led by the formidable duo of Katrina Sluis and Nicolas Malevé, the utilisation of ‘networked images’ and the ubiquity of image production within the context of the image dataset were at the forefront of discussion. As artist and researcher Baden Pailthorpe asked during his introduction, “What has the rise of image datasets, algorithms, “Mechanical Turks” and computational scale done to the humble photograph?” The workshop sought to unpack this loaded question.

Heavily referencing the ImageNet dataset, created by Stanford Computer Science professor Dr. Fei-Fei Li in 2009, Sluis and Malevé critiqued and expanded on the inner workings of photographic curation online. This persistent cycle of human-to-machine curation shapes how viewers come to understand uploaded images, both epistemologically and ontologically. The overwhelming scale of curation is a problem with an ironic solution: more curation through datasets. “It takes millions of images to curate millions of images,” Sluis and Malevé state in their presentation, showcasing this incessant sequence, which remains unbreakable even as platform developers actively try to dismantle it.

To explain how images travel through dataset pipelines, Malevé described ImageNet’s particular process.

Diagram of the ImageNet pipeline

To populate an ImageNet category, you select a term from a “synset”, or “synonym set”, query an image platform such as Flickr with that term, and collect the photographic results. These results are treated as candidate images and reviewed by annotators to verify that each image matches its classification. The annotated images are then shown to further labourers, who come to a consensus on each image’s classification and determine whether it will be included in ImageNet.
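For readers who think in code, the pipeline Malevé describes can be reduced to a short sketch. This is an illustrative simplification, not ImageNet’s actual implementation: the function names (`search`, `annotate`) and the simple majority-vote consensus rule are assumptions for the purpose of the example.

```python
def build_dataset(synset_terms, search, annotate, num_annotators=3):
    """Hypothetical sketch of an ImageNet-style curation pipeline:
    synset term -> image search -> candidate images -> annotator
    consensus -> dataset inclusion."""
    dataset = []
    for term in synset_terms:
        # 1. Query an image platform (e.g. Flickr) with the synset term.
        candidates = search(term)
        for image in candidates:
            # 2. Ask several annotators whether the image matches the term.
            votes = [annotate(image, term) for _ in range(num_annotators)]
            # 3. Include the image only if a majority agrees on the label.
            if sum(votes) > num_annotators // 2:
                dataset.append((image, term))
    return dataset
```

The sketch makes the workshop’s point concrete: every judgement about what an image “is” happens inside `annotate`, the human labour hidden behind the consensus step.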

This arduous process, from humble Flickr snapshot to taxonomic data, teases out the subjective tangles from a machine learning perspective. But what happens to the core of the image? Can we claim that these so-called “real-world” images still contain such pith? Malevé explores the recontextualization of the image in On the data set’s ruins, one of the primary readings for the workshop, by stating:

Photographs understood as data are presented as passive samples awaiting the mining of algorithms to be made meaningfully part of computational systems. In such a view, agency is located on the side of the algorithm, and data, as the name suggests, is simply given (Malevé, 2020).

Sluis questioned the democratization of the image within the ubiquitous production of images online and the anti-democratic context of excavating images for dataset curation. The dataset pipeline transforms the image into pure information and displaces it from its position as a cultural form. These photographic identifications have opposing implications within the cultural framework of Flickr versus a statistical dataset environment. Flickr users share photographs on the platform to achieve recognition and validation from other users. Flickr’s first stated goal is to “help people make their photos available to people who matter to them”.

Divergently, computer vision utilises an image to recognise and validate its perception of images and the world depicted within them. Sluis comments on this duality in Survival of the Fittest Image (the second reading for the workshop) by saying that, “photographers are both valued as a source of aesthetic knowledge and as a community in need of aesthetic improvement.” An abundance of ownership is now embedded within an already volatile medium. The battleground of the picture-plane has metamorphosed further into a new duplicitous breed: a site of mass data excavation.

Image: Kyle McDonald/flickr, ImageNet Similarity detail, CC BY 2.0

The photographic image in the context of the dataset is seen as representative of the “real-world” and linked closely with understandings of genuine truth. Yet, throughout the workshop, the concepts of truth and bias in machine learning were questioned and broken down. For example, is it possible to simulate real-world vision when some images in the dataset are subjective and may not be grounded in truth?

This question spurred comments surrounding the fundamental notions of truth in relation to human vision, with references to art critic John Berger’s Ways of Seeing methodology. The relationships between what we see and what we know are intertwined. Computer vision makes this entwinement even more complex. As our discussants pointed out, human vision itself is constructed and the apparatus of the camera further transforms what we see. Almost like something out of an apocalyptic fever dream, this cycle can be reimagined as a perpetually cannibalistic sequence in which human vision is reconstructed through the mechanical retinas of artificial intelligence.

Whilst aesthetic datasets emerge from a process in which photographers commit to self-improvement and self-education, the algorithmic systems they help produce are increasingly mobilised in turn to discipline photographers and shape photographic production (Sluis, 2019).

With a (virtual) room full of inquisitive participants at the workshop’s end, Sluis and Malevé expressed their desire to keep these areas of discussion and critique open for further academic investigation. Machine learning’s grey areas—including the identification of images and the curation of datasets—are breeding grounds for thoughtful enquiries and attentive action moving forward.

The seemingly passive image on the surface of the screen is perhaps a mere façade—not only of a greater crisis of photographic representation but also of photographic representativeness.
