NIFFF, Let’s Plays and Cluster Visualizations

A while ago I merged my dissertation wiki with my general website, so journal entries now sometimes mix things up. This is one of those. Keeping with our annual pivoting-into-summer tradition, I visited the Neuchâtel International Fantastic Film Festival (NIFFF) with my spouse. 16 films, 2 presentations and 1 exhibition later I’m quite beside myself, but in a good way. Disrupted rhythm, lack of sleep, hardly having time to think about work, meeting people, watching fantastic movies - all of this helps shift perspective and experience. It also leads to less work done. But that’s ok. Since it’s summer and nobody wants anything from me, I’m equally productive…

NIFFF

These were my favorites:

It was also the first time I participated in some of the festival’s video game related events, since that is my métier now, apparently. Next to two panels, Let’s Play Nature and That Cosy Feeling, I also visited an exhibition house. They showed artworks made with video game engines, some world building pieces, some works about fears on the internet, and the results of a game jam. The world building games spoke to me. I’m a sucker for walking simulators and just looking at virtual environments. The rest was meh… Doing machinima just for the aesthetics - I don’t know. Game jams are important, but the results are usually more fun for the people who made them. They’re seldom really accessible to people outside.

The two panels were low-threshold but very interesting. I loved to see the let’s play format at NIFFF. It just adds a nice touch. The first panel, on playing nature, focused on Stray and a game whose name I forgot. It looked at how these two games play with anthropocentrism and analyzed how that topic is tackled, but also made playable. Stray was already on my wishlist, and now I’m even more motivated to play it. The second panel concentrated on the new-ish genre of cozy games. I do not have an opinion about that.

Visualizing Clusters

After a week of tinkering and setting things up, I was finally able to care more about actually analyzing the image clustering. Some weeks ago, I started out with pix-plot, which finds its fair share of application in the humanities. It’s relatively easy to set up and use, and it makes pretty visualizations. It took quite a while to work through a larger dataset (~113’000 images), and for that reason I looked at FiftyOne. The application does the same, but less fancy and far more feature-rich. It also allows customizing the clustering process. Getting to know FiftyOne made me realise how little I know about the actual technology behind pix-plot, and I was asking myself to what extent I actually should. Here I follow Janna Omena’s thoughts on Technicity (See Notes on A technicity perspective to the practice of digital methods). I deem it quintessential that I have a rudimentary understanding of the processes at hand and how my research practice relates to the tools I use. That said, I needed to get up to speed on some basic concepts such as image embeddings, clustering algorithms, differences between computer vision models, etc. It was definitely worth it.
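To make those basic concepts concrete for myself: the pipeline behind tools like pix-plot boils down to three steps - embed each image as a high-dimensional vector with a vision model, project those vectors down to 2-D for the scatter layout, and group them into clusters. A minimal sketch, using random vectors as stand-ins for real embeddings and PCA in place of the UMAP projection pix-plot actually uses (so this is an illustration of the idea, not pix-plot’s code):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Stand-in for step 1: pretend these are 2048-d embeddings of 300 images,
# as produced by a pretrained vision network.
embeddings = rng.normal(size=(300, 2048))

# Step 2: reduce to 2-D for the scatter-plot layout
# (pix-plot uses UMAP here; PCA keeps this sketch dependency-light).
coords = PCA(n_components=2).fit_transform(embeddings)

# Step 3: group the images into visual clusters.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)

print(coords.shape)       # one (x, y) point per image
print(len(set(labels)))   # number of clusters found
```

Everything downstream - the pretty visualization, the apparent "order" - is produced by these few mathematical operations, which is worth keeping in mind for what follows.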

One of the biggest struggles was how to actually analyze the result of the clustering processes. This part of the research process is often omitted or void of details. Distant viewing and image cluster papers are often very precise regarding their dataset and setup, but don’t say much about the human aspects of generating insights. A great example is this paper: Medals and likes: A methodology for big data image dataset analysis of Olympic athletic beauty on Instagram. The method is clean, and the insights generated are valuable. They write about how they created the dataset, why pix-plot is a great fit for their research and how it actually works, but then they reduce the actual analysis of the image clusters to the following:

By closely observing the groupings, it was possible to establish an initial hypothesis, indicating that there seems to be a polarization between the image types in two aspects […]

But how does this observing actually happen? Who observed, and for how long? On what kind of screens? What was the process of observing? Was there a methodical approach, or was it just randomly wandering around the visualization? Luckily, I’m not the only one wondering about these things. In my quest to understand this process better I came across the Distant Viewing Lab, but I haven’t found the time to read their work yet.

For now, I take it as an extended case of what Drucker described in “Humanities Approaches to Graphical Display”. Among other things, while reflecting on gaining knowledge through data visualizations, she calls for a reframing of data as capta: data not as something already there and ready to be uncovered by the researcher, but as something captured by the researcher - a framing that includes the creative act and the creator, the researcher. The problem with these image clusterings, heavily processed and automatically created, is that they are seemingly imbued with intelligence. Since the visualizations are pleasant to look at and ordered (instead of random chaos), they evoke the feeling that somebody or something intelligent made them, and that I just need to figure out the thought behind it. That’s a kind of fallacy, and the impression could not be further from reality. In the end, it’s just some math. But then again, the relation between math and visual beauty is well documented.

It’s important to treat the result this way, as the product of a series of calculations. But then the humanist researcher comes back into play, and must ask questions and reflect on the method, the results and themselves.