
Introduction: A human and increasingly nonhuman action

This essay explores the theme of interaction, a human and now increasingly nonhuman action that underpins social media. It starts from the basis that attitudes toward the public sphere are changing, but grounds this claim in a literary-historical past that recognizes potentially comparable epochal moments of technological transformation. For example, the spread of the printing press in the sixteenth and seventeenth centuries and the expansion of mass literacy in Europe and North America in the late nineteenth and early twentieth centuries enabled human connectivity, with complex effects on social and cultural exchange. The former inspired a broader sense of, as Brodie Waddell has argued, “the value of literacy in the social, economic, and religious life of people in decidedly ‘unliterary’ occupations” as well as a related desire to express selfhood through writing.1 Lyndal Roper has shown how life writing technologies – literature, pamphlets, woodcuts – also helped witchcraft to become “part of the culture of entertainment,”2 which simultaneously fed an “atmosphere of panic” that spread across Protestant and Catholic Europe.3

The Ego-Media project has highlighted how the cultural aspects of social media, as an enterprise, are often underarticulated and underemphasized. That is, while the enterprise’s sophistication with respect to quantitative analysis is broadly recognized, and is increasingly deployed to trace social developments, interaction is primarily viewed through the prism of scale and, linked to this, marketability. That is not to say that individuals and groups have not found ways to work with social media tools to effect on- and offline changes, enacting dialogues and actions that challenge some of the tenets on which Web 2.0 is built – they have. But moving sociality online changes our understanding of accepted definitions of what it means to interact, and those definitions need to be revisited with an eye to historical precedents that can inform our analysis of periods of technological transformation, when the forms and practices of sociality and civilization are themselves in transition.

While dialogue has historically been a privileged element of communication, as a concept it often implies a narrow focus on the human. In a social media age the term thus does not always capture the importance of computational interfaces and networks in facilitating and directing sociality. We therefore deploy the term interaction here. Such a term can include dialogue (between humans, between machines, and between humans and machines). It can also gesture toward the many other extradialogic features of online social activity that mediate communication. Facebook’s reaction buttons, for example, offer users a means of communicating that does not require the same degree of self-articulation as free-form type might.

We see attention to interaction as comprising several aspects:

  • Interfaces and their relation to communicative forms, practices, and capacities;
  • Dialogues and how these are enabled, but also constrained in various ways, often by the interfaces themselves;
  • Transactions and their implications for theories of communication;
  • Publics – given dialogue is so central to theories of the democratic and public spheres;
  • Non/human users and players – that is, who or what is interacting, and what the by-products of those interactions might be.

Interactions are framed by a number of factors. They are defined by access to the hardware and software that enable what we now consider to be relatively straightforward digital activities, and by the age and skills of the human users. The latter are both enabled and limited by platform capacities and interfaces. Some make it easier, for example, for human users to engage in multimedia, nonverbal forms of communication that can increase access to on- and offline communities and discussions but are at the same time preset and, to an extent, promote their own uniformities. These communications are increasingly datafied and enabled by private and commercial platforms, raising questions about what it means for something to be marketed as “free” online. The possibilities and the trade-offs of the infrastructure are complex – as are open questions about how the novelty and sustainability of social-media-enabled interaction distract, bend distance and time, and affect how individuals see themselves or engage with communities.

It is also worth reflecting on the rate of change, specifically with respect to networked and mobile technologies that have become increasingly – but not evenly – accessible in the English-speaking West, which is the primary focus for Ego Media research. Historically the technologies that made human-to-human interactions possible were viewed with a mixture of excitement and anxiety, with their utopian potential to bring people together, and to enable the invention of new communal forms, recognized even as many worried about abuse and social dislocation. For example, Richard Overy, in writing about Britain between the world wars, has identified a particular character in the relationship between science and society: “There was widespread confidence in scientific possibilities and also a conventional acceptance that science represented some form of absolute truth...The result was often an unsophisticated appropriation of scientific developments that were at best provisional or contradictory.”4

With the development of online spaces, preceding and prevailing discourses conflated technological, commercial, and social innovation. These discourses promised to expand the situational potential for communication. Families and workers would move but stay close and connected, even as customers could interact with craftspeople, manufacturers, and services to access tailored, “smart” products, allowing niche businesses to thrive, and thus contributing to the emergence of more responsive, resilient, and dynamic economies. At the same time information would move freely, allowing for greater transparency in public life. In this world grassroots activists representing diverse perspectives and experiences would find new allies, organize, and find new ways to make their voices, views, and stories heard and counted. New creative expressions and communities would emerge and flourish across cultural boundaries.

Web 2.0 has enabled this to happen. And yet, even as humans have embraced and expanded their lives online, fears and anxieties about the character of, and data generated by, interactions on a scale almost unfathomable to human intelligence have taken hold and – belatedly – become a key feature of public debate. The initial sense of optimism and excitement that emerged in the West in response to the dot-com boom encouraged musings about the potential for near-limitless economic growth. Human-computer interaction (HCI) research expanded throughout much of the 1990s and into the early 2000s as technological advances became more broadly accessible and integrated (at least in wealthy, wired cities, regions, and nations). Later, in the face of the proliferation of mediated social networking platforms and rapid technological advances, and as concerns about access, abuse, and exploitation emerged, such qualitative research became only more important.5

The breakdown of consensus governing digital behaviors of subjects and agents alike has led to the amplification of corrosive fantasies, the commodification of conspiracy theories,6 and rising concerns about potential misuses of data sets by private players. People have come to interrogate the extent to which social media platforms are intentionally obfuscating or “black boxing”7 – through algorithms, data collection, and sharing, and by offering users the illusion of choice and control – in order to deflect attention away from how they allow and, in some cases, encourage online abuses. Data warfare, enacted by and at the behest of nations as well as a variety of substate actors and insurgent organizations, has moved from potential to probable, to an active, lived reality with offline consequences. Considered more broadly, despite their potential to inspire what Clare Brant posits as new forms of imaginative agency, interactions are increasingly perceived to have dystopian possibilities.

Summary

Over the course of the Ego-Media project questions regarding interactions have emerged and persist. Who owns the data generated by human and machine interactions? What are humans doing to one another in the digital space, and what are the offline consequences? What might increasingly refined and capable HCI-informed machines do at our behest as huge conglomerates choose to advance, at breakneck pace, the infrastructure of what Shoshana Zuboff calls “surveillance capitalism” in the name of human automation?8 What was once seen as an essentially progressive technology-driven movement enabling democratized interaction and empowerment of people now looks more like a set of unregulated, tangled systems enabling exploitation, manipulation, and control, and the intentional and passive accretion of power by the very few. And yet we are also seeing challenges to these systems: that is, examples of agency wherein users counteract, circumvent, and co-opt affordances, thus establishing norms and dialogical forms that suit their purposes.

This section addresses some of the ways in which we have conceptualized digital media interactions. It provides a contextual introduction to the research that informs our open questions and (tacit) conclusions about a space and an academic discourse that is changing rapidly in response to new technologies and forms of practice. It considers

  • how individual users and online communities interact with one another and with social media platforms online;
  • what norms and practices for interaction have developed in specific environments and around specific types of posting;
  • how platforms direct or inhibit different forms of human/automated interactions; and
  • the implications of allowing increasingly prevalent but simultaneously opaque machinations to inflect how digital audiences engage with one another and with developing technologies in the merging public and private spheres.

Essay

In order to consider interaction as a theme for Ego Media, we have broken down our analysis into three areas: dialogue, tone, and communities. By identifying these areas we are not attempting to impose a static hierarchy on practices and systems that are necessarily fluid. Instead we want to provide lenses through which the research findings detailed in this publication might be viewed.

While considering these areas, it is necessary also to complicate the nature of digital media interaction by considering forms and practices, and the role of human agency – individual and collective – in each of them. It is necessary, too, to consider whether or to what degree we as subjects, selves, and users are willing to revise historical ideas of the public sphere, and in so doing to accept the logic of the market increasingly being imposed on digital interactions. The latter include, in particular, interactions that humans direct and those that assume human-to-human involvement, with technology acting primarily as a facilitator. Increasingly, however, we inhabit a digital world in which nonhuman – or machine – interactions are ascendant, directing behaviors and imposing new aids and barriers to human engagement or access to information.

Dialogue

Offline dialogue has traditionally been conceived, idealistically, as the preserve of self-expressing human beings. Social media, although “social,” are premised on a certain level of extrahuman intervention, be these interface designs, algorithms, or transactions, all of which have the potential to inflect communication and activity and change the way we interact in and with analog and digital worlds. The Ego-Media project is specifically concerned with self-presentation and with the ways in which, and instances when, technology is shifting conventional autobiographical practices. We conceive of self-presentation as an inherently dialogic activity. Hence, discursive communication, and how, when, where, and why it interacts with self-narratives, is central to our analytical approach to social media. In quotidian “good faith” exchanges shaped by the design of digital media platforms, as well as in potentially insidious cases wherein platforms exploit or “game” interactions, online talk and the “art of conversation” are changing in response to social media.

Owing at least in part to proprietary algorithms, programmability now interacts with human language in opaque ways, changing how the latter functions in the digital realm with respect to both human-to-human interactions and human-machine dialogues, and raising a number of questions about social and digital media platforms going forward. What type of talk makes its way online, and what gets blocked? How do dialogues function back and forth? How, when, and why do dialogues break down, and who controls the flow when scale becomes a core determining factor? Furthermore, what constitutes audience engagement in the digital media space? How do alignment and ritual appreciation in discursive practices play out online? How do dialogues shift when feedback is immediate, quantifiable, and (to a lesser degree) qualitative? Who controls amplification? And to what extent are all of these questions influenced by ego-narratives and self-presentation practices? All of these questions speak to the complexities of interaction in the digital age.

The internet and social media in particular have been presented as enabling a utopia of communication (following in a long history of technology’s relationship to dialogue). Discussions centered on how people – or users – would interact online, and how this would improve dialogue across spectrums of experience. Ego-Media projects – Leone Ridsdale’s, Rebecca Roach’s, and Alison McKinlay’s work on health narratives9 (on the King’s College London website), Rachael Kent’s on self-tracking, and Mikka Lene Pers’s research into mommy vlogging – interrogate the extent to which users, harnessing the multimedia potential of a variety of platforms reaching vast audiences, can move past the initial platform mythos to create spaces that exhibit traces of individuality and independence from hypercommercial and deceptively benign infrastructures. Meanwhile Stijn Peeters’s research into the attraction of Internet Relay Chat (IRC) in the age of Twitter, and more broadly into how users refunction platforms and develop interactive tactics (for example, through the use of hashtags), hints at how individuals and communities might push back against dominant and opaque platforms asserting increasingly monopolistic controls. This is also true when it comes to online storytelling: for example, Alexandra Georgakopoulou’s research shows how users have developed ways of circumventing platform constraints that determine brevity and audience reach. They do so by adopting strategies of narrative stance-taking and by projecting specific roles and response options to their audiences.

Questions persist about the extent to which this is possible when dialogue is reenvisioned as consumer data, and when platforms become more complex and opaque and assert increasingly monopolistic controls. Patches might be introduced to address platform-related problems, many of which have been raised by user communities deploying the very social media platforms they are criticizing. These offer short-term fixes that can at least temporarily defuse off- and online tensions. And yet if the fundamentals of digital media emerge from a convergence of security, surveillance, and commercial cultures, then the layers of potential deception encompassing digital expression and interactions necessarily mirror and multiply. In Laurence Scott’s assessment, this means that “political, social and commercial forces are encouraging a very particular collapse in the distinction between public and private reality. We are being coaxed into new lines of sight, encouraged to see both public and private, inside and outside, simultaneously.”10 The aims and broader social effects of the platforms are intentionally obscured by gatekeepers (whether tech industry companies, entrepreneurs, systems engineers, governments, or media outlets).

It is unclear even in the descriptive realm – let alone the legal – how to define the public and private with respect to corporate responsibilities, making it extremely difficult to determine who owns and polices digital expression in the global commons. Pre-digital institutions and sanctioned gatekeepers are not irrelevant to these discussions about social media; quite the opposite, we would argue, at least with respect to offering models that anticipate and potentially prefigure reapplications of cultural norms and expressions for new media purveyors and profiteers. One problem is that these institutions are challenged in fundamental ways by these so-called disruptors, who in turn influence the form interactions with traditional media take, and at the same time increasingly determine content. This creates a feedback loop that is ultimately more rigid than it appears, at least to users with limited access to or understanding of core codes and platform infrastructures.

There are always exceptions to this. Preexisting and pushed content is not simplistically determinative of human reaction or behavior, and digital media does create spaces for unexpected, contemplative, and meaningful dialogue across space, time, and culture. For example, Charlotte Wu’s research considers how patients writing in analog and online forms about living with HIV/AIDS in South Africa make visible individuals who traditionally would have been kept hidden from sight, and how these narratives inform broader discussions of global(ized) pandemics and public health movements. Her research raises important questions about the ethics of sharing online. The extent to which these examples of cross/transmedia dialogues are exceptional or scalable is difficult to assess in 2019, despite the machine analytics built into a variety of platforms (it should be noted that many of these analytics are proprietary, and hence not readily available to those who wish to study or monitor trends).

Traditional gatekeeping institutions that provide space for limited dialogue – or the elite performance of it – are conversant in their own cultural norms and committed to maintaining them. However, they have been slow to resist what Michael Wesch has labeled as “context collapse”11 – the flattening of multiple audiences into one12 – and to critically engage with dialogues moving across and between on- and offline spaces.13

Alisa Miller discusses this phenomenon with respect to new forms of online war writing, including in relation to the 2016 US presidential election, wherein human and bot-driven propaganda moved off the screen to dictate mainstream news coverage at crucial moments in the lead-up to voting, establishing a dialogue that was supposedly based on “democratic” digital exchanges but was actually cyberwar14 – itself a contested term that opens up fundamental questions about what war is and who, or specifically which institution(s), should defend against and counter online incursions. Human and machine interactions, some organic and some driven by nefarious actors and players that were not at the time fully revealed to or understood by citizens and traditional gatekeepers, helped to justify the elevation of particular stories and themes. In a particular time and place and under certain conditions, social media interactions and the mechanisms that enable them can exacerbate division and confusion, with significant on- and offline consequences. This is particularly true for interactions directed by commercial-advertising models that ascribe quantifiable values divorced from context.

Tone

Tone is an important aspect of interaction, whether conceived of as style, volume, affect, or ethical norms: the parameters of civil interaction. As such, interactions are subject to variations enacted by individual players and – particularly in the digital space – sometimes radical shifts determined by quantified or scaled responses. In the social media realm it often seems impossible to define tonal norms and rules and, even when they are to an extent agreed, to defend these from human and machine forces. Even the platforms themselves seem incapable of doing this – or at least unwilling – rendering any number of interactions vulnerable.

Science fiction offers one means to ground this vision of the future in a language that conveys both its futuristic threat and its tonal familiarity. Max Saunders’s research for Ego Media into the To-Day and To-Morrow series unveils a historical example of a form of life writing that overwrites the cultural unease of the present onto a projected future. Here the past interacts with the future in interesting yet at least partially predictable ways. More generally this is one of the core areas of research bridging various Ego-Media projects. Ranging from Clare Brant’s work on international diary forms and practices15 (on the King’s College London website), to Stijn Peeters’s work on historic grammars, platforms, and algorithms, these projects interrogate how human beings have utilized technologies with the aim of unlocking affective, accessible means of communication. They also point to how technologies might be harnessed to allow humans, at least in part, to educate themselves in preparation for a machine-determined future: for example, Rob Gallagher offers an examination of science-fiction-themed video games16 designed to address online interactions and our relationship with AI, taking into account how, as an interactive medium, traditional games already provide experiences of conversing, competing, and collaborating with examples of machine intelligence.

When we speak about tone and interaction, our understanding of tone is necessarily expansive. Digital media technologies have increasingly enabled verbal as well as nonverbal grammars, which can now include computational elements as well as platform affordances, providing multiple ways of viewing emerging digital texts. Tweets, for example, can now be considered phatic: examples of dialogue that is no longer information-based and is frequently not even language-based. They also transform the tone of digital interactions, and even result in the formation of new tonal layers, as Alexandra Georgakopoulou has revealed in her analysis of ritual appreciation and conventionalized affective language exemplified by, for example, emojis and comments on selfies, and on YouTube spoof videos and remixes. Mikka Lene Pers’s analysis has shown that mommy vloggers and their followers have developed a range of subgenres. These conventionalized ways of representing various aspects of life as a mother are perceived as practices integral to participation in mommy vlogging and have indexical meanings that can be drawn upon as rhetorical and sociocultural resources.

Authors have also developed styles that allow compressed language to articulate and open up tonal registers that act simultaneously: for example, Rebecca Roach’s work on machine reading Henry James. The initially noncommercial became commercial as well as participatory as readers gained familiarity with, and the ability to read and appropriate, complex, coded grammars: this parallels the comedic if sometimes ironic tone and in-jokey “lulz”17 that characterize social media interactions on various platforms.

Hence digital media behaviors are determined by learned and, at least in some examples, platform-specific grammars, be they written, voiced, or gestural, that in practice blur the lines between the individual and the collective. Digital media also thrive on public displays that seem like narcissism, where the performance of action is valued over the qualitative nature of the action itself. It can be said that as much as social media push individual content creations, improvisations, and creativity, they also reward the co-opting of other people’s content or expressions. Jodi Dean – an early critic of the ideal of the internet as a utopian public sphere – has argued that selfies and the memes they produce are at the same time collective expressions and examples of intense privatization.18 Other forms of violence in the social media space have also adopted particular grammars that enable cyberbullying and trolling, turning potential users on to and off of digital interaction, as explored in Saunders’s discussion of Mass Observation questionnaires. This is equally the case for content that pushes at tonal norms, be they legal or cultural, from oversharing of the near-universal and potentially mundane – cats, babies, etc. – to the bodily and intimate. We tend to think of platforms as neutral, yet attending to tone in digital media interactions allows us not only to nuance the way we consider digital content, but also to analyze interfaces themselves more carefully.

Communities

Defined by their collectivity, communities are conceived of through – and sustained by – on- and offline interactions, and now by human and nonhuman elements. Communities can be conceived of as groups of individuals sharing particular interests and, to an extent, invested in working together to maintain some set of common values and toward particular shared goals. Scholarly debates have focused on what constitutes an online community, and on what enables or undermines its (re)formation and sustainability.

For Ego Media, in conception and practice, community or communities are not synonymous with public or publics: the latter denoting a space where meaning is negotiated. Michael Warner has argued for publics having a self-conception,19 which is also central to Rebecca Roach’s work on talking interfaces. With respect to cultural discourses, Richard Graham has written that, while “nations, language and publics” remain important and provide context for “each individual’s algorithmic milieu,” even seemingly straightforward online actions and interactions are manifold and fluid: “the formations and calculations...are so complex that individuals exist in an unimaginable arrangement.”20 Interactive patterns are only rendered legible with the aid of the nonhuman, and only a few human actors have access to, or can direct, these increasingly sophisticated and complex players.

So we return to dystopias: visions of dialogues and communities that have become legible or predictable, but only to the very few, and hence manipulable by outside parties, for good and for ill. Alisa Miller’s research explores how, even as some platforms enable the spread of war writing and related propaganda by governments, organizations, and individuals, and data-aggregating technologies enable drone warfare that in contested regions (albeit unintentionally in some instances) destroys households, villages, and communities, they also provide spaces in which cross-cultural empathy is at least performed and, in some cases, embraced, resulting in new connections between disparate people and places. In this world fragmentation and atomization are often perceived to be a direct result of the digital mediatization of collective social life. If dialogue has become an increasingly outdated utopian projection of what digital and social media enable, the idea of community too has come under ever greater pressure as it has increasingly gone online.

Facebook, Google, Microsoft, Twitter, and a variety of platforms have pushed ideas that place technological innovation at the center of debates about social media’s corrosive aspects: for example, the idea that AI could be unleashed to help alleviate hate speech, harassment, exploitation, and the spread of conspiracy theories and propaganda online. This somewhat strange mix of deep cultural problems and incremental technological solutions – meant to patch bugs rather than address algorithmic design features and ingrained corporate practices and values – raises a number of questions. These center in part at the moment, and likely even more so in the future, on machine interactions. With a notable percentage of Twitter accounts now authored by bots, if interaction is culture, can you have a nonhuman community? And if so, what assumptions of possibility have been and can be encoded into the nonhuman, including those explored in Rob Gallagher’s and Rebecca Roach’s research for Ego Media? What about the outsized influence of the bot’s creator on community, and can this be counteracted by the digital culture it reads and responds to? Hence AI arguably offers up more potential problems, or at the very least extremely difficult questions, than easy solutions, even as examples of creative repurposing of digital tools proliferate. Furthermore, the current digital titans, at least in their senior management, appear overreliant on AI to counter abusive human behaviors encoded and performed by individuals and communities on their respective platforms. This is predictable and even understandable to a degree, in that the promise of AI and a technological solution to cultural problems and capitalist behaviors suspends belief in their constituent publics, delays policy and regulatory action, and serves to justify even more direct and indirect investment in said media companies.

Again, these companies and their respective platforms are operating in historically discernible ways, even if social media are still relatively new forms of informational or communicative capitalism.21 For example, as both a precedent and a relevant public for Ego Media research, one can think of reading communities and their changing nature in previous eras: from the traveling libraries of the eighteenth and nineteenth centuries that, as William St. Clair has described, enabled the development of reading communities,22 to interviews with social media publishers and the “live literature” and author bots examined by Rebecca Roach. These raise fundamental questions about imagined communities,23 as well as about unimaginable communities trained on data sets of enormous scale and complexity that guard against human investigation and comprehension.24 Privatization, individuation, and data overload – and what Judy Estrin and Sam Gill recently identified as digital pollution, or an aggregate of different forms of digital waste – breed a sense of invasive unknowability and incomprehension when it comes to how human interaction is now mined, managed, and potentially misdirected. Estrin and Gill argue that the internet and social media have now reached a “scale, scope and complexity” that demands a general reckoning: “digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy.”25 As Clare Brant has shown in her research into imagined and practical agency, this gets at the circular nature of social media systems and critical dialogues; the tools themselves are the polluters – off- and online – and yet avoiding them is increasingly difficult and limiting for human sociality and existence, particularly given global challenges like climate change that demand coordinated action. In such an environment it becomes increasingly difficult to navigate complex conversations about what we as citizens consider to be valid disruption in the name of innovation and progress as opposed to coercive interference, and how to understand different formulations of human and nonhuman communities.26

Where to now?

Endnotes

  1. Brodie Waddell, “‘Verses of My Owne Making’: Literacy, Work, and Social Identity in Early Modern England,” Journal of Social History 54, no. 1 (Fall 2020): 161–84, https://doi.org/10.1093/jsh/shz011. 166.
  2. Lyndal Roper, Witch Craze (New Haven, Conn.: Yale University Press, 2004). 120.
  3. Roper, Witch Craze. 37.
  4. Richard Overy, The Morbid Age: Britain and the Crisis of Civilisation, 1919–1939 (London: Penguin, 2010). 374.
  5. Anne Adams, Peter Lunt, and Paul Cairns, “A Qualitative Approach to HCI Research,” in Research Methods for Human-Computer Interaction, ed. Paul Cairns and Anna Cox (Cambridge: Cambridge University Press, 2008), 138–57.
  6. Clare Birchall, Shareveillance: The Dangers of Openly Sharing and Covertly Collecting Data (Minneapolis: University of Minnesota Press, 2017).
  7. See Rebecca Roach, “Black Boxes,” King’s College London: Research and Innovation, n.d., https://www.kcl.ac.uk/research/black-boxes.
  8. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019).
  9. Alison McKinlay, Leone Ridsdale, and Rebecca Roach, “Interactions with Health-Related Information Online in People with Migraine and Epilepsy,” King’s College London: Research and Innovation, n.d., https://www.kcl.ac.uk/research/interactions-with-health-related-information-online-in-people-with-migraine-and-epilepsy.
  10. Laurence Scott, Picnic Comma Lightning: In Search of a New Reality (London: William Heinemann, 2018). xxiii.
  11. Michael Wesch, “YouTube and You: Experiences of Self-Awareness in the Context Collapse of the Recording Webcam,” Explorations in Media Ecology 8, no. 2 (2009): 19–34.
  12. Alice Marwick and danah boyd, “I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse and the Imagined Audience,” New Media and Society 13, no. 1 (2010): 114–33, https://doi.org/10.1177/1461444810365313. 122.
  13. See An Anthropological Introduction to YouTube, 2008, https://www.youtube.com/watch?v=TPAO-lZ4_hU.
  14. Kathleen Hall Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President (Oxford: Oxford University Press, 2018).
  15. Max Saunders et al., “Diaries 2.0,” King’s College London: Research and Innovation, n.d., https://www.kcl.ac.uk/research/diaries-2.0.
  16. Rob Gallagher, “Videogames, Identity and Digital Subjectivity,” King’s College London: Research and Innovation, n.d., https://www.kcl.ac.uk/research/videogames-identity-and-digital-subjectivity.
  17. Another Anonymous, “Lulz,” in Urban Dictionary, March 18, 2007, https://www.urbandictionary.com/define.php?term=lulz.
  18. Jodi Dean, “Faces as Commons: The Secondary Visuality of Communicative Capitalism,” Open!, December 31, 2016, 1–10.
  19. Michael Warner, Publics and Counterpublics (Cambridge, Mass.: Zone Books, 2005).
  20. Richard Norroy Graham, “Understanding Google: Search Engines and the Changing Nature of Access, Thought and Knowledge within a Global Context” (University of Exeter, 2017), http://hdl.handle.net/10871/32601. 314–15.
  21. Jodi Dean, John W. Anderson, and Geert Lovink, Reformatting Politics: Information Technology and Global Civil Society (New York: Taylor and Francis, 2006).
  22. William St. Clair, The Reading Nation in the Romantic Period (Cambridge: Cambridge University Press, 2007).
  23. Benedict Anderson, Imagined Communities: Reflections on the Origins and Spread of Nationalism (London: Verso, 1983).
  24. Graham, “Understanding Google.”
  25. Judy Estrin and Sam Gill, “The World Is Choking on Digital Pollution,” Washington Monthly, March 2019, https://washingtonmonthly.com/magazine/january-february-march-2019/the-world-is-choking-on-digital-pollution/.
  26. Krystian Woznicki, “Challenging Logistical AI: The Politics of Artificial Artificial Intelligence,” Open!, December 19, 2018, https://onlineopen.org/challenging-logistical-ai. 5.

Bibliography

  • Adams, Anne, Peter Lunt, and Paul Cairns. “A Qualitative Approach to HCI Research.” In Research Methods for Human-Computer Interaction, edited by Paul Cairns and Anna Cox, 138–57. Cambridge: Cambridge University Press, 2008.
  • Anderson, Benedict. Imagined Communities: Reflections on the Origins and Spread of Nationalism. London: Verso, 1983.
  • Another Anonymous. “Lulz.” In Urban Dictionary, March 18, 2007. https://www.urbandictionary.com/define.php?term=lulz.
  • Birchall, Clare. Shareveillance: The Dangers of Openly Sharing and Covertly Collecting Data. Minneapolis: University of Minnesota Press, 2017.
  • Dean, Jodi. “Faces as Commons: The Secondary Visuality of Communicative Capitalism.” Open!, December 31, 2016, 1–10.
  • Dean, Jodi, John W. Anderson, and Geert Lovink. Reformatting Politics: Information Technology and Global Civil Society. New York: Taylor and Francis, 2006.
  • Estrin, Judy, and Sam Gill. “The World Is Choking on Digital Pollution.” Washington Monthly, March 2019. https://washingtonmonthly.com/magazine/january-february-march-2019/the-world-is-choking-on-digital-pollution/.
  • Gallagher, Rob. “Videogames, Identity and Digital Subjectivity.” King’s College London: Research and Innovation, n.d. https://www.kcl.ac.uk/research/videogames-identity-and-digital-subjectivity.
  • Graham, Richard Norroy. “Understanding Google: Search Engines and the Changing Nature of Access, Thought and Knowledge within a Global Context.” University of Exeter, 2017. http://hdl.handle.net/10871/32601.
  • Jamieson, Kathleen Hall. Cyberwar: How Russian Hackers and Trolls Helped Elect a President. Oxford: Oxford University Press, 2018.
  • Marwick, Alice, and danah boyd. “I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse and the Imagined Audience.” New Media and Society 13, no. 1 (2010): 114–33. https://doi.org/10.1177/1461444810365313.
  • McKinlay, Alison, Leone Ridsdale, and Rebecca Roach. “Interactions with Health-Related Information Online in People with Migraine and Epilepsy.” King’s College London: Research and Innovation, n.d. https://www.kcl.ac.uk/research/interactions-with-health-related-information-online-in-people-with-migraine-and-epilepsy.
  • Overy, Richard. The Morbid Age: Britain and the Crisis of Civilisation, 1919–1939. London: Penguin, 2010.
  • Roach, Rebecca. “Black Boxes.” King’s College London: Research and Innovation, n.d. https://www.kcl.ac.uk/research/black-boxes.
  • Roper, Lyndal. Witch Craze. New Haven, Conn.: Yale University Press, 2004.
  • Saunders, Max, Clare Brant, Leone Ridsdale, and Alexandra Georgakopoulou. “Diaries 2.0.” King’s College London: Research and Innovation, n.d. https://www.kcl.ac.uk/research/diaries-2.0.
  • Scott, Laurence. Picnic Comma Lightning: In Search of a New Reality. London: William Heinemann, 2018.
  • St. Clair, William. The Reading Nation in the Romantic Period. Cambridge: Cambridge University Press, 2007.
  • Waddell, Brodie. “‘Verses of My Owne Making’: Literacy, Work, and Social Identity in Early Modern England.” Journal of Social History 54, no. 1 (Fall 2020): 161–84. https://doi.org/10.1093/jsh/shz011.
  • Warner, Michael. Publics and Counterpublics. Cambridge, Mass.: Zone Books, 2005.
  • Wesch, Michael. “YouTube and You: Experiences of Self-Awareness in the Context Collapse of the Recording Webcam.” Explorations in Media Ecology 8, no. 2 (2009): 19–34.
  • Woznicki, Krystian. “Challenging Logistical AI: The Politics of Artificial Artificial Intelligence.” Open!, December 19, 2018. https://onlineopen.org/challenging-logistical-ai.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs, 2019.
  • An Anthropological Introduction to YouTube, 2008. https://www.youtube.com/watch?v=TPAO-lZ4_hU.