Digital Technology and Voice: How Platforms Shape Institutional Processes Through Visibilization

Digital Transformation and Institutional Theory (Research in the Sociology of Organizations, Vol. 83, 2022)

Abstract

Digital technologies, and the affordances they provide, can shape institutional processes in significant ways. In the last decade, social media and other digital platforms have redefined civic engagement by enabling new ways of connecting, collaborating, and mobilizing. In this article, we examine how technological affordances can both enable and hinder institutional processes through visibilization – which we define as the enactment of technological features to foreground and give voice to particular perspectives and discourses while silencing others. We study such dynamics by examining #SchauHin, an activist campaign initiated in Germany to shine a spotlight on experiences of daily racism. Our findings show how actors and counter-actors differentially leveraged the technological features of two digital platforms to shape the campaign. Our study has implications for understanding the role of digital technologies in institutional processes as well as the interplay between affordances and visibility in efforts to deinstitutionalize discriminatory practices and institutions.

Keywords: Affordances; Digital technology; Institutional theory; Platforms; Social media; Social movements

Citation: Gümüsay, A.A., Raynard, M., Albu, O., Etter, M. and Roulet, T. (2022), "Digital Technology and Voice: How Platforms Shape Institutional Processes Through Visibilization", in Gegenhuber, T., Logue, D., Hinings, C.R. (B.) and Barrett, M. (Eds.), Digital Transformation and Institutional Theory (Research in the Sociology of Organizations, Vol. 83), Emerald Publishing Limited, Bingley, pp. 57-85. https://doi.org/10.1108/S0733-558X20220000083003

Publisher: Emerald Publishing Limited. Copyright © 2022 Ali Aslan Gümüsay, Mia Raynard, Oana Albu, Michael Etter and Thomas Roulet.

License: Published by Emerald Publishing Limited. These chapters are published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of these chapters (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.

Introduction

In recent years, large protests against ethnic violence have erupted around the world – particularly in the United States, where the deaths of Black Americans including George Floyd and Breonna Taylor have unleashed a flood of criticism and civil unrest. Amidst the escalating anger and calls of "No justice, no peace," social injustice and racial divisions have taken center stage. What the expansive scope and momentum of movements such as #BlackLivesMatter have taught us is that digital technologies – and particularly social media – are changing the face of politics and activism (Ouellette & Banet-Weiser, 2018). Individuals, organizations, and activist groups are increasingly taking to social media and other digital platforms to raise awareness of systemic racism and to call for the deinstitutionalization of this deeply ingrained problem (Gantt Shafer, 2017; Matamoros-Fernández, 2017). Digital platforms are online, on-demand systems that have the potential to harness and create large scalable networks of users and resources (Castells, 1998).
By providing expansive and immediate connectivity (van Dijck, 2013), digital platforms have become sites of interaction, debate, and conflict that represent a heterogeneity of "norms, values, expectations, and concerns" (Etter, Colleoni, Illia, Meggiorin, & D'Eugenio, 2018, p. 61). Disparate communities – each with their own interests and agendas – are able to come together and engage in various forms of co-creation, ranging from spontaneous (Albu & Etter, 2016) to more orchestrated iterations (Etter & Vestergaard, 2015; Gegenhuber & Naderer, 2019). Such new ways of connecting, collaborating, and mobilizing (Dobusch & Schoeneborn, 2015; Vaast & Kaganer, 2013) have facilitated an aggregation of "voices" in ways that can significantly shape institutional processes (Etter, Ravasi, & Colleoni, 2019; Illia et al., 2022; Roulet, 2020; Scheidgen, Gümüsay, Günzel-Jensen, Krlev, & Wolf, 2021; Wang, Raynard, & Greenwood, 2021). As certain voices are aggregated, they are foregrounded and made visible – while others are pushed to the background, potentially becoming unseen and unvoiced (Hudson, Okhuysen, & Creed, 2015). Thus, the act of making something visible involves an interplay between discursive openness and discursive closure, because the struggle to promote a particular view of reality often has the effect of subordinating equally plausible ones (Clemente & Roulet, 2015; Deetz, 1992; Leonardi & Jackson, 2004).

Our interest in this article is to explore the implications of digital technologies for voice, visibility, and institutions. Specifically, we aim to understand how technology can enable and hinder institutional processes through visibilization – which we define as the enactment of technological features to foreground and give voice to particular perspectives, positions, and discourses while silencing or subordinating others. We do so by examining the emergence of #SchauHin, a campaign in Germany that sought to bring daily experiences of systemic racism into the public sphere. Drawing upon multiple data sources and first-hand accounts from those involved in the campaign, we unpack the various ways in which users effected visibilization and influenced the development of the campaign and its goal of contributing to the deinstitutionalization of systemic racism.

By showing how users differentially used and appropriated technological features to open and close discourses, this study aims to advance research at the intersection of technology and institutional theory in two ways. First, it contributes to a relational understanding of technology by emphasizing its affordances, i.e., "the action possibilities and opportunities that emerge from actors engaging with technologies" (Faraj & Azad, 2012, p. 238). Digital platforms create opportunities to mobilize power and collective action, not through their "objective" features but through their ability to enable expansive, immediate connectivity and the distributed creation and dissemination of content and knowledge (van Dijck, 2013). In our case, initiators and supporters of the campaign engaged in a discursive struggle with "counter-actors" who sought to disrupt mobilization – with each side enacting platform properties in radically different ways. By showing how this struggle played out, our study extends understandings of "affordances-in-practice" (Costa, 2018) and shows how users "reconcile their own goals with the materiality of a technology" (Leonardi, 2011, p. 154).
Second, the study sheds further light on how technology can influence institutional processes (Hinings, Gegenhuber, & Greenwood, 2018) by zooming in on a specific affordance of technology: visibility. Visibility is conceptualized as a "root-affordance" on which other affordances are built (Treem, Leonardi, & van den Hooff, 2020, p. 45; cf. also Flyverbom, Leonardi, Stohl, & Stohl, 2016). Our case builds on this conceptualization by examining how platform features are activated by different sets of actors. Specifically, we show how activation can, on the one hand, generate visibility by opening up discourses about daily racism and, on the other, obscure visibility through the manipulation of content and the sowing of confusion (Etter & Albu, 2020; Treem et al., 2020). In addition, we show how digital platforms have their own "enactment" properties – as the algorithms and hidden information architectures embedded in digital platforms (Hansen & Flyverbom, 2015) can curate and make some knowledge, behaviors, and preferences visible and others less so. Thus, visibility, as an affordance, has both relational and strategic qualities that are enacted in the process of "seeing and being seen" (Brighenti, 2007, p. 325). Our case illuminates these qualities and their implications for enabling or hindering reflection on and critique of intangible aspects of institutions – in our case, systemic racism.

On a practical level, our article demonstrates how digital technologies – and platforms in particular – have fundamentally altered civic engagement. Not only do these platforms have the potential to amplify and silence voices (Clemente & Roulet, 2015; Etter & Albu, 2020), they can also facilitate or hinder reflection on, and action toward, taken-for-granted practices and arrangements.

Theoretical Framework

Institutional Processes, Visibility, and Digital Platforms

It can be argued that the emergence, change, and decline of institutions require institutionalized practices and arrangements to be made visible (Clemente & Roulet, 2015; Washington & Ventresca, 2004). Studies of institutional emergence, for example, have shown that increasing visibility of the limits or general failings of present institutional arrangements can lead to a mobilization of power and collective action by "champions of new practices and forms" (Schneiberg & Lounsbury, 2017, p. 284; see also Hoffman, 1999; Rodner, Roulet, Kerrigan, & Vom Lehn, 2020; Zietsma, Groenewegen, Logue, & Hinings, 2017). As practices become habitualized and objectively accepted by the masses, they become visible and, in other words, identifiable (Tolbert & Zucker, 1999). Such visibility has also been shown to trigger processes of deinstitutionalization – notably by prompting reflexivity and (re-)examination of taken-for-granted arrangements and social practices (Dacin & Dacin, 2008; Maguire & Hardy, 2009; Seo & Creed, 2002).

While visibility can enhance the salience of certain practices, voices, and meanings that are manifested in institutional arrangements (Clemente & Roulet, 2015), it may also subordinate or divert attention away from others. This subordination of alternative ways of "doing" or "being" often contributes to processes of institutional maintenance because the voices of marginalized actors are suppressed or pushed into obscurity (Hudson et al., 2015; Mair & Martí, 2009). In this way, visibility and obscurity represent two sides of the same coin – with both shaping institutional processes in significant ways.
Within institutional scholarship, the concept of visibility is often only implicitly acknowledged – in part because institutional arrangements are understood to be supported by intangible sets of beliefs and values (Thornton & Ocasio, 2008) or by discursive productions that are not necessarily accessible to or consumable by all parties (Phillips & Oswick, 2012). Many foundational pillars of institutional arrangements are taken for granted, which makes their very nature invisible, even for those who enact them. Recently, however, studies have begun to emphasize visible material manifestations of institutions as "part of the way in which social processes and organizations are enacted and stabilized" (Monteiro & Nicolini, 2015, p. 61). Practices typically have, for example, a material aspect (Jones, Boxenbaum, & Anthony, 2013) that makes them visible to others (Boxenbaum, Jones, Meyer, & Svejenova, 2018) and, further, makes an actor's engagement with an institution visible and the monitoring of practice diffusion possible (Chandler & Hwang, 2015). Another stream of related research has shown how actors make their beliefs and values "seen" by voicing them (Cornelissen, Durand, Fiss, Lammers, & Vaara, 2015). Together, these streams of research suggest that actors' discursive productions are a reflection of their interaction with institutions (Meyer, Jancsary, Höllerer, & Boxenbaum, 2018; Wang et al., 2021), and that through reflexive interactions, audiences may become aware of the structures underpinning institutions (Gray, Purdy, & Ansari, 2015; Raynard, Kodeih, & Greenwood, 2020).

Whereas the visibility of practices, voices, and meanings has traditionally been limited by the "spatial and temporal properties of the here and now," the development of information technologies has brought "a new form of visibility" (Thompson, 2005, p. 35). By enabling expansive connectivity, decentralized content creation, and distributed content aggregation, social media and other digital platforms have opened up opportunities for a wider range of actors to affect institutional processes (Etter et al., 2018; Illia et al., 2022). Marginalized actors, for example, are able to leverage diverse media to air grievances and raise awareness of endemic problems and social injustices (Harmon, 2019; Toubiana & Zietsma, 2017). Thus, whereas visibility and voice had previously been understood as a privilege of the large and powerful – i.e., those with high status, positions of authority, or control over important and extensive resources (Deephouse & Carter, 2005; Roulet, 2020) – social media has leveled the playing field to some extent (Etter et al., 2018, 2019; Seidel, Hannigan, & Phillips, 2020). In particular, digital media platforms have provided an influential "podium" for marginalized actors (Wright, Meyer, Reay, & Staggs, 2020), while making large and powerful actors more vulnerable to intensive and widespread scrutiny (Daudigeos, Roulet, & Valiorgue, 2020; den Hond & de Bakker, 2007). In this sense, institutional arrangements may be more easily challenged or maintained, even by marginal actors.

Another important change brought on by social media is that it has increased the velocity of content dissemination by enhancing the speed and direction of communication (Castelló, Etter, & Nielsen, 2016; Etter et al., 2019; Wang, Reger, & Pfarrer, 2021). Hidden practices and events can be made public, often instantaneously or with very short time lags (Thompson, 2005).
An illustrative example can be seen in how social media has enabled widespread exposure of police violence against Black people, thereby generating awareness and triggering collective mobilization (Ramsden, 2020). The increased velocity of content dissemination has, thus, helped overcome temporal and spatial distance by enabling direct engagement with communities who would otherwise have remained difficult to reach through traditional channels (Breuer, Landman, & Farquhar, 2015; Heavey, Simsek, Kyprianou, & Risius, 2020).

As a result of this change in scope and velocity, social media discourses have become increasingly intrusive, unwieldy, and hard to control (Altheide, 2013; Wang et al., 2021). Indeed, the fluid and diffuse nature of social media communities makes the control of content and exposure highly challenging (Etter et al., 2019; Roulet, 2020). As Heavey and colleagues (2020, p. 1494) point out, "because communication boundaries are porous on social media, messages targeted at one audience may spillover to others and have a raft of unintended consequences." Thus, while digital platforms can help actors open up discourses in ways that can mobilize collective action and tackle problematic aspects of institutions (Albu & Etter, 2016; Thompson, 2005), they can also lead to discursive closure, both intentionally and unintentionally (Etter & Albu, 2020).

In the next section, we build upon the above-presented insights on visibility and institutional processes, situating them within an affordance-based perspective on technology. We then pull together insights from these different areas of research to develop the concept of visibilization.

Technological Affordances and Visibilization

The widespread adoption of digital platforms for organizing has raised compelling questions about the ways in which these technologies affect processes of coordination and collaboration (Barberá-Tomás, Castelló, de Bakker, & Zietsma, 2019; Gegenhuber & Naderer, 2019; Leonardi, 2014; Leonardi & Vaast, 2017; Madsen, 2016; Seidel et al., 2020; Treem & Leonardi, 2013). The visibility afforded by digital platforms is commonly assumed to facilitate the transmission of information. However, recent studies also suggest that such visibility may have negative implications, as it paradoxically generates closure through information overload (Chen & Wei, 2019) and algorithmic distortion (Etter & Albu, 2020). It is thus important to elucidate how visibilization gives voice to particular perspectives, positions, and discourses while silencing or subordinating others. This is particularly important in order to further unpack the dark side of, or the negative social consequences associated with, digitalization (Trittin-Ulbrich, Scherer, Munro, & Whelan, 2021).

To gain a richer understanding that takes nuanced forms of visibility into account, we adopt an affordance perspective that pays particular attention to socio-materiality (Leonardi, 2012). From such a standpoint, it is the interplay or imbrication (Leonardi, Huysman, & Steinfield, 2013) of the separate but interacting actors – be they social (i.e., users) or material (i.e., digital platforms) – that facilitates the opening and closure of discourses. The material features of technologies (e.g., deleting, adding, or sharing functions) enable particular ways of creating and diminishing the visibility of discourses. At the same time, social actors or users – having different intentions and capabilities – can affect visibility in ways that open up or close down discourses.
For example, through their use of these technologies, social actors can coordinate activities, sway public opinion, or disturb collective action through negative, antisocial, thrill-seeking behavior (Cook, Schaafsma, & Antheunis, 2018). Thus, it is the relational interplay between features and contextual use that gives visibility to voices.

Recently, scholars have highlighted that visibility should also be understood from the receiver's perspective, namely for whom content becomes (in-)visible (Treem et al., 2020). Indeed, some communication is only visible to a small in-group or to actors who inhabit a semi-public sphere, while being invisible to many others. For social movements and activists, these questions are important, as content can be targeted at small or even hidden groups for reasons of coordination (Albu, 2019; Uldam & Kaun, 2018), or it can be targeted at larger audiences with the aim of mobilization (Bennett & Segerberg, 2012). Again, it is the interplay between features and contextual use that shapes the different forms of visibility and closure.

Furthermore, scholars have highlighted the mediating role of algorithms as central to the forms of visibility and opaqueness specific to digital platforms (Milan, 2015). Algorithms can be understood as "sets of coded instructions" (van Dijck & Poell, 2013, p. 5) or "formalized rules embedded in technological artifacts" (Coretti & Pica, 2018, p. 73) that have an "entangled, complex, and dynamic agency" (Glaser, Pollock, & D'Adderio, 2021, p. 2) given the co-constitution of technological features and social practices. Algorithms impact what becomes visible as much as what becomes invisible on social media (Hansen & Flyverbom, 2015). They do so by performing "sorting, filtering, and ranking functions" (Neumayer & Rossi, 2016, p. 4) that steer attention and interactions (van Dijck & Poell, 2013) or overrepresent certain forms of interaction and devalue others (Bucher, 2012; Gillespie, 2014; Rieder, 2012). Research has shown that algorithms may work against users' aims of making certain discourses visible (Poell & van Dijck, 2015) while closing others (Etter & Albu, 2020; Uldam & Kaun, 2018). Indeed, organizations that run social media platforms are often profit-oriented and have designed algorithms to provide visibility to certain content with the goal of increasing user engagement for purposes of data collection and advertising (Gillespie, 2014).

Overall, then, we understand the visibilization process as one accomplished by the interplay of openness and closure. This emerges from the interaction of specific digital platform features (e.g., Twitter hashtags powered by algorithms, wiki pages, etc.) and human actors' contextual intentions and use (e.g., the democratic participation and freedom of speech promoted by activists). Visibilization, in other words, is accomplished by human and nonhuman actors (Latour, 1996) – including the underlying algorithmic and informational architectures of digital platforms (e.g., trending hashtags, newsfeeds). This affordance-based perspective sensitizes scholars to the interplay between the materiality of technology and users' varying intentions, the combination of which can enhance or obscure the visibility of practices, voices, and meanings that underpin institutional arrangements.
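To make the curating work of such algorithms more concrete, the following sketch gives a minimal, hypothetical illustration of the "sorting, filtering, and ranking functions" described above. It is not the ranking logic of Twitter or any other platform; the interaction weights, time decay, and cut-off are assumptions chosen purely for illustration, showing how an engagement-weighted score can foreground a few hashtags while rendering the rest effectively invisible.

# Hypothetical sketch of algorithmic curation: engagement-weighted,
# time-decayed ranking that surfaces a few "trending" hashtags and leaves
# the rest out of view. Weights, decay, and field names are illustrative
# assumptions, not the logic of any real platform.
from collections import defaultdict

def trending_hashtags(posts, top_k=3, half_life_hours=6.0):
    """Rank hashtags by a decayed engagement score and keep only the top_k."""
    scores = defaultdict(float)
    for post in posts:
        # Assumed weights: shares and replies count more than likes.
        engagement = post["likes"] + 2.0 * post["shares"] + 1.5 * post["replies"]
        # Exponential time decay: older content loses visibility.
        decay = 0.5 ** (post["age_hours"] / half_life_hours)
        scores[post["hashtag"]] += engagement * decay
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    # Everything below the cut-off is filtered out, i.e., rendered invisible.
    return ranked[:top_k]

if __name__ == "__main__":
    sample = [
        {"hashtag": "#SchauHin", "likes": 30, "shares": 10, "replies": 8, "age_hours": 2.0},
        {"hashtag": "#SchauHin", "likes": 5, "shares": 1, "replies": 0, "age_hours": 20.0},
        {"hashtag": "#OtherTopic", "likes": 50, "shares": 2, "replies": 1, "age_hours": 1.0},
    ]
    print(trending_hashtags(sample, top_k=1))

Even in this toy version, small changes to the assumed weights or cut-off determine which voices surface and which are devalued, which is precisely the kind of non-transparent curation attributed to platform algorithms above.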
Methodology

Research Context

The features of particular technologies, combined with their contextual use, create diverse forms of (in-)visibility. To better understand these patterns, we traced the emergence of the #SchauHin campaign in Germany, which sought to raise awareness of systemic racism in everyday interactions. As the campaign touched upon the highly debated issue of racism in German society, it attracted the attention of counter-actors, who sought to preempt and hinder its development. We selected the #SchauHin campaign as a paradigmatic case study (Flyvbjerg, 2006), which provides a window into understanding technological affordances and their potential role in institutional processes. The nature and development of the campaign, in particular, provided an opportunity to examine how digital platforms generate both visibility and closure for different discourses. We focused on a 16-month period from September 2013 until December 2014; however, we continued to observe the case and collect data until June 2020.

The idea for the campaign was initially discussed on Twitter and then moved to Titanpad – a digital, real-time collaborative text editing and writing platform that existed from 2010 to 2017. Although Titanpad facilitated deeper engagement and development of ideas among organizers and supporters, counter-actors soon gained access and began disrupting development efforts. In response to this disruption, the campaign moved back to Twitter – which, as a microblogging and social networking platform, offered a very different set of technological features than Titanpad. Because the campaign moved across different digital platforms, and because groups of users appropriated the same technological features in divergent ways, #SchauHin provides an illuminating case in which to study how technology shapes institutional processes. For our purposes, it is an ideal context for understanding visibilization and how the appropriation of platform features can create discursive openness and closure.

Data Sources

This study draws on both internal and external data sources related to the campaign. We were given access to #SchauHin organizers' internal documents and data files, which included internal memos, strategy documents, and email exchanges. These data amounted to over 2,000 pages of visuals and text. We also examined data from the Titanpad platform and took screenshots at various points in time. Additionally, we examined the #SchauHin and #SchauHin2 Twitter profiles, manually screening 800 tweets with the hashtag #SchauHin. To supplement these data, we collected an additional 18 media articles and 14 videos that covered the campaign.

Data Analysis

To understand how different groups of users utilized technological features to influence the campaign and its goal of drawing attention to systemic racism, we employed a qualitative analytic approach (Eisenhardt, 1989; Yin, 1994). As our case could be classified as a digital social movement, we were initially interested in how the digital nature of the social movement impacted organizing and mobilization. However, the emergence of counter-actors who sought to disrupt #SchauHin alerted us to the struggle over visibility, and the potential role that digital platforms may play in shaping this visibility. As we collected further data, and as the #SchauHin campaign progressed, we identified commonalities and differences in how users were enacting various technological features.
These patterns prompted us to reflect upon how the features of Titanpad and Twitter impacted the struggle over establishing #SchauHin – and how they affected the campaign's broader goal of raising awareness of systemic racism. To organize our data and emerging insights, we structured key events along a chronological timeline. We then examined the content generated on Titanpad and Twitter, mapping it onto the timeline to gain a better understanding of how the campaign developed and the actors involved. We also drew on internal documents and media reports to help make sense of the activities and struggles that unfolded.

Once we were confident that we had identified and understood how different platform features and their enactment enabled or hindered the development of the campaign, we sought to gain a deeper understanding of how and why. Our coding and discussions converged upon the importance of visibility, specifically in terms of the perspectives, opinions, and content that supported the campaign and those that detracted or diverted attention away from it. We noted four features, in particular, that actors engaged with to generate or obscure visibility: the adding/editing/deleting of content, the use of hashtags, the creation of profiles, and the trending topic algorithm. While the nature and levels of visibility can be somewhat idiosyncratic to the platforms, we focused on broader indications of visibility such as the volume of interactions (manifested in discussions, tweets, likes, and profile follows) and the trending of messages. We then examined how visibility shaped discursive openness and closure by foregrounding particular perspectives and positions, while silencing or subordinating others.
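As a concrete illustration of the visibility indicators just described, the sketch below shows one hypothetical way such interaction volumes could be tallied from exported post records. The record fields and structure are assumptions for illustration only; the study itself relied on manual qualitative screening of tweets and Titanpad content rather than automated counting.

# Hypothetical sketch of tallying simple visibility indicators (interaction
# volume) per hashtag from exported post records. Field names are
# illustrative assumptions; the study used manual qualitative screening.
from collections import defaultdict

def visibility_summary(posts):
    """Aggregate posts, likes, shares, and distinct profiles per hashtag."""
    summary = defaultdict(lambda: {"posts": 0, "likes": 0, "shares": 0, "users": set()})
    for post in posts:
        entry = summary[post["hashtag"]]
        entry["posts"] += 1
        entry["likes"] += post.get("likes", 0)
        entry["shares"] += post.get("shares", 0)
        entry["users"].add(post["user"])
    # Report the count of distinct participating profiles instead of raw sets.
    return {
        tag: {"posts": e["posts"], "likes": e["likes"], "shares": e["shares"],
              "distinct_users": len(e["users"])}
        for tag, e in summary.items()
    }

if __name__ == "__main__":
    records = [
        {"hashtag": "#SchauHin", "user": "@User1", "likes": 4, "shares": 2},
        {"hashtag": "#SchauHin", "user": "@User2", "likes": 1, "shares": 0},
        {"hashtag": "#meinschland", "user": "@User5", "likes": 2, "shares": 1},
    ]
    print(visibility_summary(records))

Comparable tallies over time, read alongside whether a hashtag appeared among trending topics, would give a rough quantitative backdrop against which the qualitative coding of discursive openness and closure can be interpreted.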
Findings

The emergence and development of #SchauHin was marked by an ongoing struggle between supporters of the campaign and counter-actors who actively tried to prevent and disrupt mobilization efforts. Central to this struggle was the visibility of communicated content – an affordance that was differentially appropriated by users to enable, facilitate, or hinder the development of the campaign. As supporters tried to generate visibility and open up discourse around daily racism, counter-actors sought to hinder such efforts by obscuring content and enacting discursive closure. Below, we begin with a short overview of how the campaign started. We then describe how four digital platform features were differentially used by each group of actors to accomplish divergent aims. We highlight, in particular, how the interplay between different technological features and their contextual use shaped the struggle around visibility and invisibility.

Initiating the Campaign

The idea for the #SchauHin campaign emerged during a conference at the Friedrich Ebert Foundation in Berlin on September 2, 2013. Activists, bloggers, and journalists came together to discuss topics such as blogging about sexism and racism, the role of the mass media, and the differences between the mass media, social media, and the blogosphere. One central theme that repeatedly emerged was the lack of visibility of stories and experiences from people confronting racism. One panelist suggested creating a hashtag to start a conversation and allow people to share their experiences of daily racism:

"Can I make a suggestion first? The issue is racism and sexism. This is actually the ultimate opportunity, where these different blogospheres on the internet have possibly just come together, where probably people from both areas and even more are watching the livestream. Maybe in the livestream you can discuss what kind of hashtag could be used for everyday racism as a topic. And 'everyday racism' is too long, so something shorter please." (Panel discussion "Rassismus & Sexismus ab_bloggen" (blog_away racism and sexism))

Conference participants took up this call and began enlisting people to help find an appropriate and catchy name for the hashtag, which could be used to draw attention to systemic racism in day-to-day encounters:

"Looking for a hashtag for everyday racism. Got ideas? #abbloggen" (@User1, September 2, 2013)

"The @User2 is looking for a Twitter hashtag to flag up everyday racism. Any ideas? #abbloggen" (@User3, September 2, 2013)

Within four days of the conference, people had tweeted multiple suggestions, including #MeinSchland (MyGermany), #keinRassistaber (notaRacistbut), and #rausschrei (outcry). Below are a few examples of how people engaged in the call to find a hashtag:

"@User2 @User4 #meinschland and #rausschrei are the ones I like best. #keinRassistaber is also good, but a bit too long." (@User5, September 6, 2013)

"The #-everyday racism suggestions included: #allrass #DeinRassismus #zumausderHautfahren #AFD #keinRassistaber. What do you think of #meinschland?" (@User2, September 6, 2013)

As more and more people began participating in the search for a hashtag, organizers decided to move the conversation to the open platform Titanpad. As a web editor, Titanpad provided a way to make views and information visible through written exchange. This enabled more in-depth discussion and engagement. Organizers announced the switch to Titanpad in a tweet:

"The search for a hashtag for everyday racism in Germany continues. Here: http://t.co/Fd4vFdB5a3 Ideas?" (@User2, September 6, 2013)

The move to Titanpad marked the beginning of the planning phase of the campaign, as organizers sought to generate visibility for it and open up discourse. Once the planning phase was complete, the organizers launched the campaign by moving to Twitter. Each of these two platforms provided different technological features, which were differentially used by supporters and counter-actors.