Spaces for the Future

A Companion to Philosophy of Technology

Joseph C. Pitt, Ashley Shew
About This Book

Focused on mapping out contemporary and future domains in philosophy of technology, this volume serves as an excellent, forward-looking resource in the field and in cognate areas of study. The 32 chapters, all of them appearing in print here for the first time, were written by both established scholars and fresh voices. They cover topics ranging from data discrimination and engineering design, to art and technology, space junk, and beyond. Spaces for the Future: A Companion to Philosophy of Technology is structured in six parts: (1) Ethical Space and Experience; (2) Political Space and Agency; (3) Virtual Space and Property; (4) Personal Space and Design; (5) Inner Space and Environment; and (6) Outer Space and Imagination. The organization maps out current and emerging spaces of activity in the field and anticipates the big issues that we soon will face.


Information

Publisher: Routledge
Year: 2017
ISBN: 9781135007744

Part 1

Ethical Space and Experience

Chapter 1

Data, Technology, and Gender

Thinking About (and From) Trans Lives

Anna Lauren Hoffmann

Introduction

For scholars and students interested in topics of gender identity, data, and information technology, the current historical moment is a curious one. The proliferation of personal computing devices—from laptops to mobile phones to “smart” watches—combined with widespread internet access, means that people are producing unprecedented amounts of digital data, leading some scholars and technology evangelists to declare a “big data” revolution. At the same time, issues of sexism and gender inequality have taken on new urgency as women face increasing levels of harassment online, especially on large social networking sites like Twitter. The blame for this falls, in part, on platform owners and developers who fail to thoroughly consider the role of design in promoting safety for the most vulnerable users. Finally, the emergence of high-profile transgender activists, performers, and celebrities—from Laverne Cox to Caitlyn Jenner—has brought attention to a minority population of trans, nonbinary, and gender-nonconforming people and communities that have been (until now, at least) largely overlooked, often to the detriment of the health and safety of these populations.
Of course, some would view these three trends as mostly unrelated: at a quick glance, big data, gender and sexism online, and the health and needs of transgender people seem to have little to do with one another. Against this easy assumption, however, this chapter suggests that—while not wholly reducible to one another—these three issues intersect in important ways and, in particular, they shine a light on the ongoing struggles that minority identities and lives face in our increasingly data-driven world. The ‘big data revolution’ cannot be divorced from the technologies and systems that support it—technologies and systems that have long struggled to account for diverse and nonnormative lives.
In the following, these three trends are woven together to further our thinking about gender, identity, and technology. The first section attends to the biases and assumptions that underwrite the idea of ‘big data.’ It argues that big data and the quantitative insights into human behavior they stand to provide are not given but, rather, they are something we make and remake in practice. The second section traces key strands of thinking about the relationship between gender and technology, offering deeper insight into the ways in which gendered biases or stereotypes are built into the practice of scientific and technological development. Finally, the third section takes these lessons and extends them to thinking about the lives and identities of gender minorities, such as transgender individuals. I should note, however, that the discussions of relevant literature throughout this chapter are not intended to be comprehensive (indeed, a fully comprehensive literature review of any section’s topic would fall outside the scope of this chapter). Rather, I mean only to highlight the most salient trends and points as they relate to issues of data, technology, information systems, and gender identity.

Confronting the Mythology of Big Data

The term big data represents many things. As Rob Kitchin (2014a) describes, the term often refers to data sets and databases that are ‘big’ along three lines: volume, velocity, and variety (the 3Vs) (67–68; see also: Zikopoulos and Eaton 2011). Under this definition, big data are unique because of their massive size (petabytes or even zettabytes), the rapidity of their production (sometimes near real time, as with data generated by social networking sites), and their diversity (they are expansive, contain data and metadata, and they can be both structured and unstructured) (Kitchin 2014b: 1). Big data are also sometimes marked by other features, like scalability (Mayer-Schönberger and Cukier 2013), the ease by which they are combined with other data (Kitchin and McArdle 2016), and their often fine-grained or detailed nature (Dodge and Kitchin 2005). Beyond technical features, big data also represent a kind of mythology. As boyd and Crawford (2012) put it, big data simultaneously are produced and thrive on a “widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy” (663).
Although the technical features of big data may raise their own practical and philosophical issues, the focus of this section is the mythology of big data. This myth—the idea that more and bigger data equals more and greater truth—is a seductive one; it suggests that the social world can be explained from a value-neutral, objective standpoint in much the same way that the physical universe is understood through measurable and mathematically quantifiable features (Jurgenson 2014). Instead of filtering our data through the ideas and theories that make up various branches of the social sciences (like sociology, linguistics, or psychology), we can simply harness the power of today’s computers to perform automated statistical analyses on massive data sets that capture traces of human behavior. Computers can find patterns and identify correlations that humans cannot, patterns that—while not proof of causation—are basically good enough to do the job of predicting (rather than explaining) future behavior. As Geoffrey Bowker (2014) describes, such an approach seems—at least superficially—to “[avoid] funneling our findings through vapid stereotypes” (1796). Amazon, for example, deploys an online recommender system that
work[s] through correlation of purchases without passing through the vapid categories of the marketers—you don’t need to know whether someone is male or female, queer or straight, you just need to know his or her patterns of purchases and find similar clusters.
(Bowker 2014: 1796)
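The kind of system Bowker describes—recommendation by correlation of purchases alone—can be sketched as simple item-based collaborative filtering. The following is an illustrative sketch only, with entirely hypothetical users and items; production recommenders use far more sophisticated techniques (e.g. matrix factorization). The point it makes concrete is that no demographic field (gender, sexuality, age) appears anywhere in the computation—only co-purchase patterns.

```python
import math

# Hypothetical purchase histories: user -> set of purchased items.
purchases = {
    "u1": {"novel", "teapot", "headphones"},
    "u2": {"novel", "teapot"},
    "u3": {"headphones"},
    "u4": {"keyboard", "teapot"},
}

def similarity(item_a, item_b):
    """Cosine similarity between two items' sets of buyers."""
    buyers_a = {u for u, items in purchases.items() if item_a in items}
    buyers_b = {u for u, items in purchases.items() if item_b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / math.sqrt(len(buyers_a) * len(buyers_b))

def recommend(user, top_n=2):
    """Rank items the user hasn't bought by summed similarity
    to the items the user has bought."""
    owned = purchases[user]
    catalog = set().union(*purchases.values()) - owned
    scores = {item: sum(similarity(item, o) for o in owned) for item in catalog}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("u2"))  # → ['headphones', 'keyboard']
```

Here "u2" is recommended headphones over a keyboard simply because headphone buyers share more purchases with them—no "vapid categories of the marketers" required, which is precisely what lends such systems their appearance of neutrality.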
The seductiveness of this idea has led some big data evangelists to proclaim that we have reached the “end of theory,” a point in time where knowledge production is simply a matter of “[throwing] numbers into the biggest computing clusters the world has ever seen and [letting] statistical algorithms find patterns where science cannot” (Anderson 2008: n.p.). As Caroline Bassett (2015) summarizes the idea, “Big Data ushers in new forms of expertise and promises to render various forms of human expertise increasingly unnecessary” through “automation of forms of data capture, information gathering, data analysis and ultimately knowledge production” (549). In Robert W. Gehl’s (2015) words, “a common refrain … is that we are in for a revolution, but only if we recognize the problem of too much data and accept the impartial findings of data science for the good of us all” (420). In short, big data appear to make “human expertise seem increasingly beside the point” (Bassett 2015: 549).
But one can only accept the “end of theory” if one also accepts uncritically the mythology of big data. Many scholars—including those cited earlier—warn that this myth is dangerous, as it overlooks the ways in which our very ideas about what constitutes ‘data’ are themselves framed by theoretical perspectives and assumptions. At a fundamental level, the mere act of calling some things data (and disregarding other things as ‘not data’) represents a kind of theory itself: even unstructured data rely on categories of chronological time or textual sources that have already been shaped by assumptions about the world enforced by data collection instruments. Any given data set is, by necessity, limited by its sources or its aims—no single data set, even the most massive ones, can contain all conceivable data points because not everyone or everything is conceived of as ‘data.’ Consequently, big data continue to suffer from “blind spots and problems of representativeness, precisely because [they] cannot account for those who participate in the social world in ways that do not register as digital signals” (Crawford et al. 2014: 1667).
Assumptions about what constitutes ‘data’ are built into the instruments and tools we use to collect, analyze, and understand the data itself. These tools “have their own inbuilt limitations and restrictions”—for example, data available through social networking sites like Twitter and Facebook are constrained by the poor archiving and search functions of those sites, making it easy for researchers to look at events or conversations in the present and immediate past but difficult to track older events or conversations (boyd and Crawford 2012: 666). As a consequence, research conducted on or through these sites often inherits a temporal bias, and given the constraints of these social platforms, researchers prioritize immediacy over more reflective or temporally distant analyses. The mythology of big data—its appeal to automated, technologically sophisticated systems and claims to objectivity—works to obscure these biases and their limits for accounting for certain kinds of people or communities (Crawford et al. 2014: 1667). As Bowker (2014) puts it: “just because we have big (or very big, or massive) data does not mean that our databases are not theoretically structured in ways that enable certain perspectives and disable others” (1797).
To be critical scholars and students of big data we must be vigilant against this mythology. It is imperative that we pierce the veil of technological wonder and readily scrutinize big data’s claims to impartiality or neutrality and recognize that data and knowledge are made legible and valuable not in a vacuum, but in context. As Tom Boellstorff (2013) rightfully asserts: “There is a great need for theorization precisely when emerging configurations of data might seem to make concepts superfluous—to underscore that there is no Archimedean point of pure data outside conceptual worlds” (n.p.). To be sure, these limits and biases do not automatically mean that large-scale, data-intensive research is necessarily bad or unimportant. Rather, they simply underscore the continued relevance of theoretical and other types of inquiry even in the midst of a big data ‘revolution.’ As Crawford et al. (2014) argue,
the already tired binary of big data—is it good or bad?—neglects a far more complex reality that is developing. There is a multitude of different—sometimes completely opposed—disciplinary settings, techniques, and practices that still assemble (albeit uncomfortably) under the banner of big data.
(1665)

Surfacing the Role of Gender in the Design and Production of Science and Technology

The previous section challenged the seeming neutrality and objectivity of big data by reasserting the importance of paying critical attention to the social, political, and technological biases that underlie processes of collecting, analyzing, and making sense of data. This section builds on this idea by zeroing in on one particular kind of social and political bias: gender bias. It focuses on the work of scholars and commentators that show how scientific and technological practices (and the knowledge they produce) are shaped and constrained by considerations of gender.
Early work on gender and technology focused almost exclusively on highlighting the overlooked contributions of women to the history and development of science and technology. Work in this vein sometimes focuses on women’s contributions to sites conventionally associated with men—such as industry, engineering, or scientific research—and demonstrates how the narratives that have emerged around these sites have tended to privilege the work and ideas of men despite the presence and contributions of women. For example, a focus on the men who built the first electronic, all-purpose computer—the Electronic Numerical Integrator and Computer (ENIAC)—overlooks the fact that it was a team of women mathematicians who worked to program the machine and make it operational (Sydell 2014). These sorts of skewed narratives “have tended to make the very male world of invention and engineering look ‘normal,’ and thus even more exclusively male than is actually the case” (Lerman, Mohun, and Oldenziel 2003: 431). As Nathan Ensmenger (2010) summarizes, “the idea that many … computing professions were not only historically unusually accepting of women, but were in fact once considered ‘feminized’ occupations, seems … unbelievable” against a backdrop that so heavily associates computing with men and masculinity (120).
Other approaches work in a different direction, looking instead at activities and spaces historically associated with women but overlooked as significant sites of technological activity. Building on feminist critiques of Marxism that emphasized the role of unpaid and domestic labor (most often performed by women), work in this area examines the relationship between gender and technology outside of conventional sites of scientific or technological production. Cynthia Cockburn and Susan Ormrod (1993)—in their now-classic work Gender and Technology in the Making—examined the history and rise of the microwave oven not only in its design and development phase, but through to its dissemination into kitchens and the home. Cockburn and Ormrod (1993) show how a technology that starts out as a high-tech innovation ends up—through processes of marketing, retailing, and associations with ‘women’s work’ like household cooking—viewed as a rote household appliance, a process that ultimately ignores the ways that women’s specific technical knowledge (of cooking, for example) also contributed to the design, distribution, and use of the technology.
Despite progress in recognizing the contributions of women in the history of science and technology, however, biases still persist in our narratives about novel or innovative technologies. The story of the relatively recent and much-lauded Google Books project, for example, foregrounds the vision of Google’s founders Sergey Brin and Larry Page as well as the company’s (male-dominated) engineering teams that developed a novel way for quickly and effectively scanning, digitizing, and bringing entire library collections online (thus greatly expanding access to recorded knowledge). Lost in this narrative are the contributions of librarians (primarily women) who collected, organized, curated, and maintained the collections upon which Google Books is built (Hoffmann and Bloom, forthcoming) as well as the women and people of color who performed the manual labor of turning pages for Google’s scanning machines (Wen 2014).
Further approaches to gender and technology center not on the narratives that grow up around particular technologies, but instead on the ways in which gender biases influence the development and design of technology itself. Work in this vein seeks to uncover how sexist assumptions and stereotypes end up designed—or ‘baked’—into various systems and artifacts. For example, video games that offer only male avatars for players (or male and highly sexualized female avatars) implicitly encode the assumption that only (heterosexual) men play video games (Friedman 1996). More recently, commentators have pointed out how software applications and personal tracking tools also fail to account for the specific needs of women. For example, the release of Apple’s HealthKit for its popular mobile phones (iPhones) promised a set of comprehensive tools for tracking personal health and biometric information. However, HealthKit’s first iteration failed to include a tool for tracking menstruation (Duhaime-Ross 2014). Studying the relationship between gender and technology in this way allows us to reveal and destabilize these seemingly ‘natural’ defaults by revealing the ways in which they actively construct biased or even harmful ideas about women. (For more thorough summaries of the state of gender and technology studies at different points in its development, see McGaw 1982; Lerman, Mohun, and Oldenziel 2003; Wajcman 2009).
Finally, gender has also ...
