Q#1:
Paul Krugman – That 1937 feeling
This is from March 2010, so some of the data is out of date. However, the point of the video is to help you make the connections between recent/current economic policies and the events of 1937.
The Topic:
The Great Depression was a low-probability event. It required several negative economic events to occur at the same time and for policymakers to respond poorly to those events. One of the best books on this topic is John Kenneth Galbraith's The Great Crash; you might want to read it.
The events of 1937 are largely forgotten, even by economists. But for those who study the relationships between public policy and economic growth, the lessons of 1937 are some of the most important ones from the 20th century.
Table 7.2 on page 232 of your text shows GDP growth of 12.8% in 1936. In 1937 it was 6.9%, and by 1938 it was -5.5%. How does an economy go from the strong growth of 1936 (admittedly from a relatively low base) to another recession by 1938? In this case, the answer is government policy.
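As a quick worked example (assuming the textbook's figures and arbitrarily indexing real GDP in 1935 at 100), compounding the three growth rates gives:

\[
100 \times 1.128 = 112.8 \;\;(1936),\qquad
112.8 \times 1.069 \approx 120.6 \;\;(1937),\qquad
120.6 \times (1 - 0.055) \approx 114.0 \;\;(1938).
\]

Even though the 1938 level of output is still above its 1936 level, the growth rate swings by roughly 18 percentage points (from +12.8% to -5.5%) in just two years.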
For this week's discussion, go on the web or to the UMUC library and learn the specific monetary and fiscal policy changes (don't forget taxes) that occurred in 1937. Use what you have learned about GDP, production costs, aggregate demand, and aggregate supply to project the most likely results of those changes.
Next, consider our current economic condition and the recently proposed tax increases, the ending of the Federal Reserve's quantitative easing, and the proposals for increased government spending that may or may not be offset by spending cuts in other areas. Again using the tools you have learned, what do you think is the most likely result?
In other words, compare 2015 to 1937 and explore what lessons and cautions may be learned from that comparison.
Think like an economic policy advisor.
Sources you can use:
- About the Federal Reserve System
- Federal Reserve Banks
- Federal Reserve Board
- Federal Open Market Committee
- The Implementation of Monetary Policy
- Monetary Policy and the Economy
Q#2:
Choose one of the following options:
Option #1: Digital Ethics
Examine the four issues with an ethical dimension on pages 16-18 of Digital Enlightenment Forum, Security for the Digital World Within an Ethical Framework, IOS Press, 2016 (PLEASE see attached for article). Choose the one that most interests you and do the following:
- Restate the issue in your own words by paraphrasing it. (Please see tips on paraphrasing here.)
- Explain what you think the main ethical problem or dilemma is.
- Provide one real-life example of some emerging or new technology you think this ethical issue/dilemma applies to. Your real-life example should be from a reliable article in a popular technology publication or an article in the UMGC library.
- Explore what you think a possible solution to this ethical issue in your emerging or new technology could be.
- Use two quotes from any of your resources to support or explain your points. Make sure to provide in-text citations for both quotes in MLA format.
- Provide references for all sources in MLA format.
Option #2: Disclosive Computer Ethics
After reading Philip Brey's article "Disclosive Computer Ethics" (PLEASE see attached for article) in the Learning Materials, please do the following:
- Explain how Brey defines disclosive computer ethics and value-sensitive design.
- Discuss why Brey thinks the relationship between disclosive computer ethics and value-sensitive design is important.
- Provide one real-life example of some emerging or new technology you think either does have or does not have a value-sensitive design. Your real-life example should be from a reliable article in a popular technology publication or an article in the UMGC library.
- If you think your example does not have a value-sensitive design, explore and suggest a change that could make it have a better value-sensitive design.
- If you think your example does have a value-sensitive design, explain what specific evidence you see that suggests it does.
- Use two quotes from any of your resources to support or explain your points. Make sure to provide in-text citations for both quotes in MLA format.
- Provide references for all sources in MLA format.
Examples of popular technology publications that can be used for this question include:
- Wired Magazine
- MIT Technology Review
- Tech Briefs
- GeekTime
- ComputerWorld
- Innovation & Tech Today
- Digit Magazine
- Trotons Tech Magazine
Option #3: Design for Privacy
After reading the "Improving privacy choice through design" by Terpstra, Schouten and Leenes, please do the following:
- Summarize why the authors think reflective thinking is an important aspect of making decisions related to digital technology and privacy. (Please see tips on summarizing here.)
- Describe what the authors mean when they propose that friction be introduced into technology interfaces.
- Choose an app or program you have recently installed on your phone or computer and explain how the programmers could tweak the installation process or the way the app or program works to encourage more reflective thinking about personal privacy by the user.
- Finally, do you think that the solution proposed in this article would help personal data be used in more ethical ways by big corporations? Why or why not?
- Use two quotes from any of your resources to support or explain your points. Make sure to provide in-text citations for both quotes in MLA format.
- Provide references for all sources in MLA format.
3 Values in technology and disclosive computer ethics
Philip Brey
3.1 Introduction
Is it possible to do an ethical study of computer systems themselves independently of their use by human beings? The theories and approaches in this chapter answer this question affirmatively and hold that such studies should have an important role in computer and information ethics. In doing so, they undermine conventional wisdom that computer ethics, and ethics generally, is concerned solely with human conduct, and they open up new directions for computer ethics, as well as for the design of computer systems.
As our starting point for this chapter, let us consider some typical examples of ethical questions that are raised in relation to computers and information technology, such as can be found throughout this book:
- Is it wrong for a system operator to disclose the content of employee email messages to employers or other third parties?
- Should individuals have the freedom to post discriminatory, degrading and defamatory messages on the Internet?
- Is it wrong for companies to use data-mining techniques to generate consumer profiles based on purchasing behaviour, and should they be allowed to do so?
- Should governments design policies to overcome the digital divide between skilled and unskilled computer users?
As these examples show, ethical questions regarding information and communication technology typically focus on the morality of particular ways of using the technology or the morally right way to regulate such uses.
Taken for granted in such questions, however, are the computer systems and software that are used. Could there, however, not also be valid ethical questions that concern the technology itself? Could there be an ethics of computer systems separate from the ethics of using computer systems? The embedded values approach in computer ethics, formulated initially by Helen Nissenbaum (1998; Flanagan, Howe and Nissenbaum 2008) and since adopted by many authors in the field, answers these questions affirmatively, and aims to develop a theory and methodology for moral reflection on computer systems themselves, independently of particular ways of using them.
The embedded values approach holds that computer systems and software are not morally neutral and that it is possible to identify tendencies in them to promote or demote particular moral values and norms. It holds, for example, that computer programs can be supportive of privacy, freedom of information, or property rights or, instead, go against the realization of these values. Such tendencies in computer systems are called ‘embedded’, ‘embodied’ or ‘built-in’ moral values or norms. They are built-in in the sense that they can be identified and studied largely or wholly independently of actual uses of the system, although they manifest themselves in a variety of uses of the system. The embedded values approach aims to identify such tendencies and to morally evaluate them. By claiming that computer systems may incorporate and manifest values, the embedded values approach is not claiming that computer systems engage in moral actions, that they are morally praiseworthy or blameworthy, or that they bear moral responsibility (Johnson 2006). It is claiming, however, that the design and operation of computer systems has moral consequences and therefore should be subjected to ethical analysis.
If the embedded values approach is right, then the scope of computer ethics is broadened considerably. Computer ethics should not just study ethical issues in the use of computer technology, but also in the technology itself. And if computer systems and software are indeed value-laden, then many new ethical issues emerge for their design. Moreover, it suggests that design practices and methodologies, particularly those in information systems design and software engineering, can be changed to include the consideration of embedded values.
In the following section, Section 3.2, the case will be made for the embedded values approach, and some common objections against it will be discussed. Section 3.3 will then turn to an exposition of a particular approach in computer ethics that incorporates the embedded values approach, disclosive computer ethics, proposed by the author (Brey 2000). Disclosive computer ethics is an attempt to incorporate the notion of embedded values into a comprehensive approach to computer ethics. Section 3.4 considers value-sensitive design (VSD), an approach to design developed by computer scientist Batya Friedman and her associates, which incorporates notions of the embedded values approach (Friedman, Kahn and Borning 2006). The VSD approach is not an approach within ethics but within computer science, specifically within information systems design and software engineering. It aims to account for values in a comprehensive manner in the design process, and makes use of insights of the embedded values approach for this purpose. In a concluding section, the state of the art in these different approaches is evaluated and some suggestions are made for future research.
3.2 How technology embodies values
The existing literature on embedded values in computer technology is still young, and has perhaps focused more on case studies and applications for design than on theoretical underpinnings. The idea that technology embodies values has been inspired by work in the interdisciplinary field of science and technology studies, which investigates the development of science and technology and their interaction with society. Authors in this field agree that technology is not neutral but shaped by society. Some have argued, specifically, that technological artefacts (products or systems) issue constraints on the world surrounding them (Latour 1992) and that they can harbour political consequences (Wiener 1954). Authors in the embedded values approach have taken these ideas and applied them to ethics, arguing that technological artefacts are not morally neutral but value-laden. However, what it means for an artefact to have an embedded value remains somewhat vague.
In this section a more precise description of what it means for a technological artefact to have embedded values is articulated and defended. The position taken here is in line with existing accounts of embedded values, although their authors need not agree with all of the claims made in this section. The idea of embedded values is best understood as a claim that technological artefacts (and in particular computer systems and software) have built-in tendencies to promote or demote the realization of particular values. Defined in this way, a built-in value is a special sort of built-in consequence. In this section a defence of the thesis that technological artefacts are capable of having built-in consequences is first discussed. Then tendencies for the promotion of values are identified as special kinds of built-in consequences of technological artefacts. The section is concluded by a brief review of the literature on values in information technology, and a discussion of how values come to be embedded in technology.
3.2.1 Consequences built into technology
The embedded values approach promotes the idea that technology can have built-in tendencies to promote or demote particular values. This idea, however, runs counter to a frequently held belief about technology, the idea that technology itself is neutral with respect to consequences. Let us call this the neutrality thesis. The neutrality thesis holds that there are no consequences that are inherent to technological artefacts, but rather that artefacts can always be used in a variety of different ways, and that each of these uses comes with its own consequences. For example, a hammer can be used to hammer nails, but also to break objects, to kill someone, to flatten dough, to keep a pile of paper in place or to conduct electricity. These uses have radically different
effects on the world, and it is difficult to point to any single effect that is constant in all of them.
The hammer example, and other examples like it (a similar example could be given for a laptop), suggest strongly that the neutrality thesis is true. If so, this would have important consequences for an ethics of technology. It would follow that ethics should not pay much attention to technological artefacts themselves, because they in themselves do not ‘do’ anything. Rather, ethics should focus on their usage alone.
This conclusion holds only if one assumes that the notion of embedded values requires that there are consequences that manifest themselves in each and every use of an artefact. But this strong claim need not be made. A weaker claim is that artefacts may have built-in consequences in that there are recurring consequences that manifest themselves in a wide range of uses of the artefact, though not in all uses. If such recurring consequences can be associated with technological artefacts, this may be sufficient to falsify the strong claim of the neutrality thesis that each use of a technological artefact comes with its own consequences. And a good case can be made that at least some artefacts can be associated with such recurring consequences.
An ordinary gas-engine automobile, for example, can evidently be used in many different ways: for commuter traffic, for leisure driving, to taxi passengers or cargo, for hit jobs, for auto racing, but also as a museum piece, as a temporary shelter for the rain or as a barricade. Whereas there is no single consequence that results from all of these uses, there are several consequences that result from a large number of these uses: in all but the last three uses, gasoline is used up, greenhouse gases and other pollutants are being released, noise is being generated, and at least one person (the driver) is being moved around at high speeds. These uses, moreover, have something in common: they are all central uses of automobiles, in that they are accepted uses that are frequent in society and that account for the continued production and usage of automobiles. The other three uses are peripheral in that they are less dominant uses that depend for their continued existence on these central uses, because their central uses account for the continued production and consumption of automobiles. Central uses of the automobile make use of its capacity for driving, and when it is used in this capacity, certain consequences are very likely to occur. Generalizing from this example, a case can be made that technological artefacts are capable of having built-in consequences in the sense that particular consequences may manifest themselves in all of the central uses of the artefact.
It may be objected that, even with this restriction, the idea of built-in consequences employs a too deterministic conception of technology. It suggests that, when technological artefacts are used, particular consequences are necessary or unavoidable. In reality, there are usually ways to avoid particular consequences. For example, a gas-fuelled automobile need not emit
greenhouse gases into the atmosphere if a greenbox device is attached to it, which captures carbon dioxide and nitrous oxide and converts it into bio-oil. To avoid this objection, it may be claimed that the notion of built-in consequences does not refer to necessary, unavoidable consequences but rather to strong tendencies towards certain consequences. The claim is that these consequences are normally realized whenever the technology is used, unless it is used in a context that is highly unusual or if extraordinary steps are taken to avoid particular consequences. Built-in consequences are therefore never absolute but always relative to a set of typical uses and contexts of use, outside of which the consequences may not occur.
Do many artefacts have built-in consequences in the way defined above? The extent to which technological artefacts have built-in consequences can be correlated with two factors: the extent to which they are capable of exerting force or behaviour autonomously, and the extent to which they are embedded in a fixed context of use. As for the first parameter, some artefacts seem to depend strongly on users for their consequences, whereas others seem to be able to generate effects on their own. Mechanical and electrical devices, in particular, are capable of displaying all kinds of behaviours on their own, ranging from simple processes, like the consumption of fuel or the emission of steam, to complex actions, like those of robots and artificial agents. Elements of infrastructure, like buildings, bridges, canals and railway tracks, may not behave autonomously but, by their mere presence, they do impose significant constraints on their environment, including the actions and movements of people, and in this way engender their own consequences. Artefacts that are not mechanical, electrical or infrastructural, like simple hand-held tools and utensils, tend to have fewer consequences of their own and their consequences tend to be more dependent on the uses to which they are put.
As for the second parameter, it is easier to attribute built-in consequences to technological artefacts that are placed in a fixed context of use than to those that are used in many different contexts. Adapting an example by Winner (1980), an overpass that is 180 cm (6 ft) high has as a generic built-in consequence that it prevents traffic from going through that is more than 180 cm high. But when such an overpass is built over the main access road to an island from a city in which automobiles are generally less than 180 cm high and buses are taller, then it acquires a more specific built-in consequence, which is that buses are being prevented from going to the island whereas automobiles do have access. When, in addition, it is the case that buses are the primary means of transportation for black citizens, whereas most white citizens own automobiles, then the more specific consequence of the overpass is that it allows easy access to the island for one racial group, while denying it to another. When the context of use of an artefact is relatively fixed, the immediate, physical consequences associated with a technology can often be translated into social consequences because there are reliable correlations
between the physical and the social (for example between prevention of access to buses and prevention of access to blacks) that are present (Latour 1992).
3.2.2 From consequences to values
Let us now turn from built-in consequences to embedded values. An embedded value is a special kind of built-in consequence. It has already been explained how technological artefacts can have built-in consequences. What needs to be explained now is how some of these built-in consequences can be associated with values. To be able to make this case, let us first consider what a value is.
Although the notion of a value remains somewhat ambiguous in philosophy, some agreements seem to have emerged (Frankena 1973). First, philosophers tend to agree that values depend on valuation. Valuation is the act of valuing something, or finding it valuable, and to find something valuable is to find it good in some way. People find all kinds of things valuable, both abstract and concrete, real and unreal, general and specific. Those things that people find valuable that are both ideal and general, like justice and generosity, are called values, with disvalues being those general qualities that are considered to be bad or evil, like injustice and avarice. Values, then, correspond to idealized qualities or conditions in the world that people find good. For example, the value of justice corresponds to some idealized, general condition of the world in which all persons are treated fairly and rewarded rightly.
To have a value is to want it to be realized. A value is realized if the ideal conditions defined by it are matched by conditions in the actual world. For example, the value of freedom is fully realized if everyone in the world is completely free. Often, though, a full realization of the ideal conditions expressed in a value is not possible. It may not be possible for everyone to be completely free, as there are always at least some constraints and limitations that keep people from a state of complete freedom. Therefore, values can generally be realized only to a degree.
The use of a technological artefact may result in the partial realization of a value. For instance, the use of software that has been designed not to make one’s personal information accessible to others helps to realize the value of privacy. The use of an artefact may also hinder the realization of a value or promote the realization of a disvalue. For instance, the use of software that contains spyware or otherwise leaks personal data to third parties harms the realization of the value of privacy. Technological artefacts are hence capable of either promoting or harming the realization of values when they are used. When this occurs systematically, in all of its central uses, we may say that the artefact embodies a special kind of built-in consequence, which is a built-in tendency to promote or harm the realization of a value. Such a built-in tendency may be called, in short, an embedded value or disvalue. For example,
spyware-laden software has a tendency to harm privacy in all of its typical uses, and may therefore be claimed to have harm to privacy as an embedded disvalue.
Embedded values approaches often focus on moral values. Moral values are ideals about how people ought to behave in relation to others and themselves and how society should be organized so as to promote the right course of action. Examples of moral values are justice, freedom, privacy and honesty. Next to moral values, there are different kinds of non-moral values, for example, aesthetic, economic, (non-moral) social and personal values, such as beauty, efficiency, social harmony and friendliness.
Values should be distinguished from norms, which can also be embedded in technology. Norms are rules that prescribe which kinds of actions or states of affairs are forbidden, obligatory or allowed. They are often based on values that provide a rationale for them. Moral norms prescribe which actions are forbidden, obligatory or allowed from the point of view of morality. Examples of moral norms are ‘do not steal’ and ‘personal information should not be provided to third parties unless the bearer has consented to such distribution’. Examples of non-moral norms are ‘pedestrians should walk on the right side of the street’ and ‘fish products should not contain more than 10 mg histamines per 100 grams’. Just as technological artefacts can promote the realization of values, they can also promote the enforcement of norms. Embedded norms are a special kind of built-in consequence. They are tendencies to effectuate norms by bringing it about that the environment behaves or is organized according to the norm. For example, web browsers can be set not to accept cookies from websites, thereby enforcing the norm that websites should not collect information about their user. By enforcing a norm, artefacts thereby also promote the corresponding value, if any (e.g., privacy in the example).
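To make the idea of an embedded norm concrete, here is a minimal illustrative sketch in Python. All names (CookieStrippingClient, Response, handle) are hypothetical and do not correspond to any real browser API; the point is only that the artefact's default behaviour effectuates the norm "websites should not collect information about their user" regardless of what any particular website intends.

```python
# Illustrative sketch only -- hypothetical names, not a real browser API.
# The artefact's default behaviour effectuates the norm "websites should
# not collect information about their user": unless the user opts in,
# Set-Cookie headers are discarded in every central use of the client.

from dataclasses import dataclass, field


@dataclass
class Response:
    """A toy stand-in for an HTTP response."""
    url: str
    headers: dict = field(default_factory=dict)


class CookieStrippingClient:
    def __init__(self, accept_cookies: bool = False):
        self.accept_cookies = accept_cookies  # norm enforced by default
        self.cookie_jar = {}                  # stays empty unless the user opts in

    def handle(self, response: Response) -> Response:
        if not self.accept_cookies:
            # Drop the cookie regardless of what the website intended:
            # the norm is built into the artefact, not into a particular use.
            response.headers.pop("Set-Cookie", None)
        else:
            cookie = response.headers.get("Set-Cookie")
            if cookie:
                self.cookie_jar[response.url] = cookie
        return response


if __name__ == "__main__":
    client = CookieStrippingClient()
    r = Response("https://example.org",
                 {"Set-Cookie": "id=42", "Content-Type": "text/html"})
    print(client.handle(r).headers)  # {'Content-Type': 'text/html'} -- cookie stripped
```

Because the privacy-protective behaviour is the default rather than something each user must configure, the norm (and the corresponding value, privacy) is promoted across the client's central uses.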
So far we have seen that technological artefacts may have embedded values understood as special kinds of built-in consequences. Because this conception relates values to causal capacities of artefacts to affect their environment, it may be called the causalist conception of embedded values. In the literature on embedded values, other conceptions have been presented as well. Notably, Flanagan, Howe and Nissenbaum (2008) and Johnson (1997) discuss what they call an expressive conception of embedded values. Artefacts may be said to be expressive of values in that they incorporate or contain symbolic meanings that refer to values. For example, a particular brand of computer may symbolize or represent status and success, or the representation of characters and events in a computer game may reveal racial prejudices or patriarchal values. Expressive embedded values in artefacts represent the values of designers or users of the artefact. This does not imply, however, that they also function to realize these values. It is conceivable that the values expressed in artefacts cause people to adopt these values and thereby contribute to their own
realization. Whether this happens frequently remains an open question. In any case, whereas the expressive conception of embedded values merits further philosophical reflection, the remainder of this chapter will be focused on the causalist conception.
3.2.3 Values in information technology
The embedded values approach within computer ethics studies embedded values in computer systems and software and their emergence, and provides moral evaluations of them. The study of embedded values in Information and Communication Technology (ICT) has begun with a seminal paper by Batya Friedman and Helen Nissenbaum in which they consider bias in computer systems (Friedman and Nissenbaum 1996). A biased computer system or program is defined by them as one that systematically and unfairly discriminates against certain individuals or groups, who may be users or other stakeholders of the system. Examples include educational programs that have much more appeal to boys than to girls, loan approval software that gives negative recommendations for loans to individuals with ethnic surnames, and databases for matching organ donors with potential transplant recipients that systematically favour individuals retrieved and displayed immediately on the first screen over individuals displayed on later screens. Building on their work, I have distinguished user biases that discriminate against (groups of) users of an information system, and information biases that discriminate against stakeholders represented by the system (Brey 1998). I have discussed various kinds of user bias, such as user exclusion and the selective penalization of users, as well as different kinds of information bias, including bias in information content, data selection, categorization, search and matching algorithms and the display of information.
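The organ-matching example can be made concrete with a short hypothetical sketch (the data and function names below are invented for illustration; this is not any real matching system). The bias lives entirely in the retrieval and display logic: equally eligible candidates who do not fit on the first screen are systematically overlooked.

```python
# Hypothetical illustration of display bias -- invented data, not a real system.
# Five candidates are equally good matches, but the terminal shows only the
# first screen, so the same individuals are favoured in every session.

SCREEN_SIZE = 3  # assumed display limit

candidates = ["patient_A", "patient_B", "patient_C", "patient_D", "patient_E"]


def first_screen(matches, screen_size=SCREEN_SIZE):
    """Return only the records that fit on the first screen."""
    return matches[:screen_size]


if __name__ == "__main__":
    # Operators who choose from what they see will never consider
    # patient_D or patient_E, even though they are equally eligible:
    # the unfairness is embedded in the system, not in any single use.
    print(first_screen(candidates))  # ['patient_A', 'patient_B', 'patient_C']
```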
After their study of bias in computer systems, Friedman and Nissenbaum went on to consider consequences of software agents for the autonomy of users. Software agents are small programs that act on behalf of the user to perform tasks. Friedman and Nissenbaum (1997) argue that software agents can undermine user autonomy in various ways – for example by having only limited capabilities to perform wanted tasks or by not making relevant information available to the user – and argue that it is important that software agents are designed so as to enhance user autonomy. The issue of user autonomy is also taken up in Brey (1998, 1999c), in which I argue that computer systems can undermine autonomy by supporting monitoring by third parties, by imposing their own operational logic on the user, thus limiting creativity and choice, or by making users dependent on systems operators or others for maintenance or access to systems functions.
Deborah Johnson (1997) considers the claim that the Internet is an inherently democratic technology. Some have claimed that the Internet, because of
its distributed and nonhierarchical nature, promotes democratic processes by empowering individuals and stimulating democratic dialogue and decision-making (see Chapter 10). Johnson subscribes to this democratic potential. She cautions, however, that these democratic tendencies may be limited if the Internet is subjected to filtering systems that only give a small group of individuals control over the flow of information on the Internet. She hence identifies both democratic and undemocratic tendencies in the technology that may become dominant depending on future use and development.
Other studies, within the embedded values approach, have focused on specific values, such as privacy, trust, community, moral accountability and informed consent, or on specific technologies. Introna and Nissenbaum (2000) consider biases in the algorithms of search engines, which, they argue, favour websites with a popular and broad subject matter over specialized sites, and the powerful over the less powerful. Introna (2007) argues that existing plagiarism detection software creates an artificial distinction between alleged plagiarists and non-plagiarists, which is unfair. Introna (2005) considers values embedded in facial recognition systems. Camp (1999) analyses the implications of Internet protocols for democracy. Flanagan, Howe and Nissenbaum (2005) study values in computer games, and Brey (1999b, 2008) studies them in computer games, computer simulations and virtual reality applications. Agre and Mailloux (1997) reveal the implications for privacy of Intelligent Vehicle-Highway Systems, Tavani (1999) analyses the implications of data-mining techniques for privacy and Fleischmann (2007) considers values embedded in digital libraries.
3.2.4 The emergence of values in information technology
What has not been discussed so far is how technological artefacts and systems acquire embedded values. This issue has been ably taken up by Friedman and Nissenbaum (1996). They analyse the different ways in which biases (injustices) can emerge in computer systems. Although their focus is on biases, their analysis can easily be generalized to values in general. Biases, they argue, can have three different types of origins. Preexisting biases arise from values and attitudes that exist prior to the design of a system. They can either be individual, resulting from the values of those who have a significant input into the design of the systems, or societal, resulting from organizations, institutions or the general culture that constitute the context in which the system is developed. Examples are racial biases of designers that become embedded in loan approval software, and overall gender biases in society that lead to the development of computer games that are more appealing to boys than to girls. Friedman and Nissenbaum note that preexisting biases can be embedded in systems intentionally, through conscious efforts of individuals or institutions, or unintentionally and unconsciously.
A second type is technical bias, which arises from technical constraints or considerations. The design of computer systems includes all kinds of technical limitations and assumptions that are perhaps not value-laden in themselves but that could result in value-laden designs, for example because limited screen sizes cannot display all results of a search process, thereby privileging those results that are displayed first, or because computer algorithms or models contain formalized, simplified representations of reality that introduce biases or limit the autonomy of users, or because software engineering techniques do not allow for adequate security, leading to systematic breaches of privacy. A third and final type is emergent bias, which arises when the social context in which the system is used is not the one intended by its designers. In the new context, the system may not adequately support the capabilities, values or interests of some user groups or the interests of other stakeholders. For example, an ATM that relies heavily on written instructions may be installed in a neighborhood with a predominantly illiterate population.
Friedman and Nissenbaum’s classification can easily be extended to embedded values in general. Embedded values may hence be identified as preexisting, technical or emergent. What this classification shows is that embedded values are not necessarily a reflection of the values of designers. When they are, moreover, their embedding often has not been intentional. However, their embedding can be an intentional act. If designers are aware of the way in which values are embedded into artefacts, and if they can sufficiently anticipate future uses of an artefact and its future context(s) of use, then they are in a position to intentionally design artefacts to support particular values. Several approaches have been proposed in recent years that aim to make considerations of value part of the design process. In Section 3.4, the most influential of these approaches, value-sensitive design, is discussed.