Please read the following article and then answer the questions. Please use the article for reference and cite it in MLA format. Due tomorrow at 3 p.m. EST.
After reading "Improving privacy choice through design" by Terpstra, Schouten, and Leenes (attached), please do the following:
1. Summarize why the authors think reflective thinking is an important aspect of making decisions related to digital technology and privacy.
2. Describe what the authors mean when they propose that friction be introduced into technology interfaces.
3. Choose an app or program you have recently installed on your phone or computer and explain how the programmers could tweak the installation process or the way the app or program works to encourage more reflective thinking about personal privacy by the user.
4. Finally, do you think that the solution proposed in this article would help personal data be used in more ethical ways by big corporations? Why or why not?
5. Use two quotes from any of your resources to support or explain your points. Make sure to provide in-text citations for both quotes in MLA format.
6. Provide references for all sources in MLA format.
Improving privacy choice through design: How designing for reflection could support privacy self-management
by Arnout Terpstra, Alexander P. Schouten, Alwin de Rooij, and Ronald E. Leenes
In today’s society online privacy is primarily regulated by two main regulatory systems: (command-and-control) law and notice and consent (i.e., agreeing to terms of agreement and privacy policies). Both systems prohibit reflection on privacy issues from the public at large and restrict the privacy debate to the legal and regulatory domains. However, from a socio-ethical standpoint, the general public needs to be included in the privacy debate in order to make well-informed decisions and contribute to the law-making process. Therefore, we argue that privacy regulation must shift from a purely legal debate and simple one-time yes/no decisions by ‘data subjects’ to public (debate and) awareness and continuous reflection on privacy and privacy decisions by users of IT systems and services. In order to allow for this reflective thinking, individuals need to (1) understand what is at stake when interacting with digital technology; (2) have the ability to reflect on the consequences of their privacy decisions; and (3) have meaningful controls to express their privacy preferences. Together, these three factors could provide for knowledge, evaluation and choice within the context of online privacy. In this paper, we elaborate on these factors and provide a design-for-privacy model that introduces friction as a central design concept that stimulates reflective thinking and thus restores the privacy debate within the public arena.
Contents
I. Introduction
II. The complexity of privacy and individual choice
III. Improving privacy self-management with reflective thinking
IV. Better privacy decisions through designing for reflection
V. Conclusion
I. Introduction
In July 2018, the Dutch news platform De Correspondent revealed how military and intelligence personnel could easily be identified through the use of a fitness tracker, by observing (mostly publicly available) running activity near known military bases [1]. Even though the identities of military and intelligence personnel are considered highly confidential, 90 percent of individuals who tracked their runs around sensitive targets listed their name and home city publicly on their profile page, despite the option within the app of keeping such information private [2]. Moreover, by subsequently analysing the individuals’ additional running activities, it was trivial to infer their home addresses: simply observe where most running activity begins and ends.
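To make this kind of inference concrete, the short Python sketch below shows how little analysis is needed: given the start and end coordinates of publicly shared runs, the most frequently occurring (rounded) location is a strong candidate for a home address. The data and function names are hypothetical and only illustrate the aggregation described above, not the actual analysis performed by De Correspondent.

```python
# Minimal sketch (hypothetical data): the place where most runs begin and end
# is a strong candidate for a home address.
from collections import Counter

# (start_lat, start_lon, end_lat, end_lon) for each publicly shared run
runs = [
    (52.07125, 4.30012, 52.07131, 4.30025),
    (52.07119, 4.30018, 52.08540, 4.28930),
    (52.07128, 4.30009, 52.07122, 4.30017),
]

def likely_home(runs, precision=3):
    """Round start/end coordinates into coarse cells and return the cell in
    which most runs begin or end, together with its count."""
    cells = Counter()
    for start_lat, start_lon, end_lat, end_lon in runs:
        cells[(round(start_lat, precision), round(start_lon, precision))] += 1
        cells[(round(end_lat, precision), round(end_lon, precision))] += 1
    return cells.most_common(1)[0]

print(likely_home(runs))  # ((52.071, 4.3), 5): the probable home location
```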
In addition, over the course of 2018 it became known that a company called Cambridge Analytica harvested the profiles of tens of millions of Facebook users without their knowledge [3]. Shortly thereafter, this turned out not to be an isolated incident, as other companies used similar tactics to harvest personal data through Facebook [4]. The same year, the Norwegian Consumer Council published a report on how three big tech companies deceive individuals into sharing more personal data by deliberately applying cunning design patterns and hiding away privacy-friendly choices [5]. Many organisations adopt such deceptive designs because they help monetise personal data [6].
That individuals are not aware of the consequences of their privacy choices or have little choice in the first place was one of the main incentives for the General Data Protection Regulation (GDPR) [7] which came into effect in 2018 within the European Union (EU), unifying the regulation of processing personal data across the EU and the European Economic Area (EEA). Key principles in the GDPR regarding the processing of personal data relate to transparency to those whose data are being processed (data subjects) [8], as well as furthering their control over the processing of data. These principles force organizations to be more transparent about what data they collect and for what purposes and to offer more control to individual data subjects regarding the use of data and their privacy.
The United States (US) lacks such omnibus regulation, but scandals like those repeatedly involving Facebook increasingly lead to calls for developing data protection regulation in the US as well [9]. Moreover, companies themselves are becoming increasingly aware that transparency and control over data use are paramount in order to uphold their reputation, retain their user base, and remain competitive. For example, last year’s Cambridge Analytica controversy saw Facebook temporarily losing US$119 billion of their market value [10]. In response, Facebook initially denied there was a problem as all users gave their consent to sharing personal data. At the same time, however, Facebook promised to investigate current data access policies by third party apps and take further measures to limit those policies [11] as well as implement and promote new transparency and control features [12]. However, properly managing one’s own privacy preferences is rather difficult for individuals (Solove, 2013; Cranor, et al., 2006; Trepte, et al., 2015).
It is still unclear how to properly provide individuals with information about their privacy choices (Fischer-Hübner, et al., 2016). Notice (or transparency; informing the individual) and consent (or the ability to choose whether or not to use a product or service) is currently the main regulatory mechanism through which individuals legally retain responsibility for making their own privacy choices (Calo, 2014; Ben-Shahar and Schneider, 2011). It assumes a knowledge gap between an institution offering a service and an individual consuming that service (Calo, 2014; Hoofnagle and Urban, 2014). By mandating the disclosure of facts about institutional uses of personal data through a privacy policy prior to first use, this knowledge gap can be reduced. By reading privacy policies, future consumers are expected to understand in what ways their privacy might be affected, which in turn allows them to make well-informed, rational decisions. However, three problems have been identified with notice and consent.
First, privacy policies do not truly inform, and thus do not help, the individual to make better decisions. They are too lengthy and too difficult to understand for the average reader (Jensen and Potts, 2004; McDonald and Cranor, 2008; Cate, 2006; Milne, et al., 2006; Grossklags and Good, 2008). Simplifying the content of privacy policies does not consistently influence disclosure behaviour and may even be used to seduce individuals to disclose even more personal information (Adjerid, et al., 2013). Moreover, individuals become suspicious and instinctively share less personal information because of the mere presence of a privacy policy, even when the actual content is beneficial for the individual. Thus, the actual content of a privacy policy does not matter (Marreiros, et al., 2017; John, et al., 2011).
Second, human decision-making is often irrational and biased (Facione, 2015; Kahneman, 2011; Thaler and Sunstein, 2009; Acquisti and Grossklags, 2005; Acquisti, 2004; Strandburg, 2004; Kokolakis, 2017). Notice and consent assumes that more information leads to better decisions (Calo, 2014; Ben-Shahar and Schneider, 2011). However, in the context of privacy self-management, many decisions are taken rapidly and intuitively, with relatively low cognitive effort, based on cognitive strategies such as heuristics (Thaler and Sunstein, 2009; Acquisti and Grossklags, 2005; Acquisti, 2004; Simon, 1957). This indicates that the knowledge gap of notice and consent is not actually reduced. Rather, to reduce this gap, an individual is required to learn something and therefore, think critically and reflectively (Facione, 2015; Mezirow, 2003, 1997).
Third, there is no meaningful choice (Cate, 2006; Koops and Leenes, 2006). First, because meaningful choice entails understanding what the choice is about, which requires a certain degree of generic privacy literacy (Park, 2013; Trepte, et al., 2015; Langenderfer and Miyazaki, 2009), as well as contextual knowledge about how specific products and services might affect an individual’s privacy. Second, because there is no granularity in the choice itself. It is a take-it-or-leave-it dichotomy. When an individual disagrees with (some of) the terms, there is no room for negotiation. Third, monopolies in the information industry are common due to network effects, low marginal costs, and technical lock-in (Anderson, 2014), which leaves little room for alternatives and thus for choice. For example, if an individual’s friends and relatives decided to use a certain social networking site (SNS), one has little choice but to follow their decisions, since SNSs are not interoperable (Au Yeung, et al., 2009).
Two alternative regulatory mechanisms can be considered to counteract these issues: (1) more (detailed) regulation through (command-and-control) law; and, (2) steering or enforcing an individual’s behaviour through nudges or architecture (code) (Lessig, 2006; Murray and Scott, 2002; Calo, 2014, 2012; Brownsword, 2005) [13]. Law, when properly enforced, provides regulators with the means to dictate what is lawful and what is not. Many digital technologies are already regulated through law, e.g., through copyright, defamation, and obscenity laws (Lessig, 2006). Specific regulation could be enacted to ban certain practices, such as deceptive interfaces. However, too much regulation through command-and-control law might have negative effects on innovation and competition (Calo, 2014, 2012; Posner, 1981). Furthermore, law is expensive, difficult to enforce, and often politically unattractive (Calo, 2012).
More fundamentally, both mechanisms confine the debate on privacy and how to regulate it exclusively to the domain of regulators (such as governments, public/private institutions, and policy-makers). In democratic societies, individuals also have “a say as to what the rules are to be” [14], which calls for the possibility to intentionally break a rule as a way of challenging the rules, potentially leading to changes in the law (Lessig, 2006; Brownsword, 2006; Koops and Leenes, 2005). This in turn requires active citizens with critical and reflective thinking capabilities (ten Dam and Volman, 2004; Mezirow, 1997). However, both nudges and code fail to engage with the regulatee (the individual) on a moral level because they reduce individual choice to one of compliance, or even omit the ability to choose entirely (Brownsword, 2005). This affects how individuals deal with and learn about privacy and privacy issues: by depreciating the moral value of choice, individuals feel resigned (Turow, et al., 2015; Hargittai and Marwick, 2016) and cannot become morally responsible agents (Brownsword, 2005; Mezirow, 2003, 1997).
Thus, both for individual privacy self-management and for a larger societal discussion on privacy and privacy regulation, it is imperative that individuals become active citizens who regularly and deliberately think about and reflect on privacy issues and privacy choices. This entails that actual meaningful choices can be made. However, as argued, both nudges and code depreciate the moral value of choice by changing situations, not people (Brownsword, 2005). Thus, notice and consent should not be abandoned within the context of digital technology and privacy. However, the question then becomes: how can the issues identified above with notice and consent be mitigated in order to improve privacy self-management?
In this article, we argue that individuals should be encouraged, through the design of the products and services they use, to apply their reflective thinking capabilities to privacy issues before, during, and after interacting with digital technology to make individual choices and regain a moral position. This requires at least three components: (1) a way to provoke individuals to escape their habitual behaviour and thoughts; (2) a phase of reflective thinking during which individuals actively reason about their assumptions and beliefs and are supported by the product or service to learn about privacy (privacy literacy) and the potential consequences of privacy choices; and, (3) meaningful controls, or the ability to put newly learned thoughts and ideas into action. Together, these three elements will increase knowledge, evaluation, and choice within the context of digital technology and privacy.
Reflective thinking can be triggered through design, i.e., reflective design (Sengers, et al., 2005) or adversarial design (DiSalvo, 2012). Specifically, we argue that applying friction as a disorienting dilemma triggers the reflective thinking process (Mezirow, 2006, 2000). Moreover, friction can guide further reflection and understanding as individuals are deliberately presented with ambiguous content and alternative viewpoints (Vasalou, et al., 2015). Friction can also foster (critical) discourse, which leads to communicative learning (Mezirow, 2000, 1990), a learning dimension that is highly undervalued within the context of privacy and digital technology.
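As a rough illustration of what such friction could look like in practice, the sketch below replaces a default one-tap "Accept" with a prompt that states a concrete consequence and requires a deliberate, typed confirmation. The dialog text and function names are illustrative assumptions, not a design prescribed by the authors.

```python
# Hypothetical sketch of friction in a consent dialog: the interface states a
# concrete consequence of the choice and requires a deliberate, typed
# confirmation before data sharing is enabled.

def consent_with_friction(purpose: str, consequence: str) -> bool:
    """Ask for consent, surfacing a consequence first and requiring an explicit
    typed answer rather than a pre-selected 'yes'."""
    print(f"This app wants to share your data to: {purpose}")
    print(f"A possible consequence: {consequence}")
    answer = input("Type 'share' to allow this, or anything else to decline: ")
    return answer.strip().lower() == "share"

if __name__ == "__main__":
    allowed = consent_with_friction(
        purpose="show your running routes to other users",
        consequence="anyone may see where your runs usually start and end",
    )
    print("Location sharing enabled." if allowed else "Location stays private.")
```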
In this paper, we provide the theoretical ground for how designing for reflection could improve notice and consent as a regulatory mechanism within the context of privacy and digital technology. We propose a model with high-level design guidelines which designers could use when designing digital products or services which affect an individual’s privacy. When applying these guidelines, individuals are triggered to think consciously and reflectively, which increases privacy literacy and enables more deliberate decision-making. Moreover, it fosters (critical) discourse and restores the privacy debate to the public arena.
II. The complexity of privacy and individual choice
Privacy is a complex concept. Many attempts to conceptualise the term have been undertaken, but there is no single agreed upon definition (Whitman, 2004; Allen, 2000). Privacy differs amongst cultures (Moore, 2008; Altman, 1977), is “a plurality of different things” [15] and is “fundamentally dynamic” [16]. Moreover, the meaning of privacy evolves over time through personal experiences (Goldfarb and Tucker, 2012; Kezer, et al., 2016) and changing norms in society (Westin, 2004). In sum, “privacy is a living, continually changing thing, a fluid concept, dependent on socio-cultural factors” (Koops and Leenes, 2006).
Adding to this complexity is that there are many types of privacy: bodily, spatial, communicational, proprietary, intellectual, decisional, associational, behavioural, and informational (Koops, et al., 2016). Informational privacy refers to any piece of information about an individual, including their body, location, communication, property, thoughts, decisions, associations, and behaviour. Informational privacy can therefore be regarded as an overarching type that spans the other privacy dimensions. Within the context of digital technology, informational privacy is most relevant since technology typically enables the generation, storage, and distribution of personal data at a large scale.
Furthermore, there are different types of (personal) data. In a report by the World Economic Forum [17] a distinction is made between volunteered data — data created and explicitly shared by individuals, such as providing the name of one’s home town to a SNS, even though this piece of data might not be required for completing the registration; observed data — data captured by recording the actions of individuals, such as locational data often embedded in pictures made with devices also equipped with a GPS sensor; and, inferred data — data about individuals based on the analysis of volunteered or observed data, such as deriving that someone is healthy based on the distance ran in the previous year, or deriving an individual’s home address based on the location where running exercises frequently begin and end.
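As a small illustration, the sketch below tags personal data records with this volunteered/observed/inferred distinction; the record fields and example values are hypothetical and only show how a service could track data provenance along these lines.

```python
# Hypothetical sketch of tagging personal data by provenance, following the
# volunteered / observed / inferred distinction above.
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    VOLUNTEERED = "volunteered"  # explicitly provided by the individual
    OBSERVED = "observed"        # captured by recording the individual's actions
    INFERRED = "inferred"        # derived by analysing volunteered or observed data

@dataclass
class PersonalDatum:
    subject: str
    attribute: str
    value: str
    provenance: Provenance

records = [
    PersonalDatum("user42", "home_town", "Tilburg", Provenance.VOLUNTEERED),
    PersonalDatum("user42", "run_route", "gps-trace-2018-07-01", Provenance.OBSERVED),
    PersonalDatum("user42", "home_address", "derived from run start/end points", Provenance.INFERRED),
]

# Inferred data is the least visible to the individual and may warrant the
# strictest handling or the most explicit notice.
print([r.attribute for r in records if r.provenance is Provenance.INFERRED])  # ['home_address']
```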
Within the context of digital technology, privacy is often viewed as the individual’s ability to control the use of one’s personal data (Westin, 1967; Allen, 2000; Schwartz, 1999; Koops, et al., 2016). However, there are conceptual, practical and moral limitations to this view (Allen, 2000). Instead, privacy can be best viewed as contextual integrity (Nissenbaum, 2011, 2004), meaning that norms of appropriateness and norms of flow or distribution prescribe what is or is not a privacy violation in a particular context. These norms originate, for example, socially, culturally, and politically, and may be explicit or implicit, specific, or variable and incomplete.
Norms of appropriateness dictate within a given context what information is “allowable, expected, or even demanded to be revealed” [18]. For instance, certain fitness trackers allow individuals to personally keep track of their progress towards becoming more fit by analysing and comparing their distinct runs. For this to work, it is appropriate for the app to collect locational data, e.g., to display running routes and times within an app. Furthermore, norms of flow or distribution dictate how the information is (further) distributed. These norms indicate to what extent information can be shared by the recipient with others. For instance, the institution behind the fitness tracker needs to store (personal) information on their servers (provided the app contains cloud-syncing functionality). However, when this information is, unbeknownst to the individual, shared with third parties e.g., to generate additional revenue, this norm is breached. Satisfying both these norms constitutes maintaining contextual integrity. Violating either (or both) of these norms constitutes a privacy violation [19].
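The two norms can also be read as a simple rule check. The sketch below encodes a made-up policy for the fitness-tracker context and flags a data flow when either norm is violated; the policy format and values are assumptions for illustration only, not part of Nissenbaum's framework or the article.

```python
# Hypothetical sketch of contextual integrity as a rule check: a flow is a
# violation when the data type is inappropriate for the context (norm of
# appropriateness) or the recipient is not an expected one (norm of flow).

CONTEXT_NORMS = {
    "fitness_tracker": {
        "appropriate_data": {"location", "heart_rate", "run_times"},
        "allowed_recipients": {"user", "app_backend"},
    },
}

def violates_contextual_integrity(context: str, data_type: str, recipient: str) -> bool:
    norms = CONTEXT_NORMS[context]
    inappropriate = data_type not in norms["appropriate_data"]
    bad_flow = recipient not in norms["allowed_recipients"]
    return inappropriate or bad_flow

# Collecting location to display running routes satisfies both norms.
print(violates_contextual_integrity("fitness_tracker", "location", "app_backend"))  # False
# Passing the same data to an advertiser breaches the norm of distribution.
print(violates_contextual_integrity("fitness_tracker", "location", "ad_network"))   # True
```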
In addition to the above-mentioned theoretical complexity of privacy, managing one’s privacy preferences in practice is difficult as well (Solove, 2013; Cranor, et al., 2006; Trepte, et al., 2015). Privacy self-management suffers from two types of problems (Solove, 2013): (1) structural problems: for example, there simply are too many entities involved in collecting and using personal data. It is impossible to oversee all relations and data flows between these entities. Moreover, pieces of personal data are being aggregated over time and across separate databases, which makes it very hard, if not impossible, for individuals to properly assess potential privacy harms at a fixed moment in time. Lastly, consenting to the disclosure of personal data is often treated as an individual transaction, completely disregarding the larger social dimensions of privacy. (2) Cognitive problems, which impair an individual’s ability to make properly informed, rational choices, as already noted above. Many privacy decisions are taken rapidly and intuitively, with relatively low cognitive effort, based on cognitive strategies such as heuristics (Thaler and Sunstein, 2009; Acquisti and Grossklags, 2005; Acquisti, 2004).
In order to manage one’s privacy and make good privacy decisions, two dimensions of knowledge about privacy are required: (1) generic knowledge, also referred to as privacy literacy, and (2) contextual knowledge, i.e., the potential consequences of disclosing personal data to a specific product or service. Privacy literacy is a combination of factual or declarative (knowing that) and procedural (knowing how) knowledge about privacy and data protection (Park, 2013; Trepte, et al., 2015; Langenderfer and Miyazaki, 2009). For example, the Online Privacy Literacy Scale (OPLIS) is an instrument that measures privacy literacy according to four different factors: (1) knowledge about institutional practices; (2) knowledge about technical aspects of data protection; (3) knowledge about data protection law; and, (4) knowledge about data protection strategies (Masur, et al., 2017). Privacy literacy is necessary to reason about privacy within a specific context. For example, generic knowledge about the fact that personal information is often aggregated from different sources should be used to reason about the potential consequences of a privacy decision within a specific context (e.g., will the personal information given to this product or service be combined with information from other products or services I already use?).
Thus far we can conclude that the required amount and nature of privacy knowledge are not trivial. Moreover, it takes considerable effort to apply such knowledge in specific online contexts in order to make good privacy decisions. In fact, this complexity is one of the major factors accounting for the existence of the privacy paradox, which refers to the fact that although many individuals value their privacy, most individuals disclose significantly more personal information online than their stated intentions would predict (Norberg, et al., 2007; Barnes, 2006; Acquisti and Gross, 2006; Jensen, et al., 2005). Thus, in order to be able to perform meaningful privacy self-management, individuals need a proper understanding of privacy (privacy literacy) and of the potential consequences of their privacy choices, and they need to critically reflect on these.
III. Improving privacy self-management with reflective thinking
Reflective thinking provides a possible way to cope with the complexity of privacy and privacy choices and hence could be encouraged to mitigate privacy loss by bad or wrong choices. Reflective thinking, or (critical) reflection as it is often called, is the examination of previous experiences and assumptions as input for future action, assumptions and decisions (White, et al., 2006; ten Dam and Volman, 2004; Ennis, 1991). It starts with an awareness that existing assumptions need to be (re-)examined, initiated by a disorienting dilemma (Mezirow, 2006, 2000): an “[a]nomal[y] […] of which old ways of knowing cannot make sense” [20]. A disorienting dilemma, or breakdown (Baumer, 2015), refers to experiences, beliefs, or feelings that are incompatible with an individual’s existing frames of reference. It can be triggered by a sudden major event in life, e.g., a crisis, or by an accumulation of smaller changes over time within one’s frames of reference (Mezirow, 2000) [21].
After this initial step, the reflective thinking process continues with what can be summarised as inquiry (Baumer, 2015): an examination into what caused the disorienting dilemma, e.g., by critically considering one’s existing assumptions or by exploring possible solutions (Mezirow, 2006, 2000). Prior knowledge and prior experiences, defined as frames of reference, are used by individuals to continuously make sense of the world (Kitchenham, 2008; Mezirow, 2006; Cranton and King, 2003). Frames of reference comprise habits of mind and subsequent points of view (Kitchenham, 2008; Mezirow, 2006). Habits of mind are profound and “broad, abstract, orienting, habitual ways of thinking, feeling and acting, influenced by assumptions that constitute a set of codes” [22] and include, amongst others, sociolinguistic, moral-ethical, epistemic, philosophical, psychological, and aesthetic dimensions (Mezirow, 2006) of one’s identity (Illeris, 2014). Habits of mind are in turn expressed as points of view (Kitchenham, 2008; Mezirow, 2006), which refer to “the constellation of belief, memory, value judgement, attitude and feeling that shapes a particular interpretation” [23].
An example of a habit of mind within the context of privacy and digital technology is the conviction that privacy is indeed valuable and important to protect. However, a resulting point of view based on prior experiences can be apathy and powerlessness, due to the feeling individuals have “that once information is shared, it is ultimately out of their control” [24]. A positive experience with one situation, for example a personal photograph accidentally shared with the wrong person who then does not abuse this situation and immediately deletes the picture, might change one’s point of view towards that specific situation/person, but not immediately affect one’s deeper habit of mind (because ultimately, the use of the photograph once received by someone else is indeed out of control for the individual). However, when such situations occur more frequently, the individual’s habit of mind might eventually change incrementally.
Lastly, the reflective thinking process closes with a review and re-evaluation of one’s assumptions, leading to changes in beliefs, attitudes, and behaviour. During these phases, also referred to as transformation (Baumer, 2015), experiences are transformed into learning, e.g., by which new assumptions and beliefs are incorporated into an individual’s frames of reference (Mezirow, 2000).
When critical reflection is used to guide decision-making or action, it becomes (transformative) learning (Mezirow, 2000, 1990). Transformative learning refers to two types of learning: (1) instrumental learning, which is task-oriented and aims at specific steps/skills to control and manipulate the environment; and, (2) communicative learning, which is about the meaning behind words and actions (Mezirow, 2006, 2003, 2000, 1990; Taylor, 2007). In different terms, through instrumental learning one learns what to do or how to act; through communicative learning, one learns the reasons why to act in certain ways. Within the context of privacy self-management, instrumental learning refers to privacy literacy: the factual and procedural knowledge related to privacy and privacy choices. Communicative learning, on the other hand, concerns critically reflecting on assumptions, beliefs, values, feelings, and meanings with regard to privacy and privacy choices.
Reflection applies to both instrumental and communicative learning, but these domains differ in how meaning is validated. Within instrumental learning, hypotheses are formed and empirically validated (Mezirow, 2000, 1990). For example, the question whether an SNS app can still be used without sharing locational data can be empirically tested by deciding not to share this data and assessing whether the app still works. In communicative learning this approach cannot be used. Instead, here “meaning is validated through critical discourse” [25] which “always reflects wider patterns of relationship and power” [26]. Within communicative learning, for example, one would reflect on what it really means to share location data with an SNS: what if the SNS aggregates locational data of all one’s pictures and combines it with other types of data; will the SNS then be able to deduce all kinds of extra information, e.g., where one’s friends live or how many times a year one goes on a holiday? Is the (free) service one gets in return for this still worth the potential privacy risks that follow? These questions really concern an individual’s frames of reference and require considering a more global view (of one’s assumptions, beliefs, values, feelings, and meanings), which leads to deeper, more complex reflection and to a more profound transformation of values and beliefs (Kitchenham, 2008).
Reflective thinking is usually applied within an educational context (Taylor, 2007; White, et al., 2006). There, the reflective thinking process is generally triggered and guided by a present teacher who assists the learner in acquiring and enhancing skills, insights, and dispositions to become critically reflective (Mezirow, 2003, 2000). In order to do this, it is the responsibility of the teacher to create the right environment for reflective discourse to happen, by providing tools/experiences (e.g., by using metaphors, questioning students’ assumptions or providing feedback; Roberts, 2006) and by setting the right conditions (e.g., equality between participants, a safe and open environment) (Fook, 2015; Taylor, 2007; Mezirow, 2006, 2000).
However, within other contexts there is no present teacher to trigger and guide reflection. Within the context of digital technology and privacy, the product or service is all there is; people don’t read manuals or take courses. Hence, products and services themselves should be designed to trigger and guide reflection. The designer thus becomes the ex-ante teacher by creating the right (digital) environment for reflection to occur even while the individual is not participating in an explicit learning activity. Such instructive designs, we hope, could then improve an individual’s ability to make deliberate, well-informed privacy decisions and encourage participation in the public debate concerning privacy.
IV. Better privacy decisions through designing for reflection
The importance and potential of reflection by design are acknowledged in design theories, especially in the ‘reflective design’ (Sengers, et al., 2005) and ‘adversarial design’ (DiSalvo, 2012) theories. Moreover, while interacting with artefacts, three levels of processing in the brain are potentially active: (1) the visceral level; (2) the behavioural level; and, (3) the reflective level.