Evaluation Plan Project: Literature Review
Adventurous travelers may see their journey as an investigative effort to identify landmarks along their expedition. Similarly, evaluation is an investigative effort that attempts to identify the landmarks of merit, worth, and significance. Before fully refining the focus of any investigation or research, individuals should first determine the current state of affairs in their area of interest.
Researchers attempt to establish merit, worth, and significance applicable to other researchers, practitioners, and the body of knowledge. One of the most common ways to accomplish this is to conduct a literature review. A literature review allows one to take a “snapshot” of the established knowledge in a particular area. Using this snapshot, researchers can determine the merit of their research questions and how they may need to modify their research goals.
Key words: Electronic Health Records, Health Information Technology (HIT), Interoperability, Nursing
Required: a 3-page literature review drawing on SIX or more peer-reviewed articles that addresses the following:
· Synthesize the findings in the literature as they relate to the goal of INTEROPERABILITY within health information technology across various hospitals.
· Explain the original conclusions that you derived from the evidence gathered.
· Support the synthesis and conclusions using evidence from the literature.
I have attached a few articles for your review, but more articles focused on the topic still need to be found.
Rubric Detail
Name: NURS_6541_Week6_Assignment_Rubric (Total Points: 100)

Criterion 1: Conduct a review of literature relevant to the case you selected and the goals you developed in Week 5. Locate a minimum of six full-text research articles to use in your literature review.
· Excellent, 23 (23%)–25 (25%): Six appropriate articles are researched, and one research goal and one viewpoint are identified clearly, with specific detail regarding how the system implementation evaluations researched are similar to the selected model.
· Good, 20 (20%)–22 (22%): Six appropriate articles are researched, and one research goal and one viewpoint are identified with some detail regarding how the system implementation evaluations researched are similar to the selected model.
· Fair, 18 (18%)–19 (19%): Three appropriate articles are researched, and one research goal and one viewpoint are identified with details that are vague, inaccurate, or omitted regarding how the system implementation evaluations researched are similar to the selected model.
· Poor, 0 (0%)–17 (17%): Fewer than three articles are researched, and/or the articles are inappropriate. Research goals, viewpoints, or details regarding how the system implementation evaluations researched are similar to the selected model are vague, incomplete, or missing.

Criterion 2: In a 2- to 3-page paper, synthesize the findings in the literature as they relate to the case you selected and the goals you developed in Week 5.
· Excellent, 23 (23%)–25 (25%): The response clearly, accurately, and with specific detail synthesizes six research articles as they relate to the selected case and the goals developed.
· Good, 20 (20%)–22 (22%): The response synthesizes six research articles as they relate to the selected case and the goals developed.
· Fair, 18 (18%)–19 (19%): The response synthesizes three research articles with vague or inaccurate details regarding how they relate to the selected case and the goals developed.
· Poor, 0 (0%)–17 (17%): The response synthesizes fewer than three research articles, and/or provides vague, incomplete, or inaccurate details regarding how they relate to the selected case and the goals developed.

Criterion 3: Explain the original conclusions that you derived from the evidence you gathered.
· Excellent, 23 (23%)–25 (25%): The response clearly, accurately, and with specific detail explains the original conclusions derived from the evidence gathered.
· Good, 20 (20%)–22 (22%): The response explains the original conclusions derived from the evidence gathered.
· Fair, 18 (18%)–19 (19%): The response explains the original conclusions derived from the evidence gathered with vague or inaccurate details.
· Poor, 0 (0%)–17 (17%): The response explains the original conclusions derived from the evidence gathered with vague, inaccurate, and/or incomplete details.

Criterion 4: Support your synthesis and conclusions using evidence from the literature.
· Excellent, 9 (9%)–10 (10%): The response clearly, accurately, and with specific detail supports the synthesis and conclusions using evidence from the literature.
· Good, 8 (8%): The response supports the synthesis and conclusions using evidence from the literature.
· Fair, 7 (7%): The response supports the synthesis and conclusions in a vague or inaccurate manner and/or fails to appropriately apply evidence from the literature.
· Poor, 0 (0%)–6 (6%): The response supports the synthesis and conclusions in a vague, inaccurate, or incomplete manner and/or fails to apply evidence from the literature.

Criterion 5: Written Expression and Formatting (Paragraph Development and Organization). Paragraphs make clear points that support well-developed ideas, flow logically, and demonstrate continuity of ideas. Sentences are carefully focused, neither long and rambling nor short and lacking substance. A clear and comprehensive purpose statement and introduction are provided that delineate all required criteria.
· Excellent, 5 (5%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity. A clear and comprehensive purpose statement, introduction, and conclusion are provided that delineate all required criteria.
· Good, 4 (4%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity 80% of the time. The purpose, introduction, and conclusion of the assignment are stated, yet are brief and not descriptive.
· Fair, 3 (3%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity 60%–79% of the time. The purpose, introduction, and conclusion of the assignment are vague or off topic.
· Poor, 0 (0%)–2 (2%): Paragraphs and sentences follow writing standards for flow, continuity, and clarity less than 60% of the time. No purpose statement, introduction, or conclusion is provided.

Criterion 6: Written Expression and Formatting (English Writing Standards). Correct grammar, mechanics, and proper punctuation.
· Excellent, 5 (5%): Uses correct grammar, spelling, and punctuation with no errors.
· Good, 4 (4%): Contains a few (1 or 2) grammar, spelling, and punctuation errors.
· Fair, 3 (3%): Contains several (3 or 4) grammar, spelling, and punctuation errors.
· Poor, 0 (0%)–2 (2%): Contains many (≥ 5) grammar, spelling, and punctuation errors that interfere with the reader's understanding.

Criterion 7: Written Expression and Formatting (APA Format). The paper follows correct APA format for title page, headings, font, spacing, margins, indentations, page numbers, running heads, parenthetical/in-text citations, and reference list.
· Excellent, 5 (5%): Uses correct APA format with no errors.
· Good, 4 (4%): Contains a few (1 or 2) APA format errors.
· Fair, 3 (3%): Contains several (3 or 4) APA format errors.
· Poor, 0 (0%)–2 (2%): Contains many (≥ 5) APA format errors.

Total Points: 100
Research and Applications

Piloting a model-to-data approach to enable predictive analytics in health care through patient mortality prediction

Timothy Bergquist,1,* Yao Yan,2,* Thomas Schaffter,3 Thomas Yu,3 Vikas Pejaver,1 Noah Hammarlund,1 Justin Prosser,4 Justin Guinney,1,3 and Sean Mooney1

1 Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, USA; 2 Molecular Engineering and Sciences Institute, University of Washington, Seattle, Washington, USA; 3 Sage Bionetworks, Seattle, Washington, USA; 4 Institute for Translational Health Sciences, University of Washington, Seattle, Washington, USA

*These authors contributed equally.

Corresponding Author: Sean Mooney, PhD, Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA; [email protected]

Received 11 December 2019; Revised 16 April 2020; Editorial Decision 20 April 2020; Accepted 6 May 2020
ABSTRACT

Objective: The development of predictive models for clinical application requires the availability of electronic health record (EHR) data, which is complicated by patient privacy concerns. We showcase the "Model to Data" (MTD) approach as a new mechanism to make private clinical data available for the development of predictive models. Under this framework, we eliminate researchers' direct interaction with patient data by delivering containerized models to the EHR data.

Materials and Methods: We operationalize the MTD framework using the Synapse collaboration platform and an on-premises secure computing environment at the University of Washington hosting EHR data. Containerized mortality prediction models developed by a model developer were delivered to the University of Washington via Synapse, where the models were trained and evaluated. Model performance metrics were returned to the model developer.

Results: The model developer was able to develop 3 mortality prediction models under the MTD framework using simple demographic features (area under the receiver-operating characteristic curve [AUROC], 0.693), demographics and 5 common chronic diseases (AUROC, 0.861), and the 1000 most common features from the EHR's condition/procedure/drug domains (AUROC, 0.921).

Discussion: We demonstrate the feasibility of the MTD framework for facilitating the development of predictive models on private EHR data, enabled by common data models and containerization software. We identify challenges that both the model developer and the health system information technology group encountered and propose future efforts to improve implementation.

Conclusions: The MTD framework lowers the barrier of access to EHR data and can accelerate the development and evaluation of clinical prediction models.

Key words: electronic health records, clinical informatics, data sharing, privacy, data science
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]

Journal of the American Medical Informatics Association, 27(9), 2020, 1393–1400. doi: 10.1093/jamia/ocaa083. Advance Access Publication Date: 8 July 2020.
INTRODUCTION

Electronic health records and the future of data-driven health care

Healthcare providers substantially increased their use of electronic health record (EHR) systems in the past decade.1 While the primary drivers of EHR adoption were the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act and the data exchange capabilities of EHRs,2 secondary use of EHR data to improve clinical decision support and healthcare quality also contributed to large-scale adoption.3 EHRs contain a rich set of information about patients and their health history, including doctors' notes, medications prescribed, and billing codes.4 The prevalence of EHR systems in hospitals enables the accumulation and utilization of large clinical datasets to address specific clinical questions. Given the size and complexity of these data, machine learning approaches can provide insights in a more automated and scalable manner.5,6 Healthcare providers have already begun to implement predictive analytics solutions to optimize patient care, including models for 30-day readmissions, mortality, and sepsis.7 As hospitals improve the quality and quantity of data capture, opportunities for more granular and impactful prediction questions will become more prevalent.
Hurdles to clinical data access

Healthcare institutions face the challenge of balancing patient privacy and EHR data utilization.8 Regulatory policies such as the Health Insurance Portability and Accountability Act (HIPAA) and the HITECH Act place the onus and financial burden of ensuring the security and privacy of patient records on the healthcare institutions hosting the data. A consequence of these regulations is the difficulty of sharing clinical data within the research community. Research collaborations are often bound by highly restrictive data use agreements or business associate agreements limiting the scope, duration, quantities, and types of EHR data that can be shared.9 This friction has slowed, if not impeded, researchers' ability to build and test clinical models.9 While these data host–researcher relationships are important and lead to impactful collaborations, they are often limited to intrainstitution collaborations, relegating many researchers with no healthcare institution connections to smaller public datasets or inferior synthetic data. One exception is the patient-level prediction (PLP) working group in the Observational Health Data Sciences and Informatics community, which developed a framework for building and externally validating machine learning models.10 While the PLP group has successfully streamlined the process of externally validating model performance, there is still an assumption that model developers have direct access to an EHR dataset that conforms to the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM),11,12 on which they can develop their models. To support model building and testing more widely in the research community, new governance models and technological systems are needed that minimize the risk of patient reidentification while maximizing the ease of access to and use of clinical data.
Methods for sharing clinical data

De-identification of EHR data and the generation of synthetic EHR data are 2 solutions to enable clinical data sharing. De-identification methods focus on removing or obfuscating the 18 identifiers that make up protected health information as defined by HIPAA.13 De-identification reduces the risk of information leakage but may still leave a unique fingerprint of information that is susceptible to reidentification.13,14 De-identified datasets like MIMIC-III (Medical Information Mart for Intensive Care-III) are available for research and have led to innovative research studies.15–17 However, these datasets are limited in size (MIMIC-III includes only 38,597 distinct adult patients and 49,785 hospital admissions), scope (MIMIC-III is specific to intensive care unit patients), and availability (data use agreements are required to use MIMIC-III).

Generated synthetic data attempt to preserve the structure, format, and distributions of real EHR datasets but do not contain identifiable information about real patients.18 Synthetic data generators such as medGAN16 can generate EHR datasets consisting of high-dimensional discrete variables (both binary and count features), although the temporal information of each EHR entry is not maintained. Methods such as OSIM2 are able to maintain temporal information but only simulate a subset of the data specific to a use case (eg, drug and treatment effects).19 Synthea uses publicly available data to generate synthetic EHR data but is limited to the 10 most common reasons for primary care encounters and the 10 chronic diseases with the highest morbidity in the United States.20 To our knowledge, no existing method can generate an entire synthetic repository while preserving the complete longitudinal and correlational aspects of all features from the original clinical repository.
"Model to data" framework

The "Model to Data" (MTD) framework, a method designed to allow machine learning research on private biomedical data, was described by Guinney et al21 as an alternative to traditional data sharing methods. The focus of MTD is to enable the development of analytic tools and predictive models without granting researchers direct, physical access to the data. Instead, a researcher sends a containerized model to the data hosts, who are then responsible for running the model on the researcher's behalf. In contrast to the methods previously described, in which the shared or synthetic data were limited in both scope and size, an MTD approach grants a researcher the ability to use all available data from identified datasets, even as those data stay at the host sites, without giving the researcher direct access. This strategy protects confidential data while allowing researchers to leverage complete clinical datasets. The MTD framework relies on modern containerization software such as Docker22 or Singularity23 for model portability, which serves as a "vehicle," sending models designed by a model developer to a secure, isolated, and controlled computing environment where they can be executed on sensitive data. The use of containerization software not only facilitates the secure delivery and execution of models but also opens up the ability to integrate with cloud environments (eg, Amazon Web Services, Google Cloud) for cost-effective and scalable data analysis.

The MTD approach has been successful in a series of recent community challenges but has not yet been shown to work with large EHR datasets.24 Here, we present a pilot study of an MTD framework implementation enabling the intake and ingestion of containerized clinical prediction models by a large healthcare institution (the University of Washington health system, UW Medicine) into its on-premises secure computing infrastructure. The main goals of this pilot are to demonstrate (1) the operationalization of the MTD approach within a large health system, (2) the ability of the MTD framework to facilitate predictive model development by a researcher (here referred to as the model developer) who does not have direct access to UW Medicine EHR data, and (3) the feasibility of an MTD community challenge for evaluating clinical algorithms on remotely stored and protected patient data.
MATERIALS AND METHODS

Pilot data description

The UW Medicine enterprise data warehouse (EDW) includes patient records from medical sites across the UW Medicine system, including the University of Washington Medical Center, Harborview Medical Center, and Northwest Hospital and Medical Center. The EDW gathers data from over 60 sources across these institutions, including laboratory results, microbiology reports, demographic data, diagnosis codes, and reported allergies. An analytics team at the University of Washington transformed the patient records from 2010 to the present day into a standardized data format, OMOP CDM v5.0. For this pilot study, we selected all patients who had at least 1 visit in the UW OMOP repository, which represented 1.3 million patients, 22 million visits, 33 million procedures, 5 million drug exposure records, 48 million condition records, 10 million observations, and 221 million measurements.
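As a rough illustration of the cohort selection just described, the sketch below counts distinct patients with at least 1 visit in an OMOP CDM v5.0 repository. The table and column names (visit_occurrence, person_id) are standard OMOP; the database engine and connection string are placeholders, not the UW configuration.

```python
# Hedged sketch: counting the pilot cohort in an OMOP CDM v5.0 repository.
# visit_occurrence/person_id are standard OMOP names; the connection string
# below is a placeholder for illustration only.
import sqlalchemy as sa

engine = sa.create_engine("postgresql://user:pass@omop-host/edw")  # placeholder
with engine.connect() as conn:
    n_patients = conn.execute(sa.text("""
        SELECT COUNT(DISTINCT person_id)
        FROM visit_occurrence          -- one row per recorded visit
    """)).scalar()
print(f"Patients with at least 1 visit: {n_patients}")
```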
Scientific question for the pilot of the "model to data" approach

For this MTD demonstration, the scientific question we asked the model developer to address was the following: given the past electronic health records of a patient, predict the likelihood that the patient will die within the 180 days following his or her last visit. Patients who had a death record and whose last visit was within 180 days of the death date were defined as positives. Negatives were defined as patients whose death record was more than 180 days after the last visit, or who had no death record and whose last visit was at least 180 days prior to the end of the available data.
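A minimal sketch of this labeling rule, assuming per-patient last-visit and death dates are already extracted (the function and variable names are ours, not the paper's):

```python
# Sketch of the cohort labeling rule described above. last_visit and
# death_date are datetime.date values; death_date is None when no death
# record exists. END_OF_DATA is the repository cutoff defined in the next
# subsection.
from datetime import date, timedelta

END_OF_DATA = date(2019, 2, 24)   # last death record in the UW OMOP repository
WINDOW = timedelta(days=180)

def mortality_label(last_visit, death_date):
    """Return 1 (positive), 0 (negative), or None (cannot be labeled)."""
    if death_date is not None:
        # Positive if the last visit falls within 180 days before death
        return 1 if timedelta(0) <= death_date - last_visit <= WINDOW else 0
    # No death record: a negative label requires 180 days of follow-up
    if END_OF_DATA - last_visit >= WINDOW:
        return 0
    return None  # insufficient follow-up to assign a label
```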
We selected all-cause mortality as the scientific question because of the abundance and availability of patient outcomes from the Washington state death registry. Because UW has linked patient records with state death records, the gold standard benchmarks are not constrained to events happening within the clinic. Moreover, the mortality prediction question has been thoroughly studied.25–27 For these reasons, patient mortality prediction represents a well-defined proof-of-concept study to showcase the potential of the MTD evaluation platform.
Defining the training and evaluation datasets

For the purpose of this study, we split the data into 2 sets: the training and evaluation datasets. In a live healthcare setting, EHR data are constantly changing and evolving along with clinical practice, and prospective evaluation of predictive models is important to ensure that the clinical decision support recommendations generated from model predictions are robust to these changes. We defined the evaluation dataset as the patients who had more recently visited the clinic prior to our last death record, and the training dataset as all other patients. This way, the longitudinal properties of the data would be approximately maintained.

The last death record in the available UW OMOP repository at the time of this study was February 24, 2019. Any record or measurement found after this date was excluded from the pilot dataset, and this date was defined as the "end of data." When building the evaluation dataset, we considered the date 180 days prior to the end of data (August 24, 2018) to be the end of the "evaluation window," and the beginning of the evaluation window to be 9 months prior to the evaluation window end (November 24, 2017). We chose a 9-month evaluation window because this resulted in an 80/20 split between the training and evaluation datasets. We defined the evaluation window as the period of time in which, if a patient had a visit, we included that patient and all of their records in the evaluation dataset. Patients who had visits outside the window, but none within it, were included in the training data. Visit records that fell after the evaluation window end were removed from the evaluation dataset (Figure 1, patient 7) and from the training dataset for patients who did not have a confirmed death (Figure 1, patient 3). We defined true positives only for the evaluation dataset and created a gold standard of these patients' mortality status based on their last visit date and the death table. However, we gave the model developer the flexibility to select prediction dates for patients in the training dataset and to create corresponding true positives and true negatives for training purposes. See the Supplementary Appendix for additional information.

[Figure 1. Defining the evaluation dataset. Any patient with at least 1 visit within the evaluation window was included in the evaluation dataset (gold); all other patient records were added to the training dataset (blue). Visits after the evaluation window end were excluded from the evaluation dataset, and from the training dataset for patients who did not have a confirmed death (light/transparent blue). A 9-month evaluation window was chosen because it resulted in an 80/20 split between the training and evaluation datasets.]
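The window logic above reduces to a simple per-patient rule; the following sketch uses the dates stated in the text, while the per-patient record structure is our assumption:

```python
# Sketch of the train/evaluation split described above. Window dates come
# from the text; how visits are grouped per patient is an assumption.
from datetime import date

EVAL_WINDOW_START = date(2017, 11, 24)  # 9 months before the window end
EVAL_WINDOW_END = date(2018, 8, 24)     # 180 days before the end of data

def assign_split(visit_dates):
    """visit_dates: all of one patient's visit dates (datetime.date)."""
    if any(EVAL_WINDOW_START <= d <= EVAL_WINDOW_END for d in visit_dates):
        return "evaluation"
    return "training"
```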
Model evaluation pipeline

Docker containerized models

Docker is a tool designed to facilitate the sharing of software and dependencies in a single unit called an image.22 These images make package dependencies, language compilation, and environment variables easier to manage. This technology enables the simulation of an operating system that can be run on any computer that has the Docker engine or a compatible container runtime installed. These containers can also be completely isolated from the Internet and from the server on which they are hosted, an important feature when bringing unknown code in to process protected data. For this study, the model developer built mortality prediction Docker images, which included the dependencies and instructions for running the models in the Docker container.
Synapse collaboration platform

Synapse is an open-source software platform developed by Sage Bionetworks (Seattle, WA) for researchers to share data, compare and communicate their methodologies, and seek collaboration.28 Synapse is composed of a set of shared REST (representational state transfer)-based web services that support both a website to facilitate collaboration among scientific teams and integration with analysis tools and programming languages to allow computational interactions.29 The Synapse platform provides services that enable submission of files or Docker images to an evaluation queue; these queues have previously been used to manage containerized models submitted to DREAM challenges.28 We use an evaluation queue to manage the model developer's Docker image submissions.
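From the model developer's side, a submission might look like the following hedged sketch using the synapseclient Python package. The queue ID, entity ID, and image name are placeholders; the exact submission steps used in this pilot are not specified in the text.

```python
# Hedged sketch of a model developer's submission flow via synapseclient.
# All IDs ("9614112", "synXXXXXXX") and names are placeholders.
import synapseclient

syn = synapseclient.Synapse()
syn.login(authToken="...")  # personal access token (placeholder)

# The Docker image would first be pushed to the Synapse registry, eg:
#   docker push docker.synapse.org/synPROJECT/mortality-model:v1
evaluation = syn.getEvaluation("9614112")   # evaluation queue ID (placeholder)
repo = syn.get("synXXXXXXX")                # Docker repository entity (placeholder)
submission = syn.submit(evaluation, repo, name="mortality-model v1")
```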
Submission processing pipeline

To manage the Docker images submitted to the Synapse collaboration platform, we used a Common Workflow Language (CWL) pipeline developed at Sage Bionetworks. The CWL pipeline monitors an evaluation queue on Synapse for new submissions, automatically downloading and running the Docker image when a submission is detected. Executed commands are isolated from network access by the Docker containers run on UW servers.
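Conceptually, the monitoring loop looks something like the sketch below; the actual pipeline is written in CWL, not Python. synapseclient's getSubmissions call is real, but the submission field names, queue ID, volume paths, and docker invocation are illustrative assumptions, with --network none mirroring the network isolation described above.

```python
# Conceptual sketch only: the real pipeline is CWL. Submission field names,
# IDs, and host paths are assumptions for illustration.
import subprocess
import time

import synapseclient

syn = synapseclient.Synapse()
syn.login(authToken="...")                 # placeholder credentials
evaluation = syn.getEvaluation("9614112")  # placeholder queue ID

while True:
    for sub in syn.getSubmissions(evaluation, status="RECEIVED"):
        image = f"{sub['dockerRepositoryName']}@{sub['dockerDigest']}"
        subprocess.run(
            ["docker", "run", "--network", "none",  # no Internet access
             "-v", "/data/train:/train:ro",         # read-only EHR mount
             "-v", "/data/model:/model",            # trained-model output
             image],
            check=True,
        )
    time.sleep(60)  # poll the queue once a minute
```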
UW on-premises server infrastructure

We installed this workflow pipeline in a UW Medicine environment running Docker v1.13.1. UW Research Information Technology uses CentOS 7 (Red Hat Linux) for its platforms. The OMOP data were stored in this environment, completely isolated behind UW's firewalls. The workflow pipeline was configured to run up to 4 models in parallel, with each model having access to 70 GB of RAM, 4 vCPUs, and 50 GB of SSD storage.
Institutional review board considerations

We received an institutional review board (IRB) nonhuman subjects research designation from the University of Washington Human Subjects Research Division to construct a dataset derived from all patient records from the EDW that had been converted to the OMOP v5.0 Common Data Model (IRB number: STUDY00002532). Data were extracted by an honest broker, the UW Medicine Research IT data services team, and no patient identifiers were available to the research team. The model developer had no access to the UW data.
RESULTS

Model development, submission, and evaluation

For this demonstration, a model developer built a dockerized mortality prediction model. The model developer was a graduate student from the University of Washington who did not have access to the UW OMOP clinical repository. The model was first tested by the model developer on a synthetic dataset (SynPUF)30 to ensure that it did not fail when accessing data, training, and making predictions. The model developer submitted the model as a Docker image to Synapse via a designated evaluation queue, through which the Docker image was uploaded to a secure Docker Hub cloud storage service managed by Sage Bionetworks. The CWL pipeline in the UW secure environment detected this submission and pulled the image into the UW computing environment. Once in the secure environment, the pipeline verified, built, and ran the image through 2 stages: training and inference. During the training stage, a model was trained and saved to the mounted volume "model"; during the inference stage, a "predictions.csv" file was written to the mounted volume "output" with a mortality probability score (between 0 and 1) for each patient in the evaluation dataset (Figure 2). Each stage had a mounted volume "scratch" available for storing intermediate files such as selected features (Figure 2). The model developer specified the commands and dependencies (eg, Python packages) for the 2 stages in the Dockerfile, train.sh, and infer.sh. The training and evaluation datasets were mounted to read-only volumes designated "train" and "infer" (Figure 2).

[Figure 2. Schema showing the Docker container structure for the training and inference stages of running the Docker image.]
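As an illustration of the inference stage, the sketch below follows the mounted-volume layout in Figure 2. Only the volume names ("model", "output", "scratch") come from the text; the model object (assumed scikit-learn-style, for predict_proba), file names, and elided feature extraction are our assumptions.

```python
# Hedged sketch of an inference-stage script that infer.sh might invoke.
# Volume names follow Figure 2; everything else is illustrative.
import pickle

import pandas as pd

with open("/model/model.pkl", "rb") as f:        # written by the train stage
    model = pickle.load(f)

# Feature extraction from the read-only /infer volume is elided; assume
# features were staged to /scratch during preprocessing.
features = pd.read_csv("/scratch/eval_features.csv", index_col="person_id")
scores = model.predict_proba(features)[:, 1]     # P(death within 180 days)

pd.DataFrame({"person_id": features.index, "score": scores}).to_csv(
    "/output/predictions.csv", index=False
)
```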
After checking that the "predictions.csv" file had the proper format …
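Per the Abstract, the performance metric returned to the model developer was the area under the receiver-operating characteristic curve (AUROC). A hedged sketch of a format check and AUROC scoring step of this kind, with hypothetical file paths and column names:

```python
# Hedged sketch of a format check and AUROC scoring step, as suggested by
# the metrics reported in the Abstract. Paths and columns are assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

preds = pd.read_csv("/output/predictions.csv")  # columns: person_id, score
gold = pd.read_csv("/gold/goldstandard.csv")    # columns: person_id, died_180d

# Format check: expected columns, probabilities in [0, 1], one row per patient
assert {"person_id", "score"} <= set(preds.columns)
assert preds["score"].between(0, 1).all()
assert preds["person_id"].is_unique

merged = gold.merge(preds, on="person_id", how="left").fillna({"score": 0.0})
print("AUROC:", roc_auc_score(merged["died_180d"], merged["score"]))
```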