Health Information System
PLEASE USE THE READING RESOURCES, INCLUDING OTHER OUTSIDE RESOURCES, TO COMPLETE THE ASSIGNMENT.
Please provide in-text citations for all references used.
Instructions:
Based on this week’s readings and the safety measures identified in the text, respond to the following in 600 words:
1. What quality improvement measures would you suggest to enhance patient safety and quality within the EHR?
2. What role do you see the APN playing in Quality and Patient Safety initiatives?
3. Explain from a provider perspective what data you would utilize within quality and patient safety initiatives.
READING
McBride and Tietze (2022)
∙ Chapter 9: Workflow Redesign in Quality Improvement
∙ Chapter 20: HIT and Implications for Patient Safety
∙ Chapter 21: Quality Improvement Strategies
Additional resources
∙ Tubaishat, A. (2019). The effects of electronic health records on patient safety. Available at: https://northernkentuckyuniversity.idm.oclc.org/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=asn&AN=141206600
∙ Cohen, D. J., et al. (2018). Primary care practices’ abilities and challenges in using electronic health record data for quality improvement. Health Affairs, 37(4), 635–643.
By Deborah J. Cohen, David A. Dorr, Kyle Knierim, C. Annette DuBard, Jennifer R. Hemler, Jennifer D. Hall, Miguel Marino, Leif I. Solberg, K. John McConnell, Len M. Nichols, Donald E. Nease Jr., Samuel T. Edwards, Winfred Y. Wu, Hang Pham-Singer, Abel N. Kho, Robert L. Phillips Jr., Luke V. Rasmussen, F. Daniel Duffy, and Bijal A. Balasubramanian
Primary Care Practices’ Abilities And Challenges In Using Electronic Health Record Data For Quality Improvement
ABSTRACT Federal value-based payment programs require primary care practices to conduct quality improvement activities, informed by the electronic reports on clinical quality measures that their electronic health records (EHRs) generate. To determine whether EHRs produce reports adequate to the task, we examined survey responses from 1,492 practices across twelve states, supplemented with qualitative data. Meaningful-use participation, which requires the use of a federally certified EHR, was associated with the ability to generate reports—but the reports did not necessarily support quality improvement initiatives. Practices reported numerous challenges in generating adequate reports, such as difficulty manipulating and aligning measurement time frames with quality improvement needs, lack of functionality for generating reports on electronic clinical quality measures at different levels, discordance between clinical guidelines and measures available in reports, questionable data quality, and vendors that were unreceptive to changing EHR configuration beyond federal requirements. The current state of EHR measurement functionality may be insufficient to support federal initiatives that tie payment to clinical quality measures.
Since 2008, adoption of office-based physician electronic health records (EHRs) has more than doubled.1 Federal investment played a critical role in accelerating EHR adoption through a combination of financial incentives (the EHR Incentive Program) and technical assistance programs (Regional Extension Centers).2–6 The expectation was that widespread adoption of EHRs would efficiently generate meaningful data, enabling accurate measurement of quality, informing practice quality improvement efforts, and ultimately leading to improved care processes and outcomes. Yet little is known about how well EHRs meet these expectations, particularly among primary care practices with scarce technical resources.7–11

The EHR Incentive Program set standards for the meaningful use of EHRs, which included implementing an EHR system and demonstrating its use to improve care. There were seventeen core standards defined in stages 1 and 2 of the meaningful-use program (2015–17). Stage 3 began in 2017 and expanded the requirements to include health information exchange, interoperability, and advanced quality measurement to maximize clinical effectiveness and efficiency by supporting quality improvement. As of 2017 the EHR Incentive Program defined sixty-four electronic clinical quality measures12 that are aligned with national quality standards.
doi: 10.1377/hlthaff.2017.1254. Health Affairs 37, no. 4 (2018): 635–643.
Deborah J. Cohen is a professor of family medicine and vice chair of research in the Department of Family Medicine at Oregon Health & Science University, in Portland.
David A. Dorr is a professor and vice chair of medical informatics and clinical epidemiology, both at Oregon Health & Science University.
Kyle Knierim is an assistant research professor of family medicine and associate director of the Practice Innovation Program, both at the University of Colorado School of Medicine, in Aurora.
C. Annette DuBard is vice president of Clinical Strategy at Aledade, Inc., in Bethesda, Maryland.
Jennifer R. Hemler is a research associate in the Department of Family Medicine and Community Health, Research Division, Rutgers Robert Wood Johnson Medical School, in New Brunswick, New Jersey.
Jennifer D. Hall is a research associate in family medicine at Oregon Health & Science University.
Miguel Marino is an assistant professor of family medicine at Oregon Health & Science University.
Leif I. Solberg is a senior adviser and director for care improvement research at HealthPartners Institute, in Minneapolis, Minnesota.
The rationale behind using these measures was to reduce the need for clinicians’ involvement in reporting by using data already collected within the EHR and automating the electronic submission of results.

Quality measurement for payment grew with the 2006 implementation of the Physician Quality Reporting System, as an increasing number of clinicians and practices reported their quality data electronically. In 2016 the Quality Payment Program6 was developed as a way to streamline quality reporting programs while expanding the expectations of electronic reporting as defined by the Merit-based Incentive Payment Program. A core expectation of meaningful use and the subsequent Quality Payment Program was for EHRs to have the capability to measure and report electronic clinical quality measures and for practices to use these data to improve quality. To that end, the Office of the National Coordinator for Health Information Technology (ONC) worked with the Centers for Medicare and Medicaid Services (CMS) and stakeholders to establish a set of certification criteria for EHRs. Use of an ONC-certified EHR was a core requirement of meaningful use. The functionality of certified EHR systems’ reporting of electronic clinical quality measures was aligned with CMS-based incentives and quality criteria; it is anticipated that these quality-based incentives will continue as meaningful use evolves into the Quality Payment Program.

Clinicians participating in the Quality Payment Program were required to report on the full 2017 performance period by March 31, 2018. In addition to meeting external reporting requirements, EHRs must help practices identify delivery gaps and “bright spots” of performance13,14 that are critical for quality improvement. This requires the ability to produce patient-, clinician-, and practice-level reports across various measurement periods and at different frequencies and to allow for customized specifications to conduct improvement cycles.15
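To make the multi-level reporting requirement concrete, the sketch below computes one clinical quality measure (blood pressure control, a variant of the "B" in the ABCS measures) at the patient, clinician, and practice levels over a caller-specified measurement period. It is a minimal illustration in Python/pandas against a hypothetical flat extract; the column names (patient_id, clinician_id, practice_id, visit_date, bp_controlled) are invented assumptions, not any vendor's actual schema or the study's method.

```python
import pandas as pd

def bp_control_report(visits: pd.DataFrame, start: str, end: str) -> dict:
    """Compute a blood-pressure-control measure over a custom period.

    `visits` is a hypothetical flat EHR extract with one row per visit:
    patient_id, clinician_id, practice_id, visit_date (ISO string),
    bp_controlled (True if BP was controlled at that visit).
    """
    window = visits[(visits["visit_date"] >= start) & (visits["visit_date"] <= end)]

    # Patient level: use each patient's most recent visit in the window.
    latest = (window.sort_values("visit_date")
                    .groupby("patient_id", as_index=False)
                    .last())

    # Clinician and practice levels: share of patients controlled.
    by_clinician = latest.groupby("clinician_id")["bp_controlled"].mean()
    by_practice = latest.groupby("practice_id")["bp_controlled"].mean()

    return {
        "patients_needing_outreach": latest.loc[~latest["bp_controlled"], "patient_id"].tolist(),
        "rate_by_clinician": by_clinician.to_dict(),
        "rate_by_practice": by_practice.to_dict(),
    }

# Example: a quarterly improvement cycle instead of the fixed calendar
# year that many certified EHRs defaulted to.
demo = pd.DataFrame({
    "patient_id":    [1, 1, 2, 3],
    "clinician_id":  ["a", "a", "a", "b"],
    "practice_id":   ["p1", "p1", "p1", "p1"],
    "visit_date":    ["2017-01-10", "2017-03-02", "2017-02-14", "2017-03-20"],
    "bp_controlled": [False, True, False, True],
})
print(bp_control_report(demo, "2017-01-01", "2017-03-31"))
```

The same function run with a different start and end date is what "repeated as desired" reporting looks like in practice; the paper's point is that many certified EHRs could not do even this without vendor involvement.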
EHR systems often fail to meet these expectations, but it is unclear whether this is because of implementation differences, providers’ lack of knowledge about capabilities, or lack of capabilities in the EHRs themselves.16–18

We explore how well EHRs—as currently implemented—meet the measurement-related quality improvement needs in primary care practice. To do so, we examined survey data from 1,492 practices and combined this information with qualitative data to gain a richer answer than surveys alone could provide. Our findings highlight the challenges that practices face as value-based payment replaces volume-based systems.
Study Data And Methods

Study Design And Cohort: In 2015 the Agency for Healthcare Research and Quality (AHRQ) launched EvidenceNOW: Advancing Heart Health in Primary Care. EvidenceNOW is a three-year initiative dedicated to helping small and medium-size primary care practices across the US use the latest evidence to improve cardiovascular health and develop their capacity for ongoing improvement. AHRQ funded seven grantees (called cooperatives) that span seven US regions (and twelve states). Cooperatives were tasked with developing and leveraging sustainable infrastructure to support over 200 practices in their regions in improving electronic clinical quality measures endorsed by CMS and the National Quality Forum for aspirin use,19 blood pressure monitoring,20 cholesterol management,21 and smoking screening and cessation support22 (the ABCS measures).

AHRQ also funded an evaluation of this initiative called Evaluating System Change to Advance Learning and Take Evidence to Scale (ESCALATES) to centralize, harmonize, collect, and analyze mixed-methods data with the goal of generating cross-cooperative, generalizable findings.23 ESCALATES started at the same time the cooperatives’ work began. The goals of ESCALATES included identifying facilitators of and barriers to implementing regionwide infrastructure to support quality improvement among primary care practices, of which health information technology (IT) was a central component.

Data Sources: ESCALATES compiled quantitative survey data collected by the cooperatives from the 1,492 practices. While cooperative study designs (for example, stepped wedge, group randomized trials) varied, all cooperatives used their first year (May 2015–April 2016) for recruitment and start-up activities, and all staggered the time at which practices received the intervention. Survey data were collected from practices before the start of the intervention (that is, at baseline), which ranged from September 2015 to April 2017. We collected complementary qualitative data (observation, interview, and online diary) for this study in the period May 2015–April 2017.23 We chose this time period because it gave us exposure to the data issues that manifested themselves during start-up and implementation.

Qualitative Data Collection And Management: We conducted two site visits with every cooperative. The first site visit occurred before implementation of the intervention (August 2015–March 2016) and focused on understanding the cooperative, its partners, regional resources (including EHR and data capacities), and approach to supporting large-scale practice improvement.
K. John McConnell is a professor of emergency medicine and director of the Center for Health Systems Effectiveness, both at Oregon Health & Science University.
Len M. Nichols is director of the Center for Health Policy Research and Ethics and a professor of health policy at George Mason University, in Fairfax, Virginia.
Donald E. Nease Jr. is an associate professor of family medicine at the University of Colorado School of Medicine, in Aurora.
Samuel T. Edwards is an assistant research professor of family medicine and an assistant professor of medicine at Oregon Health & Science University and a staff physician in the Section of General Internal Medicine, Veterans Affairs Portland Health Care System.
Winfred Y. Wu is clinical and scientific director in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene, in Long Island City, New York.
Hang Pham-Singer is senior director of quality improvement in the Primary Care Information Project at the New York City Department of Health and Mental Hygiene.
Abel N. Kho is an associate professor and director of the Center for Health Information Partnerships, Northwestern University, in Chicago, Illinois.
Robert L. Phillips Jr. is vice president for research and policy at the American Board of Family Medicine, in Washington, D.C.
Luke V. Rasmussen is a clinical research associate in the Department of Preventive Medicine, Northwestern University.
F. Daniel Duffy is professor of medical informatics and internal medicine at the University of Oklahoma School of Community Medicine–Tulsa.
The second site visit was conducted during implementation of the intervention (July 2016–April 2017) and focused on observing practice facilitators work with practices. We observed forty-one facilitators conducting sixty unique practice quality improvement visits. During site visits we took field notes and conducted and recorded (and later transcribed the recordings of) semistructured interviews with key stakeholders (for example, investigators, facilitators, and health IT experts).

To supplement observation and interview data, we attended and took notes at a meeting of an AHRQ-initiated cooperative work group to discuss health IT challenges, and we implemented an online diary24 for each cooperative that included documentation by key stakeholders (such as investigators, health IT experts, and facilitators) of implementation experiences in real time (approximately twice a month).

Online diary data, interviews, meeting notes, and field notes were deidentified for individual participants and reviewed for accuracy. To confirm our findings, cooperative representatives completed a table that characterized obstacles to using EHR data for quality improvement. We used Atlas.ti for data management and analysis. The Oregon Health & Science University Institutional Review Board approved and monitored this study.

Survey Measures: Cooperatives administered a survey to all of their practices. The survey, completed by a lead clinician or practice manager, consisted of a subset of questions from the National Ambulatory Medical Care Survey’s Electronic Medical Records Questionnaire25–28 and assessed practice characteristics, EHR characteristics,29 and reporting capabilities (see online appendix exhibit A1 for survey items).30

Qualitative Data Analysis: Three authors (Deborah Cohen, Jennifer Hemler, and Jennifer Hall) analyzed qualitative data in real time following an immersion-crystallization approach31 and coded data to identify text related to clinical quality measurement, quality improvement, and EHRs. We analyzed data within and across cooperatives to identify nuanced findings and variations regarding usage of EHRs for quality improvement. Data collection and analysis were iterative; initial findings prompted additional questions that were later answered in the online diaries and during site visits to cooperatives.32 We triangulated data with other sources, discussing differences until we reached saturation—the point at which no new findings emerged.32 Qualitative findings informed the selection of variables for quantitative analyses, and both quantitative and qualitative data informed interpretations.
Quantitative Data Analysis: Two authors (Bijal Balasubramanian and Miguel Marino) used descriptive statistics to characterize the EvidenceNOW practice sample and used multivariable logistic regression to evaluate the association between practice characteristics and EHR reporting capability, measured as a “yes” or “no” response to the following question: “Does your practice have someone who can configure or write quality reports from the EHR?” Indicator variables for cooperatives were included in the logistic model to account for regional variability, and we used multiple imputation by chained equations to account for missing data (see appendix exhibit A2).30 We performed statistical analyses using R, version 3.4.0.
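The authors ran their models in R; purely as an illustration of the same analytic pattern (chained-equations-style imputation of missing values, followed by multivariable logistic regression with coefficients exponentiated into odds ratios), here is a minimal sketch in Python with scikit-learn and synthetic data. The feature names are invented stand-ins for practice characteristics of the kind listed in exhibit 1, not the study's actual variables or results.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic practice-level predictors (hypothetical stand-ins only).
X = pd.DataFrame({
    "meaningful_use_stage2": rng.integers(0, 2, n).astype(float),
    "system_owned":          rng.integers(0, 2, n).astype(float),
    "solo_practice":         rng.integers(0, 2, n).astype(float),
})
# Outcome: can the practice generate quality reports from its EHR?
y = (rng.random(n) < 0.5 + 0.15 * X["meaningful_use_stage2"]
                         - 0.10 * X["solo_practice"]).astype(int)
# Inject ~10% missingness in one predictor, as surveys often have.
X.loc[rng.random(n) < 0.1, "meaningful_use_stage2"] = np.nan

# Iterative (chained-equations-style) imputation of missing values.
X_imputed = IterativeImputer(random_state=0).fit_transform(X)

# Multivariable logistic regression; a very large C approximates the
# unpenalized model, so exp(coef) reads as an odds ratio.
model = LogisticRegression(C=1e6).fit(X_imputed, y)
for name, or_ in zip(X.columns, np.exp(model.coef_[0])):
    print(f"{name}: OR = {or_:.2f}")
```

In the actual study the imputation was repeated across multiple datasets and the estimates pooled, which this single-imputation sketch deliberately simplifies.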
Limitations: Our study had several limitations. First, our findings may have underestimated the challenges that practices face in using EHRs for quality measurement, as the practices recruited to participate in EvidenceNOW may have self-selected based on their greater quality improvement and health IT confidence.

Second, our understanding of practices’ challenges in using EHRs for quality measurement was based on the views of cooperative experts and does not necessarily represent the practices’ perspectives. Thus, we were unable to quantify the extent to which practices experienced these problems. Yet it is from the cooperatives’ vantage point that we identified problems that are often difficult to characterize using practice-level surveys, and it may be that solutions are most effective at the regional rather than practice level.

Third, our primary survey outcome—the response to the question “Does your practice have someone who can configure or write quality reports from the EHR?”—combines workforce and reporting capacity in a single item. While it might be preferred to parse these issues in separate items, we did not do this because of concerns about response burden. Our qualitative data suggest that directing more survey questions to practices might not have been useful, since practices lack staff with the expertise to answer more technically complex questions. Data collected from cooperatives’ health IT experts complemented practice survey data, shedding light on this complex issue.

Fourth, our study findings were also limited by our inability to identify whether some EHRs faced more or fewer challenges than others, and by the fact that some survey items had more than 10 percent missing data. However, our conclusions were based on one of the largest studies of geographically dispersed primary care practices, and the use of multiple imputation leveraged this scale to minimize potential bias due to missing data.
Bijal A. Balasubramanian is an associate professor in the Department of Epidemiology, Human Genetics, and Environmental Sciences, and regional dean of UTHealth School of Public Health, in Dallas, Texas.
Study Results

Of the 1,710 practices recruited to EvidenceNOW, 1,492 (87.3 percent) completed the practice survey. The majority of these practices had ten or fewer clinicians (84 percent), were located in urban or suburban areas (71 percent), and were owned by clinicians (40 percent) or hospital/health systems (23 percent) (exhibit 1). Over 93 percent used EHRs, of which 81 percent were certified by the ONC for 2014. While sixty-eight different EHRs were represented, Epic, eClinicalWorks, and NextGen were the most commonly used systems. The number of different EHR systems among practices within a cooperative ranged from four to thirty-two. Sixty percent of practices participated in stages 1 and 2 of meaningful use. (More detailed findings are in exhibit 1 and appendix exhibit A2.)30

Challenges Using Electronic Clinical Quality Measures For Quality Improvement: Practices and quality improvement facilitators experienced significant challenges using EHRs to generate tailored reports of electronic clinical quality measures for quality improvement, which led to substantial delays in reporting quality measures and engaging in measurement-informed quality improvement activities (exhibit 2).
Exhibit 1: Selected characteristics and electronic health record (EHR) system capacity of 1,492 EvidenceNOW practices

Characteristic                                         Number   Percent   Range across cooperatives (%)

Practice size (number of clinicians)
  1                                                    356      23.9      6.2–52.4
  2–5                                                  696      46.6      16.2–59.1
  6–10                                                 205      13.7      6.8–17.2
  11 or more                                           160      10.7      1.9–23.4

Practice ownership
  Clinician                                            603      40.4      27.8–72.8
  Hospital/health system                               342      22.9      1.6–53.9
  Federal(a)                                           322      21.6      8.4–42.7
  Academic                                             19       1.3       0.0–5.8
  Other or none(b)                                     147      9.9       1.0–38.8

Practice location(c)
  Urban                                                948      63.5      34.9–100.0
  Suburban                                             107      7.2       0.0–14.8
  Large town                                           202      13.5      0.0–29.5
  Rural area                                           235      15.8      0.0–27.9

Electronic health record characteristics
  Practices using ONC-certified EHR (n=1,490)          1,215    81.5      58.9–100.0
  Participation in meaningful use (n=1,490)
    Neither stage 1 nor stage 2                        230      15.4      8.4–23.8
    Stage 1 only                                       176      11.8      5.3–20.7
    Stages 1 and 2                                     887      59.5      38.0–84.5

Clinical quality measure reporting capability
  Produced a CQM in prior 6 months(d) (n=1,281)
    Aspirin                                            616      48.1      30.9–65.0
    Blood pressure                                     817      63.8      43.5–78.8
    Smoking                                            868      67.8      48.7–80.8
    All three                                          596      46.5      29.8–64.2
  Report CQMs at practice level(d) (n=1,069)           897      84.0      52.7–95.7
  Report CQMs at provider level(d) (n=1,069)           903      84.5      55.2–94.7
  Ability to create CQM reports from EHR(e) (n=1,490)  913      61.3      37.2–75.2

SOURCE: Authors’ analysis of data from the EvidenceNOW practice survey. NOTES: Percentages might not sum to 100 because of missing data. Denominators for some variables depend on survey skip logic. ONC is Office of the National Coordinator for Health Information Technology. (a) Includes federally qualified health centers; rural health clinics; Indian Health Service clinics; and Veterans Affairs, military, Department of Defense, or other federally owned practices. (b) Includes practices with nonfederal, private/nonclinician, or tribal ownership; those indicating “other” without specifying an ownership type; and those responding “no” to every other ownership type. (c) Location categories determined using rural-urban commuting area codes. (d) One cooperative was excluded from the analysis because questions about reporting clinical quality measures (CQMs) were not included in their survey. (e) More than 15 percent of the practices had missing data.
Generating Reports Of Electronic Clinical Quality Measures For Quality Improvement: Practices participating in stages 1 and 2 of meaningful use were more likely to report being able to generate reports of electronic clinical quality measures at the practice and clinician levels, compared to practices not participating (odds ratio: 1.65) (exhibit 3). Similarly, practices participating in quality improvement demonstration projects or in external payment programs that incentivized quality measurement had 51–73 percent higher odds of reporting an ability to generate reports of electronic clinical quality measures (exhibit 3). Facilitators and health IT experts working directly with practices noted that practices could produce reports that complied with meaningful use. However, EHR reporting tools did not meet practices’ needs for quality improvement measurement.
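For readers less familiar with the statistic: an odds ratio compares the odds of the outcome (here, being able to generate eCQM reports) between two groups, so "51–73 percent higher odds" corresponds to odds ratios of 1.51–1.73. A brief definition with a worked reading of the reported 1.65 (the illustrative probabilities below are not from the study):

```latex
% Odds ratio comparing meaningful-use participants (group 1)
% to non-participants (group 0):
\[
\mathrm{OR} \;=\; \frac{p_{1}/(1-p_{1})}{p_{0}/(1-p_{0})} \;=\; 1.65
\]
% Illustrative arithmetic: if p_0 = 0.50, non-participants' odds are 1.0,
% so participants' odds are 1.65 and p_1 = 1.65/(1+1.65) \approx 0.62.
```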
Practices reported needing reports with customizable time frames, which could be repeated as desired, to align with quality improvement activities. Cooperative experts reported that some ONC-certified EHRs, as implemented, could generate Physician Quality Reporting System or meaningful-use clinical quality reports only for a calendar year. When functions were available to customize measurement periods, significant manual configuration or additional modules were required. According to a report on measurement challenges from cooperative 3, “out of the box tools are inadequate to use for routine quality improvement. This necessitated working with vendors to deploy reports in the linked reporting tool, which required expertise in database query writing, which is almost universally absent from the skillset of staff at independent small practices.”
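The "database query writing" the cooperative describes is typically a parameterized query like the one below. This is a minimal, hypothetical sketch using Python's built-in sqlite3 and an invented `visits` table (patient_id, visit_date, smoking_screened); real EHR back ends are vendor-specific and far messier, which is exactly why this skill gap matters for small practices.

```python
import sqlite3

# Hypothetical schema and toy data; real EHR databases differ by vendor.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (patient_id INTEGER, visit_date TEXT, smoking_screened INTEGER);
    INSERT INTO visits VALUES (1, '2017-01-15', 1), (2, '2017-02-03', 0),
                              (3, '2017-06-20', 1), (2, '2017-03-11', 0);
""")

def smoking_screening_rate(start: str, end: str) -> float:
    """Share of patients screened for smoking in an arbitrary window,
    rather than the fixed calendar year many certified EHRs defaulted to."""
    row = conn.execute(
        """
        SELECT AVG(screened) FROM (
            SELECT patient_id, MAX(smoking_screened) AS screened
            FROM visits
            WHERE visit_date BETWEEN ? AND ?
            GROUP BY patient_id
        )
        """,
        (start, end),
    ).fetchone()
    return row[0]

print(smoking_screening_rate("2017-01-01", "2017-03-31"))  # first quarter only
print(smoking_screening_rate("2017-01-01", "2017-12-31"))  # full year
```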
Exhibit 2: Challenges with using electronic health records (EHRs) for quality measurement and improvement

General challenges

Challenge: Inability to produce clinical quality reports that align with quality improvement needs
Specific problems:
- ONC-certified EHRs for meaningful use do not provide customizable measure specifications, date ranges, and frequency of reports.
- Vendors are resistant to making changes to EHRs beyond what is required for ONC certification and meaningful use, and any changes are expensive and take too much time to deliver.
- Most practices lack the technical expertise to extract and prepare data and cannot afford external consultants.

Challenge: Inability to produce clinical quality reports at practice, clinical team, clinician, and patient levels
Specific problems:
- Most EHRs lack this functionality, which is necessary to compare clinicians and produce lists of patients in need of services or of services needed by individual patients.
- Purchasing this functionality is an upgrade expense that smaller practices cannot afford. When this functionality is present, smaller primary care practices usually lack the necessary health IT expertise to make use of these tools.

Challenge: Data from EHR reports are not credible or trustworthy
Specific problems:
- EHR design features lead to suboptimal documentation of clinical quality measures (for example, EHRs lack consistent or obvious places to document the measures).
- Clinical team documentation behavior leads to incomplete extraction of clinical quality variables.

Challenge: Delays in modifying specifications when guidelines or measures change
Specific problems:
- Delays in government revision of value sets after changes occur.
- Delays in vendor programmatic changes per value set changes.
- Delays in practice EHR upgrades.

Challenges in developing regional data infrastructure (data warehouses, hubs, exchanges)

Challenge: Cooperatives developing regional data infrastructure encounter developmental delays
Specific problems:
- Vendors charge excessive fees for connecting practices to a data warehouse, hub, or health information exchange.
- Vendors are unresponsive and “drag their heels” when working with cooperatives to create connections.
- Vendors exclude information from continuity-of-care documents that is critical to calculating clinical quality measures.
- Vendor tools for exporting batches of the documents are slow, making the documents difficult to export.
- Data exported in batches of the documents lack credibility and trustworthiness for the reasons listed above.

Challenge: Inability to benchmark performance because data extracted from different EHRs are not comparable (see the sketch after this exhibit)
Specific problems:
- Variations in EHR system versions and implementations.
- Vendors make different decisions about what fields or codes to include when calculating clinical quality measures.

SOURCE: Authors’ analysis of qualitative data from EvidenceNOW practices. NOTES: Continuity-of-care documents are explained in the text. ONC is Office of the National Coordinator for Health Information Technology. IT is information technology.
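The comparability problem in the last row of exhibit 2 often comes down to different vendors exporting the same clinical fact under different field names and code systems. The sketch referenced in that row shows one common mitigation: mapping each vendor's export into a shared schema before computing a measure. Everything here (vendor row shapes, field names, codes) is invented for illustration; the SNOMED-style smoking-status codes are examples, not a complete value set.

```python
# Hypothetical exports of the same fact (current smoking status) from two
# EHR vendors, using different field names and coding conventions.
vendor_a_rows = [{"pid": 1, "tobacco_use": "current"},
                 {"pid": 2, "tobacco_use": "never"}]
vendor_b_rows = [{"patient": 3, "smoking_code": "449868002"},   # SNOMED-style: daily smoker
                 {"patient": 4, "smoking_code": "266919005"}]   # SNOMED-style: never smoked

# Per-vendor mappings into one shared schema: (patient_id, is_current_smoker).
def normalize_a(row):
    return {"patient_id": row["pid"],
            "is_current_smoker": row["tobacco_use"] == "current"}

def normalize_b(row):
    return {"patient_id": row["patient"],
            "is_current_smoker": row["smoking_code"] == "449868002"}

harmonized = ([normalize_a(r) for r in vendor_a_rows]
              + [normalize_b(r) for r in vendor_b_rows])

# Only after harmonization is a cross-practice benchmark meaningful.
rate = sum(r["is_current_smoker"] for r in harmonized) / len(harmonized)
print(f"current-smoker rate across both sources: {rate:.0%}")
```

The hard part in practice is not this transformation but discovering, for each EHR version and implementation, which fields and codes actually carry the fact, which is the variation the cooperatives reported.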
EHR vendors charged extra fees to access these tools, and smaller practices could not pay for this assistance. Additionally, some EHRs could generate meaningful-use metrics only for patients with Medicare or Medicaid coverage (often a minority of practice patients). Many vendors were resistant to making software changes beyond what was required for Physician Quality Reporting System or meaningful-use reporting. Thus, most practices were unable to query EHR data for measurement in rapid-cycle tests of change.

Practices owned by health/hospital systems had higher odds of reporting the ability to generate reports of electronic clinical quality measures, compared to clinician-owned practices (OR: 2.88), while solo and rural practices were less likely than practices with six or more physicians and those in urban areas to report being able to generate such reports (exhibit 3). Complementary qualitative data showed that system-owned practices had greater health IT and data capability than solo and rural practices did, but these resources were centralized. These practices and facilitators experienced substantial and repeated delays in getting access to data needed for quality improvement, as organizational priorities took precedence (particularly when tied to payment), and their experts were overwhelmed with other demands.
New Clinical Guidelines: Quality measurement was complicated by changes in clinical guidelines. The American College of Cardiology and American Heart Association guidelines on cardiovascular disease risk changed dramatically in 2013.33 At the start of EvidenceNOW in 2015, measurements for the A, B, and S parts of the ABCS measures were routinely part of the Physician Quality Reporting System. However, CMS did not publish the criteria for the C part (the cholesterol measure) until May 4, 2017. The measure chosen for the EvidenceNOW initiative matched the 2013 guideline, but lack of a complementary official CMS measure meant that no EHR had yet implemented a similar measure in their system. Some practices created their own measures based on all or part of the new guidelines to inform quality improvement, but this was not useful for benchmarking.
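A practice-built cholesterol measure of the kind described might look like the sketch below: a numerator and denominator computed directly from an EHR extract. This is a deliberately simplified, hypothetical reading of one strand of the 2013 guideline (statin therapy for patients with clinical ASCVD); the field names and eligibility logic are illustrative assumptions, not the CMS specification that was later published, which is exactly why such home-grown measures could not be benchmarked across practices.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: int
    has_ascvd: bool    # clinical atherosclerotic cardiovascular disease
    on_statin: bool    # active statin prescription recorded in the EHR

def homegrown_cholesterol_measure(patients: list[Patient]) -> float:
    """Simplified practice-built measure: among patients with clinical
    ASCVD (denominator), the share with an active statin (numerator).
    A real measure would add age bands, exclusions, and look-back windows."""
    denominator = [p for p in patients if p.has_ascvd]
    if not denominator:
        return 0.0
    numerator = [p for p in denominator if p.on_statin]
    return len(numerator) / len(denominator)

panel = [Patient(1, True, True), Patient(2, True, False),
         Patient(3, False, False), Patient(4, True, True)]
print(f"statin therapy rate among ASCVD patients: "
      f"{homegrown_cholesterol_measure(panel):.0%}")
```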
Validity Across Different Electronic Health Record Systems: Facilitators and health IT experts often found verifiable problems in clinical quality reports. For example, a representative of cooperative 6 told us in an interview: “Doctors always look at our data and say it’s not [correct]…. Unless you put [information] in the exact spot, it doesn’t pull it [for the electronic clinical quality measures]…. They didn’t hit the little cog-radio button. It takes [you] to a template that you have to complete. In order to pull the data it has to be on there.” It was common for there to be specific locations (fo