Quality Improvement Proposal Paper.

Identify a quality improvement opportunity in your organization or practice. In a 1,250-1,500 word paper, describe the problem or issue and propose a quality improvement initiative based on evidence-based practice. Apply “The Road to Evidence-Based Practice” process, illustrated in Chapter 4 of your textbook, to create your proposal.

Include the following:

  1. Provide an overview of the problem and the setting in which the problem or issue occurs.
  2. Explain why a quality improvement initiative is needed in this area and the expected outcome.
  3. Discuss how the results of previous research demonstrate support for the quality improvement initiative and its projected outcomes. Include a minimum of three peer-reviewed sources published within the last 5 years, not included in the course materials or textbook, that establish evidence in support of the quality improvement proposed.
  4. Discuss steps necessary to implement the quality improvement initiative. Provide evidence and rationale to support your answer.
  5. Explain how the quality improvement initiative will be evaluated to determine whether there was improvement.
  6. Support your explanation by identifying the variables, hypothesis test, and statistical test that you would need to prove that the quality improvement initiative succeeded.

While APA style is not required for the body of this assignment, solid academic writing is expected, and documentation of sources should be presented using APA formatting guidelines, which can be found in the APA Style Guide, located in the Student Success Center.

This assignment uses a rubric. Please review the rubric prior to beginning the assignment to become familiar with the expectations for successful completion.

Statistical Analysis

By June Helbig

Essential Questions

  • What is the difference between experimental, quasi-experimental, and nonexperimental research?
  • How does qualitative research differ from quantitative research?
  • What is the difference between causation and correlation?
  • What is the difference between research and quality improvement?

Introduction

Health care professionals must have knowledge regarding research methods and statistical analysis. Statistical analysis is associated with evidence-based practices and is responsible for many of the new and innovative treatments and procedures performed by health care professionals every day; however, knowing about evidence-based practices and using them is not enough. One must know how the results were obtained and if the results are statistically sound (see Figure 4.1). Health care professionals are concerned with quality and providing those in their care with safe, patient-centered care.

Figure 4.1

The Road to Evidence-Based Practice

Using the results of research involves incorporating those results into care provided to patients. Using new drugs, treatments, or procedures is for the sake of quality patient care. A very simple way to look at it is to consider the three Rs that make up evidence-based practice—Research, Results, and Review. The research is completed, the results are obtained, and then experts review the results of the study. If the review is found to be positive, the drug, treatment, or procedure can now be considered evidence-based and can be put into practice (see Figures 4.1 and 4.2).

Figure 4.2

The Three Rs of Evidence-Based Practice

Many hospitals have professional practice committees that examine the results of research and quality improvement projects. The purpose of these committees is to find new and innovative ways to provide quality health care. Implemented changes are supported by results or evidence obtained through research. The following questions are some examples that may be asked when reviewing results connected with a research study.

  • Can the same results be duplicated?
  • What data were obtained and how were the data analyzed?
  • What type of research was performed?
  • Was it experimental, quasi-experimental, or nonexperimental research?
  • Was it a qualitative study or a quantitative study?
  • Were the research findings the result of cause and effect or were the findings the result of correlation?

These and many other questions will be answered as health care professionals learn statistics. After reading this chapter, the health care professional will be able to understand and answer those questions.

Conceptual Framework

A conceptual framework is an analytical tool used to build a research study. All research studies begin with a hypothesis. The researcher takes that hypothesis and formulates a theory. The researcher is able to take that theory and build a conceptual framework to investigate the theory. In order to test the hypothesis, the research process must be followed, which will lead to a result. The hope is that the result aligns with the researcher’s theory; however, the conclusion may or may not support the hypothesis. The steps taken to prove a theory are considered the building materials of a conceptual framework. The conceptual framework explains what will be investigated, how it will be investigated, and what will be needed to arrive at a conclusion. The conceptual framework defines the tools needed to test the hypothesis and the variables the researcher or investigator will encounter along the way.

Experimental Versus Nonexperimental Research

When conducting experimental research, the researcher sets up the study to evaluate an experimental drug, treatment, or intervention. This type of research is a randomized control trial (RCT). One group of patients receives the experimental drug, treatment, or procedure, and the other group does not. Randomization involves something similar to a coin toss (see Figure 4.3).

Figure 4.3

Example: Randomization Control Study

In randomized control trials, the control group is the group in which no experimentation occurs. The control group receives customary and routine treatment. The experimental group is where the independent variable is manipulated. In randomized control research designs, one group of patients will receive the experimental drug, treatment, or procedure, and the other group of patients will receive customary treatment. Randomization is like flipping a coin (see Figure 4.3): heads, the patient is in the group that receives the experimental intervention; tails, the patient receives the customary treatment. Another research scenario could be the study of a new medication. In this case, it could be the dose of the medication that is different among the groups. This allows the researchers to evaluate the effects of dosage on the patients in different groups. Table 4.1 provides a visual example of manipulation, control, and randomization for experimental research.

Table 4.1

Manipulation, Control, and Experimentation

Patient | Drug | Dose | Intervention
Patient A (Manipulation) | Lisinopril | 20 mg | Manipulation of dosage increased to 40 mg (the independent variable)
Patient B (Tails: Control Group) | Lisinopril | 20 mg | Control group: randomization by coin toss. Patient is randomly placed in the control group of the study, receiving customary treatment. Nothing is changed.
Patient C (Heads: Experimental Group) | Lisinopril | 20 mg | Experimental group: randomization by coin toss. Heads: new medication; patient started on propranolol 10 mg daily.

All RCTs are experimental. The research design specifies the study sample that will be selected to participate in the trial or study. Once the population is defined, the participant is randomized into either the control group or the experimental group. Different methods are used to randomize the participants. Experimental designs can establish causation. A high degree of internal validity can be obtained through similar control and experimental groups. In research, a participant cannot ethically receive less than the usual customary treatment; however, the participant in the experimental arm can still be subjected to risk by not receiving the usual and customary treatment.

A good experiment minimizes the variability of the evaluation and provides unbiased evaluation of the intervention by avoiding confounding from other factors, which are known and unknown. Randomization ensures that each patient has an equal chance of receiving any of the treatments under study. (Suresh, 2011, para. 2)

There are several different methods the research investigator can use for randomization. Methods of randomization include basic randomization, which is based on a single event, such as flipping a coin or rolling dice. Some methods are more complicated, such as opening an envelope or placing a phone call to receive a control or experimental group assignment for the patient. These different types of randomization may not always work because there may be too many independent variables. The study must conform to the rules and regulations set by the Code of Federal Regulations to remain ethical.
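To make the coin-toss idea concrete, the following is a minimal sketch of simple randomization in Python; the participant IDs and group labels are hypothetical, and real trials typically use more sophisticated schemes (blocked or stratified randomization) managed by dedicated software.

```python
import random

def randomize_participants(participant_ids, seed=None):
    """Assign each participant to 'control' or 'experimental' by a virtual coin flip."""
    rng = random.Random(seed)  # seeded generator so the assignment can be reproduced
    assignments = {}
    for pid in participant_ids:
        # Heads -> experimental group, tails -> control group
        assignments[pid] = "experimental" if rng.random() < 0.5 else "control"
    return assignments

# Example: ten hypothetical participants
print(randomize_participants([f"P{i:02d}" for i in range(1, 11)], seed=42))
```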

A quasi-experimental study is able to identify why certain things happen. A quasi-experimental study does not use any form of randomization but looks for a causal relationship between receiving a treatment and not receiving a treatment. With the absence of randomization, the study can no longer be considered experimental. Quasi-experimental research designs identify treatment groups and comparison groups (see Figure 4.4). Because there is no randomization, selection may be based on similar characteristics or similar comorbidities. Extraneous variables may be responsible for jeopardizing internal validity. Extraneous variables are variables that are not foreseen. The researcher is not aware of extraneous variables when designing a research study.

Figure 4.4

Quasi-Experimental Research Design

One of the most frequently used types of quasi-experimental research design is the pretest-posttest design. In this design, one group is assessed (pretest), given an intervention, and then reassessed (posttest). The intervention, which could be a medication, treatment, or procedure, is considered the independent variable. Patients are assessed pretest, or prior to administration of the medication, treatment, or procedure. After the independent variable (medication, treatment, or procedure) is given or performed, a posttest is given, or the patient is reassessed. The differences between the patient's status at pretest (prior to administration of the medication, treatment, or procedure) and at posttest (after the administration of the medication, treatment, or procedure) are the result of the study. There is no control group or randomization. All research participants receive the medication, treatment, or procedure. Because there is no randomization or control group, this study design is not considered experimental; it is considered quasi-experimental research (see Figure 4.5).

Figure 4.5

Quasi-Experimental Design Pretest/Posttest
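Because a pretest-posttest design measures the same patients before and after the intervention, one common way to analyze the results is a paired t-test. The sketch below is illustrative only; the blood pressure values are hypothetical and the scipy library is assumed to be available.

```python
from scipy import stats

# Hypothetical systolic blood pressure readings (mmHg) for the same eight patients
pretest  = [152, 148, 160, 155, 149, 158, 162, 150]   # before the medication (independent variable)
posttest = [141, 139, 150, 147, 142, 149, 151, 143]   # after the medication

# Paired t-test: are the pre/post differences unlikely under the null hypothesis of no change?
t_stat, p_value = stats.ttest_rel(pretest, posttest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```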

Another type of quasi-experimental research design is the historical comparison design. Because quasi-experimental groups do not use randomization or control groups, the researcher may use historical group data as a control for comparison. This can be done by a retrospective chart review. For example, 15 years ago, patients were turned every 2 hours, but the mattress that existed then was just a standard mattress. Through a retrospective chart review, it was found that 10% of the patient population that was bedbound in the Intensive Care Unit (ICU) developed pressure ulcers. The mattresses in the hospital were changed this year to alternating pressure mattresses. The patients continued to be turned every 2 hours. It was found that only 5% of the bedbound population in the ICU developed pressure ulcers. This is an example of a historical comparison study. The researcher collected the same data on the same patient population, just 15 years apart (see Figure 4.6). This is also considered quasi-experimental research because no control or randomization occurred.

Figure 4.6

Quasi-Experimental Historical Comparison Design

A common type of nonexperimental research is a correlational design. Correlational design looks at the association or relationship between variables. It is not like a quasi-experimental design study or randomized control trial because there is nothing new introduced in the design of the study. There is no new medication, treatment, or procedure introduced. The correlational study looks at variables and the relationships that variables may have with each other. By seeing how variables exist naturally, one can evaluate or theorize what would happen if one of the variables were manipulated. Would there be change, and what type of change would occur? The results of a correlational study describe the relationships between the variables. The data collected can be retrospective or prospective and can be used to formulate a theory or as a foundation for a randomized control trial. There does not have to be causation with correlation as demonstrated by the example in Table 4.2 and Figure 4.7.

Table 4.2

Correlation vs. Causation Table

Month | Eating a Healthy Diet (times per month) | Filling Gas Tank (times per month)
Jan | 2 | 3
Feb | 3 | 4
Mar | 4 | 5
Apr | 5 | 6
May | 6 | 7
Jun | 7 | 8
Jul | 8 | 9
Aug | 9 | 10
Sept | 10 | 11
Oct | 11 | 12
Nov | 12 | 13
Dec | 9 | 11

Figure 4.7

Correlation vs. Causation Graph

A correlational study can be simple, comparative, longitudinal, or cross-sectional. Each study’s purpose is the same—to describe the relationships between variables—but the designs differ in time periods and groups of variables. For a simple correlational design, data are collected from one group of variables over one period of time. In a comparative correlational design study, data are collected on two or more groups of variables, still over one period of time. For a longitudinal design study, data are collected for one group of variables over two or more periods of time. Lastly, cross-sectional correlational design research collects data from whatever groups the researcher has selected over just one period of time.
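As an illustration of correlation without causation, the following sketch computes the Pearson correlation coefficient for the monthly counts shown in Table 4.2; it assumes the scipy library is available.

```python
from scipy import stats

# Monthly counts from Table 4.2 (Jan-Dec)
healthy_diet = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 9]    # times a healthy diet was eaten
gas_fillups  = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 11]  # times the gas tank was filled

r, p_value = stats.pearsonr(healthy_diet, gas_fillups)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
# A near-perfect r illustrates correlation, not causation:
# filling the gas tank does not cause healthy eating.
```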

Quantitative vs. Qualitative Research

Research can be performed in two ways, and both methods are defined by the type of variables that are collected as data (see Table 3.3). Quantitative research is performed by evaluating numbers and numeric variables that result in measurable data. Qualitative research is performed by evaluating nonnumeric variables. Qualitative data are collected through descriptive characteristics that cannot be measured with numbers—observation, open-ended questions, or interview. It is through these nonnumerical variables that the research question can be answered. The type of research that is performed is determined by the researcher when developing the conceptual framework.

Table 3.3

Qualitative vs. Quantitative Research

Qualitative Research: Nonnumerical data; data are most often collected through observation, open-ended questions, or text-based interviews.
Quantitative Research: Numerical data; measurable data collected (numbers and numeric variables).

Quantitative research relies on measurement using the scales described previously—nominal level of measurement, ordinal level of measurement, interval level of measurement, and ratio level of measurement. The data collected are analyzed using statistical analysis to answer the research question. The type of statistical analysis used is determined when constructing the research question. Quantitative research generates numbers. The numerical information collected is reflective of the variable being analyzed. For example, gender is collected in many research studies. “Male” or “Female” is not numerical, but if 100 participants were enrolled and 40 were female and 60 were male, then the variable of gender becomes numeric. Once it is numeric, it can be manipulated and applied to all levels of measurement (see Table 4.4).

Table 4.4

Variables for Each Level of Measurement

Variable | Nominal Level of Measurement | Ordinal Level of Measurement (Rank) | Interval Level of Measurement | Ratio Level of Measurement
Male | M | 1st | 60 | 60/100 (60 males out of 100 participants; 60%)
Female | F | 2nd | 40 | 40/100 (40 females out of 100 participants; 40%)
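The following is a minimal sketch of how a nominal variable such as gender becomes numeric counts and percentages, as described above and shown in Table 4.4; the list of participants is hypothetical.

```python
from collections import Counter

# Hypothetical enrollment of 100 participants (nominal level of measurement)
genders = ["Male"] * 60 + ["Female"] * 40

counts = Counter(genders)              # nominal categories become numeric counts
total = sum(counts.values())
for category, count in counts.items():
    # ratio level of measurement: 60/100 (60%) and 40/100 (40%)
    print(f"{category}: {count}/{total} ({count / total:.0%})")
```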

In health care, quality, patient-centered care is provided to all patients. If health care professionals want to prevent hospital-acquired pressure ulcers, the entire population cannot realistically be participants in the study, so a number is chosen that is reflective of the entire population. Many large clinical trials can enroll up to 25,000 or more participants nationally. For health care research, that number is far too large.

When performing research, health care professionals usually formulate a hypothesis in regard to a problem. The researcher may choose a percentage of the population of the hospital served, or a number is chosen that is sufficient to obtain results. Saturation, a term used with qualitative research, occurs when enough data have been collected to support results of the study. Results of a research study have generalizability, meaning the results can be applied accurately to the general population. When generalizability is present, the quantitative research study is well-designed, and the results can be applied to the general population.
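One common way to choose a sample size that is sufficient to obtain results is a power analysis. The sketch below assumes the statsmodels library and uses illustrative values (a medium effect size, 5% significance level, and 80% power) that are not drawn from the text.

```python
from statsmodels.stats.power import TTestIndPower

# Assumptions (illustrative only): medium effect size, 5% significance, 80% power
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Participants needed per group: {n_per_group:.0f}")
```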

In qualitative research, data are most often collected through observation, open-ended questions, or interview. Data collected are words and not numbers. The researcher compiles lists of words, behaviors, and responses from participants as well as observational videos. The data collected are representative of commonalities observed. Participants’ rights are respected, and informed consent may be obtained if performing observational research. In order to maintain validity and reliability in qualitative research, rigor must be maintained. Rigor is consistency and accuracy in data collection, as well as attention to all details. When rigor is maintained, the findings of the qualitative study can be considered trustworthy and reliable.

If a qualitative study of handwashing compliance in the intensive care unit were being performed, the researcher would be present in the intensive care unit observing staff and taking notes or video of staff washing their hands. Qualitative research can be difficult because observation or interviewing can be very time consuming. Many times, the sample size may be small because of the massive amount of data collected for the study. If the sample size is too large, there would be a lot of redundancy. Redundancy occurs when information collected is repetitive, so no new information needs to be collected. One can say the sky is blue so many times that no one needs to say it again, at which point no new data are being generated. This is called the saturation point, which occurs when no new data are being generated and the endpoint of the qualitative study is defined. When performing qualitative research, the investigator may get to the point where no new information is being obtained and decides that saturation has been met. This may be sooner than the expected end date or later than the predetermined end date, but once no new information is being generated, the investigator can call an end to the study. Table 4.5 reflects common terms in qualitative and quantitative research. While some terms are used in both methods, others apply mostly to one.

Table 4.5

Qualitative and Quantitative Research Terms

Qualitative Research | Quantitative Research
IRB Approval ✓*
Informed Consent ✓*
Enrolls Human Subjects
Continuous Variables
Categorical Variables
Mostly Subjective
Mostly Objective
Unstructured Responses
Fixed Responses
Inductive Reasoning
Deductive Reasoning
Randomization
Saturation
Statistical Data Analysis
Validity
Reliability
Redundancy
Generalizability
Transferability
Rigor

Note. *There may be certain circumstances when informed consent and Institutional Review Board (IRB) approval are not required.

Qualitative studies are usually completed when the end date is reached, or the point of saturation occurs. The investigator of a qualitative research study is deeply involved in the study and many times will make decisions regarding the course of the study as the data collection evolves. Because qualitative research consists of words and not numbers, analysis takes place through the development of commonalities and themes.

There are three different types of qualitative designs: phenomenology, grounded theory, and ethnography.

  • Phenomenology is considered empirical research because data are collected through observation and experiences. It can be through direct contact with what is being observed or through indirect contact, which is solely observation.
  • Grounded theory is research that takes first person observations or interviews and develops a theory or concept about the population being observed.
  • Ethnography is a type of qualitative research that studies cultures, everyday life, and cultural changes through observation or interview.

Well-designed qualitative research has transferability as well as generalizability. Transferability is the ability to apply the results of the qualitative study to similar experiences and similar groups of people. It goes hand in hand with generalizability; both are marks of a well-designed and fairly accurate study. Transferability demonstrates the claims and connections of the qualitative research that was performed. Generalizability is the ability of the results to be applied to people and situations.

A systematic review is a type of literature review in which information is collected from similar completed research studies and summarized. Before starting a systematic review, the researcher must have an objective as to why the review is being performed. There must be clear criteria and well-defined characteristics as to what types of studies are going to be reviewed and what characteristics are going to be collected. The systematic review is a nonexperimental research study because nothing new is being introduced. A meta-analysis is a statistical method used to evaluate multiple studies.

In statistics, prevalence is used to describe data collected regarding the number of health care related illnesses, conditions, and outcomes that commonly occur in a population. Prevalence is the number or percentage of the population that has a disease or health care related illness or problem over a specific period of time. This data is responsible for much of the research performed by nursing. Health care professionals want to provide quality, patient-centered care, and one way to improve care is to research common problems for better solutions. Every month, most hospitals and health care facilities report specific data as required to the Centers for Medicare & Medicaid Services (CMS), the Joint Commission (TJC), and the Agency for Healthcare Research and Quality (AHRQ). New protocols and procedures have been found by investigators as a result of the data reported and the analysis of the data collected. In health care, much of the data is collected to assure that quality of care is delivered.

Quality indicators include occurrence of hospital-acquired pressure ulcers, inpatient falls with and without injury, hospital-acquired pneumonia, and patient satisfaction. Some of the quality measures are collected in patient satisfaction surveys, which are sent out monthly. Patient satisfaction surveys include collecting data on nurse-doctor communication, nurse-patient communication, and pain management. Each month this data is reported to CMS, TJC, and AHRQ. The results of quality indicator reporting are posted monthly on most inpatient hospital units. As health care providers, knowing and adjusting care based on the results of data collected for quality measures is paramount to improving the quality of care provided to patients.

Another method of qualitative research is the case study. Case studies can be performed on an individual case or a group of similar cases. By researching and describing everything associated with a specific case, the researcher is able to get very specific details and data that could contribute to knowledge regarding the specific problem being investigated. Case studies are valuable, but, when compared to the results of a randomized control study, the results are not as highly valued. The hierarchy of evidence, sometimes called levels of evidence, is assigned to the different types of research designs that are used to perform research. Each research design has a value according to the strength of the results. Randomized controlled studies are considered the best approach to study the “efficacy and safety of a treatment” (Kabisch, Ruckes, Seivert-Grafe, & Blettner, 2011, p. 663) (see Figure 4.8).

Figure 4.8

Hierarchy of Evidence

Quality Improvement vs. Research

The purpose of performing research is to find new knowledge about the effect of a medicine, treatment, or procedure. Health care providers and health care organizations collect data for several different reasons. Data collection is performed to meet the requirements of mandatory reporting by the Centers for Medicare and Medicaid Services. Data are also collected for quality improvement, and data are collected for research purposes. If data are collected for research purposes, different procedures are required. For health care, data collection for quality can be done without acquiring approval or consent, which is required for research studies.

Quality Improvement Project

Health care providers use research to provide evidence-based care that promotes quality health outcomes for individuals, families, communities, and health care systems. Health care providers “also use research to shape health policy in direct care, within an organization, and at the local, state, and federal levels” (American Nurses Association, n.d., para. 1). Nursing research can involve new treatments or procedures that may improve care provided to patients. Research is a systematic investigation that evaluates and obtains results to develop or modify medications, procedures, and treatments. Research is intended to answer a question or test a hypothesis. A hypothesis is an educated guess or an assumption that can be validated by testing or experimentation. Once a hypothesis is developed, research can be performed to prove or disprove the hypothesis.

Research contributes to generalizable knowledge, but before a research study is started, it must be approved by an Institutional Review Board (IRB). An IRB is a committee that applies research ethics to all studies to assure no harm is done to participants. The IRB may approve or disapprove a study, or it can ask for modifications to the study. When research is performed, the participant may have to sign an informed consent form to voluntarily participate in the study.

Quality improvement (QI) is data driven and usually done to improve the quality of care provided to patients. QI may benefit a process, system, and possibly the patient. QI, as defined by the Department of Health and Human Services (2011), consists of “systematic and continuous actions that lead to measurable improvement in health services and the health status of targeted patient groups” (p. 1). When health care providers and nurses carry out a QI project, it may not be the implementation of something new, but an improvement upon something already in place. QI takes a team to produce results. A QI project does not subject the participant to any risk, and the participant may not even be aware of being involved in a QI project. The QI project usually occurs at the facility where the problem was found. Monthly data are collected regarding patient safety at most facilities. Facilities include health care institutions such as hospitals, skilled nursing facilities, long-term facilities, clinics, and doctors’ offices.

There are some circumstances that may occur when a QI project must be submitted to the IRB for evaluation because the possibility exists that the QI project could be considered research. The differences between research and QI can be based on intent. Research contributes to generalizable knowledge, whereas information from a QI project may only improve upon what is already in place. If the QI project includes a new treatment instead of improving upon what is already in place, then it might be considered research and must be submitted to the IRB for a decision. The IRB will determine whether or not the plan proposed is research or QI. The IRB may rule the study is exempt if done for QI purposes. If there is risk to the participant, then the IRB will require the study to be conducted as research.

Risk to the patient not only means that physical harm may occur, but Health Insurance Portability and Accountability Act (HIPAA) violations can occur as well. If identifiable health information is collected, then the IRB must decide whether to classify the QI project as research or solely QI. Other aspects of a research study that are not part of QI are randomization and informed consent. If these are present, then the project is no longer considered QI. A QI project usually takes place within the organization that is trying to improve upon something that was realized as a result of data collection analysis. Quality indicators are collected monthly, so health care organizations must act if deficiencies are found. If deficiencies are found, action plans must be put in place to correct any problems that are occurring. Figure 4.9 highlights differences between research and QI for approvals and terms.

Figure 4.9

Research vs. Quality Improvement

Research | Quality Improvement
IRB Approval ✓*
Informed Consent ✓*
Risk to Research Participant
Randomization
Validity
Reliability

Note. *There may be certain circumstances when informed consent and IRB approval are required.

Quality Improvement Project

One of the quality indicators collected monthly in all health care facilities is the number of patient falls. For example, if falls are very high in Organization A, the Quality Department may decide to invest in yellow gowns and yellow socks. Yellow is the color hospitals commonly associate with patient falls. Data are collected monthly to evaluate the rate of patient falls. If the fall rate decreases by having patients wear a yellow gown and yellow socks, then the QI project was successful. Because of the great results from the QI project, all patients with a high fall risk score will now wear yellow gowns and socks.

If the project was conducted as a research study, changes would be made to the conceptual framework of the study. For example, if a research study using the same group of patients described in the above QI project with the yellow gowns and socks were being conducted, the researcher may compare one group to another. Group A will wear yellow gowns and socks, making Group A the experimental group and Group B the control group. Group A will be all patients in Rooms 1-15 with a high fall risk score and will wear a yellow gown and yellow socks. Group B will be all patients with a high fall risk score in Rooms 16-30 and will wear a regular hospital gown and socks. Data will be collected for Rooms 1-30 for 3 months to evaluate fall rates. The data collected will be compared to the previous 3-month period of fall occurrences (see Table 4.6).

Table 4.6

Quality Improvement Strategies

Research Study: Fall rate will decrease when patients wear yellow gowns and socks over a 3-month period of time (April–June 2018) compared to the fall rate for the previous 3-month period (January–March 2018). Yellow gowns alert health care workers that a patient is a high fall risk.

Group | Yellow Gowns | Yellow Socks | Number of Falls, January–March 2018 | Number of Falls, April–June 2018 | Did Fall Rate Decrease Over 3 Months?
Group A (Rooms 1–15) | Yes | Yes | 4 | 1 | Yes
Group B (Rooms 16–30) | No | No | 3 | 4 | No

Result—Fall rate decreased when patients wore a yellow gown and yellow socks.
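To judge whether the change in fall counts in Table 4.6 differs between the two groups, one option is Fisher's exact test on the 2x2 table of falls by group and period; with counts this small, a chi-square test would be unreliable. This is only a sketch assuming the scipy library; a real evaluation would more likely compare fall rates per 1,000 patient-days.

```python
from scipy import stats

# Fall counts from Table 4.6
#                 Jan-Mar 2018   Apr-Jun 2018
falls = [[4, 1],   # Group A (yellow gowns and socks)
         [3, 4]]   # Group B (regular gowns and socks)

odds_ratio, p_value = stats.fisher_exact(falls)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# With counts this small the test is unlikely to reach significance;
# longer observation periods and rate-based denominators give a fairer comparison.
```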

Quality Improvement Strategies

Various QI strategies are typically used within health care organizations. Several of the current QI strategies will be discussed below, including PDSA cycle, FADE, lean strategy, and Six Sigma. These strategies focus on either improving patient care or improving the processes surrounding patient care. Organizations routinely collect data regarding patient care and performance measures of the departments within the organization. Together they define the quality of patient care provided by a specific organization.

PDSA Cycle

One strategy is “Plan-Do-Study-Act,” or the PDSA cycle. This simple, four-step tool is typically used by an organization for QI (Agency for Healthcare Research and Quality [AHRQ], 2013). Once a problem has been identified, a plan is created to observe the problem and collect data (plan). After the plan has been made, it is tested on a small sample (do), and the data collected is analyzed (study). After the data is studied, changes are made based on what was learned (act).

Hospitals collect data every day regarding the care of patients. It is every hospital’s goal to provide safe, patient-centered, quality care. The information collected is distributed to hospital committees, and if the hospital performs poorly, the committee must come up with a plan for improvement. Using the PDSA cycle is a quick way to find and implement changes to improve the quality of care provided to patients (see Figure 4.10).

Figure 4.10

PDSA Cycle

Note. Adapted from “Plan-Do-Study-Act (PDSA) Cycle,” by the Agency for Healthcare Research and Quality, 2013.

FADE

FADE, an acronym for focus, analysis, development, and execute and evaluate, is another four-step QI strategy. The focus step is when the problem or process that needs improvement is identified. The analysis step involves the collection and analysis of data to define a clear baseline so that any root cause is identified. Performing this step properly is critical to the outcome of this QI strategy. In the development step, the action plan is created to support the method of improvement. The last step executes the action plan developed and evaluates the results of the plan. Continuous monitoring must also take place to assure the success of the change continues (see Figure 4.11).

Figure 4.11

FADE Model

Lean Method

The Toyota Motor Company developed the lean method for QI to eliminate waste (AHRQ, 2017). If an employee identifies something wasteful, then production is stopped until the wasteful activity can be corrected. Each employee is valued and tasked with finding areas that are wasteful, so corrections can be put into place. In health care, management empowers employees to identify patient care and process problems in order to minimize inefficiency and focus on providing patients with a safe, patient-centered experience. Employees are also empowered to come up with solutions.

The lean strategy is used to advance QI by focusing on the patient experience, regulatory bodies, payers, and all health care providers. If anything is found to cause a problem related to one of these areas, or is found to be a problem, then every employee at every level is made to feel empowered to improve the process or problem. The lean strategy empowers individual employees and multidisciplinary committees to identify and address poor quality standards and procedures. Solutions are found by creating action plans to improve the identified patient problem and process problems as well. The lean method can be summarized using four key points:

  1. Everyone is empowered and tasked with identifying patient care and process problems.
  2. Management engages employees to identify patient care and process problems.
  3. Strategies are created for the reporting of patient care and process problems.
  4. Multidisciplinary committees are in place to address patient care and process problems and to create quality action plans for QI.

Lean Six Sigma

Lean Six Sigma is a strategy that focuses on process improvement and, in health care, the elimination of problems that may have led to the death of a patient or to a sentinel event. In lean Six Sigma, there are usually two different aspects of focus. The first emphasis is the removal of waste in the process that contributes to elongated cycle times, such as waiting or extra processing. The second emphasis is on defect elimination. A sentinel event is investigated to ensure that similar events do not occur in the future. It is used to improve patient safety by finding and eliminating life-threatening errors. A process called DMAIC (define, measure, analyze, improve, control) is an approach to improve the process that led to the error, and in health care, this is used as part of the Six Sigma strategy. Six Sigma is similar to PDSA except an additional step is added: control. This fifth step “provides extra emphasis on maintaining high levels of performance and low levels of variability. This typically entails a plan to continuously measure and monitor the process” to assure compliance (Glasgow, 2011, para. 5).

Statistical Process Control Charts (Media) – Special Cause vs. Normal Variation

Control charts are used to monitor the stability and control of the process being improved over time. A control chart contains three main elements: a time series graph, a central line depicting shifts, and upper and lower control limits. Control charts show historic trends over time and how well the new process is performing. Common cause variation is “fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data and a measure of the process potential, or how well the process can perform when special cause variation [is] removed” (“Common Cause Variation,” n.d., para. 1). The variation may be caused by unknown factors, or it can be a measure of the potential of a process.
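The sketch below shows how the elements of a control chart (center line plus upper and lower control limits at plus or minus three standard deviations) might be computed from a series of monthly measurements; the fall-rate values are hypothetical and the numpy library is assumed.

```python
import numpy as np

# Hypothetical monthly fall rates per 1,000 patient-days
rates = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.5, 3.0, 2.9, 3.1])

center_line = rates.mean()
sigma = rates.std(ddof=1)
ucl = center_line + 3 * sigma   # upper control limit
lcl = center_line - 3 * sigma   # lower control limit

print(f"center = {center_line:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
# Points outside the limits, or sustained shifts around the center line,
# suggest special cause variation rather than common cause variation.
```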

Quality/Safety – Relationship Between Adverse Events and Hospital Deaths

Quality and patient safety are of the utmost importance to health care organizations. Everything done by the organization is done to assure quality and safety. In 1999, the Institute of Medicine (IOM) published To Err Is Human: Building a Safer Health System. This paper discussed errors occurring at health care institutions that were causing patients to die. According to estimates from two major studies, “at least 44,000 people, and perhaps as many as 98,000 people, die in hospitals each year as a result of medical errors that could have been prevented” (IOM, 1999, para. 1). Since then, health care institutions have been working toward improving the quality of care delivered and preventing unnecessary patient deaths. One way to prevent death is to begin collecting information about adverse events and perform an investigation as to why the event occurred.

Study Examples

The following are examples of studies conducted using different research designs. Included in the examples are a retrospective chart review, a quasi-experimental study, a randomized control study, a qualitative study, and a QI study.

Example: Retrospective Chart Review

A retrospective chart review, titled Is Researching Adverse Events in Hospital Deaths a Good Way to Describe Patient Safety in Hospitals: A Retrospective Patient Record Review Study was performed to assess patient safety (Baines, Langelaan, Bruijne, & Wagner, 2015). The investigators wanted to understand if the adverse events of the living (those who were discharged home) had a relationship to those patients who died in the hospital. Could an adverse event predict patient safety?

A total of 11,949 charts were reviewed. Of that total, 50% were patients who had died, and the other 50% were patients who were discharged alive. The main outcome measures were adverse events in inpatient deaths and in patients discharged alive. The researchers looked at size, preventability, clinical process, and type of adverse event (Baines et al., 2015).

The retrospective chart study found that more information regarding adverse events was learned from the patients who died than from patients who were discharged alive. Many of the events were similar but were not representative of the number of adverse events. Patients who had died were older, had longer inpatient stays, and were more urgently admitted, but they were not generally admitted to a surgical unit. The researchers also found that the patients who died had more preventable adverse events than those who were discharged alive. It was also found that the patients discharged alive had more preventable adverse events related to a surgical process.

Example: Quasi-Experimental Study

In May of 2016, a large study was conducted to evaluate Medicare fee-for-service (FFS) readmissions after an intervention was applied to high-risk discharge patients. The study was funded by the CMS to reduce readmissions among all discharged Medicare FFS patients. The study, “Quasi-Experimental Evaluation of the Effectiveness of a Large-Scale Readmission Reduction Program” (Jenq, Doyle, Belton, Herrin, & Horowitz, 2016), was conducted at an urban academic medical center in New Haven, Connecticut, beginning in May 2012.

The interest in preventing readmissions comes from the financial penalties imposed by the Readmission Reduction Program of the Patient Protection and Affordable Care Act (ACA) of 2010. Patient readmissions occur when patients are admitted within 30 days from the date of discharge. If patients are readmitted within 30 days, there are penalties imposed as dictated by the ACA. Hospitals have been conducting smaller clinical trials to investigate readmission reduction methods. Most of these trials included fewer than 400 patients who received an intervention. The study being discussed enrolled 10,621 patients (Jenq et al., 2016).

The target population was patients older than 65 years with Medicare FFS insurance. Part of the inclusion criteria were that patients resided in nearby ZIP codes and were discharged alive to either home or another facility. Patients who left against medical advice or who were discharged to hospice were not included. The control population was made up of discharge patients and high-risk discharge patients older than 54 years with the same discharge status and ZIP codes, but who did not have Medicare FFS insurance.

The intervention provided to the target population included:

  • Personalized transitional care,
  • Education,
  • Medication reconciliation,
  • Follow-up telephone calls, and
  • Linkage to community resources (Jenq et al., 2016, para. 4).

The program was implemented in Yale-New Haven Hospitals with a total of 1,541 inpatient beds on two campuses. The program had the support of “senior executive leaders who had made readmission reduction a hospital-wide quality improvement priority” (Jenq et al., 2016, para. 14).

It was found that providing the intervention to the target population over 19 months reduced the readmission rate by a relative 9.3%. Only 58% of the target population actually received the intervention. CMS was looking for a 20% decrease in the rate of readmissions. Patients who were discharged home were followed up by transitional care consultants who were hired specifically for this program. There were times when the elderly patients did not always benefit from all the community services they could have received. There was also help from the Area Agency for the Aging, which was able to provide resources about the community (Jenq et al., 2016).

Results:  We enrolled 10 621 (58.3%) of 18 223 target discharge patients (73.9% of discharge patients screened as high risk) and included all target discharge patients in the analysis. The mean (SD) age of the target discharge patients was 79.7 (8.8) years. The adjusted readmission rate decreased from 21.5% to 19.5% in the target population and from 21.1% to 21.0% in the control population, a relative reduction of 9.3%. The number needed to treat to avoid 1 readmission was 50. In a difference-in-differences analysis using a logistic regression model, the odds of readmission in the target population decreased significantly more than that of the control population in the intervention period (odds ratio, 0.90; 95% CI, 0.83-0.99; P = .03). In a comparative interrupted time series analysis of the difference in monthly adjusted admission rates, the target population decreased an absolute −3.09 (95% CI, −6.47 to 0.29; P = .07) relative to the control population, a similar but nonsignificant effect. (Jenq et al., 2016, para. 6)
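A difference-in-differences logistic regression like the one quoted above could be sketched as follows. This is not the authors' code; the column names (readmitted, target, post) and the simulated data are hypothetical, and the pandas and statsmodels libraries are assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level data, one row per discharge:
#   readmitted = 1 if readmitted within 30 days
#   target     = 1 if the discharge belongs to the target population
#   post       = 1 if discharged during the intervention period
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "target": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
# Simulate a small drop in readmissions only for target patients in the post period
df["readmitted"] = rng.binomial(1, 0.21 - 0.02 * df["target"] * df["post"])

# The target:post interaction term is the difference-in-differences estimate (log-odds scale)
model = smf.logit("readmitted ~ target * post", data=df).fit(disp=False)
print("odds ratio for target:post =", round(np.exp(model.params["target:post"]), 3))
```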

Example: Randomized Control Study (Experimental)

In 2012, a randomized control trial (RCT), titled “Early Childhood Family Intervention and Long-Term Obesity Prevention Among High-Risk Minority Youth” (Brotman et al., 2012), was conducted to test the hypothesis that family intervention can promote effective parenting in early childhood that will affect the rate of obesity in preadolescence.

Childhood obesity is a growing epidemic associated with an increasing incidence of hypertension and diabetes, and it can be extremely costly. “Rates of overweight (BMI ≥ 85th percentile) have doubled among 2- to 5-year-olds over the past 3 decades; overweight preschool-aged children are 5 times more likely to be obese (BMI ≥ 95th percentile) at age 12 than non-overweight children” (Brotman et al., 2012, para. 1). Obesity prevention is especially important during early childhood, which has already been identified as a critical period. Two characteristics of effective parenting were identified. One characteristic was responsiveness and the other was control. Responsiveness included parental warmth, sensitivity, and involvement. The second was parental control, which included expectations of the child by the parent, including aspects of self-control and parental discipline (Brotman et al., 2012).

The participants were divided into two follow-up groups. They were named Follow-Up Study 1 and Follow-Up Study 2. There was a total of 186 minority youth who were at risk for behavioral problems enrolled in this study. Forty of those were girls enrolled into Follow-Up Study 1, and the remaining 146 children were enrolled into Follow-Up Study 2. There was long-term follow-up after random assignment to family intervention or control condition, which occurred at age 4. The study design included two RCTs. The first follow-up study enrolled 99 children, including 40 girls who had a familial risk for behavior problems. The second follow-up study enrolled 496 children, including 146 boys and girls at risk for behavioral problems. Neither intervention targeted obesity, nor addressed nutrition and activity of the children. The researchers did provide behavioral family interventions. Interventions included “weekly 2-hour parent and child groups over a 6-month period. Descriptions of the interventions and positive effects on parenting (e.g., responsiveness, control) and child behavior (e.g., aggression, social competence, stress response) have been reported” (Brotman et al., 2012, para. 12).

BMI and health behaviors were measured an average of 5 years after intervention in Study 1 and 3 years after intervention in Study 2. The results showed that youth in the intervention group had significantly lower BMI at follow-up than did youth in the control group. There were also significant differences in blood pressure, diet, and physical activity between the groups. Successful obesity prevention could have a huge impact on public health considering that high-risk minority groups are at risk of being obese. Further inquiry is needed regarding effective parenting, which can be seen as promising after analyzing the results of this trial (Brotman et al., 2012).
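A comparison of follow-up BMI between an intervention group and a control group is often made with an independent-samples t-test. The sketch below uses hypothetical BMI values, not the study's data, and assumes the scipy library.

```python
from scipy import stats

# Hypothetical follow-up BMI values (not the study's actual data)
intervention_bmi = [19.8, 21.2, 20.5, 22.0, 19.5, 21.8, 20.1, 22.4]
control_bmi      = [22.6, 24.1, 23.0, 25.2, 22.9, 24.8, 23.5, 26.0]

# Welch's t-test (does not assume equal variances between groups)
t_stat, p_value = stats.ttest_ind(intervention_bmi, control_bmi, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```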

Example: Qualitative Study (Nonexperimental)

A study, titled “A Qualitative Study of Experienced Nurses’ Voluntary Turnover: Learning From Their Perspectives” (Hayward, Bungay, Wolff, & MacDonald, 2016), was conducted by performing interviews of 12 registered nurses. The 12 nurses included in the study had an average of 16 years in practice in a wide variety of inpatient acute care settings. The researchers developed a hypothesis about what factors would contribute to the experienced nurses’ reason to voluntarily leave their jobs and pursue other avenues.

The purposive sample of 12 nurses included four who worked part-time and eight who worked full-time. The sample size was small because of the abundance of information collected from interviews in qualitative studies. The researchers believed that by choosing only nurses who had resigned to participate in the study, they could focus on the specific issues that were being explored, such as nurse fatigue and workload demands. Nurses who had not resigned often felt the same way but were unable to resign because of daily and personal constraints. The selection of participants is vital to the outcome of the study.

The nurses’ decisions to resign were a combination of work environment and personal factors. Major themes that ran through the interviews included “higher patient acuity, increased workload demands, ineffective working relationships among nurses and with physicians, gaps in leadership support and negative impacts on nurses’ health and well-being” (Hayward et al., 2016, para. 5). Other reasons, including poor relationships with co-workers and lack of leadership, led to job dissatisfaction and their decision to leave. Lack of leadership support led the nurses “to feel dissatisfied and ill equipped to perform their job. The impact of high stress was evident on the health and emotional well-being of nurses” (Hayward et al., 2016, para. 5).

Example: Quality Improvement Study

Patient satisfaction is a very important indicator of the perception patients have of the care they received during their hospital stay. One of the quality indicators is communication with physicians. While in the hospital, patients are usually ill, and it was questioned whether patients properly remembered their provider and the communication that took place. The study, “Positive Impact on Patient Satisfaction and Caregiver Identification Using Team Facecards: A Quality Improvement Study” (Martin et al., 2017), was conducted to see whether facecards showing each physician’s name, picture, and specialty would improve patients’ ability to identify members of their health care team.

The facecards were given to patients during the interventional period of the study. Each facecard identified physicians and their specialty. There were 192 patients included in the study, with 50% of the patients in the interventional group receiving the facecards identifying their physicians and their role and the remaining 50% of the patients in the control arm, who did not receive the facecards. All 192 patients received a survey to complete after discharge (Martin et al., 2017).

Results: A total of 192 patients completed the survey. They were divided into a control group (n = 96, 50%) and an interventional group (n = 96, 50%) during the period of the study (February 2016–August 2016). Patients who received the intervention were more likely to identify: their team attending (71 [74%] in the interventional group vs [34.4%] in the control group; P < 0.001); team resident (40 [40.7%] in the interventional group vs 25 [26%] in the control group; P = 0.0222); team intern (42 [43.8%] in the interventional group vs 19 [19.8%] in the control group; P = 0.0004). Patients in the interventional group reported slightly higher level of satisfaction (72 [75%] reported level of satisfaction > 9 on a scale of 1 to 10 in the interventional group vs 59 [61.5%] in the control group). (Martin et al., 2017, para. 5)

Use of facecards improved patient identification of primary team members and roles; however, patients still lacked enough knowledge of provider roles. The use of facecards showed a slight improvement on overall patient satisfaction (Martin et al., 2017).
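To check whether a difference in identification rates like the team-intern result quoted above (42 of 96 vs. 19 of 96) is statistically significant, a two-proportion z-test can be used. This sketch assumes the statsmodels library; the counts are taken from the quoted results.

```python
from statsmodels.stats.proportion import proportions_ztest

# Patients who correctly identified their team intern (from the quoted results)
successes = [42, 19]   # interventional group, control group
totals = [96, 96]

z_stat, p_value = proportions_ztest(count=successes, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")   # should be close to the reported P = 0.0004
```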

Future/Trends – Patient-Centered Care and Shared Decision Making

The IOM has identified six domains of health care quality. These domains make up an analytical framework that guides quality development initiatives in the public and private sectors. Most measures address effectiveness and safety, while others address patient-centeredness, timeliness, efficiency, and equity of care.

  • Safe: Avoiding harm to patients from the care that is intended to help them.
  • Effective: Providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and misuse, respectively).
  • Patient-centered: Providing care that is respectful of and responsive to individual patient preferences, needs, and values and ensuring that patient values guide all clinical decisions.
  • Timely: Reducing waits and sometimes harmful delays for both those who receive and those who give care.
  • Efficient: Avoiding waste, including waste of equipment, supplies, ideas, and energy.
  • Equitable: Providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status. (AHRQ, 2016, para. 2)

These frameworks make it easier for consumers of health care to understand quality. The quality measures collected from health care organizations can be classified as structure quality measures, process quality measures, or outcome quality measures. When consumers evaluate quality measures, they can make educated decisions by comparing different organizations prior to choosing their health care provider.

The measures can provide valuable information about the organization. If one wanted to evaluate structural quality measures, then information regarding electronic health records or medication safety systems, as well as the number of board certified physicians or the physician-to-patient ratio could be explored. Process measures can give consumers information about preventative services and the status of their patient population, including how many are well and how many are ill. The outcome measures can tell consumers mortality rates and the rate of surgical complications or hospital-acquired infections of the organization being investigated (AHRQ, 2015).

Reflective Summary

Being able to understand basic research statistics will empower the health care professional to understand the results of research studies. For example, research articles can deepen the health care professional’s understanding of every aspect of health care, including new medications, new treatments, and new procedures. Understanding basic research designs will allow health care professionals not only to gain valuable knowledge by reading journals, but to advance that knowledge. Knowing how quantitative research differs from qualitative research will enable health care professionals to distinguish the value of numerical data from that of data consisting of words and themes. Health care professionals need to know and understand the results of research, which leads to evidence-based practices, which leads to the goal of all health care professionals—to provide patients with safe, quality care.

Key Terms

Case Study: A method of qualitative research in which one focus is studied.

Conceptual Framework: An analytic tool used to build a research study, this defines the tools needed to answer the research question and the variables the researcher or investigator will encounter along the way.

Control Group: The group of subjects not receiving the treatment; the sample not receiving the intervention being studied; also known as comparison group.

Correlational Research Design: A type of quantitative research that is not controlled and aims to understand relationships between variables.

DMAIC: The define, measure, analyze, improve, control approach to improving a process.

Ethnography Research Design: A method of qualitative research design focused on understanding people and cultures; often studied through observation.

Evidence-Based Practice: The integration of clinical expertise, the most up-to-date research, and the patient’s preferences to formulate and implement best practices for patient care.

Experimental Group: The group in a research study that receives the experimental drug, treatment, or procedure.

Experimental Research Design: A type of quantitative research design that is highly controlled to study cause and effect with independent and dependent variables.

Extraneous Variables: Variables that can influence the relationship between the independent and dependent variables and were not foreseen or known at the beginning of the study; they can be controlled either through research design or statistical procedures.

FADE: A four-step strategy for quality improvement—focus, analysis, development, execute/evaluate.

Generalization: The degree to which findings can be generalized from a sample to a larger population.

Grounded Theory Research Design: A qualitative research design in which theory is developed from data collected through interviews and/or observation.

Hierarchy of Evidence: A core principle of evidence-based practice that defines levels of evidence from weak to strong.

Hypothesis: A testable statement of a relationship; an epidemiologic hypothesis states the relationship between an exposure (person, time, and/or place) and the occurrence of a disease or condition.
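
As a minimal illustration (not part of the definition above), the Python sketch below states a null and alternative hypothesis about infection rates before and after a hypothetical quality improvement initiative and tests it with an independent-samples t-test from SciPy; all numbers are invented.

```python
from scipy import stats

# Hypothetical monthly infection rates per 1,000 patient days (invented data).
# H0 (null): the mean infection rate is the same before and after the initiative.
# H1 (alternative): the mean infection rate differs before and after.
before = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3]
after = [2.2, 2.5, 2.1, 2.4, 2.6, 2.0]

t_statistic, p_value = stats.ttest_ind(before, after)

# A p-value below a chosen significance level (e.g., 0.05) would lead us to reject H0.
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```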

Independent Variable: The experimental or predictor variable. It is manipulated in the research to observe the effect on the dependent variable.

Institutional Review Board (IRB): The group assembled to review research proposals and monitor progress to ensure protection of human subjects.

Internal Validity: The ability of the researcher to minimize external influence on the data obtained in the study.

Interval Level of Measurement: The variable has rank order and equal distances on the points of the scale.

Lean Method: A quality improvement strategy in which every employee, at every level, is empowered to find and solve problems.

Meta-Analysis: A statistical method used to combine and evaluate the results of the studies included in a systematic review.

Nominal Level of Measurement: Used to name or categorize things; the first level of measurement.

Nonexperimental Research: A research study that does not involve an experimental drug, treatment, or procedure.

Ordinal Level of Measurement: Defines the relationship between things and assigns an order or ranking to each thing; the second level of measurement.

PDSA Cycle: A strategy tool for quality improvement: plan, do, study, act.

Phenomenology Research Design: Qualitative research method used to study people through their lived experiences.

Prevalence: The proportion of a population found to have a particular illness, condition, or outcome at a given point in time.

Qualitative Research: Research design using nonnumeric variables.

Quality Improvement (QI): A systematic and formal approach to collecting, analyzing, and disseminating data in order to improve services or products that a business renders.

Quantitative Research: Research performed by evaluating numbers and numeric variables that result in measurable data.

Quasi-Experimental Research Design: A type of quantitative research design that is partially controlled that studies cause and effect of variables.

Randomization: A method of assigning subjects to groups purely by chance, such as by flipping a coin.
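
A minimal sketch of this idea in Python, assigning hypothetical subject identifiers to a control or experimental group purely by chance, much like flipping a coin for each subject:

```python
import random

# Hypothetical subject identifiers; the assignment below is purely by chance.
subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

control_group, experimental_group = [], []
for subject in subjects:
    # random.random() < 0.5 behaves like a fair coin flip.
    if random.random() < 0.5:
        control_group.append(subject)
    else:
        experimental_group.append(subject)

print("Control group:", control_group)
print("Experimental group:", experimental_group)
```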

Randomized Control Trials (RCTs): Research studies in which patients are chosen at random to receive the treatment/intervention being tested; considered the gold standard of research design.

Ratio: A comparison of any two numbers by division.

Ratio Level of Measurement: A measurement level with equal distances between the points and a zero-starting point.
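
To illustrate the four levels of measurement defined above (nominal, ordinal, interval, and ratio), the following sketch uses a single hypothetical patient record; the variable names and values are invented.

```python
# Hypothetical patient record illustrating the four levels of measurement.
patient = {
    "blood_type": "O+",        # nominal: names a category, no inherent order
    "pain_score": 7,           # ordinal: ranked 0-10, but the gaps are not truly equal
    "temperature_f": 98.6,     # interval: equal distances, no true zero point
    "length_of_stay_days": 4,  # ratio: equal distances with a true zero starting point
}

for variable, value in patient.items():
    print(f"{variable}: {value}")
```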

Redundancy: This occurs when information collected is repetitive and no new information is being gathered.

Retrospective Chart Review: A medical record review that collects data to answer a question.

Rigor: The accuracy and consistency in data collection.

Saturation Point: Occurs when no new data are being generated, and the endpoint of the qualitative study is defined.

Six Sigma: A data-driven quality improvement strategy that aims to reduce defects and variation in a process, often through the DMAIC approach.

Systematic Review: Literature review that summarizes evidence by identifying, selecting, assessing, and synthesizing the findings of similar but separate studies.

Transferability: The extent to which results are applicable to other subjects and in other contexts.

References

Agency for Healthcare Research and Quality. (2013). Plan-do-study-act (PDSA) cycle. Retrieved from https://innovations.ahrq.gov/qualitytools/plan-do-study-act-pdsa-cycle

Agency for Healthcare Research and Quality. (2015). Types of quality measures. Retrieved from https://www.ahrq.gov/professionals/quality-patient-safety/talkingquality/create/types.html

Agency for Healthcare Research and Quality. (2016). The six domains of health care quality. Retrieved from https://www.ahrq.gov/professionals/quality-patient-safety/talkingquality/create/sixdomains.html

Agency for Healthcare Research and Quality. (2017). Section 4: Ways to approach quality improvement process. Retrieved from https://www.ahrq.gov/cahps/quality-improvement/improvement-guide/4-approach-qi-process/sect4part2.html

American Nurses Association. (n.d.). Nursing research. Retrieved from http://www.nursingworld.org/EspeciallyForYou/Nurse-Researchers

Baines, R. J., Langelaan, M., Bruijne, M. C., & Wagner, C. (2015). Is researching adverse events in hospital deaths a good way to describe patient safety in hospitals: A retrospective patient record review study. BMJ Open, 5(7), e007380. Retrieved from https://bmjopen.bmj.com/content/5/7/e007380

Brotman, L. M., Dawson-McClure, S., Huang, K. Y., Theise, R., Kamboukos, D., Wang, J., . . . Ogedegbe, G. (2012). Early childhood family intervention and long-term obesity prevention among high-risk minority youth. Pediatrics, 129(3), e621-e628. doi:10.1542/peds.2011-1568

Common cause variation. (n.d.). In iSixSigma dictionary. Retrieved from https://www.isixsigma.com/dictionary/common-cause-variation/

Department of Health and Human Services. (2011). Quality improvement. Retrieved from https://www.hrsa.gov/sites/default/files/quality/toolbox/508pdfs/qualityimprovement.pdf

Glasgow, J. (2011). Introduction to lean and Six Sigma approaches to quality improvement. Retrieved from https://www.qualitymeasures.ahrq.gov/expert/expert-commentary/32943/introduction-to-lean-and-six-sigma-approaches-to-quality-improvement

Hayward, D., Bungay, V., Wolff, A. C., & Macdonald, V. (2016). A qualitative study of experienced nurses’ voluntary turnover: Learning from their perspectives. Journal of Clinical Nursing, 25, 1336-1345. doi:10.1111/jocn.13210

Institute of Medicine. (1999). To err is human: Building a safer health system. Retrieved from http://www.nationalacademies.org/hmd/~/media/Files/Report%20Files/1999/To-Err-is-Human/To%20Err%20is%20Human%201999%20%20report%20brief.pdf

Jenq, G. Y., Doyle, M. M., Belton, B. M., Herrin, J., & Horowitz, L. I. (2016). Quasi-experimental evaluation of the effectiveness of a large-scale readmission reduction program. JAMA Internal Medicine, 176, 681-690. doi:10.1001/jamainternmed.2016.0833

Kabisch, M., Ruckes, C., Seibert-Grafe, M., & Blettner, M. (2011). Randomized controlled trials: Part 17 of a series on evaluation of scientific publications. Deutsches Ärzteblatt International, 108(39), 663–668.

Martin, N. M., Odeh, K., Boujelbane, L., Rijhwani, M. V., Olet, S., Noor, A., . . . Battiola, R. (2017). Positive impact on patient satisfaction and caregiver identification using team facecards: A quality improvement study. Journal of Patient-Centered Research and Reviews, 4(4), 263. Retrieved from https://digitalrepository.aurorahealthcare.org/jpcrr/vol4/iss4/27/

Patient Protection and Affordable Care Act, Pub. L. 111-148, 124 Stat. 119 (2010).

Suresh, K. P. (2011). An overview of randomization techniques: An unbiased assessment of outcome in clinical research. Journal of Human Reproductive Sciences, 4(1), 8-11. doi:10.4103/0974-1208.82352
