Assignment:
Using your learning from this week, see the attachment "W4 Assignment" for detailed instructions and the additional attachment for resources.
· Each response should be a minimum of 150 words.
· Use font size 12 and 1” margins.
· No plagiarism.
· Use APA citation format.
Week 4 Mid-Term
Measurements & Evaluation
Resources: Moseley, J., & Dessinger, J. (2009). Handbook of improving performance in the workplace (Vol. 3). Pfeiffer/Wiley.
To prepare for this Discussion, pay particular attention to the following Learning Resources:
· Review this week’s Learning Resources, especially:
· Read the Week 4 Lecture (see Word document).
· Read Chapters 11–12 (see Word document).
Assignment:
In completing your midterm exam, please be sure that your work follows essay format. Your work should include significant responses that are supported by outside research. Each response should be a minimum of 150 words and should include a reference list. Your responses should include examples and should be entirely in your own words.
· Using your knowledge from the past four weeks, analyze and provide an example of four (4) basic terms associated with measurement and evaluation.
· You have learned that measurement and the performance chain play a significant role in human performance technology. Using your workplace or a company from your assignments, demonstrate how measurement and the performance chain can be applied to help improve organizational success.
· You have learned that training impacts business performance in both positive and negative ways. Summarize how business performance can be impacted in both ways, and provide an example of how a negative impact can be transformed into a positive one leading to organizational success.
· Ethics play a role in performance management. Distinguish the role ethics play and provide a real-world example of an organization that was impacted, positively or negatively, by the ethics of its employees.
· Using what you have learned over the past four weeks, analyze the guidelines, processes, and decisions that are critical to testing. Provide an example of how these concepts pertain to your workplace.
· No plagiarism
Evaluating Results and Benefits – Week #4 Lecture 1
Performance Improvement
Welcome to Week 4. We are officially halfway through the course. This week we will discuss the importance of performance improvement within the workplace. This is an essential topic when considering the success of an organization. Performance improvement is defined as the measurement of the output of a business process. The process is then modified in order to increase the output and/or increase the efficiency or effectiveness of the process. Performance improvement can be used at an individual level or at an organizational level, which makes it an effective tool in generating organizational success (Moseley & Dessinger, 2009).

Performance improvement is considered an organizational change in which management puts a program into place to measure the current level of performance throughout the organization. This allows management to develop ideas that can modify organizational behavior and infrastructure. The intended end result is higher output, effectiveness, and efficiency. In addition, organizational efficacy may be improved, as the measurements can identify goals and objectives that need improvement.

In the workplace, human performance can often be improved by engaging employees in a rewarding experience. By rewarding an employee, behavior can be modified to motivate employees to become more productive. When an employee is motivated, it is easier to direct them toward the goals of the organization, which ultimately leads to success. Rewards do not always have to be monetary. Organizational or departmental competitions might be one way to motivate an employee. Time off, gift cards, and flex time are examples of non-cash rewards that might motivate an individual within the workplace. The goal is to connect employees with rewards as a means of being successful in performance improvement.
Return on investment, or ROI, in training and development can be defined as a means of measuring the economic return generated from an investment as a result of a training program. The returns are then compared against the cost of the program in order to arrive at an annual rate of return on the investment. So, you might be wondering what this has to do with performance overall. The answer is simple: ROI is about judging the investment made in training and development. Customer complaints and returns are also measurements used in ROI, which, in the end, give a solid measurement of the success of a program and/or product. If the program can boost the bottom line, you have a solid program. If not, it is time to reconsider, make changes, and move forward (Moseley & Dessinger, 2009).

Now that you understand performance improvement and the importance of return on investment, it is important to discuss performance testing. After all, this is necessary, as it works hand-in-hand with the above. Performance tests require an individual to perform a task while an evaluator observes. The performance test examines workplace processes to ensure accuracy, efficiency, and reliability. A performance test happens in real time and allows for immediate feedback from the evaluator. In the instance that a task is not working properly, the evaluation will provide ideas for improvement. In addition, once feedback is received, the task can be performed again with the improvements to determine efficiency, and the cycle continues until success occurs.

There are two steps in designing a performance test, and it is important to understand both. Design and development are essential in the creation of a performance test. Design synthesizes analysis data and then specifies a solution. Development builds several testing scenarios in order to determine the best output. Designing is typically the first step, and development follows.
By doing this, you have an opportunity for trial and error to ensure that the performance testing conducted gives you the most bang for your buck. In the end, you will find the greatest results, ensuring that performance meets organizational and industry standards and creating a productive and profitable organization.

Resources: Moseley, J., & Dessinger, J. (2009). Handbook of improving performance in the workplace (Vol. 3). Pfeiffer/Wiley.
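The annual ROI comparison described in this lecture is simple arithmetic: net benefits (returns minus cost) divided by cost. The Python sketch below is illustrative only; the function name and dollar figures are hypothetical, not from the text.

```python
def training_roi_percent(program_benefits: float, program_cost: float) -> float:
    """Annual ROI for a training program, as a percentage.

    ROI (%) = (net benefits / cost) * 100, where net benefits are the
    monetary returns attributed to the program minus its cost.
    """
    if program_cost <= 0:
        raise ValueError("program cost must be positive")
    net_benefits = program_benefits - program_cost
    return (net_benefits / program_cost) * 100

# Hypothetical example: a program that cost $50,000 and is credited
# with $80,000 in measurable returns yields a 60% annual ROI.
print(training_roi_percent(80_000, 50_000))  # → 60.0
```

In practice, the hard part is not this division but attributing dollar benefits to the program in the first place, which is what the measurement techniques in Chapter 11 address.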
CHAPTER ELEVEN Performance-Based Evaluation: Tools, Techniques, and Tips
Judith A. Hale
This chapter focuses on six rules and their associated tools, techniques, and tips for measuring the magnitude of problems and the effect of solutions so that the evaluations are more evidence-based, that is, they are based on actual observations or outcomes, not hypothetical events or hearsay. Collectively, the rules, tools, techniques, and tips are meant to support the evaluation of interventions or solutions designed to improve human performance. Their use increases the chances that evaluation is based on valid information that is useful to decision-makers. Rules are prescribed guides for what to do, when, and why. The rules begin with how to get agreement on what measures and metrics to use as the basis of the evaluation. They conclude with how to present findings to clients to facilitate understanding and decisions. Tools are instruments used in the execution of a task. They are a means to an end. Techniques are suggestions about how to carry out a task or make better use of a tool usually with the intent of saving time or reducing error. Tips are bits of expert advice intended to make the application of a rule or the use of a tool easier. Tools, techniques, and tips are meaningless without rules; likewise, rules without tools, techniques, and tips are difficult to apply.
THE RULES
The rules for evaluating needs and solutions based on facts or evidence are
1. Get sufficient clarity—Have clients explain what they perceive as a need or goal in detail. The factors and observations they are using as a basis for determining there is a problem are the same factors they will use to judge improvement or success. Clarity about the details facilitates gaining consensus about the need and the evidence.
2. Set a baseline—Set a baseline or describe the current state of affairs sufficiently so that improvement can be measured. Clients cannot determine whether circumstances have changed unless they have something against which to compare the new situation.
3. Leverage data already being collected—Leverage data the client already has to measure whether change is happening and the desired level of improvement occurred. This saves time, reduces the cost of evaluating, and increases the likelihood the evidence will be accepted.
4. Track leading indicators—Leading indicators are the presence of interim behaviors or results that predict results if they continue. When clients track leading indicators, they are in a better position to take corrective action in time to make a difference.
5. Analyze the data—Examine the data for patterns, frequency, and significance so they guide future decisions. The analysis should lead to insights and better understanding of the current situation and how much change has occurred.
6. Tell the story—Communicate the logic behind the decision and the evidence used to measure the effectiveness of the solution. This will facilitate commitment to the solution and meaningful dialogue about the need for any next steps to further support improvement.
The rules are somewhat linear or similar to a procedure; however, it helps to have a deeper understanding of some of the more common performance improvement measures and metrics to use them efficiently.
MEASURES, METRICS, AND EVIDENCE
In the world of learning and performance, evaluation is the act of passing judgment on the value of a problem and its proposed solutions. Measurement is the act of gathering data and then using what is found out as a basis for decisions as to the worth of a problem and the value of a solution. Measures are the attributes that the people doing the evaluation pay attention to when making a judgment, such as customer service, timeliness, security, return on investment, and so on. Metrics are units of measurement, such as how frequently a behavior occurs, how long before a behavior appears in seconds or hours, how many checks or levels of approval there are, and how much money is gained in hundreds or thousands of dollars. For example, if a client wants to measure customer service, the metric might be how frequently people exhibit the previously determined desired behaviors. If the measure is time, the metric may be years, days, or milliseconds, depending on the circumstances. Taken together, measures and metrics are what people accept as evidence that there is a problem and that circumstances improved after a solution was imposed.
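The measure/metric distinction above pairs naturally: each attribute being judged (the measure) gets a unit for judging it (the metric). The small sketch below is a hypothetical illustration of that pairing, using examples drawn from this section; the variable name is invented.

```python
# Hypothetical pairing of measures (what is judged) with metrics
# (the units used to judge it), following the chapter's definitions.
evidence_plan = {
    "customer service": "frequency of desired behaviors exhibited",
    "timeliness": "lapse time in days between request and delivery",
    "return on investment": "dollars gained per dollar spent",
}

for measure, metric in evidence_plan.items():
    print(f"Measure: {measure} -> Metric: {metric}")
```

Writing the pairing down explicitly, as Tool 1a does in table form, is what forces agreement on what will count as evidence.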
1. Get Sufficient Clarity
The first rule is to get sufficient clarity as to what stakeholders are using as evidence that a need exists and what information they will accept as proof that performance improved. A desired by-product of getting clarity is consensus among stakeholders as to the importance of the need and what they will accept as evidence of improvement. Clients typically dictate solutions, such as training, coaching, new software, or a change in personnel to improve performance. They may assume the basis for the request is obvious and accepted by others. However, until the information on which they are making the request is explicit, it is difficult to determine whether there is agreement or whether there is sufficient evidence to warrant action. The best time to help clients articulate the basis for their request is at the time of the request. There are tools, techniques, and tips to help clients better articulate or express what they are using as evidence a need exists or what they will take as evidence that the situation improved as a result of some intervention.
Tool 1a: Getting Clarity. A simple but effective tool is shown in Table 11.1, Getting Clarity. It can be a spreadsheet or table that lists the problem and the evidence in different columns. Clients use it to capture what is known and what is suspected. The Issue column is where clients list the problem they are concerned about. The Evidence column is where clients note what information they are using as a basis for their conclusion that there is a problem and how pervasive it is. It helps clients connect the problem with the evidence. For example, the issue might be customer complaints, turnover of key personnel, or cost overruns. The questions then are about how clients know these are the issues. Tool 1a, Getting Clarity, as shown in Table 11.1, has examples in it. However, when using the tool, put only the information in each column that is relevant to the situation.
There are at least two ways to use Tool 1a: (1) ask questions and fill it out based on what is learned or (2) prepare it ahead of time using one’s best guess or past experience.
Technique and Tip 1. Ask Questions. A simple technique is to probe, simply asking for more information about the logic behind the request. For example, if clients are told that they seem to have given the situation a lot of thought and that the goal is not to waste their time or misuse their resources, they may be more willing to openly discuss the basis on which they decided there was a problem. They may be more willing to share what led them to the conclusion that an action or a solution was needed. The intent is to get clients to explain what they have seen happening that convinced them a solution is needed and what behaviors will convince them that the situation has improved. A tip is to position the questions as a desire to save time, avoid mistakes, and use resources wisely. Most people are willing to share their experiences and reasoning if the request is not experienced as a statement of doubt about how they made the decision but rather as a genuine interest in better understanding the problem.
Technique and Tip 2. Come Prepared and Have an Organizing Scheme. It is best to have measures and metrics already in mind before discussing a problem or a solution. This is easier to do when one has more experience with a client or a performance problem. The list of measures and metrics is used to facilitate a more robust conversation with clients. A technique that supports this tip is to develop an organizing scheme for measures that quickly presents a mental image or reference point about how to evaluate a need or a solution. Table 11.2, Function and Program Measures, presents one way of organizing measures. It separates measuring a function’s worth from measuring a solution’s worth. It also suggests measures that clients may already be thinking about but may not express.
Measures of Contribution. These measures are used to judge the overall degree to which the learning and performance function adds value to the organization. Examples of contribution measures might be:
1. Alignment—The degree to which clients see the link between the actions being proposed and their needs being met. The metric might be the number of programs explicitly tied to major initiatives.
2. Productivity—The degree to which clients see how much was delivered and how timely the work was done. Metrics might be the number of programs produced within a year and the lapse time in days or weeks between the request and the delivery.
3. Cost competitiveness—The degree to which clients see cost-competitive resources being used wisely. Metrics might be the number and cost of internal and external resources used to develop solutions.
4. Customer relations—The degree to which clients experience the learning and performance improvement function as easy to work with. Metrics might be the average rating of customers’ opinions on a survey and the number of anecdotes commending the function’s work.
Program Measures. These are the factors clients consider when judging the worth of specific products, programs, and services. They might include:
1. Satisfaction—How satisfied stakeholders are with the current state and how satisfied they are after implementing the solution. Metrics might be the average rating of opinions on a survey and the standard deviation (the amount of variance) among those opinions.
2. Learning—How proficient workers were before a solution was implemented compared to after it was implemented. Metrics might be pre- and post-test scores, how frequently completed work met standards, and how quickly tasks were done.
3. Transfer or behavior change—How many people’s behavior changed after the solution was implemented and how quickly did it change. Metrics might be the frequency of discrete behaviors and how many days it took for those behaviors to show up consistently.
4. Goal accomplishment—To what degree did the solution deliver on the promise? The metric depends on the goal. If the goal was increased sales, the metric might be the number of proposals accepted or the number of leads that converted to sales.
5. Time to proficiency—How long it takes to bring people to proficiency before a solution is implemented compared to after. The metric might be the quantity of work performed within a given time frame, the accuracy of work, or how quickly people can do the work to standard without supervision.
6. Cost of proficiency—What it costs in time and dollars to bring a workforce to proficiency and how much it would cost to increase the level of proficiency. The metrics might include the fee for external resources compared to the aggregate cost of using employees, such as salary, benefits, facilities, equipment, and so forth.
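Item 1 above names two concrete metrics for satisfaction: the average rating of survey opinions and the standard deviation (the amount of variance) among them. These can be computed directly, as in the sketch below; the ratings are invented for illustration.

```python
import statistics

# Hypothetical five-point satisfaction ratings gathered after a program.
ratings = [4, 5, 3, 4, 4, 2, 5, 5]

mean_rating = statistics.mean(ratings)   # the average opinion
spread = statistics.stdev(ratings)       # sample standard deviation:
                                         # how much opinions vary

print(mean_rating, round(spread, 2))
```

A high average with a large standard deviation tells a different story than the same average with near-zero spread: in the first case some stakeholders are quite dissatisfied, which the mean alone would hide.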
Measures by Level. Table 11.3 is another example of how to organize issues, that is, at the workplace, work, or worker levels. The issues listed are examples.
Each identified issue then lends itself to questions about what the evidence is to determine whether there is a problem and what can be used to measure improvement. For all three levels, the measures and metrics might be the frequency of rework, misused resources, loss of talent, and the like. What may be different are the cause and the solution. When clients are given a menu of measures and metrics, they are in a better position to pick the ones that are most relevant, accessible, and would help them make better decisions. Having an organizing schema and using tools like that shown in Table 11.1 also allow clients to add metrics meaningful to their situation. In the process it will become clear on what basis clients currently judge that there is a problem, the adequacy of the work done to address those problems, and the value of the solutions.
2. Set a Baseline
The baseline is simply the current state of affairs. Without this information there is little or no basis for determining whether circumstances improved as a result of an intervention or solution. The tool used to gain clarity (Table 11.1) can be expanded to record the baseline by simply adding another column, as shown in Table 11.4. The second column lists what is being used as evidence of a need, and the third column is where the baseline is recorded. Table 11.4 has examples of the type of information one might capture in the Getting Clarity tool.
Technique and Tip 3. Do Not Be Afraid of Fuzzy Data; Instead, Improve It. Sometimes, in the desire to be precise, people too easily reject or are suspicious of information about the current state of affairs because it is old or the client has doubts about its accuracy. A technique that helps is to take the data in whatever condition they are and suggest using the solution as an opportunity to get better data. For example, when clients say “Yes, but…” to the suggestion to use customer satisfaction survey results as a baseline, first discuss what other data might be available. For example, in the retail industry the number of returns and aging receivables might be used to augment customer satisfaction data. In the financial services industry, the number of referrals and renewals of contracts for services could augment customer satisfaction data. Next, acknowledge that the data may be incomplete, but offer that they still provide a baseline, and future measurement will produce better data. Finally, offer suggestions about how to get more accurate baseline data, such as leveraging data from other sources.
3. Leverage Data Already Being Collected
One of the frequently cited excuses for not evaluating program effectiveness is the argument that evaluation costs money and takes time. What people unfortunately conclude is that they lack the money and the time to measure change efficiently or cost-effectively. The argument presumes the measurement has to start from scratch or the beginning, so to speak. However, if clients leverage the measurement that is already occurring, they can save time and avoid unnecessary expenses. Table 11.5 has examples of measurement activities commonly done. All of these measures could be leveraged to identify needs, set baselines, and measure improvement.
Table 11.5 Typical Ongoing Measurement
· Annual employee morale survey, usually done by human resources
· Customer satisfaction survey, done by marketing
· Exit interviews, done by human resources
· Safety reports, usually done by safety or quality control
· Call center technical and customer support call sheets, usually done by the call centers themselves
· Periodic compliance studies, done by internal audit or quality assurance
· Aging receivables report, usually done by accounting, specifically accounts receivable
· Sales logs with number of calls and who was called, usually done by sales staff or their managers
Technique and Tip 4. Assume the Data Already Exist. Most organizations collect an immense amount of data about their costs, operations, customer satisfaction, and the like. Therefore, the tip is to assume someone in the organization can already produce meaningful measures and metrics. The technique is to partner or collaborate with other departments that already capture different types of performance data. Going back to Tool 1a, the table has a column for current evidence. These are the data the organization is already getting. A question clients might be asked is, “How do they use these data to measure change or improvement?” A challenge might be getting access to the data. A tip is to offer to help the other departments get better data or help them argue their case for greater management support.
4. Track Leading Indicators
Leading indicators are data that predict success or failure. A common mistake is to wait until a lot of time has passed before determining whether circumstances improved. Waiting also results in lost opportunities to reinforce a solution or to take corrective action. Examples of leading indicators have been added to Tool 1a, Getting Clarity, in Table 11.6. Another column can be added, or the baseline data column can be replaced with one for suggested leading indicators. In most instances, the data sources for the leading indicators are the same as for the baseline but are captured and reported more frequently.
In other instances, the information sources for the leading indicators are not the same as for the baseline, and, therefore, would have to be collected. Here are some examples:
If the goal is for employees to get more timely feedback on their performance, on the premise that this will result in fewer grievances and improved efficiencies, and the solutions include asking supervisors to do more frequent performance reviews, redesigning the performance review form, automating the process, and training supervisors on how to use the form, then the leading indicators might be:
· The number of supervisors asking for technical support to use the new system each month
· The number of reviews posted on the automated system monthly
· The number of employees reporting that they got reviews in the last thirty or sixty days
If the goal is to improve customer retention, on the premise that it will increase profits or margins and cash flow because repeat customers require less technical support and are more likely to buy more product and buy it more quickly, and the solutions are to offer technical training to customers, to certify customers who complete the training, and to have account executives call customers more frequently, then the leading indicators might be:
· The number of customers requesting information about the training and certification
· The number of customers participating in the training and applying for certification
· The number of account executives calling key customers more frequently
· The average sales cycle times of customers who sign up for training and are eventually certified, compared to those who do not sign up for training
· The frequency and duration of technical support to clients who are participating in training and later certified, compared to those who are not trained or certified
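Tracking a leading indicator means sampling it at short intervals so that a flat or falling trend surfaces while there is still time to act. A minimal sketch, using hypothetical monthly counts of one indicator from the first example above (reviews posted on the automated system):

```python
# Hypothetical monthly counts of a leading indicator: performance
# reviews posted on the automated system. A negative or flat
# month-over-month change is an early signal to take corrective action.
reviews_posted = {"Jan": 42, "Feb": 58, "Mar": 55, "Apr": 71}

months = list(reviews_posted)
changes = [reviews_posted[b] - reviews_posted[a]
           for a, b in zip(months, months[1:])]

print(changes)  # → [16, -3, 16]
```

Here the March dip would prompt a check-in with supervisors months before an annual grievance report could reveal a problem, which is the whole point of Rule 4.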
Technique and Tip 5. Think of Leading Indicators as Formative Evaluation or a Way to Measure Transfer. Typically, formative evaluation occurs before the launch of a product or program. It is done to confirm the usability of a solution and the accuracy of the information. However, formative evaluation can also be done after the launch to measure usability rates and the target audience’s initial perceptions. In this case what is being measured is the rate of transfer; the goal is to identify early what needs to be done to increase usage and overcome resistance. Unfortunately, organizations often invest in a program, launch it, and then believe the target audience will automatically use it or adopt the new desired behaviors without further intervention. However, if the target audience does not use the program or adopt the new behaviors in a timely fashion, the odds are they will not do it later. Therefore, formative evaluation that is done after the launch can increase the odds that a program will be successful. A technique is to more purposefully do post-launch formative evaluation, or measure transfer, and to decide ahead of time what indicators to use to measure acceptance and resistance. In this case, the indicators become leading indicators or predictors.
Technique and Tip 6. Use Self-Report and Let People Know It. Self-report is the process of asking a target audience to report on its own behavior. Should someone question the validity of self-report, there is some research that shows it is valid (Norwick, Choi, & Ben-Shachar, 2002). The technique is to survey the people whose behavior is expected to change, usually the target audience of the solution. For example, if the solution was for people to use a procedure, system, or performance support tool, simply ask them how frequently they are using it. A tip is to let people know in advance that they will be asked at some time in the future about their usage. Another tip is to be sure to get permission from the target audience’s supervisor to solicit their input and then be sure to tell them permission was granted so they know any future questions are legitimate. Tool 2a, as shown in Table 11.7, suggests a five-point scale for surveying a target audience and possible questions.
Technique and Tip 7. Poll Vested Parties to Confirm Self-Report Data. Should clients continue to doubt the self-report data, a technique is to confirm the results by polling others who have a vested interest in the adoption of the new behaviors, such as supervisors, team leads, or customers. However, let people know in advance that they will be asked for their observations.