Compare and contrast the potential design trade-offs involved when designing a specific decision support or expert system:
Directly from tools vs. using a DSS appliance or expert system shell.
Need 2-3 pages with peer reviewed citations in APA format.
7 DESIGNING A DECISION SUPPORT SYSTEM
At this point, you may be sold on the idea of decision support systems. You believe they are important, and you want to include them within the assets of your department or organization. The next logical question is how to start. The answer is a clear and unequivocal, "it depends."
The best approach depends upon the kind of systems already in place and the intended focus of the DSS. As with any good systems analysis and design process, it is important to understand the needs of the application and to select the models, model management system, databases, database management system, and user interface in a manner that best meets the needs of that application. Successful DSS can be built on almost any kind of platform with almost any kind of software, but it is crucial that the choices fit the application. Selecting tools and vendors before understanding the problem or forcing tools to meet needs after the fact will certainly lead to failure.
The physical design of a successful DSS must follow a logical design, which in turn must be guided by the decision-making process. In particular, designers should ask the same fundamental questions as those on which reporters rely:
• Who needs the DSS?
• What advantages does the user expect by using the DSS?
• When will the DSS be used?
• Where does this system fit into the general business process?
• Why is a DSS needed?
• How will the DSS be used?
Decision Support Systems for Business Intelligence by Vicki L. Sauter Copyright © 2010 John Wiley & Sons, Inc.
While these questions seem obvious, we must keep returning to them as a reality test that the system is providing support for decisions.
Unfortunately, the systems development life cycle approach, which provides a reliable framework in which to design transaction processing systems (TPS), generally does not work for DSS design. Unlike TPS, DSS typically have fuzzy or even wicked problem definitions that change substantially over time. In addition, since DSS support decision making, generally that of higher level managers, their design is highly subjective and subject to change. Since such managers have less time and less inclination to attend training sessions, it is necessary to create a system that has lower training needs than those generally associated with TPS. Finally, it is difficult to determine with certainty that a DSS works properly for all applications. Test data sets and problem scenarios can be developed for TPS and run against a system to determine whether it works properly. But, by their very nature, which is to be flexible and allow decision makers to use them as best fits their decision styles, DSS cannot be "tested" to ensure that they always work properly.
Therefore, DSS require a different approach to design. The process and the product must relate to the constraints of the domain in which the DSS will be used. Gachet and Sprague (2005) remind us that there must be tangible improvements in the life of the decision maker to justify using the system. The DSS must make it easier to get data, improve knowledge management, and improve outcomes if it is to be used. The faster the DSS demonstrates its value, the faster it will be adopted. If those factors are to be realized, they argue, designers must use a context-based development life cycle for DSS design. This methodology emphasizes the following:
1. Identify Requirement Specifications Based on Contextual Issues. The first step in design is to identify the user interface requirements from the end users. In addition, at this stage designers must identify needs for data integration that will improve the process and determine where the system needs to align with the existing workflow.
2. Preliminary Conceptual Design. There must be an emphasis on inputs and outputs from the end-user requirements: what do they need and how must it be represented. Also in this step designers identify specific hardware and software requirements and identify specifications for databases.
3. Logical Design and Architectural Specifications. In this stage, designers begin to specify user interfaces. Using early prototyping, they can compare their understanding of the interface needs with the users to ensure they understood the message correctly. In addition, designers must specify the procedures for obtaining data and sharing it with others and the distributed architecture required for appropriate levels of integration of the DSS with other systems. Finally, designers must model data and the strategic design as well as develop procedures for maintenance and backup of the system.
Design Insights: Picking a Team
As in any large-scale, important application, the question of who should do the development may be critical. Often project teams are hand-picked members of the staff who are pulled together especially for their ability to respond to a particular need. They are thought of as a SWAT team in that they develop the DSS and then return to their separate departments. If they are successful, they are often called upon for the next important application. Especially with the design of DSS, there are sometimes subtle elements of group synergy that lead to success for the group in one application but not in others. Unfortunately, what leads to such success in high-performance projects is not well understood.
4. Detailed Design and Testing. While testing is important in any design process, in this methodology, the emphasis is on testing the system with the end users and testing the integration of the system with the decision makers' functions. That means we need to test if individual decision makers can use the system and if it flows nicely in their workflow process. Of course, this also includes testing the resilience, reliability, and scalability of the system and its performance under specific failure scenarios.
5. Operational Implementation. In this stage, the system is made operational in a subset of the decision makers' world. Systems are linked to appropriate parts of the data warehouse and then are made available for use by decision makers. Those decision makers involved in the test are trained and receive access to the system.
6. Evaluation and Modification. The system is evaluated in terms of its overall user acceptance, system integration, architecture resilience, and scalability. The system is then modified before deployment across the organization.
7. Operational Deployment. Final changes are made in the system and it is distributed to all users after training. This stage includes continuous monitoring of both technical problems in the operation of the DSS and patterns of use that might suggest problems.
Pick (2008) further addresses the process of Gachet and Sprague's first step of requirements definition. He states that it is important to elucidate the value of the DSS before beginning to build the system. Pick's argument is that the benefits of a DSS are often much more subtle than decision makers expect, and so it is important to sensitize them to the benefits that might be expected as the process starts. In addition, consideration of the benefits early in the process will help decision makers develop a better understanding of the opportunities that might be built into the system. He suggests questions such as the following (Pick, 2008, p. 725):
• If we will be better able to cope with large or complex problems, how much may that ability be worth?
• If the system will allow greater exploration and discovery, how much might the resulting insights be worth?
• If there is better knowledge processing, how is this beneficial? If the system provides better understanding of a problem, can anyone judge the costs of incomplete understanding?
So, how would a designer know when he or she has a good DSS? Arnott and Dodson (2008) provide a simple model of what impacts DSS success, as shown in Figure 7.1. They bring two basic concepts to a methodology for designing DSS. First, like Gachet and Sprague (2005) and Pick (2008), they say the system must be comfortable for the user and improve decision making. Arnott and Dodson represent these concepts with "user satisfaction" and "impact of the system." Notice that they show user satisfaction impacting use. What this implies is that if the users do not see the benefit of the system, find it too difficult to use, or do not find the information and models they perceive they need to complete their decision, they are likely not to use the system at all. Clearly, even if the system could have a significant impact if it were used, it will not be a success if users do not find what they need.
Arnott and Dodson (2008) also identify 10 critical success factors that need to be satisfied to ensure both use and success (pp. 770-771):
• There is a committed and informed executive sponsor.
• There is widespread management support.
Figure 7.1. A model of DSS success. (Adapted from D. Arnott, and G. Dodson, "Decision Support Systems Failure," in F. Burstein and C.W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. I, Berlin: Springer-Verlag, 2008, p. 768.) Image is reprinted here with permission.
• The design team has appropriate skills.
• The design team uses appropriate technology.
• The design team has adequate resources.
• There is effective data management.
• There is a clear link with business objectives.
• There are well-defined requirements.
• The system is allowed to evolve in development.
• The design team manages project scope.
These critical success factors mirror those generally accepted for system design. In particular, they highlight that the success of the DSS is dependent upon its being aligned with business objectives and the technology plan of the organization. This will be discussed in the next section.
These methodologies, however, identify decision makers being comfortable with the system as the critical component to DSS success. It has been said that most users would rather live with a problem they cannot solve than use a solution they cannot understand. Thus making the DSS too "black box" or difficult to use will make it an instant failure. Of course, designers need to know what factors will make the system easy to use and comfortable to use. Norman (2007, p. 93) identifies six design rules:
• Provide rich, complete, and natural signals.
• Be predictable.
• Provide good conceptual models.
• Make output understandable.
• Provide continual awareness without annoyance.
• Exploit natural mappings.
You will notice that this list tells us that understandability and requiring the system design to follow the decision process are important aspects of good design. If the system is predictable, cues (that guide the operations of the system or the evaluation of information) are informative, and the output is presented in a clear and useful manner, the decision maker is likely to use the DSS. Norman's emphasis is on providing a comfortable metaphor for the system to which the user can relate. If the metaphor is right, then the procedures will be understandable and the signals will be informative. In addition, he says that there should be ubiquitous, yet nonobtrusive help available to the user.
Norman further emphasized rules of good design from the perspective of the system that mirrors the themes of understandability and congruence with the decision process:
• Keep things simple.
• Give people a conceptual model.
• Give reasons.
• Make people think they are in control.
• Continually reassure.
• Never label human behavior as "error."
As a field, we tend to forget the most important design rule—keep everything simple. This means the user interface, the processes needed to use the system, and the output. Removing clutter and the newest but unnecessary gadget will encourage users to focus on the important forms of support the system has to offer. In addition, these principles remind us that the decision maker, not the DSS, ultimately will make the choice among alternatives. The system must provide support and work in the way the user needs or the decision maker will not use the system. Helping to make the system more predictable and more like a trusted assistant will encourage decision makers to utilize its power. This includes the specific attributes of the data, models, and user interface discussed in previous chapters.
PLANNING FOR DECISION SUPPORT SYSTEMS
In an ideal world, a multilevel plan guides the development of new DSS, such as that described in Figure 7.2. The plan provides specifications for a specific DSS, in terms of the way it interacts with the rest of the business processes, the kind of information that it will provide, and its relative importance to the growth of the organization.
The specifications for DSS begin with the corporate strategic or long-range plan. A strategic plan defines where the corporation expects to change its products or processes and during what time line and provides direction to management of the corporation as a whole. The MIS master plan, in turn, inherits its priorities and concerns from this corporate strategic plan. The information system (IS) plan provides guidelines for prioritizing requests for maintenance of existing systems and creation of new systems. In particular, it describes the priorities for hardware, software, and staff necessary to respond to corporate strategy plans. The IS master plan specifies modifications and maintenance of legacy systems, creation and implementation of new systems, and diffusion of technology within the organization. It should provide a plan for regular updating and other maintenance. Finally, it should provide specifications for how staff should proceed in the creation of systems.
The DSS plan derives its priorities from the IS plan. Its goal is to coordinate future implementations in the broadest possible way to ensure that all decision making is supported
Figure 7.2. Ideal planning.
in an appropriate way while planning for the reuse of code, flexibility for the future, and the greatest potential for growth. In particular, the DSS plan should help answer questions such as those posed by Sprague and Carlson (1982):
• How can current needs susceptible to DSS be recognized?
• How can the likely extent of their growth be assessed?
• What types of DSS are required to support the needs, now and in the future?
• What are the minimum startup capabilities required, both organizational and technical?
• What kind of plan can be developed to establish the long-term direction yet respond to unanticipated developments in managerial needs and technical capabilities?
The DSS master plan would provide direction in the selection of hardware and software and for integration with current systems. In addition, it could include a process for the creation of reusable libraries of code that future designers could embed into similarly operating DSS.
Designing a Specific DSS
Where DSS master plans exist, there is already some guidance in how to proceed. More often than not, however, such plans do not exist. Then, designers must judge for themselves how the DSS will fit into corporate plans and how it will interact with other systems. The methodology described in Figure 7.3 will help designers ensure they get the best fit. Note
• Initial analysis — Goals: identify key decisions; identify key information needs. Concerns: theoretical or conceptual needs; industry-based needs; corporation-based needs; decision-specific parameters.
• Situation analysis — Goals: understand the organizational setting; understand the task; understand the user characteristics.
• System design — Goals: logical design; system construction; system evaluation.
• Implementation — Goals: demonstration; training; deployment.

Figure 7.3. DSS design methodology.
that it differs from the traditional systems development life cycle (SDLC) approach in that it puts much more emphasis on determining what information needs to be provided and in what fashion.
In the first stage, the designer learns the decision needs and environment. Designers must know the key decisions under consideration by the decision maker and the related information needs if the DSS is to be a tool that supports decisions. Then the designer can begin to examine the parameters needed for consideration. Sometimes these parameters will be easy to identify. For example, one key issue for investment executives is what investments will provide the best returns. Knowing that, they need to consider return, relative risk, tax advantages, term of return, and other fiscal parameters. On the other hand, a chief executive officer's key issue might be how to prevent a leveraged buyout or how to strategically acquire a new vertical market. In this situation, even knowledge of the key decision does not reveal the information needs well.
Interviewing Techniques. Often designers learn decision makers' needs by interviewing them. There are many ways of conducting interviews, each of which provides different kinds of information. For example, consider the interviewing styles noted in Figure 7.4 and discussed below. Interviews can be structured, unstructured, or focused. They can follow case studies or protocol analysis. Finally, they can utilize tools such as card sorting and multidimensional scaling.
The benefit of interviews is that they provide access to information or a perspective on information that only the decision maker can provide. In both the structured interview and the focused interview, the designer is interacting with the decision maker to obtain information regarding a prescribed set of topics. This interaction might be in a face-to-face setting, over the telephone, via computer, or by a pen-and-paper questionnaire. Generally the richest information can be gleaned in a face-to-face setting in a neutral location (away from the interruptions of the decision maker's normal activities). Good results can be achieved with intelligent computer questionnaires (that move through the questions as a function of the answers already provided); unfortunately, it is generally too expensive to develop this software for a one-time use.
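The "intelligent" questionnaires mentioned above can be sketched as a small branching question graph in which each answer selects the next question. The questions, answer keys, and routing below are invented for illustration; they are not from the chapter.

```python
# Hypothetical adaptive questionnaire: each node holds a question and a map
# from possible answers to the next question to ask.
questions = {
    "q1": {"text": "Do you review investment risk before return?",
           "next": {"yes": "q2_risk", "no": "q2_return"}},
    "q2_risk":   {"text": "Which risk measure do you rely on most?", "next": {}},
    "q2_return": {"text": "Over what horizon do you evaluate returns?", "next": {}},
}

def run_interview(questions, answers, start="q1"):
    """Walk the question graph, choosing each next question from the
    respondent's earlier answers; stop when no branch applies."""
    path, current = [], start
    while current:
        path.append(current)
        current = questions[current]["next"].get(answers.get(current))
    return path

# A respondent who answers "yes" to q1 is routed to the risk follow-up:
print(run_interview(questions, {"q1": "yes"}))   # ['q1', 'q2_risk']
```

The branching structure is what makes such questionnaires expensive to build for one-time use: every answer path must be authored in advance.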
The degree of structure we build into the interview depends upon the specificity of the information we seek. A structured interview is one in which the questions and the order in which they will be asked are prescribed. The interviewer seeks short answers that provide specific information. A focused interview, on the other hand, is relatively unstructured. In this case, the interviewer also has a set of questions and an order for asking the questions. However, the questions are more general, allowing the respondent to drive the direction of the discussion. The interviewer must be prepared with probing questions that help the respondent to focus on salient points.
Figure 7.4. Interviewing techniques matrix, mapping each technique — structured interviews, focused interviews, case study interviews, protocol analysis, card sorting, and multidimensional scaling — to the kinds of knowledge it elicits: concepts (raw concepts, concept definitions, concept structures), problems (major tasks completed by the decision maker, problem types), solutions (possible outcomes, solution types), and problem-solving steps (general and specific).
The more structured the interview, the greater the chance that the decision maker will provide precisely the information sought. However, the more structured the interview, the less likely the decision maker will provide insights the designer had not considered previously. Therefore, if the designer is relatively uninformed about the choice process or the decision maker's tendencies, the focused interview will allow for greater probing of new avenues and hence greater understanding of the relationships between tasks and concepts and why the procedures are sequenced in a particular fashion.
A protocol analysis is a different kind of interview because the interviewer does not set even the basis of the discussion. Instead, respondents complete their typical choice processes (including seeking information, generating alternatives, merging information, modeling, sensitivity analysis, and other tasks included in the process). In order to communicate what is happening and why it is happening, the decision maker verbalizes each task and subtask and how a decision is made to move to another task. Usually, the interviewer does not intervene but just records the descriptions provided. Protocol analysis is a valuable tool because it helps the designer understand what the decision makers actually do in the choice process, not what they perceive they do. This can be important because often the decision maker is not aware of the actual tasks and hence cannot communicate them; this can be a particular problem with very experienced decision makers, as discussed in Chapter 2.
Other Techniques. Both the card-sorting technique and multidimensional scaling require the decision maker to perform some task from which the designer infers the preferred information and models. "Card sorting" refers to any task (whether or not one actually uses cards, even if one uses a computer simulation, such as that shown in Figure 7.5) in which the decision maker iteratively sorts and combines things or concepts to determine a point of view. For example, if the choice situation involves loan applications, the decision maker
Figure 7.5. Card Sorts simulation.
would sort a set of loan applications into multiple piles (perhaps "acceptable," "borderline," and "unacceptable"). After the decision maker is comfortable with the similarity of the loan applications in each pile, the designer analyzes the applications, with the help of the decision maker, to determine the bases for the sorting. In other words, by noting the similarities and differences among the applications within piles and between piles, the designer can glean the set of criteria and standards for applying them. This helps the designer to understand how to provide information and models for the decision maker.
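The between-pile comparison can be sketched in code. Assuming, hypothetically, that each application is reduced to a few numeric attributes, averaging the attributes within each pile makes the criteria that separate the piles visible; the applications and pile labels below are invented.

```python
# Hypothetical card-sort result: loan applications already sorted into piles
# by the decision maker. Attribute values are illustrative only.
from statistics import mean

piles = {
    "acceptable":   [{"income": 95, "debt_ratio": 0.18},
                     {"income": 80, "debt_ratio": 0.22}],
    "borderline":   [{"income": 55, "debt_ratio": 0.34},
                     {"income": 60, "debt_ratio": 0.31}],
    "unacceptable": [{"income": 30, "debt_ratio": 0.55},
                     {"income": 42, "debt_ratio": 0.48}],
}

def pile_profiles(piles):
    """Average each attribute within a pile; attributes whose averages
    order consistently across piles are candidate sorting criteria."""
    return {label: {attr: mean(card[attr] for card in cards)
                    for attr in cards[0]}
            for label, cards in piles.items()}

profiles = pile_profiles(piles)
print(profiles)
```

Here income falls and debt ratio rises from "acceptable" to "unacceptable," suggesting both are criteria the decision maker applied; the designer would confirm this interpretation with the decision maker.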
Multidimensional scaling is a similar process in which decision makers are asked to rate items as being similar or dissimilar. It differs from card sorting in that it forces the decision maker to make choices among less complex alternatives. For example, rather than asking whether or not an entire application is acceptable, the designer would ask the decision maker to compare two candidates with particular incomes or particular debt ratios with regard to their risks as loan candidates. Designers pose a large number of combinations and analyze the data mathematically to determine the criteria being employed by the decision makers. Unfortunately, the factors driving the decision often are not obvious or they lack face validity. Hence the exercise can result in no useful information.
To identify more information needs, designers research the specific kind of decision under consideration. For example, they can identify some informational needs by studying the conceptual and theoretical bases for decisions, such as those covered in business school classes. From such an analysis, designers could identify that investment executives need to consider term of investment, relative risk, tax advantages, and other fiscal parameters in addition to the fundamental question of return on investment. This would provide a starting point for identifying additional needs. Alternatively, designers can gain insight by learning about the industry in general. For example, designers of a DSS for a pharmaceutical firm could gain insights by examining the creation, approval, marketing, and selling processes for drugs. Issues such as testing, purity, reliability, and statistical confidence levels would become evident. Such topics would likely have a home in any DSS in such a firm. Finally, designers could examine copies of reports, memos, transactions, and models to identify additional needs. This is comparable to an archeological analysis of the context from which inferences about needs can be drawn.
Influence Diagrams. It is important to be sure that all of the critical factors are represented in a DSS. Hence, designers often rely upon tools like influence diagrams to help them keep track of the range of information that is needed in a DSS. This popular decision analysis tool helps to identify and to clarify the variables that might be considered as well as the information needed to assess the variables. For example, suppose that a designer is developing a DSS to help investors. As a starting point, the designer knows that there are quantitative models that can be used to describe the financial market and to forecast changes in the market. Similarly, the designer knows that the decision makers will rely on some expert judgment about the financial market. These quantitative and qualitative factors will influence the decision about how to invest. Of course, even with the best forecasts and qualitative judgments, there may be sudden changes in the many factors that influence the market, including events that change assumptions or even news that appears to change those assumptions. In other words, the range of information and models that need to be included in the DSS for this relatively straightforward situation can be very complex. Designers, then, use influence diagrams to keep track of the factors that need to be included in a DSS.
Influence diagrams have few symbols and rules and so are easy to draw once the conditions are well understood. First one must consider the variables of the decision itself. There are decision variables that are controlled by the decision maker, outcome variables
• The decision.
• A chance variable that is out of the control of a decision maker.
• The objective of the decision. This is the variable the decision maker is attempting to maximize or minimize.
• A deterministic function of the quantities that depend on it, or an intermediate variable.
• Influence.

Figure 7.6. Influence diagram symbols.
that represent the outcome of the decision, exogenous variables that influence the decision but are not under the control of the decision maker, and intermediate variables that are evaluated between the decision and the outcome. To map those into an influence diagram, consider Figure 7.6, which shows the symbols that can be used in an influence diagram. As shown in this figure, there are symbols corresponding to a decision (a rectangle), exogenous variables (an ellipse), intermediate variables (a rounded rectangle), the outcome variables (an elongated hexagon), and the influence (an arrow). Using these symbols, one can diagram the factors needing representation in the DSS. Consider again the example DSS in the previous paragraph. These relationships are shown in an influence diagram in Figure 7.7. See that the ultimate goal is (to maximize) profit (as shown by the hexagon). The decision that will impact profit is the investments shown in the center (rectangle). The decision maker comes to the decisions about investments after consideration of the quantitative models (the left rectangle) and expert judgment (the right rectangle). Of course, in making these choices, it is necessary to keep an eye on the events and relationship changes in the environment. This tells us the kinds of information needed in the DSS. Each of these decisions can be broken down into more detail to determine specific information, specific variables, and specific models that might be included.
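The investment influence diagram can be encoded as a typed directed graph, which is roughly what diagramming tools store internally. The node names below paraphrase the example; the encoding itself is an illustrative assumption, not a representation taken from the chapter.

```python
# Node types mirror the influence diagram symbols: decision, chance
# (exogenous), intermediate, and objective (outcome) variables.
DECISION, CHANCE, INTERMEDIATE, OBJECTIVE = "decision", "chance", "intermediate", "objective"

nodes = {
    "quantitative models": INTERMEDIATE,  # market forecasts feeding the choice
    "expert judgment":     INTERMEDIATE,  # qualitative views on the market
    "market events":       CHANCE,        # sudden changes outside anyone's control
    "investments":         DECISION,      # the choice the decision maker controls
    "profit":              OBJECTIVE,     # the variable to maximize
}

influences = [                            # arrows: (source, destination)
    ("market events", "quantitative models"),
    ("market events", "expert judgment"),
    ("quantitative models", "investments"),
    ("expert judgment", "investments"),
    ("investments", "profit"),
]

def information_needs(nodes, influences, decision):
    """Everything that directly influences a decision node is information
    the DSS must supply to the decision maker."""
    return sorted(src for src, dst in influences if dst == decision)

print(information_needs(nodes, influences, "investments"))
# ['expert judgment', 'quantitative models']
```

Walking the arrows into each decision node is one way a designer can read the kinds of information, and hence the data and models, that the DSS must provide.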
There are computer tools that can help designers build these diagrams and use them to create the system. For example, consider Lumina's Analytica, which builds influence diagrams easily, as shown in the top portion of Figure 7.8. These tools can then provide the ba