
Cognitive Systems Engineering (CSE) emerged from Human Factors as researchers began to realize that in order to fully understand human-computer interaction it was necessary to understand the 'work to be done' on the other side of the computer. They began to realize that for an interface to be effective, it had to map onto both a 'mind' and a 'problem domain.'  They began to realize that a representation only leads to productive thinking if it makes the 'deep structure' of the work domain salient.  Thus, the design of the representation had to be motivated by a deep understanding of the domain (as well as a deep understanding of the mind).

User-eXperience Design (UXD) emerged from Product Design as designers began to realize that they were not simply creating 'objects.' They were creating experiences. They began to realize that products were embedded in a larger context, and that the ultimate measure of the quality of their design was the impact on this larger context - on the user experience. They began to realize that the quality of their designs did not simply lie in the object, but rather in the impact that the object had on the larger experience that it engendered. Thus, the design of the object had to be motivated by a deep understanding of the context of use (as well as a deep understanding of the materials or technologies).

The common ground is the user experience.  CSE and UXD are both about designing experiences. They both require that designers deal with minds, objects, and contexts or ecologies. The motivating contexts have been different, with CSE emerging largely from experiences in safety-critical systems (e.g., aviation, nuclear power) and UXD emerging largely from experiences with consumer products (e.g., toothbrushes, doors). However, the common realization is that 'context matters.' The common realization is that the constraints of the 'mind' and the constraints of the 'object' can only be fully understood in relation to a 'context of use.'  The common realization is that 'functions matter,' and that 'functions' are relations between agents, tools, and ecologies.

The CSE and UXD communities have both come to realize that the qualities that matter are not in either the mind or the object, but rather in the experience. They have discovered that the proof of the pudding is in the eating.

Over the last 20 years or so, the vision of how to help organizations improve safety has been changing from a focus on 'stamping out errors' to a focus on 'managing the quality of work.'

This change reflects a similar evolution in how the Forest Service manages fire safety. There was a period when the focus was on 'stamping out forest fires,' and the poster child for these efforts was Smokey the Bear ('Only you can prevent forest fires'). However, the Forest Service has learned that a side effect of an approach that focuses exclusively on preventing fires is the buildup of fuel on the forest floor. Because of this buildup, when a fire inevitably occurs it can burn at levels that are catastrophic for forest health. The forest will not naturally recover from the burn.

Smokey the Bear Effect

The Forest Service now understands that low-intensity fires can be integral to the long-term health of a forest. These low-intensity fires help to prevent the buildup of fuel and can also promote germination of seeds and new growth.

The alternative to 'stamping out fires' is to manage forest health. This sometimes involves introducing controlled burns or letting low-intensity fires burn themselves out.

The general implication is that safety programs should be guided by a vision of health or quality, rather than being simply a reaction to errors. With respect to improving safety, programs focused on health/quality will have greater impact than programs designed to 'stamp out errors.' Programs designed to stamp out errors tend to also stamp out the information (e.g., feedback) that is essential for systems to learn from mistakes and to tune to complex, dynamic situations. Like low-intensity fires, learning from mistakes and near misses actually contributes to the overall health of a high-reliability organization.

This new perspective is beautifully illustrated in Sidney Dekker's new movie that can be viewed on YouTube:

Safety Differently

The CVDi display for evaluating heart health has been updated. The new version includes an option for SI units.  Also, some of the interaction dynamics have been updated. This is still a work in progress, so we welcome feedback and suggestions for how to improve and expand this interface.

https://mile-two.gitlab.io/CVDI/

The Big Data Problem and Visualization

The digitization of healthcare data using Electronic Healthcare Record (EHR) systems is a great boon to medical researchers. Prior to EHR systems, researchers were responsible for collecting and archiving the patient data necessary to build models for guiding healthcare decisions (e.g., the Framingham Study of Cardiovascular Health). However, with EHR systems, the job of collecting and archiving patient data is off-loaded from the researchers, freeing them to focus on the BIG DATA PROBLEM. Thus, there is a lot of excitement in the healthcare community about the coming BIG DATA REVOLUTION and computer scientists are enthusiastically embracing the challenge of providing tools for BIG DATA VISUALIZATION.

It is very likely that the availability of data and the application of advanced visualization tools will stimulate significant advances in the science of healthcare. However, will these advances translate into better patient care? Recent experiences with EHR systems suggest that the answer is "NO! Not unless we also solve the LITTLE DATA PROBLEM."

The Little Data Problem in Healthcare

Compared to the excitement about embracing the BIG DATA PROBLEM, healthcare technologists, and in particular EHR developers, have paid relatively little attention to visualization problems on the front end of EHR systems. The EHR interfaces presented to frontline healthcare workers consist almost exclusively of text, dialog boxes, and pull-down menus. These interfaces are designed for ‘data input-output.’ They do very little to help physicians make sense of the data when judging risk and making treatment decisions. For example, current EHR interfaces do little to help physicians ‘see’ what the data ‘mean’ relative to the risk of a cardiac event, or to ‘see’ the recommended treatment options for a specific patient.

The LITTLE DATA PROBLEM for healthcare involves creative design of interfaces to help physicians to visualize the data for a specific patient in light of the current medical research. The goal is for the interface representations to support the physician in making well-informed treatment decisions and for communicating those decisions to patients. For example, the interface representations should allow a physician to ‘see’ patient data relative to risk models (e.g., Framingham model) and relative to published standards of care (e.g., Adult Treatment Panel IV), so that the decisions made are informed by the evidence-base. In addition, the representation should facilitate discussions with patients to explain and recommend treatment options, to engender trust, and ultimately to increase the likelihood of compliance.
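
To make this concrete, here is a toy sketch (in Python) of the kind of mapping such a representation would make visible. The band edges follow the familiar ATP III LDL categories, but they are included purely as an illustration, not as clinical guidance:

```python
# Toy sketch: classify a patient's LDL against guideline-style bands so an
# interface can show where the value falls. Band edges follow the familiar
# ATP III categories; treat them as illustrative, not clinical guidance.

LDL_BANDS = [
    (100, "optimal"),
    (130, "near optimal"),
    (160, "borderline high"),
    (190, "high"),
    (float("inf"), "very high"),
]

def classify_ldl(ldl_mg_dl: float) -> str:
    for upper, label in LDL_BANDS:
        if ldl_mg_dl < upper:
            return label
    return "very high"

patient_ldl = 145  # invented example value, mg/dL
print(f"LDL {patient_ldl} mg/dL -> {classify_ldl(patient_ldl)}")
# A graphical form of this same mapping (the value plotted on a banded
# scale) lets physician and patient 'see' the number in context at a glance.
```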

Thus, while EHRs are making things better for medical research, they are making the everyday work of healthcare more difficult. The benefits with respect to the ‘Big Data Problem’ are coming at the expense of increased burden on frontline healthcare workers who have to enter the data and access it through clumsy interfaces. In many cases, the technology is becoming a barrier to communication with patients, because time spent interacting with the technology reduces the time available for interacting directly with patients (Arndt et al., 2017).

At Mile Two, we are bringing Cognitive Systems Engineering (CSE), UX Design, and Agile Development processes together to tackle the LITTLE DATA PROBLEM. Follow this link to see an example of a direct manipulation interface that illustrates how interfaces to EHR systems might better serve the needs of both frontline healthcare workers and patients: CVDi.

Conclusion

The major point is that advances resulting from the BIG DATA REVOLUTION will have little impact on the quality of everyday healthcare if we don't also solve the LITTLE DATA PROBLEM associated with EHR systems.

New study finds that physicians spend twice as much time interacting with EHR systems as interacting directly with patients.

http://www.beckershospitalreview.com/ehrs/primary-care-physicians-spend-close-to-6-hours-performing-ehr-tasks-study-finds.html

http://www.annfammed.org/content/15/5/419.full

This is a classic example of clumsy automation. That is, automation that disrupts the normal flow of work rather than facilitating it. It is unfortunate that healthcare is far behind other industries when it comes to understanding how to use IT to enhance the quality of everyday work. While the healthcare industry promotes the potential wonders of "big data," the needs of everyday clinical physicians have been largely overlooked.

EHR systems have been designed around the problem of 'data management,' while the problems of 'healthcare management' have been largely unrecognized or unappreciated by the designers of EHR systems.

In solving the 'data' problem, the healthcare IT industry has actually made the 'meaning' problem more difficult for clinical physicians.

This should be a great opportunity for Cognitive Systems Engineering innovations, IF anyone in the healthcare industry is willing to listen.

2

Introduction

It has long been recognized that in complex work domains such as management and healthcare, the decision-making behavior of experts often deviates from the prescriptions of analytic or normative logic.  The observed behaviors have been characterized as intuitive, muddling through, fuzzy, heuristic, situated, or recognition-primed. While there is broad consensus on what people typically do when faced with complex problems, the interesting debate, relative to training decision-making or facilitating the development of expertise, is not about what people do, but rather about what people ought to do.

On the one hand, many have suggested that training should focus on increasing conformity with the normative prescriptions.  Thus, the training should be designed to alert people to the generic biases that have been identified (e.g., representativeness heuristic, availability heuristic, overconfidence, confirmatory bias, illusory correlation), to warn people about the potential dangers (i.e., errors) associated with these biases, and to increase knowledge and appreciation of the analytical norms. In short, the focus of training clinical decision making should be on reducing (opportunities for) errors in the form of deviations from logical rationality.

On the other hand, we (and others) have suggested that the heuristics and intuitions of experts actually reflect smart adaptations to the complexities of specific work domains. This reflects the view that heuristics take advantage of domain constraints, leading to efficient ways to manage ill-structured problems, such as those in healthcare. As Eva & Norman [2005] suggest, “successful heuristics should be embraced rather than overcome” (p. 871). Thus, to support clinical decision making, training should not focus on circumventing the use of heuristics but should focus on increasing the perspicacity of heuristic decision making, that is, on tuning the (recognition) processes that underlie the adaptive selection and use of heuristics in the domain of interest.

Common versus Worst Things in the ED

In his field study of decision-making in the ED, Feufel [2009] observed that the choices of physicians were shaped by two heuristics: 1) Common things are common; and 2) Worst case. Figure 1 illustrates these two heuristics as two loops in an adaptive control system. The Common Thing heuristic aligns well with classical Bayesian norms for evaluating the state of the world. It suggests that the hypotheses guiding treatment should reflect a judgment about what is most likely based on the prior odds and the current observations (i.e., what is most common given the symptoms). Note that this heuristic biases physicians toward a ‘confirmatory’ search process, as their observations are guided by beliefs about what might be the common thing. Thus, tests and interventions tend to be directed toward confirming and treating the common thing.
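
A toy calculation shows the Bayesian logic at work; all of the probabilities below are invented for illustration:

```python
# A toy Bayes calculation behind "common things are common."
# All probabilities are invented for illustration only.

prior_common = 0.95  # common conditions are common (high prior odds)
prior_worst = 0.05   # worst cases are rare

# Assumed likelihoods of the presenting symptom under each hypothesis
p_symptom_given_common = 0.60
p_symptom_given_worst = 0.80

posterior_odds = (prior_common * p_symptom_given_common) / (
    prior_worst * p_symptom_given_worst)
print(f"posterior odds (common : worst) = {posterior_odds:.1f} : 1")
# Even though the symptom fits the worst case slightly better, the prior
# dominates, so the common thing remains the leading hypothesis. This is
# the Bayesian core of the Common Thing heuristic; the Worst Case heuristic
# exists precisely because likelihood is not the whole story.
```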

Figure 1. The decision-making process illustrated as an adaptive control system guided by two complementary heuristics: Common Thing and Worst Case.

The Worst Case heuristic shifts the focus from ‘likelihood’ to the potential consequences associated with different conditions.  Goldberg, Kuhn, Andrew and Thomas [2002] begin their article on “Coping with Medical Mistakes” with the following example:

 “While moonlighting in an emergency room, a resident physician evaluated a 35-year-old woman who was 6 months pregnant and complaining of a headache. The physician diagnosed a ‘mixed-tension sinus headache.’ The patient returned to the ER 3 days later with an intracerebral bleed, presumably related to eclampsia, and died (p. 289)”

This illustrates an ED physician’s worst nightmare – that a condition that ultimately leads to serious harm to a patient will be overlooked.  The Worst Case heuristic is designed to help guard against this type of error. While considering the common thing, ED physicians are also trained to simultaneously be alert to and to rule-out potential conditions that might lead to serious consequences (i.e., worst cases). Note that the Worst Case heuristic biases physicians toward a disconfirming search strategy as they attempt to rule-out a possible worst thing – often while simultaneously treating the more likely common thing. While either heuristic alone reflects a bounded rationality, the coupling of the two as illustrated in Figure 1 tends to result in a rationality that can be very well tuned to the demands of emergency medicine.

Ill-defined Problems

In contrast to the logical puzzles that have typically been used in laboratory research on human decision-making, the problems faced by ED physicians are ‘ill-defined’ or ‘messy.’ Lopes [1982] suggested that the normative logic (e.g., deductive and inductive logic) that works for comparatively simple logical puzzles will not work for the kinds of ill-defined problems faced by ED physicians. She suggests that ill-defined problems are essentially problems of pulling out the ‘signal’ (i.e., the patient’s actual condition) from a noisy background (i.e., all the potential conditions that a patient might have). Thus, the theory of signal detection (or observer theory) illustrated in Figures 2 & 3 provides a more appropriate context for evaluating performance.

Figure 2. The logic of signal detection theory is used to illustrate the challenge of discriminating a worst case from a common thing.

Figure 2 uses a signal detection metaphor to illustrate the potential ambiguities associated with discriminating the Worst Cases from the Common Things in the form of two overlapping distributions of signals. The degree of overlap between the distributions represents the potential similarity between the symptoms associated with the alternatives. The more overlap, the harder it will be to discriminate between potential conditions. The key parameter with respect to clinical judgment is the line labeled Decision Criterion. The placement of this line reflects the criterion that is used to decide whether to focus treatment on the common thing (moving the criterion to the right to reduce false alarms) or the worst thing (moving the criterion to the left to reduce misses). Note that there is no possibility of perfect (i.e., error-free) performance. Rather, the decision criterion will determine the trade-off between two types of errors: 1) false alarms – expending resources to rule out the worst case when the patient’s condition is consistent with the common thing; or 2) misses – treating the common thing when the worst case is present.
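
A minimal sketch, assuming unit-variance Gaussian evidence distributions with illustrative means, makes the coupling between the two error types explicit:

```python
# A minimal sketch of the Figure 2 trade-off, assuming unit-variance
# Gaussian evidence distributions. All parameter values are illustrative.
from scipy.stats import norm

mu_common, mu_worst = 0.0, 1.5  # assumed means; overlap makes discrimination hard

def error_rates(criterion):
    # Evidence above the criterion triggers a worst-case workup.
    false_alarm = 1 - norm.cdf(criterion, loc=mu_common)  # common thing worked up
    miss = norm.cdf(criterion, loc=mu_worst)              # worst case treated as common
    return false_alarm, miss

for c in (0.25, 0.75, 1.25):
    fa, miss = error_rates(c)
    print(f"criterion={c:.2f}  false alarms={fa:.2f}  misses={miss:.2f}")
# Moving the criterion right trades false alarms for misses; no placement
# drives both to zero while the distributions overlap.
```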

In order to address the question of what is the ‘ideal,’ or at least ‘satisfactory,’ criterion for discriminating between treating the common thing or the worst thing, it is necessary to consider the potential values associated with the treatments and potential consequences, as illustrated in the payoff matrix in Figure 3.  Thus, the decision is not simply a function of finding ‘truth.’ Rather, the decision involves a consideration of values: What costs are associated with the tests that would be required to conclusively rule out a worst case? How severe would the health consequences of missing a potential worst case be? Missing some things can have far more drastic consequences than missing others.

Figure 3. The payoff matrix is used to illustrate the values associated with potential errors (i.e., consequences of misses and false alarms).

The key implication of Figures 2 and 3 is that eliminating all errors is not possible. Given enough time, every ED physician will experience both misses and false alarms. That is, there will be cases where they miss a worst case and other cases where they pursue a worst case only to discover that it was the common thing. While perfect performance (zero error) is an unattainable goal, the number of errors can be reduced by increasing the ability to discriminate between potential patient states (e.g., recognizing the patterns, choosing the tests that are most diagnostic). This would effectively reduce the overlap between the distributions in Figure 2. The long-range or overall consequences of any remaining errors can be reduced by setting the decision criterion to reflect the value trade-offs illustrated in Figure 3. In cases where expensive tests are necessary to conclusively rule out a potential worst case, difficult ethical questions arise: the cost of missing the worst case must be weighed against the expense of additional tests that, in many cases, will prove unnecessary.
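
One way to make the role of values concrete is to score each possible criterion by its expected payoff under a matrix like Figure 3's. The payoffs and base rate in this sketch are invented placeholders:

```python
# A sketch of choosing the criterion from a Figure 3-style payoff matrix.
# The payoffs and the 5% worst-case base rate are invented placeholders.
from scipy.stats import norm

mu_common, mu_worst = 0.0, 1.5
p_worst = 0.05  # assumed base rate of the worst case

payoff = {
    ("worst", "workup"): -1.0,        # hit: costly tests, harm averted
    ("worst", "treat_common"): -50.0, # miss: serious harm to the patient
    ("common", "workup"): -5.0,       # false alarm: unnecessary tests
    ("common", "treat_common"): 0.0,  # correct rejection
}

def expected_payoff(c):
    p_hit = 1 - norm.cdf(c, loc=mu_worst)   # worst case correctly worked up
    p_fa = 1 - norm.cdf(c, loc=mu_common)   # common thing worked up anyway
    return (p_worst * (p_hit * payoff[("worst", "workup")]
                       + (1 - p_hit) * payoff[("worst", "treat_common")])
            + (1 - p_worst) * (p_fa * payoff[("common", "workup")]
                               + (1 - p_fa) * payoff[("common", "treat_common")]))

best_ev, best_c = max((expected_payoff(c / 10), c / 10) for c in range(-20, 40))
print(f"best criterion ~ {best_c:.1f} (expected payoff {best_ev:.2f})")
# The heavy miss penalty pulls the best criterion well left of where the
# 19:1 base rate alone would put it: more workups, fewer missed worst cases.
```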

Conclusion

The problems faced by ED physicians are better characterized in terms of the theory of signal detection, rather than in terms of more classical models of logic that fail to take into account the perceptual dynamics of selecting and interpreting information. In this context, heuristics that are tuned to the particulars of a domain (such as common things and worst cases) are intelligent adaptations to the situation dynamics (rather than compromises resulting from internal information processing limitations). While each of these heuristics is bounded with respect to rationality, the combination tends to provide a very intelligent response to the situation dynamics of the ED. The quality of this adaptation will ultimately depend on how well these heuristics are tuned to the value system (payoff matrix) for a specific context.

Note that while signal detection theory is typically applied to single, discrete observations, the ED is a dynamic situation, as illustrated in Figure 1, where multiple samples are collected over time. Thus, a more appropriate model is Observer Theory, which extends the logic of signal detection to dynamic situations where judgment can be adjusted as a function of multiple observations relevant to competing hypotheses [see Flach and Voorhorst, 2016, or Jagacinski & Flach, 2003, for discussions of Observer Theory]. However, the implication is the same - skilled muddling involves weighing evidence in order to pull the 'signal' out from a complex, 'noisy' situation.
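
A minimal sketch of the observer-theory idea accumulates the log-likelihood ratio across successive observations until a decision bound is crossed; the distributions, bounds, and observation stream are all assumed:

```python
# A sketch of the observer-theory idea: accumulate the log-likelihood ratio
# across observations until a decision bound is crossed. The distributions,
# bounds, and observation stream are all assumed for illustration.
from scipy.stats import norm

def loglik_ratio(x, mu_worst=1.5, mu_common=0.0):
    # log p(x | worst) - log p(x | common) for unit-variance Gaussians
    return norm.logpdf(x, loc=mu_worst) - norm.logpdf(x, loc=mu_common)

observations = [0.4, 1.1, 1.8, 2.0]  # successive test results (invented)
upper, lower = 2.0, -2.0             # decision bounds (assumed)

evidence = 0.0
for t, x in enumerate(observations, start=1):
    evidence += loglik_ratio(x)
    print(f"after observation {t}: cumulative evidence = {evidence:.2f}")
    if evidence >= upper:
        print("rule in the worst case")
        break
    if evidence <= lower:
        print("rule out the worst case")
        break
# Judgment is revised with each sample rather than fixed after one look;
# this is the dynamic extension of signal detection that Observer Theory provides.
```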

Finally, it is important to appreciate that, with respect to the two heuristics, it is not a case of 'either-or'; rather, it is a 'both-and' proposition. That is, the heuristics are typically operating concurrently - with the physician often treating the common thing while awaiting test results to rule out a possible worst case. The challenge is in allocating resources to the concurrent heuristics while taking into account the associated costs and benefits as reflected in a value system (payoff matrix).

The team at Mile Two recently created an App (CVDi) to help people to make sense of clinical values associated with cardiovascular health. The App is a direct manipulation interface that allows people to enter and change clinical values and to get immediate feedback about the impact on overall health and treatment options.

The feedback about overall health is provided in the form of three Risk Models from published research on cardiovascular health. Each model is based on longitudinal studies that have tracked the statistical relations between various clinical measures (e.g., age, total cholesterol, blood pressure) and incidents of cardiovascular disease (e.g., heart attacks or strokes). However, the three models each use different subsets of that data to predict risk, and thus the risk estimates can vary considerably.

A number of people who have reviewed the CVDi App have suggested that this variation among the models might be a source of confusion for users, or that it might lead people to cherry-pick the value that fits their preconceptions (e.g., someone who is skeptical about medicine might take the best value as justification for not going to the doctor, while a hypochondriac might take the worst value as justification for his fears). In essence, the suggestion is that the variability among the risk estimates is NOISE that will reduce the likelihood that people will make good decisions. These reviewers suggest that we pick one (e.g., the 'best') model and drop the other two.

We have an alternative hypothesis. We believe that the variation among the models is INFORMATION that provides the potential for deeper insight into the complex problem of cardiovascular health. Our hypothesis is that the variation will lead people to consider the basis for each model (e.g., whether it is based on lipids or BMI, or whether C-reactive proteins are included).  Our interface is designed so that it is easy to SEE the contribution of each of these variables to each of the models. For example, a big difference in risk estimates between the lipid-based models and the BMI-based model might signify the degree to which weight or lipids are contributing to the risk.  We believe this is useful information for selecting an appropriate treatment option (e.g., statins or diet).
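
This design stance can be sketched schematically. The model families named below are real, but the scoring functions are invented stand-ins, not the published risk equations:

```python
# Schematic of showing several risk models side by side so that their
# disagreement is visible as information. The model families named are
# real, but these scoring functions are invented stand-ins, NOT the
# published risk equations.

def lipid_based_score(patient):    # stand-in for a Framingham-style model
    return 0.2 * (patient["age"] - 40) + 0.05 * (patient["total_chol"] - 180)

def bmi_based_score(patient):      # stand-in for a BMI-based model
    return 0.2 * (patient["age"] - 40) + 0.6 * (patient["bmi"] - 25)

def crp_inclusive_score(patient):  # stand-in for a model that adds CRP
    return 0.2 * (patient["age"] - 40) + 2.0 * patient["crp"]

patient = {"age": 55, "total_chol": 240, "bmi": 31, "crp": 1.2}  # invented

estimates = {
    "lipid-based": lipid_based_score(patient),
    "BMI-based": bmi_based_score(patient),
    "CRP-inclusive": crp_inclusive_score(patient),
}
for name, score in estimates.items():
    print(f"{name:13s} risk score ~ {score:.1f}")

# Rather than averaging the scores into one 'answer,' the display keeps the
# spread visible: a BMI-based score well above the lipid-based one points
# at weight as the driver, which bears on the choice between diet and statins.
spread = max(estimates.values()) - min(estimates.values())
print(f"spread across models: {spread:.1f}")
```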

The larger question here is the function of MODELS in cognitive systems or decision support systems. Should the function of models be to give people THE ANSWER, or should it be to provide insight into the complexity, so that people are well-informed about the problem and better able to muddle through to discover a satisfying answer?

Although there is great awareness that human rationality is bounded, there is less appreciation of the fact that all computational models are bounded. While we tend to be skeptical about human judgment, there is a tendency to take the output of computational models as the answer or as the truth. I believe this tendency is dangerous! I believe it is unwise to think that there is a single correct answer to a complex problem!

As I have argued in previous posts, I believe that muddling through is the best approach to complex problems. And thus, the purpose of modeling should be to guide the muddling process, NOT to short-circuit the muddling process with THE ANSWER. The purpose of the model is to enhance situation awareness, helping people to muddle well and increasing the likelihood that they will make well-informed choices.

Long ago we made the case that for supporting complex decision making, models should be used to suggest a variety of alternatives - to provide deeper insight into possible solutions - rather than to provide answers:

Brill, E.D. Jr., Flach, J.M., Hopkins, L.D., & Ranjithan, S. (1990). MGA: A decision support system for complex, incompletely defined problems. IEEE Transactions on Systems, Man, and Cybernetics, 20(4), 745-757.

Link to the CVDi interface: CVDi

I just completed my first month of work outside the walls of the Ivory Tower of academia. After more than forty years as a student and academic, I left Wright State University and joined a small tech start-up in Dayton - Mile Two. I had some trepidation about life on the outside, but my first month has exceeded my highest expectations. It is quite exciting to be part of a team with talented UX designers and programmers who can translate my vague ideas into concrete products.

For the last six years, my students and I struggled to translate principles of Ecological Interface Design (EID) into healthcare solutions. Tim McEwen generated a first concept for such an application in his 2012 dissertation, and ever since then we had been trying to get support to extend this work. But our efforts were blocked at every turn (two failed proposals to NIH and two failures with NSF). We made countless pleas to various companies and healthcare organizations. We got lots of pats on the back and kind words (some outstanding reviews at NSF), but no follow-through with support for continuing the work.

Then, in two weeks at Mile Two, the team was able to translate our ideas into a web app. Here is a link: CVDi.  Please try it out.  We see this as one step forward in an iterative design process and are eager for feedback - please play with the app and let us know about any problems or suggestions for improvements.

I am glad that I am no longer a prisoner of an 'institution' and I am excited by all the new possibilities that being at Mile Two will afford. I am looking forward to this new phase of my education.

It's never crowded along the extra mile. (Wayne W. Dyer)