
The 3V Method.

On Validation and Verification Of Decision Support Protocol Subsystems During Implementation-Optimization: Encapsulating P(X)

By Donald Crawford

DMICE, Oregon Health and Science University, Portland, OR

Abstract

There exists a chicken-and-egg problem in advancing our Healthcare Information Systems (HIS) in the context of electronic health records and decision support: how can we study integrated biomedical decision support (DS) protocols in clinical environments while remaining reasonably certain we will not confer harm in real-world scenarios? Outcomes on HIS from an arbitrary program P are used to validate that P does no harm on HIS during implementation-optimization. Predicates are used to help demonstrate how software specifications constrain the boundaries between the problem spaces of decision support subsystems and health information exchange (HIE). As values of the same data types visible in CPOE data exchange occur at these boundaries, they are used as a measure for software validation, as well as for measuring changes in Health Information Systems through special rules that transpose HIE metrics to impacts on HIS particular to each use case scenario. The sum of these changes results in a measure of validity not only for P, but also for implementation-optimization of P in the context of every use case scenario in any CPOE subsystem whose output domain is in the CPOE output space. Each use-case impact score is an algorithmic output specific to each HIE use case that considers: communication complexity; CPOE functional design parameters; HL7 direct health care delivery functions; CPOE and decision support operational constraint specifications; and the two entities in communication during the use case (e.g., "physician to hospital"; "clinician to health plan"). Each use case impacted by P's output, OUT(P(forall x in X)), is located by first tracing data flow nodes and capturing time and space metrics in the exchanged data. When these data are placed into a scoring algorithm (see the TEST function below), they represent a reliable dataset from which to measure how P(X) impacts health care, by considering baseline measures taken in the absence of P(X). Further, as this method utilizes use cases under CPOE, we are able to turn the system on itself, optimizing not P but paths through the CPOE system itself, allocating more clock cycles to deficient paths - in real time.

Introduction - Software And The Human Machine:

Software is a machine in circuits, thus P(X) = circuits. Circuits and data flow diagrams are equatable through the relational symbology of entity relationships (ER). Given that figure 2 is representable as a database schema, we may agree that partial circuits = HIE ("some circuitry is used in health information exchange") in context. Since software = circuits, partial software = HIE, and circuits are a subset of data flows, CPOE software and impacts on HIS may be represented as a partial equality. This partiality is intuitive considering HIE is comprised of machines, humans, and other agents. However, if we consider instantiation in actual clinical settings (with "real data in the pipelines"), we can also add a term (a predicate) representing conditions that reliably result in VALID(DS(forall d in D)) (please see the predicate table for definitions). Further, using algebra and any term in the equality, we are able to abstract huge volumes of data with pointers to VALID(DS(forall d in D)), automatically, and at will (a relational database query is assumed). Since P(X) subseteq DS(D), then OK(P(X), partial SPEC). Our ultimate aim is to use this not merely to identify SPECifications unmet, but to attribute unmet SPECifications to a reasonably exhaustive set of parameters that modify HIS (again: communication complexity; CPOE functional design parameters (table 1); direct health care delivery functions (figure 4); and the specific agent relationship of the specific use case scenario - see figure 3). Thus, we must design with this predicate in mind - and we introduce a method to TEST this. If (TEST(B = "baseline HIE data in absence of P") - TEST(P)) is not significant (choose your own alpha), then OK(P); but our TEST() function is directional: it also shows how much (calculations on quantities) P deviates, and in what direction (for better or for worse). We test forall(OUT(P(forall x in X))) for ~OK(X) every step of the way, and, should any term used in the calculations go out of SPEC (out of the range of VALID(DS(forall d in D), SPEC)), our implicit subprocess in DS halts at that point.
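
As an illustration of this directional comparison (not part of the original text), the following minimal Python sketch assumes TEST() has already been reduced to one score per run and that "significant" is operationalized as a simple relative-deviation threshold; the function name compare_to_baseline, the threshold, and the sample numbers are assumptions for demonstration only.

 # Hedged sketch: directional comparison of P against a baseline B.
 # Assumes each run has already been scored by a TEST()-like function;
 # alpha here is a simple relative-deviation threshold, not a formal p-value.

 def compare_to_baseline(test_b: float, test_p: float, alpha: float = 0.05):
     """Return (ok, deviation); deviation is signed: negative means P scored
     below baseline, positive means above."""
     deviation = test_p - test_b
     relative = deviation / test_b if test_b else float("inf")
     ok = abs(relative) < alpha          # "not significant" => OK(P)
     return ok, deviation

 if __name__ == "__main__":
     ok, dev = compare_to_baseline(test_b=120.0, test_p=118.5)
     print("OK(P):", ok, "direction/magnitude:", dev)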

Critically, if we constrain the program to a superset of functional requirements, we are not designing programs; we are designing systems. If we design our systems properly, we may ignore verification and validation in the context of a system's subcomponent if both its input and output domains are already constrained such that the predicate above holds.

The ability to equate health information exchange data flows to any program is grounded in meta-logic; examples of meta-logical tools include predicate logic, data flow diagrams, and logic programming languages. Praxis Inc.'s (http://www.praxis-his.com) services demonstrate successful application of this paradigm - Praxis's software development paradigm parses concise functional requirements into a diagrammatic entity-relationship data flow syntax called "Z," then from Z to actual code (in Praxis Inc.'s case, the Spark language). The method presented herein is a simple implementation of this paradigm that translates communication nodes in CPOE data flows (ER diagrams) through Prolog blocks that parse the rules of SPEC, which, when integrated with a functional decision support relational schema, can be automatically validated as within the domain constrained by the intension of SPEC.
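
The Prolog rule blocks themselves are not reproduced here; as a rough stand-in, the following Python sketch assumes SPEC can be expressed as per-metric numeric ranges and shows how captured values could be automatically checked as within (or outside) the constrained domain. The metric names and ranges are illustrative assumptions.

 # Hedged sketch: a Python stand-in for the Prolog-style rule blocks that
 # check captured values against SPEC. The SPEC structure (per-metric
 # numeric ranges) and the metric names are illustrative assumptions.

 SPEC = {
     "keystrokes_per_session": (0, 250),      # allowed inclusive range
     "session_seconds":        (0, 900),
     "screen_load_seconds":    (0.0, 4.0),
 }

 def valid(metric: str, value: float, spec: dict = SPEC) -> bool:
     """VALID-style predicate: True iff the value lies inside SPEC's range."""
     lo, hi = spec[metric]
     return lo <= value <= hi

 def ok(captured: dict, spec: dict = SPEC) -> bool:
     """OK-style predicate over a whole capture: every metric must be valid."""
     return all(valid(m, v, spec) for m, v in captured.items())

 if __name__ == "__main__":
     capture = {"keystrokes_per_session": 181, "session_seconds": 412,
                "screen_load_seconds": 2.3}
     print("OK(capture):", ok(capture))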


FIGURE 1 --> [1]  FIGURE 2 --> [2]  FIGURE 3 --> [3]

Assumption: each use case scenario is considered to have equal impact on HIE. We consider communication complexity (formal rules made in specification planning) for each use case affected ("impacted") by each OUT(P(x in X, SPEC)). This allows a comparison of how each use case impacts HIE, and allows us to record an unlimited range of HIE metrics during any set of use cases or paths through P. For each use case scenario that OUT(Pi(X)) affects, where Pi is an arbitrary version of P, OK(Pi(X)) in OUT(DS(D,SPEC)) means that any Pi under SPEC is valid. But, of course, we must first have a baseline. To form a baseline, data are captured at these communication nodes in HIE between DS and CPOE (see figure 1 for the conceptually delineated HIS problem space), then used without P in a TEST considering each node in the institution's HIS, repeated several times to account for differences in confounders such as time of day and hospital capacity. As we constrain the scope of this method's required specification planning to the intersection of the problem spaces of DS and CPOE - the intuitive location of our data capture events - OUT(DS(forall d in D)) becomes the baseline measures. Thus, we only need a functioning CPOE system to test P - we do not even require a functioning DS system (though DS's domains must be defined, and at the institutional-directive level). Specific use case data captures specify the behavior for any P in DS on HIS in user scenarios; two examples include the number of keystrokes per session and total use case session time.
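
To make the baseline capture concrete, here is a minimal sketch under the assumption that each capture event records a communication node identifier, the hour of day, and two of the example metrics named above (keystrokes and session time); the field names and sample values are illustrative only.

 # Hedged sketch: capturing baseline use case data at DS/CPOE communication
 # nodes in the absence of P, repeated across times of day to average out
 # confounders. Field names and metrics are illustrative assumptions.

 from dataclasses import dataclass
 from statistics import mean
 from collections import defaultdict

 @dataclass
 class NodeCapture:
     node_id: str          # communication node in the HIE data flow
     hour_of_day: int      # recorded to account for confounders
     keystrokes: int
     session_seconds: float

 def baseline(captures):
     """Aggregate repeated captures into per-node baseline means."""
     by_node = defaultdict(list)
     for c in captures:
         by_node[c.node_id].append(c)
     return {
         node: {
             "keystrokes": mean(c.keystrokes for c in group),
             "session_seconds": mean(c.session_seconds for c in group),
         }
         for node, group in by_node.items()
     }

 if __name__ == "__main__":
     data = [NodeCapture("physician_session", 9, 140, 300.0),
             NodeCapture("physician_session", 15, 180, 420.0),
             NodeCapture("physician_to_clinician", 10, 35, 90.0)]
     print(baseline(data))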

TABLE 1 --> [4] FIGURE 4 --> [5]


Vacuous Validation and Verification ("3V Method"): _____________________________________________________________________


Given a CPOE decision support subsystem DS with a domain D, and DS subprocess P with a domain X, and data criterion SPEC where OK(DS(forall x in D, SPEC)):

 X subseteq D
 P(X) = partial DS(D)
 SPEC is VALID and RELIABLE iff forall x in D COMPLETE(X,SPEC)
 DS(x) = TEST(x,SPEC) forall x in X, where forall x in X subseteq D implies OK(X)
 If P(any x) > TEST(SPEC), then not OK(P(X))
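
Read literally over finite sets, these conditions can be checked mechanically. The sketch below is one loose, illustrative encoding - the toy domain D, subprocess P, and SPEC threshold are assumptions, not the method's actual data.

 # Hedged sketch: the conditions above expressed as executable checks over
 # finite sets. D, X, and the per-value TEST threshold are toy assumptions;
 # a real DS domain would be defined at the institutional-directive level.

 def check_3v(D: set, X: set, P, TEST, spec_threshold: float) -> bool:
     """Return True when the 3V conditions hold for subprocess P on domain X."""
     if not X <= D:                                  # X subseteq D
         return False
     if not all(TEST(x) is not None for x in X):     # COMPLETE(X, SPEC): every x is testable
         return False
     # If P(x) exceeds the SPEC-derived threshold anywhere, then not OK(P(X)).
     return all(P(x) <= spec_threshold for x in X)

 if __name__ == "__main__":
     D = {1, 2, 3, 4, 5}
     X = {1, 2, 3}
     P = lambda x: x * 1.5                           # toy subprocess output
     TEST = lambda x: x                              # toy per-value test
     print("OK(P(X)):", check_3v(D, X, P, TEST, spec_threshold=5.0))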


This last condition can be tested using the TEST function on an array of captured HIS use case values. This is, plainly, validation. Verification is inherent in TEST due to specifications that assert their own validity - critically, this assertion is verifiable if validity is achieved.

"X" is an array of values captured during each and every use case.

"i," represents the number of communication channel nodes that disseminate from each use case (e.g., after a physician has a particular decision support session [one sitting with the computer], the physician calls on a clinician that performs some test; therein we have 2 nodes, one for the session, and one for the physician to clinician communicae).

"SPEC" represents the mathematical constraints on each term in a TEST function such that TEST fails when any constraint cannot be met. Any suite of TESTs may either continue or halt - depending on institutional DS directives. Default values are always to halt when any such value exceeds any constraint. Failure in this case is written as not[P(forall x in X or partial X)] where either is eliminated from the implementation-optimization paths for all Ps. Where TEST(x) takes on values outside these limits, . not[OK(TEST(x) forall x in X)]SPEC is a set of coefficients in TEST's polynomial expression that, if deviated from over a given range, will result in rejecting P, by definition.

We can assign probability values or confidence intervals - or any such statistical technique - to determine when P(X) ≠ TEST(X). When we do this, the physical computer session(s) in the institution where P(X) ≠ TEST(X) are identified. Evidence for how use cases (and all derivable information) impact CPOE data flows can now be tested with arbitrary Ps.
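
One possible statistical treatment (among the many alluded to above) is a simple two-sample comparison of per-session scores; the normal-approximation test below and its sample numbers are assumptions chosen for brevity.

 # Hedged sketch: flagging when P's per-session scores deviate from baseline
 # using a simple two-sample comparison. A normal approximation is used here
 # for brevity; any preferred statistical technique could be substituted.

 from statistics import mean, stdev
 from math import sqrt

 def deviates(baseline_scores, p_scores, z_crit: float = 1.96) -> bool:
     """True when the mean difference exceeds the chosen critical value."""
     nb, np_ = len(baseline_scores), len(p_scores)
     se = sqrt(stdev(baseline_scores) ** 2 / nb + stdev(p_scores) ** 2 / np_)
     z = (mean(p_scores) - mean(baseline_scores)) / se
     return abs(z) > z_crit

 if __name__ == "__main__":
     baseline = [118.0, 121.5, 119.2, 120.8, 122.1]
     with_p   = [131.4, 128.9, 133.0, 130.2, 129.7]
     print("P(X) != TEST(X):", deviates(baseline, with_p))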

TABLE: [6]

TEST(X) = SUM[n = 1..N] ( SUM[i = 1..I] ( CCOMPLEX_i(X) * USECASEWT_i(X) * HISWT_i(X) * USEXCPOEXHISNod_i ) )

where N is the number of use cases impacted and i indexes the communication nodes within each use case.
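
An executable reading of the TEST function, assuming the weights CCOMPLEX, USECASEWT, and HISWT and the per-node factor are supplied for every communication node of every use case (as table 2 would provide); the record layout and numbers are illustrative assumptions.

 # Hedged sketch of the TEST scoring function: an outer sum over use cases
 # and an inner sum over that use case's communication nodes, with each node
 # contributing a product of weights. The record layout is an assumption.

 def test_score(use_cases):
     """use_cases: list of use cases; each is a list of node dicts carrying
     the SPEC weights and the node's measured factor."""
     total = 0.0
     for nodes in use_cases:                      # n = 1..N
         for node in nodes:                       # i = 1..I
             total += (node["ccomplex"]
                       * node["usecase_wt"]
                       * node["his_wt"]
                       * node["node_factor"])
     return total

 if __name__ == "__main__":
     captured = [
         [  # use case 1: physician decision support session
             {"ccomplex": 1.2, "usecase_wt": 0.8, "his_wt": 1.0, "node_factor": 140},
             {"ccomplex": 0.9, "usecase_wt": 0.8, "his_wt": 1.1, "node_factor": 35},
         ],
         [  # use case 2: clinician to health plan
             {"ccomplex": 1.5, "usecase_wt": 1.0, "his_wt": 0.7, "node_factor": 60},
         ],
     ]
     print("TEST =", test_score(captured))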

Initially, the entire range of HIS data exchange should be normalized to allow a baseline from which to measure deviations. As an example, every average in the CPOE system (such as the average number of prescribed drugs, or the average seconds it takes to load a particular screen) should not deviate from its mean beyond specified significance levels. The setting of these means should be the responsibility of hospital physicians, though necessarily in concert with the biomedical informatician. At the outset, constrained metrics will be monitored in real time (see "Tracking Monitor" in figure 2) to prevent any significant slow-down or deviation in behavior of the hosting CPOE system.
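
A sketch of such real-time monitoring, assuming baseline means and standard deviations have been set as described and that deviation is judged with a simple z-style threshold; the metrics and limits shown are placeholders.

 # Hedged sketch: normalizing live CPOE averages against their baseline means
 # and flagging deviations for a real-time tracking monitor. The metric names,
 # baseline values, and z threshold are illustrative assumptions.

 from math import isclose

 BASELINE = {  # (mean, standard deviation) set during specification planning
     "avg_prescribed_drugs": (4.2, 0.6),
     "screen_load_seconds":  (1.8, 0.4),
 }

 def within_limits(metric: str, observed_mean: float, z_limit: float = 2.0) -> bool:
     """Return True if the observed mean stays within the allowed deviation."""
     mu, sigma = BASELINE[metric]
     if isclose(sigma, 0.0):
         return isclose(observed_mean, mu)
     return abs(observed_mean - mu) / sigma <= z_limit

 if __name__ == "__main__":
     print("screen load within limits:", within_limits("screen_load_seconds", 2.1))
     print("drug count within limits:", within_limits("avg_prescribed_drugs", 6.5))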

PROCEDURE:

Step 1:

Assuming we have a SPEC (at minimum, a list of the weights discussed in table 2), we observe or trace HIS use case scenarios and record X in a statistically significant sample of HIE use cases. Preferably this is automated data capturing at the decision support level, resulting in large sets of use cases whose sample size is appropriate to infer descriptives. Nevertheless, manual acquisition of data for simpler hypothesis testing (vs. whole-program or system validation) uses smaller sets of data that can be used to compare paths through any P1 against any baseline P2 at the micro level. This can be done manually using Boolean values for perceived beneficial or deleterious impacts in a matrix of use cases vs. usability metrics (number of keystrokes; screen times; number of "back-button" events) as determined by the researcher. These Boolean values can be plugged into the TEST X-variable array just as if the capture were automated - our weights normalize each measured category (e.g., number of keystrokes, ...) just as with our automated calculation.
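
A sketch of the manual path described in this step, assuming the researcher's Boolean judgments are arranged as a use-case-by-metric matrix and normalized with per-category weights before entering the TEST X array; the metrics, weights, and judgments are illustrative.

 # Hedged sketch of Step 1's manual path: a Boolean matrix of use cases vs.
 # usability metrics (True = perceived beneficial impact), converted into the
 # TEST X-variable array with per-category normalizing weights. The metric
 # names and weights are illustrative assumptions.

 METRICS = ["keystrokes", "screen_time", "back_button_events"]
 WEIGHTS = {"keystrokes": 0.5, "screen_time": 0.3, "back_button_events": 0.2}

 # Rows are use cases, columns follow METRICS; judged by the researcher.
 IMPACT_MATRIX = {
     "physician_session":       [True,  True,  False],
     "clinician_to_healthplan": [False, True,  True],
 }

 def x_array(matrix):
     """Flatten the Boolean matrix into weighted X values for TEST."""
     xs = []
     for use_case, flags in matrix.items():
         for metric, beneficial in zip(METRICS, flags):
             xs.append(WEIGHTS[metric] * (1.0 if beneficial else -1.0))
     return xs

 if __name__ == "__main__":
     print(x_array(IMPACT_MATRIX))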

Step 2: Graph the outcomes against a baseline (3D histograms work quite well). Further, the data sets represented in figures 3 and 4 and tables 1 and 2 can be queried in a relational database to trace the location of the communication node (not just the use case) where inefficiencies and problems occur that affect the observed outcomes of P on HIS.
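
As one way to realize the relational tracing in this step, the sketch below stores node-level captures in an in-memory SQLite table and queries for nodes that drift from baseline; the schema, threshold, and rows are illustrative assumptions.

 # Hedged sketch of Step 2's relational tracing: node-level capture rows are
 # stored in a table and queried to locate the communication node (not just
 # the use case) where a metric drifts from baseline. Schema is illustrative.

 import sqlite3

 conn = sqlite3.connect(":memory:")
 conn.execute("""CREATE TABLE node_capture (
                     use_case TEXT, node_id TEXT,
                     metric TEXT, value REAL, baseline REAL)""")
 conn.executemany(
     "INSERT INTO node_capture VALUES (?, ?, ?, ?, ?)",
     [("physician_session",  "ds_session",   "session_seconds", 420.0, 300.0),
      ("physician_session",  "md_to_clin",   "session_seconds",  95.0,  90.0),
      ("clin_to_healthplan", "claim_submit", "keystrokes",        40.0,  38.0)])

 # Locate nodes whose captured value exceeds baseline by more than 20%.
 rows = conn.execute("""SELECT use_case, node_id, metric, value, baseline
                        FROM node_capture
                        WHERE value > 1.2 * baseline""").fetchall()
 for row in rows:
     print("inefficiency at:", row)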

Step 3: Ascribe to your decision support subsystem, P, the "optimization value" derived by plugging your data into the TEST() function. If this is done manually, a spreadsheet may help - the calculation is iterative over sums and quite tedious. This value is useful in relative rankings of decision support subsystems, as well as in comparing the paths through any P.
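
A small sketch of the resulting ranking, with placeholder optimization values; whether a lower or higher TEST value is "better" depends on how the weights in SPEC were chosen.

 # Hedged sketch of Step 3: ranking decision support subsystems (or paths
 # through one subsystem) by their TEST-derived optimization values. The
 # scores shown are placeholders, not measured data.

 optimization_values = {
     "P1 (current order-entry path)": 118.5,
     "P2 (alternate alert path)":     131.4,
     "P3 (batched review path)":      120.2,
 }

 # Here, smaller deviation from the baseline score is treated as better.
 BASELINE_SCORE = 120.0
 ranking = sorted(optimization_values.items(),
                  key=lambda kv: abs(kv[1] - BASELINE_SCORE))

 for rank, (name, score) in enumerate(ranking, start=1):
     print(rank, name, score)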

If we run several TESTs on different paths through P, we have a direct and easy-to-implement method for implementation-optimization of paths, versions, or programs under decision support or CPOE. Since OUT(P(forall x in X)) subseteq OUT(DS(forall d in D)), we can set OUT(P(forall x in X)) = OUT(DS(forall d in D)); and since we limited the definition of the DS problem space to mean only that DS is in CPOE, we can move the problem space of DS anywhere inside the CPOE problem space, allowing implementation-optimization on CPOE data flows themselves, in real time, under the full premises of this report, since

forall OUT(P(X)) subseteq forall OUT(DS(d in D)) subset forall OUT(CPOE(any domain))   (see [7])


END
