Decision Support Systems are also increasingly being used in medicine. If they are medical devices, they must meet the legal requirements (e.g., the general safety and performance requirements).
The hype surrounding Artificial Intelligence, in particular Machine Learning, and systems such as IBM's Watson is raising hopes for the performance of Decision Support Systems.
This article presents the challenges manufacturers of Decision Support Systems must overcome and gives tips on how these challenges can be mastered. It also explains the corresponding FDA guidance document.
1. Decision Support Systems: Definition and examples
a) Definition
“A Decision Support System (DSS) is a software system that collects, reprocesses, and presents information based on which people can make operational or strategic decisions.”
Source: according to Wikipedia and others
b) Delimitation
Many conventional management information systems and data warehouses cleanse and consolidate data and make it available through dashboards or OLAP cubes. The target group of these systems is usually management.
Decision Support Systems (DSS) follow similar approaches but are usually based on models or advanced mathematical procedures. They claim to present not only information but also knowledge relevant to decision-making. Many DSSs are, therefore, based on data warehouses. They use Artificial Intelligence-based procedures to process the data, especially in the medical field.
c) Examples from the medical sector
Decision Support Systems in medicine, also known as Clinical Decision Support Systems (CDSS), serve all facets of medical practice, such as:
- Diagnostic support
  - Interpretation of radiological images, particularly for oncological and neurological questions as well as infection diagnostics
  - Pathology and histology, not limited to an oncological context
  - Dermatology: e.g., assessment of skin lesions
  - Ophthalmology: e.g., diagnosis of glaucoma
- Therapy support
  - Drug therapy, e.g., checking contraindications and interactions, or selecting medication such as antibiotics or cytostatic drugs
  - Consideration of therapy alternatives or combinations (e.g., chemotherapy versus radiation versus surgery versus palliative care)
  - Radiation planning
- Triage: deciding on priority and next steps
- Monitoring
  - Alarms for intensive care patients, e.g., in Patient Data Management Systems (PDMS)
  - Monitoring of patients at home
- Differential diagnosis based on findings, clinical presentation, and other parameters
2. FDA guidance document "Clinical Decision Support Software"
a) Introduction
The FDA published its own guidance document for Decision Support Systems as a first draft in 2017, a second draft in 2019, and the final version in 2022. However, anyone hoping to find guidance on meeting the regulatory requirements for this product class will be disappointed. The document merely defines – albeit in great detail – when a DSS becomes a medical device.
You can download the 2022 version of the document “Clinical Decision Support Software” here.
For this, the FDA relies on the definition in the legal text, Section 520(o)(1)(E) of the Food, Drug, and Cosmetic Act (FD&C Act). Unfortunately, this text is so poorly written that the FDA felt compelled to interpret it for the case of DSS.
b) Interpretation of the legal text
Accordingly, software is not a medical device only if all of the following conditions are met:
- It is NOT intended to capture, process, or analyze:
- medical images,
- signals originating from an IVD device
- (physiological) signals such as ECGs or EEGs and associated analysis software.
- It is (only) used to display, analyze, or print medical information about a patient or to display other medical information such as books, clinical publications or guidelines, drug package inserts, or authority recommendations, even if the healthcare professional uses this information to decide on preventive measures, diagnoses, and treatments.
- It supports healthcare professionals by providing recommendations for preventing, diagnosing, or treating diseases.
This is interesting because this is exactly what would classify the software as a medical device in Europe. The FDA goes even further and announces that it will not require compliance with regulatory requirements even for software not intended for healthcare professionals but for patients.
However, the following paragraph substantially restricts this generosity:
- The software makes these recommendations so transparent that users can arrive at them themselves without having to rely primarily on the software.
This requires the following:
- The software’s intended purpose (function) is clearly stated.
- The user group is clearly defined (e.g., ultrasound technician, vascular surgeon).
- The inputs on which the recommendations are based are named and publicly available.
- The logical rationale and derivation of the recommendation are made transparent. The FDA also insists that this information is not only publicly available (e.g., in the medical literature) but that the intended users can understand this information and follow the recommendation.
The FDA has attempted to translate some of these decisions into a table:
| Is the intended user a healthcare professional? | Can the user independently review the basis? | Is it Device CDS? |
|---|---|---|
| yes | yes | no, it is Non-Device CDS because it meets all criteria of section 520(o)(1) |
| yes | no | yes, it is Device CDS |
| no, it is a patient or caregiver | yes | yes, it is Device CDS |
| no, it is a patient or caregiver | no | yes, it is Device CDS |
The FDA distinguishes between Device-CDS and Non-Device-CDS in the update of the guidance document.
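The table above reduces to a single boolean rule. The following is a minimal sketch of that logic; the function name and parameters are illustrative shorthand, not FDA terminology:

```python
def is_device_cds(user_is_hcp: bool, can_independently_review: bool) -> bool:
    """Sketch of the FDA table above: software is Non-Device CDS only when
    the intended user is a healthcare professional (HCP) AND that user can
    independently review the basis for the recommendation; in every other
    case it is Device CDS."""
    return not (user_is_hcp and can_independently_review)
```

Only the combination `is_device_cds(True, True)` yields `False` (Non-Device CDS); the other three combinations yield `True`.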
c) Mapping with the IMDRF document
The FDA also feels compelled to align its delineation with the IMDRF’s framework on Software as a Medical Device (SaMD). You may be familiar with this table:
The columns indicate the significance of the information provided by the SaMD to the healthcare decision:

| state of healthcare situation or condition | treat or diagnose | drive clinical management | inform clinical management |
|---|---|---|---|
| critical | IV | III | II |
| serious | III | II | I |
| non-serious | II | I | I |
The FDA sees CDS only in the "inform clinical management" column; it explicitly excludes the other two columns:
SaMD functions that drive clinical management are not CDS, as defined in the Cures Act and used in this guidance […]
SaMD functions that treat or diagnose are not CDS, as defined in the Cures Act and used in this guidance […]
Clinical Decision Support Software – Draft Guidance for Industry and Food and Drug Administration Staff
d) First conclusion
The restriction in the fourth point of the FD&C Act clearly narrows the gap to the European regulations that the third point had opened. Nevertheless, a Decision Support System that maps a decision tree published in a clinical guideline would not be a medical device in the USA. Still, it may be one in Europe, at least if the manufacturer declares that the system serves the (improvement of) treatment.
With the update of the guidance document in 2019, the FDA classifies Clinical Decision Support Systems that “drive clinical management” or are even used for diagnosis or treatment as medical devices. The authority summarizes this as follows:
| IMDRF risk categorization | can the user independently review the basis? | intended user is HCP | intended user is patient or caregiver |
|---|---|---|---|
| inform × critical | yes | not a device | oversight focus |
| inform × critical | no | oversight focus | oversight focus |
| inform × serious | yes | not a device | oversight focus |
| inform × serious | no | oversight focus | oversight focus |
| inform × non-serious | yes | not a device | enforcement discretion |
| inform × non-serious | no | enforcement discretion | oversight focus |
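The summary table above can be encoded as a simple lookup. This sketch is my own encoding; the key and outcome strings are illustrative shorthand, not official FDA vocabulary:

```python
# Keys: (IMDRF state of the condition,
#        user can independently review the basis,
#        intended user is a healthcare professional)
OVERSIGHT = {
    ("critical",    True,  True):  "not a device",
    ("critical",    True,  False): "oversight focus",
    ("critical",    False, True):  "oversight focus",
    ("critical",    False, False): "oversight focus",
    ("serious",     True,  True):  "not a device",
    ("serious",     True,  False): "oversight focus",
    ("serious",     False, True):  "oversight focus",
    ("serious",     False, False): "oversight focus",
    ("non-serious", True,  True):  "not a device",
    ("non-serious", True,  False): "enforcement discretion",
    ("non-serious", False, True):  "enforcement discretion",
    ("non-serious", False, False): "oversight focus",
}

def fda_stance(condition: str, can_review: bool, user_is_hcp: bool) -> str:
    """Look up the FDA's stance for an 'inform clinical management' function."""
    return OVERSIGHT[(condition, can_review, user_is_hcp)]
```

Note that "not a device" occurs only when the user is a healthcare professional who can independently review the basis for the recommendation, mirroring the table.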
In the final 2022 version, the guidance provides many examples and interpretations for each of these four points. It also includes examples for “Non-Device CDS Software Functions” and “Device Software Functions.”
It specifies that the FDA will not classify CDS software as a medical device only if all four of the following conditions are met:
- They are not(!) used to collect, process, or analyze image data, data from IVDs, or other signals, e.g., biosignals.
- They are used to display, analyze, or print medical information on patients, guidelines, or clinical literature.
- They serve to support professionals in the prevention, diagnosis, or treatment of diseases or health conditions.
- They allow professionals to independently review the output of these recommendations.
“Drug-drug interaction and drug-allergy contraindication notifications to avert adverse drug reactions” would be an example of such a “function” that does not(!) make a software a medical device.
In Europe, this would be at least a class IIa medical device, more likely IIb.
e) Examples
Decision Support Systems that are not medical devices
Examples of Decision Support Systems that are not medical devices include software that
- lists suitable clinical guidelines for an existing diagnosis,
- warns of drug interactions and contraindications if(!) these are based on “FDA-approved” drug warnings (e.g., in package inserts),
- suggests an investigation based on clinical guidelines, e.g., that a doctor first orders a liver function test before prescribing statins (a drug),
- suggests chemotherapy based on medical history and test results, e.g., specific chemotherapy for BRCA-positive patients,
- calculates nutrition plans,
- provides assistance in setting ventilation parameters for ventilator patients based on blood gases – but does not make these settings immediately, or
- makes a statement based on laboratory values, such as HbA1c, glucose, etc., as to whether this patient has diabetes according to established guidelines.
Decision Support Systems that are medical devices
Examples of Decision Support Systems that are medical devices include software that
- is used for radiation planning based on CT data,
- is used to evaluate medical image data or to reconstruct these images in 3D,
- helps to place a catheter correctly based on image data,
- analyzes physiological signals from medical devices, e.g., heart rate, to monitor the patient for stroke or narcolepsy,
- analyzes sound signals to diagnose bronchitis,
- evaluates a lesion and the surrounding skin on an image to draw conclusions about the benign or malignant nature of this lesion (e.g., melanoma),
- is used for differential diagnosis, for example, to differentiate ischemia from a stroke, or to infer apnea from respiration,
- infers tuberculous or viral meningitis from cerebrospinal fluid (CSF) spectroscopy,
- determines the number of cells in pathology, or
- uses a proprietary algorithm based on blood pressure to determine whether an antihypertensive drug has been effective.
f) Second conclusion
The US legislator’s definition of when software is a medical device is so incomprehensible that the FDA feels compelled to clarify this in a separate guidance document.
Many manufacturers may see it as a simplification if they can place DSS on the market without prior clearance by the FDA. However, this presupposes, among other things, that the algorithms have been published in the scientific literature and proven to be valid.
However, evidence of clinical validity is not sufficient to prove patient safety. It is, therefore, surprising that the American legislator (not primarily the FDA) is more generous in some places than the European legislation.
3. Regulatory requirements in Europe
a) Basic requirements (MDD, MDR)
Some interpretation guides provide assistance on when software should be classified as a medical device. However, neither the Medical Device Directive MDD nor the Medical Device Regulation MDR set specific requirements for Decision Support Systems. This is understandable because legal texts cannot formulate specific requirements for every product class. The fact that the MDR has done so for mobile platforms is one of the inconsistencies and aberrations of this EU regulation.
Read more about when software is to be classified as a medical device here.
b) Classification according to MDD
Clinical Decision Support Systems are active devices. If they are used for diagnostic support, Rule 10 applies if, for example, they enable direct diagnosis or monitoring of vital bodily functions. These devices then fall into class IIa. Otherwise, Rule 12 applies: “All other active devices are classified as class I.”
The MDD does not say what a "direct diagnosis" is, but the MDR does:
“A device is considered to allow direct diagnosis when it provides the diagnosis of the disease or condition in question by itself or when it provides decisive information for the diagnosis.”
Source: MDR
However, Clinical DSS can also be used for therapy. In this case, the classification depends on whether and which therapeutic devices are directly or indirectly influenced.
c) Classification according to MDR
As the name suggests, Decision Support Systems are used for decision support. In other words, they provide information that relates to decisions on diagnoses, therapies, monitoring, or prevention of diseases and injuries.
These devices, therefore, fall into at least class IIa according to Rule 11 of the MDR.
Read more about Rule 11 of the MDR here.
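The structure of Rule 11's first paragraph can be sketched as a small classification function. This is a hedged illustration only; the impact strings are my own shorthand, not the MDR's wording, and a real classification always requires the full rule text:

```python
def mdr_rule_11_class(worst_impact: str) -> str:
    """Sketch of MDR Rule 11 (first paragraph) for software that provides
    information used for diagnostic or therapeutic decisions.
    worst_impact: shorthand for the worst plausible consequence of a
    decision based on the software's output (assumed vocabulary)."""
    if worst_impact == "death or irreversible deterioration":
        return "III"
    if worst_impact == "serious deterioration or surgical intervention":
        return "IIb"
    return "IIa"  # the default class for such decision-supporting software
```

This mirrors the text above: such software falls into at least class IIa, moving up to IIb or III as the potential impact of the supported decision grows.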
4. Further regulatory requirements
Medical and Clinical Decision Support Systems require medical data, including data that can be directly assigned to patients. Accordingly, this data is particularly worthy of protection. Manufacturers must take data protection requirements into account.
5. Decision Support Systems: Challenges
Manufacturers of Clinical DSS face challenges because not everything that appears feasible is possible from a technical, regulatory, and ethical perspective.
Challenge 1: Intended purpose
Manufacturers must clearly state the purpose and the promised performance (the "claims"). This applies, for example, to the sensitivity and specificity of diagnoses, and to the dependence of these claims on boundary conditions such as the patient population (age, gender, comorbidities, etc.).
It is precisely this clarity that many manufacturers are unable or unwilling to provide.
Challenge 2: Proof of performance, verification
Tests can never prove the correctness of an implementation or device. However, the expected outputs can be specified with classic algorithms such as decision trees.
With many Artificial Intelligence procedures, especially neural networks, these outputs can no longer be specified in advance. The same applies if the database on which the algorithms operate is not constant or if the software even expands it independently. The predictability and repeatability of test results are, at the very least, more difficult to achieve.
In these cases, manufacturers often fail to define and justify pass-fail criteria.
Challenge 3: Risk management
If the outputs of Decision Support Systems are not deterministic, and in particular, if it cannot be ruled out that they may even make completely wrong recommendations, the risks can hardly be predicted.
The argument put forward by many manufacturers that a doctor would still check the output falls short of the mark. Doctors will hopefully reduce the likelihood that an incorrect recommendation will lead to harm. However, the risk still exists, and proof that doctors actually reduce the probability must be provided.
The benefit argumentation of some manufacturers is also open to attack. This is particularly true when the argument is based on economic savings. A benefit calculation based on the reduction of human error is more helpful. However, this also needs to be proven, e.g., with the help of clinical literature.
Challenge 4: Validation, clinical evaluation
The clinical evaluation must provide evidence that the promised benefit is achieved and that there are no risks that are not already known and assessed as acceptable. This proof of benefit requires a promise of benefit that is often not sufficiently specific (see challenge 1).
The benefit is measured as an improvement over the state of the art. But what is this gold standard? Many manufacturers compare the output of the DSS with the recommendations that doctors would give. But is this really the gold standard? This needs to be justified.
6. Conclusion
Since the 1970s, Clinical Decision Support Systems have experienced many “hypes” with exaggerated expectations, leading to disillusionment. However, with every technical innovation, such as the current “machine learning,” we come closer to the objective of supporting human intelligence with computer systems, sometimes even replacing it.
Advertisements such as IBM’s for Watson raise expectations that can lead to deep disappointment for affected patients. It is up to everyone to decide how responsible such actions are. After all, neither the technological, regulatory, nor medical hurdles have been overcome in such a way that these systems can find their way into mainstream health care.
Change history
- 2022-09-30: Final draft of the FDA guidance document Clinical Decision Support Software added