November 24, 2019

Ethics Council comments on the challenges of artificial intelligence

Core messages

The German Ethics Council has published a statement in which it comprehensively examines and evaluates the effects of digital technologies on human self-understanding and coexistence. «The use of AI must expand human development, not diminish it. AI must not replace humans. These are basic rules for ethical evaluation,» says Professor Alena Buyx, Chairwoman of the German Ethics Council, on the presentation of the statement «Man and Machine – Challenges of Artificial Intelligence».

Digital technologies and AI systems have today found their way into almost all areas of public and private life. They range from tumor diagnostics and intelligent tutoring systems in schools to recommendation systems on online platforms and software intended to support decisions in the social and judicial systems or in policing.

For the ethical evaluation of such developments and their use in various fields, it is necessary to understand not only the technologies themselves, but also their interactions with the people who use them or are affected by their application, the statement continues. Central to this is the question of what effects arise «when activities that were previously reserved for humans are delegated to machines. Are human authorship and possibilities for action expanded or diminished by the use of AI?» In its statement, the German Ethics Council examines this question in four exemplary application areas: school education, public communication, public administration, and medicine and healthcare.

AI-supported digital products are increasingly being used in medicine and the healthcare system. According to the Ethics Council, weighing the opportunities and risks associated with them requires at least a threefold differentiation. Firstly, several groups of actors must be distinguished who have different functions and responsibilities with regard to the use of AI. Secondly, healthcare encompasses different areas of application for AI products, from research to specific patient care. Thirdly, different degrees of «replacement of human action segments» can be observed.

Even the development of suitable AI components for medical practice requires close interdisciplinary cooperation between various experts and places high demands on the quality of the training data used, in order to minimize avoidable distortions of the results from the outset. Systems should be designed to provide plausibility checks in the use phase, in order to avoid the risks of automation bias. Appropriate testing, certification and auditing measures should ensure that only sufficiently tested AI products are used, whose «basic functionality can be sufficiently explained and interpreted, at least in systems that propose decisions with serious consequences for those affected».
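As an illustration only, not part of the statement: the following minimal Python sketch shows what such a plausibility check against automation bias could look like, assuming a hypothetical decision-support model that returns a label with a confidence score. All names and the threshold are invented for demonstration.

```python
# Illustrative sketch only: a wrapper that routes low-confidence model output
# back to a human instead of presenting it as a recommendation, one possible
# safeguard against automation bias. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str
    confidence: float
    requires_human_review: bool
    rationale: str

def plausibility_check(label: str, confidence: float,
                       review_threshold: float = 0.85) -> Recommendation:
    """Flag results that should not be accepted without human scrutiny."""
    if confidence < review_threshold:
        return Recommendation(label, confidence, True,
                              "confidence below review threshold")
    return Recommendation(label, confidence, False,
                          "passed automated plausibility check")

# A borderline result is explicitly marked for review rather than auto-applied.
print(plausibility_check("suspicious lesion", 0.62))
```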

In medical research, the statement notes, the use of AI can be beneficial in several ways, provided that the protection of the persons participating in studies and of their data is guaranteed. AI could, for example, provide helpful preparation and support for literature searches or for the evaluation of large databases, discover new correlations between phenomena, and make accurate predictions on this basis, for example on the spread of a virus.
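As a toy illustration of that last example, a minimal SIR (susceptible–infected–recovered) simulation of the kind a data-driven pipeline might fit to case numbers when forecasting the spread of a virus; the transmission and recovery rates below are invented, not estimates from any real outbreak.

```python
# Toy SIR model illustrating the kind of epidemic forecast mentioned above.
# beta (transmission rate) and gamma (recovery rate) are invented values;
# in a real pipeline they would be estimated from observed case data.
def simulate_sir(s: float, i: float, r: float,
                 beta: float = 0.3, gamma: float = 0.1, days: int = 60):
    history = []
    n = s + i + r
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# One initial case in a population of 10,000: report the projected peak.
trajectory = simulate_sir(s=9999, i=1, r=0)
peak_day, (_, peak_i, _) = max(enumerate(trajectory, 1), key=lambda t: t[1][1])
print(f"Projected peak: ~{peak_i:.0f} simultaneous infections on day {peak_day}")
```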

In medical care, AI instruments are increasingly being used for diagnostics and therapy. In particular, advances in AI-supported image recognition have opened up new possibilities for the early detection, localization and characterization of pathological changes. In therapy, AI is used, for example, in surgical robots.
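For readers curious what such an image-recognition component looks like in code, here is a minimal PyTorch sketch: a tiny stand-in classifier that scores a scan as benign or suspicious. The architecture, sizes and names are illustrative inventions, not a clinically validated model of the kind the statement discusses.

```python
# Illustrative stand-in for an AI-supported image-recognition component:
# a tiny CNN scoring a scan as benign vs. suspicious. Not a real clinical
# model; architecture, input size and names are invented for demonstration.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: benign / suspicious

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LesionClassifier().eval()
with torch.no_grad():
    scan = torch.randn(1, 1, 224, 224)  # stand-in for one grayscale scan
    probs = torch.softmax(model(scan), dim=1)
    print(f"P(suspicious) = {probs[0, 1]:.2f}")  # untrained, so roughly 0.5
```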

If medical activities are delegated to technology to such a narrow to medium extent, tumors could, for example, be detected earlier, treatment options expanded and the chances of successful treatment increased. For doctors, the technology also opens up the opportunity to be relieved of monotonous routine work and to gain more time for exchanges with their patients. These opportunities are, however, offset by risks, «for example, if specialists lose their own competences due to the progressive delegation of certain tasks to technical systems, or neglect due diligence obligations in dealing with AI-supported technology due to an automation bias».

In order to realize the opportunities of AI use in clinical settings and to minimize the risks, several levels must be taken into account. According to the Ethics Council, this requires, among other things, comprehensive and as uniform as possible technical equipment, staff training and continuous quality assurance, as well as strategies ensuring that findings in AI-supported protocols are also checked for plausibility, that the personal life situation of patients is comprehensively taken into account, and that communication takes place in a trusting manner.

The large data requirements of most medical AI applications also pose challenges, both with regard to protecting the privacy of data subjects and with regard to a sometimes very restrictive interpretation of applicable data protection regulations in individual cases, which can stand in the way of realizing the potential of AI use in clinical practice.

Psychotherapy is one of the few medical fields of action in which AI-based systems could in some cases largely or completely replace doctors or other health workers. For some years now, instruments have been in use here, mostly in the form of screen-based apps that offer a kind of therapy on an algorithmic basis. On the one hand, given their low threshold and constant availability, such apps could bring people who would otherwise receive therapy too late or not at all into initial contact with therapeutic services. On the other hand, there are concerns about a lack of quality controls, about the protection of privacy, or about what happens «when people build a kind of emotional relationship with the therapeutic app». It is also controversial whether the increasing use of such apps promotes a further reduction in the number of therapeutic specialists.

Based on these considerations, the German Ethics Council has formulated nine recommendations for the use of AI in the health sector:

The development, testing and certification of medical AI products require close cooperation with the relevant regulatory authorities and, in particular, with the relevant medical societies, in order to detect weak points of the products at an early stage and to establish high quality standards.

When selecting training, validation and test data sets, it should be ensured, beyond existing legal requirements and through appropriate monitoring as well as precise and at the same time sensibly implementable documentation obligations, that the factors relevant to the respective patient groups (e.g. age, gender, ethnic influencing factors, pre-existing conditions and comorbidities) are adequately represented; a sketch of such a coverage check follows the list of recommendations below.

When designing AI products for decision support, it must be ensured that results are presented in a form that makes dangers such as automation bias transparent and counteracts them; in addition, the need for a reflexive plausibility check of the action proposed by the AI system must be underlined.

The collection, processing and disclosure of health-related data are generally subject to strict requirements and high standards of patient information, data protection and privacy.

If the superiority of AI applications over conventional treatment methods has been carefully proven by empirical studies, it must be ensured that they are available to all relevant patient groups.

Proven superior AI applications should be rapidly integrated into the clinical training of healthcare professionals. The other health professions should also include appropriate elements in their training, in order to strengthen competence in the use of AI applications in the health sector.

In the routine application of AI components, it should be ensured not only that those who use them clinically have a high level of methodological expertise for classifying the results, but also that strict due diligence obligations are observed in the collection and transfer of data as well as in the plausibility checking of machine-generated recommendations for action. Particular attention must be paid to the risk of losing theoretical as well as haptic-practical experiential knowledge and the corresponding skills; this risk should be counteracted by appropriate and specific training measures.

As medical, therapeutic and nursing action segments are increasingly replaced by AI components, it must not only be ensured that patients are informed in advance about all decision-relevant circumstances of their treatment. Targeted communicative measures should also be taken to actively counteract an impending sense of increasing «objectification» and to protect the relationship of trust between the persons involved. The higher the degree of technical substitution of human actions by AI components, the greater patients' need for information and support. The increased use of AI components in care must not lead to a further devaluation of «talking medicine» or to a reduction in personnel.
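Picking up the second recommendation above, a hedged sketch of how the representation of the named patient factors in a candidate training set could be monitored. The column names, example data and the 5% threshold are assumptions chosen for illustration, not requirements from the statement.

```python
# Hypothetical coverage check for the factors named in the second
# recommendation (age, gender, pre-existing conditions, ...). Column names
# and the 5% threshold are assumptions, not taken from the statement.
import pandas as pd

def coverage_gaps(df: pd.DataFrame, factors: list[str],
                  min_share: float = 0.05) -> dict[str, list]:
    """Per factor, list the categories whose share falls below min_share."""
    gaps = {}
    for factor in factors:
        shares = df[factor].value_counts(normalize=True)
        missing = shares[shares < min_share].index.tolist()
        if missing:
            gaps[factor] = missing
    return gaps

cohort = pd.DataFrame({
    "age_band": ["40-64"] * 90 + ["65+"] * 8 + ["18-39"] * 2,
    "sex": ["f", "m"] * 50,
    "diabetes": [True] * 30 + [False] * 70,
})
# "18-39" makes up only 2% of this cohort, so it is reported as a gap
# that documentation and monitoring would have to address.
print(coverage_gaps(cohort, ["age_band", "sex", "diabetes"]))
```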

According to the Ethics Council, a complete replacement of doctors by an AI system would endanger patient well-being and cannot be justified by the acute shortage of staff that already exists in certain areas of care. Especially in complex treatment situations, a «personal counterpart» is required who can increasingly be supported by technical components, but who, as the person responsible for planning, implementing and monitoring the treatment process, does not become superfluous.
