Artificial Intelligence is nothing without transparency
This is undeniably a complex moment for the world of science, for many reasons, above all internal ones. The scientific method has repeatedly been put "under stress" by a pandemic that has generated misunderstandings, tensions and friction, driven by the demand for rapid results from methods that, by their nature, cannot be rapid. Now, in months filled with daily proclamations about upcoming vaccines, new criticism is raining down on the AI algorithms Google has proposed for reading mammograms.
The AI is not transparent
The accusation does not concern the method itself, which is considered fascinating and potentially useful, but the way it was presented and proposed. The criticism is that Google's proposal is not reproducible, and therefore science cannot make it its own: it cannot work on it, it cannot validate its effectiveness, and it cannot build on it with further studies to optimize that effectiveness.

Google, for its part, defends itself by explaining that some aspects cannot be "open" to science because of specific needs, for example to protect the group's intellectual property and technologies. This creates a problem that, beyond being very practical, is also one of principle: if science cannot reproduce a method and its results, it cannot validate that path or make it its own. It can accept it, as an artifact in itself, but it cannot consider it part of the scientific method and shared knowledge.
The article, published in Nature, explains that the reasons given by Google are acceptable in themselves, but at the same time considers the approach taken by Mountain View to be wrong: if a process cannot be made fully accessible, systems must at least be devised that let scientists analyze and verify it, "allowing peer-review of studies and their evidence".
The hopes placed on AI in the medical field are extremely high. In this specific case, Google claims a method capable of reading mammograms better than the human eye and human expertise, reducing false positives and improving overall diagnostic capability. To make this potential structural, however, it must be possible for science to scrutinize AI and make it its own, as Galileo Galilei taught the world centuries ago. Faced with Artificial Intelligence, Galileo's principles are more solid than ever, and science needs to prove to itself that it can cope with this important evolution as well.
Source: Nature