University researchers also highlighted discrimination in AI technologies that take symptom profiles from medical records, reflecting and exacerbating prejudice against minorities
Companies around the world have developed methods over the past year to harness the power of big data and machine learning (ML) in medicine. A model developed by the Massachusetts Institute of Technology (MIT) uses AI to detect asymptomatic COVID-19 patients from cough recordings captured on their smartphones. In South Korea, a company used cloud computing to scan chest X-rays and monitor infected patients.
Artificial intelligence (AI) and ML have been deployed extensively during the pandemic, in applications ranging from data extraction to vaccine delivery. But experts at the University of Cambridge question the ethical use of AI, warning that these technologies tend to harm minorities and people of lower socio-economic status.
Stephen Cave, director of Cambridge’s Centre for the Future of Intelligence (CFI), said, “Relaxing ethical requirements in a crisis could have harmful consequences that last beyond the life of the pandemic.”
Predictions of which patients will deteriorate, such as diagnostic decisions about who may need ventilation, can be flawed because the AI models are trained on biased data. These training datasets and algorithms are skewed against groups that access health services less frequently, including minority ethnic communities and people of lower socio-economic status, the Cambridge team warned.
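To see how this kind of skew can arise, consider a minimal, hypothetical sketch in Python. It is not the Cambridge team's analysis or any real triage model: it assumes a made-up "symptom score" feature and a group B whose records understate true severity because its members access care less often. A risk model trained mostly on group A then under-predicts risk for group B patients with identical scores.

```python
# Hypothetical sketch: how under-representation in training data can
# skew a clinical risk model against a group. Illustration only; not
# the Cambridge team's analysis or any real triage model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, severity_shift):
    # One synthetic "symptom score" feature; true deterioration risk
    # rises with the score. severity_shift models a group whose records
    # under-report severity (e.g. fewer prior healthcare contacts).
    score = rng.normal(0.0, 1.0, n)
    risk = 1 / (1 + np.exp(-(score + severity_shift)))
    deteriorated = rng.random(n) < risk
    return score.reshape(-1, 1), deteriorated

# Group A dominates the training data; group B is under-represented
# and is sicker than its recorded scores suggest.
Xa, ya = simulate(9000, severity_shift=0.0)
Xb, yb = simulate(300, severity_shift=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# On balanced held-out sets, the model, fitted mostly to group A,
# under-predicts risk for group B at the same recorded scores.
for name, shift in [("A", 0.0), ("B", 1.0)]:
    X_t, y_t = simulate(5000, shift)
    pred = model.predict_proba(X_t)[:, 1]
    print(f"group {name}: mean predicted risk {pred.mean():.2f}, "
          f"actual rate {y_t.mean():.2f}")
```

Running the sketch, group B's actual deterioration rate comes out well above its mean predicted risk, while group A's predictions track its actual rate closely: exactly the pattern the researchers warn about.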
Another issue is the use of algorithms to allocate vaccines locally, nationally, and globally. Last December, Stanford Medical Center's vaccine-allocation algorithm excluded many of its young front-line workers.
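The sketch below shows, in the same hypothetical style, how a seemingly neutral rule-based priority score can produce such exclusions. It is emphatically not Stanford's actual algorithm; all names, weights, and case rates are invented for illustration.

```python
# Hypothetical sketch of a rule-based vaccine-priority score. NOT
# Stanford's actual algorithm; it only illustrates how facially
# neutral rules can exclude young front-line staff. Residents who
# rotate between wards may have no fixed "assigned unit", so a rule
# that scores exposure by assigned-unit case rates silently drops them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Worker:
    name: str
    age: int
    assigned_unit: Optional[str]  # rotating residents may have none

# Per-unit COVID-19 case rates (illustrative numbers).
UNIT_CASE_RATE = {"covid_ward": 0.30, "admin": 0.01}

def priority_score(w: Worker) -> float:
    age_points = w.age / 100                              # older staff score higher
    exposure = UNIT_CASE_RATE.get(w.assigned_unit, 0.0)   # missing unit -> zero exposure
    return age_points + 2.0 * exposure

staff = [
    Worker("senior administrator", 62, "admin"),
    Worker("resident physician", 29, None),  # rotates through COVID wards
]
for w in sorted(staff, key=priority_score, reverse=True):
    print(f"{w.name}: priority score {priority_score(w):.2f}")
# The administrator outranks the resident, despite far lower exposure.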
“In many cases, AI plays a central role in determining who is best placed to survive the pandemic. In a health crisis of this magnitude, the stakes for fairness and equity are high,” said Alexa Hagerty, a research associate at the University of Cambridge.
The researchers also highlighted discrimination in AI technologies that take symptom profiles from medical records, reflecting and amplifying prejudice against minorities.
Contact-tracing apps have also been criticised by many experts around the world for excluding people who lack Internet access or digital skills, in addition to raising user-privacy concerns.
In India, biometric identification programmes may be linked to vaccine delivery, raising concerns about data privacy and security. Other vaccine-allocation algorithms, including some used by the COVAX alliance, are driven by privately owned AI. These proprietary algorithms are like ‘black boxes’, Hagerty noted.