
Much is said about the accelerated digital transformation and adoption of emerging technologies brought on by the pandemic; however, little is investigated about how these innovative solutions, such as artificial intelligence (AI), are evaluated and what results they deliver. What good practices can we implement in these processes?
“We don’t need good intentions, we need good practices.”
Let’s look at some figures and trends from recent years. In 2021, 52% of companies accelerated their AI adoption plans due to the COVID-19 crisis, and 86% said that AI had become a “mainstream technology” in their business. The average adoption rate across all regions of the world rose by 6 percentage points compared to 2020, reaching 56% – and in Latin America it grew from 41% to 47%.
In 2021, global AI funding increased by 108% (US$66.8 billion) compared to 2020. This increase was led by the health sector, which accounted for almost a fifth of total funding (18%).
Potential and challenges of artificial intelligence
AI has the potential to expand access to care, improve health outcomes, create more efficient processes, and reduce costs. However, AI can also generate biases, with implications that vary depending on the type of solution implemented. Consider AI applications that support medical decision-making and directly affect patients: if the algorithm’s prediction carries an error or a negative bias and the medical staff acts on it, the consequences can be fatal.
For example, there is evidence that simple heart disease prediction rules used for decades in industrialized countries were biased. The cardiovascular risk score – Framingham Heart Study – performed very well in Caucasian patients, but not in African-Americans. About 80% of the data collected were from the white population, and therefore the diagnoses were more applicable to that group than to other underrepresented groups. This problem of racial bias is repeated in commonly used algorithms and in other population groups that are sometimes underrepresented, such as women. And if we are talking about complex AI models, the risk is even greater.
The importance of algorithmic auditing
In response to these cases of skewed results, lack of transparency, and misuse of data, there has been growing interest in more active investigation of algorithmic errors and system failures, and increasing calls for ethical audits of algorithms as a solution to these risks.
These audits seek to evaluate an AI system and its development process, including the design and the data used to train the system, and to review its impacts in terms of precision, discrimination, bias, privacy, and security, among others, providing elements of accountability, transparency, and explainability. The example of LAURA, a Brazilian solution that is undoubtedly a world pioneer in the adoption of AI in the health sector, helps illustrate what kind of results such an exercise can produce.
The case of LAURA and good practices in artificial intelligence
In 2021, the IDB, under its regional initiative fAIr LAC and hand in hand with Eticas Consulting, carried out an algorithmic audit of the LAURA system. The LAURA team are allies of the initiative and participated voluntarily, recognizing the value of an audit for improving their solution and its transparency standards. The audit comprised a social impact analysis, to understand which aspects could limit the system’s operation, and a quantitative analysis, to detect possible cases of discrimination, differential impact, or algorithmic bias and to recommend mitigation measures.
On the one hand, the analysis revealed that LAURA is clear to and well accepted by end users, thanks to its ease of use and the clinical knowledge it conveys. In terms of data protection, it has a complete and well-structured privacy policy, access and authentication standards with basic security mechanisms, and a pseudonymization policy that follows privacy-by-design principles.
However, the audit revealed limitations in communicating to users the scope and restrictions of the algorithmic model, particularly regarding its impact on different population groups. The analysis identified that, even though the female population in the data was proportionally larger and younger than the male population, the model’s predictive capacity for this subgroup was lower.
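This kind of finding comes from disaggregating model performance by subgroup rather than looking only at aggregate metrics. A minimal sketch of such a check is below; the data, field names, and groupings are hypothetical illustrations, not LAURA’s actual data or methodology.

```python
# Sketch of a disaggregated performance check, the kind of quantitative
# analysis an algorithmic audit applies: compute recall (sensitivity)
# separately per subgroup and compare. All data below is hypothetical.

def recall_by_group(records, group_key):
    """Return {group: recall} computed over actual-positive cases only."""
    stats = {}  # group -> [true positives, false negatives]
    for r in records:
        counts = stats.setdefault(r[group_key], [0, 0])
        if r["label"] == 1:          # an actual positive case
            if r["prediction"] == 1:
                counts[0] += 1       # correctly detected
            else:
                counts[1] += 1       # missed case
    return {g: tp / (tp + fn) if (tp + fn) else None
            for g, (tp, fn) in stats.items()}

# Hypothetical audit sample: model predictions vs. actual outcomes.
sample = [
    {"sex": "F", "label": 1, "prediction": 1},
    {"sex": "F", "label": 1, "prediction": 0},
    {"sex": "F", "label": 1, "prediction": 0},
    {"sex": "M", "label": 1, "prediction": 1},
    {"sex": "M", "label": 1, "prediction": 1},
    {"sex": "M", "label": 1, "prediction": 0},
]

print(recall_by_group(sample, "sex"))
```

A large gap between subgroups, as in this toy sample, is the signal an auditor would flag as potential differential impact and investigate further.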
These results informed the LAURA team about improvements to mitigate potential ethical risks in the deployment of their solution. This example shows why the accelerated adoption of AI solutions must be evaluated: good intentions are not enough, and iterative, periodic evaluation processes need to become standard practice in the industry. Algorithmic auditing should no longer be seen as a privilege reserved for certain projects, but as a requirement for any solution that directly affects the well-being of the population.