Artificial Intelligence (AI) is taking the world by storm. By enabling machines to simulate intelligent human behaviour, AI opens up a world of possibilities for businesses to automate and optimise processes, as well as to generate insights for better decision-making. Nevertheless, this cutting-edge technology is built on historical data generated by humans, and therefore risks reproducing the discriminatory practices and harmful stereotypes created by those very humans. How can we ensure that we leverage AI in the right way?

While still an emerging technology, AI is no longer just the subject of science fiction. It is being adopted across industries in both the public and private sectors. In finance, the greatest potential for AI lies in managing unstructured and volatile data. “Examples include complying with new accounting standards, reviewing expense reports and processing vendor invoices,” according to Gartner.

The wide adoption of AI needs to be reconciled with valid concerns regarding ethics and security. In fact, organisations need the trust of all stakeholders to fully embrace AI. “AI cannot thrive if the business does not trust AI techniques, so organisations need checks and balances to assess and respond to threats and damage and to ensure integrity is embedded into AI,” says Gartner. How are these checks and balances being put into practice?

Does AI contribute to or complicate the goals of gender equality policy? We interviewed Sara Larsson from the Swedish Gender Equality Agency, a government organisation that aims to ensure the effective implementation of gender equality policy in Sweden. In 2021, the agency launched an initiative together with the AI Sustainability Centre, which offers “an insight engine with a unique methodology for screening, assessing, mitigating, auditing and reporting exposure to ethical risk in AI solutions”. Together, they are developing a tool that companies and the public sector can use to evaluate gender fairness in their AI solutions.

How does the work of the Swedish Gender Equality Agency relate to AI?

We need to consider how systems contribute to inequality, or can contribute to equality, when citizens come into contact with different governmental bodies. This issue has not been touched on before. Around a year ago, we started to investigate whether any other governments in the world had implemented anything in this area or were doing anything in particular with AI – but there were none.

What we specifically looked for was research covering the relationship between inequality and technical solutions that use AI to automate tasks and decisions. We also wanted to learn what actions need to be taken if these systems do, in fact, contribute to more inequality in society.

At a meeting with the public sector, we raised the possible effects that digitalisation can have on gender equality. Other government agencies also expressed an interest in this topic, especially the Swedish Social Insurance Agency, which administers social insurance for financial security in the event of illness or disability, and for families with children.

For a year, the Swedish Gender Equality Agency has collaborated with the AI Sustainability Centre, with the Swedish Social Insurance Agency and the Swedish Tax Agency serving as practical examples in their research.

Read the rest of this article on page 30 of inlumi’s Enabling Decisions magazine.
