Data governance, AI and gender bias
Today’s society is being shaped by the rapid transformation of digital technologies. An increasingly data-driven society is emerging as more and more objects and devices are connected to the Internet. This growing “global data skin” is the result of combining large volumes of data generated continuously in real time with powerful data processing tools. One aspect that distinguishes the so-called Society 4.0 from previous technological revolutions is today’s information processing capacity, which can not only manage large volumes of data but also “learn” from the data it receives and react based on the processed results. This automatic capacity for learning is known as machine learning.
Machine learning is now being used to support decision-making in public administration, especially in the context of big data. The more data that governments and public institutions manage to integrate into their systems, the greater the capacity of machine learning to make decisions based on that data.
It is important to note here that the integrity of the data used to train these systems (machine learning algorithms) is critical to making unbiased decisions. If the quality of the input information is poor, the output will be equally bad: the collected data should provide an accurate representation of the real world. This raises serious concerns about biased decisions being made automatically and at scale, affecting large sectors of the population (particularly minorities and more vulnerable communities).
In the private sector, for instance, if young males tend to have more car accidents than young females, the algorithms used by car insurance companies (trained on their existing records) can be expected to recommend charging young males higher premiums than young females. A similar situation could apply to ethnic minorities, migrants and people with disabilities, among others.
A similar example can be found in public administration, for instance when assigning scholarships to students or selecting which groups of citizens receive a social service. If the algorithms used by a social welfare department are trained on historical information (e.g. more males than females have traditionally been beneficiaries), the system is likely to replicate that gender bias in its future decisions.
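The mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration with invented data, not any real welfare system: a naive model “learns” per-gender award rates from biased historical records and then simply reproduces them in its recommendations.

```python
# Hypothetical illustration of bias replication. All records are invented.
# Historical scholarship decisions: (gender, was_awarded). Past awards
# favoured male applicants (70% vs 30% award rate).
history = [("M", True)] * 70 + [("M", False)] * 30 \
        + [("F", True)] * 30 + [("F", False)] * 70

def train_award_rates(records):
    """'Learn' the historical award rate for each gender."""
    rates = {}
    for gender in ("M", "F"):
        outcomes = [awarded for g, awarded in records if g == gender]
        rates[gender] = sum(outcomes) / len(outcomes)
    return rates

def recommend(gender, rates, threshold=0.5):
    """Naive model: recommend an award if the learned rate passes the threshold."""
    return rates[gender] >= threshold

rates = train_award_rates(history)
print(rates)                   # {'M': 0.7, 'F': 0.3}
print(recommend("M", rates))   # True (the historical bias is replicated)
print(recommend("F", rates))   # False
```

A real system would use a far richer model, but the failure mode is the same: nothing in the pipeline questions whether the historical outcomes were fair to begin with.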
A growing body of evidence indicates that algorithms increasingly affect decisions such as selecting which citizens can benefit from a welfare programme, or who is eligible to be hired in the public administration. In both cases, machine learning tools can automatically make decisions or recommend to public officers the best decision to take. With the emergence of artificial intelligence (AI), there are serious risks of replicating unequal or unfair treatment.
For those interested, see the Draft Ethics Guidelines for Trustworthy AI produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG).
All these rapid innovations bring new challenges for the digital transformation of public services. In several regions, governments lag far behind the transformation capacity of large digital companies, which are swiftly developing cutting-edge AI systems.
To overcome the growing risk of information asymmetry between the public sector and the technological capacities behind machine learning developments (big data, AI and the Internet of Things, among others), it is critical to develop different strategies. Some of the most relevant actions to adopt for the current and coming digital government agenda are as follows:
1. The public sector needs to develop the capacity to understand, regulate and acknowledge both the opportunities and the challenges of AI in public administration services. Adopting more transparent and non-discriminatory electronic governance policies is critical to prevent these AI systems from perpetuating, or even amplifying, existing inequalities. Raising awareness is one of the first steps to take.
2. It is critical to implement frameworks and good practices, and to enforce transparency whenever public administrations adopt AI tools: it must be clear how the algorithms are designed, who takes part in developing the software, and how the systems work. A set of digital ethics principles should be adopted to ensure gender parity and impartial treatment, to safeguard the data privacy of beneficiaries (citizens), and to promote more transparent practices (e.g. algorithmic auditing for fair treatment regardless of race).
3. Adopt internal data governance mechanisms, regular monitoring and accountability practices to ensure that public information systems have minimized, as far as possible, any risk of gender bias in data processing. This will require rigorous methodologies and tools to confirm that the data used to train these systems does not introduce any preference, bias or imbalance favouring males over females, and likewise does not disadvantage any minority. Creating an advisory board that includes gender experts and female representatives is also considered good practice.
4. Build a strong digital citizenship: a number of actions need to be taken to educate and up-skill society in this area. This means enforcing and promoting actions that ensure a broader understanding of digital citizenship, not limited to learning how to use digital information tools, but also promoting the active and dynamic participation of citizens so that the risks associated with the broad adoption of big data and machine learning in the public sector are minimized as much as possible. At the civil society level, a number of watchdog organizations already exist (e.g. AlgorithmWatch in Berlin, algorithmwatch.org) which increasingly call for gender bias auditing and discuss the relevance of these topics. These initiatives need to be promoted and supported.
5. Adopt policies and standards, at the electronic governance level, to ensure that digital tools do not undermine democracy or harm any community in civil society at risk of discrimination or under-representation. One action to consider is the implementation of gender transparency and equity indices, which can help clarify and compare the extent to which AI information systems have been checked to ensure minimal gender bias, or any other form of gender-imbalanced practice.
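As a deliberately simplified illustration of the auditing and index ideas in points 3 and 5, the sketch below computes a disparate impact ratio over an invented decision log: the selection rate of one gender divided by that of the other. The 0.8 cut-off follows the widely cited “four-fifths rule” from employment law; the data and function names are assumptions for this example, not a standard from any governance framework.

```python
# Hypothetical gender parity audit over a system's past decisions.
# The decision log is invented for illustration.

def selection_rate(decisions, group):
    """Share of positive decisions received by one group."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, protected="F", reference="M"):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below roughly 0.8 (the 'four-fifths rule') are a
    common red flag for adverse impact."""
    return selection_rate(decisions, protected) / selection_rate(decisions, reference)

# Invented decision log: (gender, benefit_granted).
log = [("M", True)] * 60 + [("M", False)] * 40 \
    + [("F", True)] * 30 + [("F", False)] * 70

ratio = disparate_impact(log)
print(round(ratio, 2))   # 0.5, well below the 0.8 rule of thumb: flag for review
```

A production audit would also condition on legitimate factors (eligibility criteria, for example) before comparing rates, but even a crude index like this makes gender imbalance visible and comparable across systems.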
If you are interested in this topic, you would probably enjoy reading the book "Algorithms of Oppression: How Search Engines Reinforce Racism" by Safiya Umoja Noble.
Posted on Jan 07, 2019