Countering gender and racial bias with Affirmative Action Algorithms

In-built forms of discrimination can fatally undermine the right to equality and to social protection for women, especially women belonging to excluded racial and social groups.  

The future is being built on technologies and decision-making systems created under the “standardized male” default, which assumes the average user is a white, heterosexual, educated man from a wealthy country. As the adoption of Automated Decision-Making (ADM) systems and AI expands rapidly around the world, this default has proved extremely damaging for women and girls worldwide, especially those belonging to non-white groups. Biased algorithms are limiting their opportunities and threatening collective liberties. Yet stories of harm remain anecdotal, and no concrete, binding proposal to fix the problem is on the table of decision-makers.

The problem needs an urgent, positive response at the diplomatic, standards, public policy and technical levels simultaneously. As countries accelerate the drafting of National AI Strategies and rush to adopt ADM systems to address social problems, we need to equip policymakers, technologists and governments with the basic skills to use human rights as guiding design principles, so that technology's potential to identify and counteract bias is fully realized as the digital future is built.

APPROACH

This project will increase the influence of the <A+> Alliance, a multidisciplinary, diverse and feminist global coalition of expert practitioners, academics and activists working to create and apply Affirmative Action Algorithms <A+> that redirect the current path of ADM at a critical turning point in history. The focus of my work will be the intersection of gender and race.

The final goal is to achieve systems change at the multilateral institutional level through innovative public policy proposals, high-level diplomatic engagement, partnerships, and capacity building of strategic actors who place an inclusive digital future at the core of their initiatives.

PROCESS

Drafting innovative public policy recommendations that address the technological, social and administrative challenges of implementing Affirmative Action Algorithms.

Engaging in multilateral diplomatic spaces, both within the UN system (WTO, ITU, WIPO, CEDAW, UNHRC) and beyond it (WEF, Mobile World Congress Barcelona, consumer associations), contributing actionable public policy recommendations at the highest level of decision-making, breaking down the silos that treat gender and race as separate from technology and diplomacy, and elevating the urgency of laying the foundation of an inclusive digital future.

Engaging with and training the next generation of innovators, in both the public and private sectors, to guide them in designing an inclusive digital future.

As a final output, the project will deliver a viable plan proposing a UN-system-wide review of how existing international human rights law and standards apply to ADM, machine learning, race and gender.
