Cristina Di Stazio
In the collective imagination, artificial intelligence ("A.I.") systems are perceived as perfectly logical and objective, free from the human prejudices to which individuals are so often enslaved.
A.I. systems have been touted as tools free of these kinds of "human inefficiencies".
Surprisingly, numerous studies have shown that these systems in fact exhibit prejudice along racial lines.
Artificial intelligence solutions are subject to distortions known as "biases". In the context of racial prejudice exercised by A.I. operations, the term bias denotes the lack of "equity" that emerges from the output data of a computer system.
These biases do not derive from the programming system itself, which is objective and not governed by "inequitable criteria"; rather, they arise from the processing of partial and discriminatory data entered at the source into A.I. systems.
In particular, the A.I. systems considered for the purposes of this analysis are machine learning systems.
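The dynamic described above, in which bias is inherited from the training data rather than from the code itself, can be illustrated with a deliberately simplified sketch. The data and the "learner" below are entirely hypothetical: a trivial model that merely memorises historical outcome frequencies per group will faithfully reproduce whatever disparity is present in its training set, even though the learning procedure treats every group identically.

```python
# Hypothetical illustration: the learning rule is the same for every group,
# yet the model inherits the disparity baked into the historical records.
from collections import defaultdict

# Fictional historical hiring records: (group, hired). Group "B" was
# historically hired less often than group "A".
history = (
    [("A", 1)] * 70 + [("A", 0)] * 30 +
    [("B", 1)] * 40 + [("B", 0)] * 60
)

def train(records):
    """Learn, per group, whether the historical hire rate reaches 50%."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    # Predict "hire" for a group when its historical hire rate is >= 0.5.
    return {g: (h / n >= 0.5) for g, (h, n) in counts.items()}

model = train(history)
print(model)  # -> {'A': True, 'B': False}: the skewed data, not the code,
              #    produces the discriminatory output.
```

The procedure applies one neutral rule to all records; the discriminatory outcome emerges solely because the input data already encoded unequal treatment, which is precisely the mechanism by which "objective" machine learning systems come to exhibit racial bias.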