Artificial intelligence can cause discrimination



A report by AI experts has described artificial intelligence-based systems as a "black box". German officials say clear and understandable rules are needed to prevent AI from becoming a nightmare. "Artificial intelligence (AI) makes many things easier. Unfortunately, discrimination too," commented Ferda Ataman, the German government's independent anti-discrimination commissioner, at a press conference in Berlin on Wednesday morning.

Protecting People from Potential Discrimination

On the occasion, Ataman also presented an expert report on protecting people from potential discrimination by systems based on automated algorithmic decision-making (ADM). The report cites many examples of ADM use in artificial intelligence, including job application procedures, bank loans, insurance, and the distribution of state benefits such as social welfare.

How can AI fuel biases?

The Anti-Discrimination Commissioner said: "There, probabilistic statements are made on the basis of group characteristics. What appears objective at first sight can give rise to prejudices and stereotypes. In any case, the dangers of digital discrimination must not be underestimated."

What Happened in the Netherlands

In 2019, more than 20,000 people in the Netherlands experienced what can happen when technology is assumed to be infallible: they were wrongly ordered to pay back child benefits under the threat of heavy fines. A discriminatory algorithm in the software was partly responsible for the situation, and people with dual citizenship were particularly affected.

To Prevent Such Cases

To prevent such cases, Ataman demands that companies work transparently. In other words, companies using AI should be required to disclose information about their data and about how their systems work. The expert report on AI, written by legal scholar Indra Spiecker and her colleague Emanuel V. Towfigh, describes artificial intelligence-based systems as a "black box": for those affected, it is virtually impossible to trace the causes of an unfavorable decision. "A particular feature of ADM systems is that the potential to discriminate may already be built into the system," the report states. This can be due to a data set that is of poor quality, unfit for purpose, or distorted.

A Possible Distinguishing Feature: The Postal Code

The report explains what this means with a typical example: the postal code, which is not in itself discriminatory but can become a means of discrimination. If, for historical reasons, many immigrants live in a certain city district, this can have negative consequences for everyone who lives there. If they apply for loans, for example, they may be classed as a financial risk on the assumption that they will not repay. Experts call this "statistical discrimination": the practice of attributing statistically derived characteristics to individuals based on actual or assumed group averages.
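The mechanism the report describes can be illustrated with a minimal, purely hypothetical sketch. All postal codes, repayment rates, and thresholds below are invented for demonstration; no real lender's scoring logic is implied. The point is only that a decision rule keyed to a group average, rather than to the individual, treats equally creditworthy applicants differently depending on where they live.

```python
# Hypothetical illustration of "statistical discrimination" via a proxy
# feature (postal code). All figures are invented for this example.

# Assumed historical repayment rates per postal district (not real data).
district_repayment_rate = {
    "10115": 0.95,  # district A
    "10365": 0.70,  # district B, where many immigrants happen to live
}

def loan_decision(postal_code: str, threshold: float = 0.8) -> str:
    """Decide on a loan purely from the average repayment rate of the
    applicant's postal district, ignoring individual creditworthiness."""
    rate = district_repayment_rate.get(postal_code, 0.0)
    return "approve" if rate >= threshold else "reject"

# Two individually identical applicants receive different outcomes
# solely because of their address:
print(loan_decision("10115"))  # approve
print(loan_decision("10365"))  # reject
```

Here the postal code acts as a proxy: the model never sees a protected attribute such as origin, yet because residence correlates with it, the group-level statistic reproduces the discriminatory effect the report warns about.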

Demand for the Establishment of a Conciliation Body

Ferda Ataman wants to set up a conciliation office within her agency to address such cases, and is also demanding that the General Equal Treatment Act (AGG) be supplemented by a mandatory conciliation mechanism.


Digitization is the future

Ataman's conclusion from the report is simple: "Digitization is the future. But it must not become a nightmare. People should be able to trust that they will not be discriminated against by AI, and that they can defend themselves if it happens." For this, Ataman demands, clear and understandable rules are needed.
