News, tips, advice, support for Windows, Office, PCs & more. Tech help. No bull. We're community supported by donations from our Plus Members, and proud of it

Using AI to Detect Discrimination

This topic contains 1 reply, has 2 voices, and was last updated by OscarCP 4 months ago.

    • #1874351

      Geo
      AskWoody Plus

      The team created an AI tool for detecting discrimination with respect to a protected attribute, such as race or gender, by human decision makers or AI systems. The tool is based on the concept of causality, in which one thing (a cause) brings about another thing (an effect). https://news.psu.edu/story/580213/2019/07/11/research/using-artificial-intelligence-detect-discrimination?utm_source=newswire&utm_medium=email&utm_term=580490_HTML&utm_content=07-11-2019-23-09&utm_campaign=Penn%20State%20Today

      Edit to remove HTML. Please use the “Text” tab in the entry box when you copy/paste.
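A causality-based check of the kind the article describes can be illustrated with a counterfactual test: if changing only the protected attribute changes the decision, that attribute has a causal effect on the outcome. The sketch below is a toy illustration of that idea only; the decision rule, the function names, and the data are all made-up assumptions, not the Penn State team's actual method.

```python
def hiring_decision(applicant):
    """Toy decision rule standing in for a human or AI decision maker.
    The gender penalty is the hidden bias we want to detect."""
    score = applicant["experience_years"] * 2 + applicant["test_score"]
    if applicant["gender"] == "female":
        score -= 5
    return score >= 20

def counterfactual_flags(applicants, decide, attr, values):
    """Return the applicants whose outcome changes when only the protected
    attribute is flipped -- evidence of a causal effect of that attribute."""
    flagged = []
    for a in applicants:
        outcomes = set()
        for v in values:
            counterfactual = {**a, attr: v}  # copy, changing only attr
            outcomes.add(decide(counterfactual))
        if len(outcomes) > 1:  # decision depends on the protected attribute
            flagged.append(a)
    return flagged

applicants = [
    {"experience_years": 5, "test_score": 12, "gender": "female"},
    {"experience_years": 2, "test_score": 10, "gender": "male"},
    {"experience_years": 4, "test_score": 14, "gender": "female"},
]

flagged = counterfactual_flags(applicants, hiring_decision,
                               "gender", ["female", "male"])
print(f"{len(flagged)} of {len(applicants)} decisions depend on gender")
# prints "2 of 3 decisions depend on gender"
```

A real tool would apply this test to a learned causal model of the data rather than to a known decision function, since with human decision makers the rule itself is not observable.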

    • #1874422

      OscarCP
      AskWoody Plus

      There is some interesting information on AI-caused problems on this site:

      https://www.techrepublic.com/article/the-10-biggest-ai-failures-of-2017/

      In general, this type of algorithm does offer some benefits, but it can also create serious problems. Biases in the way the information used to train neural-network algorithms was collected (for example, mostly cardiology cases from patients of European descent) then result in biased AI output (misdiagnosing people of a different ancestry).

      According to a number of articles I have seen recently in publications such as “New Scientist”, “Scientific American” (in the July 2019 issue) and elsewhere, face-recognition AI is a prime example of technology that is not ready to be brought to market yet is being brought to market all the same. Particularly problematic is biased training that can lead AIs to identify some people as criminals or trouble-makers in part because of their ethnicity. That is especially worrying when the technology is used by police and national-security agencies, and also as a tool of repressive regimes to track opponents and gather evidence that can be used to find, prosecute, or even “disappear” them.

      Some US governments, so far at the municipal level (San Francisco, for example), have banned its use by their own agencies and police departments. I understand that some kind of preventive action is also under discussion in some states as well as at the national level, although my impression is that these are early days in this respect.

      Even when it is used by democratic governments respectful of their citizens’ rights, one potentially serious problem is that mistakes in matching a suspect against a scan of photos might produce the wrong suspects, and completely innocent people could then have their lives turned upside down as a result.

      Windows 7 Professional, SP1, x64 Group B & macOS + Linux (Mint) => Win7 Group W(?) + Mac&Lx

    Please follow the -Lounge Rules-: no personal attacks, no swearing, and politics/religion are relegated to the Rants forum.
