足ることを知らず (Never Knowing Contentment)

Data Science, global business, management and MBA

Day 109 in MIT Sloan Fellows Class 2023, AI for business 2 "Risk of AI"

Would AI be a true threat to humans?

If you are familiar with sci-fi, AI is definitely a threatening technology that could destroy human beings, as Skynet did in Terminator 2.

But will it really happen?

waitbutwhy.com

The point is that humans tend to underestimate growth, especially exponential growth.
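A little arithmetic makes the underestimation concrete (the numbers are invented for illustration, not real capability data): if progress doubles every period, a linear extrapolation from the first two observations falls behind almost immediately.

```python
# Hypothetical numbers: compare a linear extrapolation of early progress
# with actual exponential (doubling) growth.

def linear_forecast(first, second, step):
    """Extrapolate assuming the early increment stays constant."""
    return first + (second - first) * step

def exponential_growth(first, step):
    """Actual value if the quantity doubles every step."""
    return first * 2 ** step

start = 1
observed_next = 2  # after one period the value doubled from 1 to 2

for step in range(11):
    predicted = linear_forecast(start, observed_next, step)
    actual = exponential_growth(start, step)
    print(f"step {step:2d}: linear guess {predicted:5d}, actual {actual:5d}")

# At step 10 the linear guess is 11 while reality is 1024:
# the same early data, wildly different expectations.
```

Both forecasts agree perfectly on the data we have already seen, which is exactly why the linear intuition feels safe right up until it is off by two orders of magnitude.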

So a gap emerges: we extrapolate linearly, but reality follows the exponential curve.

Overestimation

At the same time, we tend to overestimate the abilities of AI.

However, intelligence is misleading. Thinking of AI as intelligence will lead you astray. You're better off thinking of AI as statistics than as intelligence.
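One way to internalize "AI as statistics" is a toy sketch (this is a deliberately simple illustration, not any particular production system): a 1-nearest-neighbor "classifier" has no concept of what it classifies; it only measures distances to examples it has memorized.

```python
import math

# A 1-nearest-neighbor "AI": it has no concept of cats or dogs,
# it only returns the label of the closest memorized point.
def nearest_neighbor(train, query):
    """train: list of (features, label) pairs; query: feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical features: (weight_kg, ear_length_cm)
train = [((4.0, 6.0), "cat"), ((30.0, 10.0), "dog")]

print(nearest_neighbor(train, (5.0, 6.5)))   # prints "cat"
print(nearest_neighbor(train, (25.0, 9.0)))  # prints "dog"

# Feed it nonsense and it still answers confidently -- it is
# statistics over stored examples, not understanding.
print(nearest_neighbor(train, (-100.0, 0.0)))
```

The last call shows the point: the model never says "I don't know what this is"; it just reports which stored example is statistically closest.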


A rough but better mental model of AI

Strengths

  • Can process data quickly
  • Consistent, doesn’t get bored
  • Precise
  • Internal representations don’t
    match our own

Weaknesses

  • No “common sense”
  • Inflexible
  • Brittle
  • Internal representations don’t
    match our own
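The "inflexible" and "brittle" weaknesses show up as soon as the world drifts away from the training data. A minimal sketch (hypothetical threshold model with invented numbers): a classifier fit on one distribution quietly fails on a shifted one.

```python
# Hypothetical example of brittleness: a threshold classifier fit on
# training data fails silently when the input distribution shifts.

def fit_threshold(values_a, values_b):
    """Place the decision boundary midway between the two class means."""
    mean_a = sum(values_a) / len(values_a)
    mean_b = sum(values_b) / len(values_b)
    return (mean_a + mean_b) / 2

def classify(threshold, x):
    return "A" if x < threshold else "B"

# Training: class A clusters near 2, class B near 8.
threshold = fit_threshold([1.0, 2.0, 3.0], [7.0, 8.0, 9.0])  # boundary at 5.0

# In deployment everything drifted upward by 6 units (say, a new
# sensor calibration). True class-A examples now sit near 8.
drifted_a = [7.0, 8.0, 9.0]
predictions = [classify(threshold, x) for x in drifted_a]
print(predictions)  # every true-A example is now misclassified as "B"
```

Nothing crashes and no error is raised, which is what makes this failure mode dangerous in practice: the model keeps answering, consistently and precisely, and is consistently wrong.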

Real risk of AI

Google’s medical AI was super accurate in a lab. Real life was a different story. | MIT Technology Review

Hundreds of AI tools have been built to catch covid. None of them helped. | MIT Technology Review

Stories of AI Failure and How to Avoid Similar AI Fails - Lexalytics

A simple AI eye test could accurately predict a future fatal heart attack | Euronews

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day - The Verge

Amazon scraps secret AI recruiting tool that showed bias against women | Reuters

UK ditches exam results generated by biased algorithm after student protests - The Verge

Wrong ground truth: datasets were messy and badly curated.

This is also a human failure: the data we have is not the data we want, yet so many AI projects start from whatever data is at hand.

There are some typical mistake patterns.

  • AI confuses correlation with causation because of poor data description
  • AI is vulnerable to unpredictable change
  • AI is easily biased by biased training data
  • AI does not have the normative sensitivities humans do
  • Good prediction does not necessarily mean good public policy or governance
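The first pattern is easy to reproduce with synthetic data: a hidden confounder makes two causally unrelated variables move together, and a purely statistical model happily "learns" the spurious link. All numbers below are invented for illustration.

```python
import random

random.seed(0)

# Hidden confounder: temperature drives both ice cream sales and
# drowning incidents. Neither one causes the other.
temps = [random.uniform(10, 35) for _ in range(200)]
ice_cream = [2.0 * t + random.gauss(0, 3) for t in temps]
drownings = [0.5 * t + random.gauss(0, 1) for t in temps]

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

r = correlation(ice_cream, drownings)
print(f"correlation(ice cream, drownings) = {r:.2f}")

# A strong positive correlation appears even though banning ice cream
# would do nothing about drownings -- the statistics alone cannot
# distinguish the confounder from a causal link.
```

A model trained only on the sales and incident columns, with the temperature column omitted from the "data description," has no way to discover that the relationship is not causal.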

For example, Kate Crawford pointed out some fatal biases and drawbacks in ImageNet, the dataset created and distributed by Fei-Fei Li's group.


  • Countless click workers labelled ImageNet, which means no expertise was required for labeling. Is that acceptable?
    • "Flight attendant" is highly skewed toward Asian women
    • "CEO" is highly sexualized
    • "Bad person" or "drug addiction" are labelled by very subjective classification
  • So many people have already used ImageNet as ground truth because it is public data, so a lot of AIs have already been polluted by these "wrong" labels.
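The pollution mechanism can be sketched in a few lines (synthetic labels in the spirit of the critique above, not real ImageNet counts): a frequency-based model trained on skewed annotations reproduces the skew and presents it as ground truth.

```python
from collections import Counter

# Synthetic, deliberately skewed annotations: suppose click workers
# labelled 90% of "CEO" images as men. (Invented numbers.)
training_labels = ["man"] * 90 + ["woman"] * 10

def predict_label(labels):
    """A frequency model simply emits the majority annotation."""
    return Counter(labels).most_common(1)[0][0]

print(predict_label(training_labels))  # prints "man"

# Any downstream system that treats this model's output as ground
# truth inherits the annotators' bias and reports it as fact.
```

Nothing in the pipeline flags the skew; the bias only becomes visible if someone audits the annotations themselves, which is exactly the kind of audit Crawford argues rarely happens with public "ground truth" datasets.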