A team of MIT researchers has created a new AI, one that happens to be a psychopath. The researchers set out to show that the data used to teach a machine learning algorithm can significantly influence its behavior. Norman was trained to perform image captioning; however, instead of learning from a standard image-captioning dataset, Norman was trained on an unnamed subreddit dedicated to documenting and observing death. Norman's responses to Rorschach inkblots were then compared with those of a standard image-captioning AI.
What I take from this is that all AI will be inherently biased depending on the dataset used to train it. Unless someone comes up with a completely neutral dataset, or can somehow provide the AI with all possible data, there will always be some influence on the AI's "thinking." Interesting, and scary, stuff to ponder.
Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.
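To make that point concrete, here is a minimal, hypothetical sketch in Python. This is not the actual Norman setup (which used a deep image-captioning network); all of the data, labels, and names below are invented for illustration. The idea is simply that an identical model pipeline, fitted on two differently labeled datasets, returns very different "captions" for the same input, showing that the bias lives in the data, not the algorithm.

```python
# Toy illustration (not the real Norman experiment): the same model,
# trained on two differently biased corpora, diverges on the same input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Two hypothetical training sets with identical inputs but different
# captioning conventions -- the "bias" is entirely in the labels.
neutral_data = [
    ("a figure standing under an umbrella", "person with umbrella"),
    ("a bird flying over a field", "bird in flight"),
    ("two people sitting on a bench", "people on a bench"),
]
dark_data = [
    ("a figure standing under an umbrella", "man struck by lightning"),
    ("a bird flying over a field", "bird shot out of the sky"),
    ("two people sitting on a bench", "man pushed off a bench"),
]

def train(pairs):
    """Fit the same pipeline on a given (description, caption) dataset."""
    texts, captions = zip(*pairs)
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, captions)
    return model

standard = train(neutral_data)      # analogue of the standard captioning AI
norman_like = train(dark_data)      # analogue of Norman

query = "a figure standing under an umbrella"
print("standard model:", standard.predict([query])[0])
print("biased model:  ", norman_like.predict([query])[0])
```

Running this prints a benign caption from the first model and a grim one from the second, even though both were built from exactly the same code, which is the core of the Norman demonstration.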