Let AI Tell You About Its Flaws
Exposing more bias in Data Science, Artificial Intelligence, Machine Learning, and Advanced Analytics
Here is an idea: let AI tell you about its own flaws. The results surprised me; the responses were a bit too self-interested for my comfort.
Below I use an artificial-intelligence-enabled writing assistant to write about “itself.” The quotations (where you see a vertical line to the left of the text) are what the artificial intelligence wrote; all other text is my own.
Asking About Artificial Intelligence
A key flaw in applications that utilize artificial intelligence is the bias those applications exhibit. As their creators, we “taught” them to be biased.
> Bias refers to when the algorithms that power AI and machine learning systems incorporate ethnically, racially, or gender-based biases from the data used to train them. This can lead to serious implications for how people are treated in various environments.
In the above passage I used a popular AI writing assistant known as Jasper. It works by “reading” the preceding text and then “predicting” what text an author might write next.
In this case, instead of continuing the discussion of how and why artificial intelligence is biased, it gives a definition of bias. A definition is helpful, but a human author might have continued writing about artificial intelligence’s bias. The AI’s work here feels a bit self-interested because it avoids writing too critically about itself.
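The “read the preceding text, then predict what comes next” idea can be illustrated with a toy sketch. To be clear, this is not Jasper’s actual model (which is a large neural network); it is a minimal word-frequency predictor I wrote purely to show the prediction concept:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follows.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Hypothetical miniature "corpus" for illustration only.
corpus = "the model reads the text and the model predicts the next word"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "model" (the most frequent follower)
```

The point of the sketch is that such a predictor can only echo its training text; whatever patterns (or biases) the text contains are exactly what it reproduces.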
> By using AI to detect its own bias, companies can take steps towards reducing it by creating more accurate and inclusive datasets. By doing this, AI-powered systems can be more effective at recognizing patterns and identifying individuals without allowing for any false positives or negatives.
In the immediately preceding passage, also written by Jasper, the artificial intelligence wrote about how it can assist in exposing bias. If believed, the passage seems to partially…