Summary: This is a discussion of social injustice, real or perceived, promulgated or perpetuated by machine learning models. We propose a simple solution that addresses the widespread misunderstanding of what ML models can do.
Added by William Vorhies on September 11, 2020 at 1:38pm
Summary: Bias in modeling has long been a public concern, one now amplified and focused on the disparate treatment models may cause for African Americans. Defining and correcting that bias raises difficult issues that data scientists need to think through carefully before reaching conclusions.
Added by William Vorhies on June 29, 2020 at 11:31am
Summary: There is a great hue and cry about the danger of bias in our predictive models when they are applied to high-significance decisions such as who gets a loan, insurance, a good school assignment, or bail. The issue is not as simple as it seems, and here we take a more nuanced look. The result is not as threatening as many headlines make it appear.
Added by William Vorhies on June 5, 2018 at 8:00am
Summary: Flawed data analysis leads to faulty conclusions and bad business outcomes. Beware of these seven types of bias that commonly undermine organizations' ability to make smart decisions.
This is a great article by Lisa Morgan, originally published on InformationWeek.com.