Researchers Reduce Bias in AI Models While Maintaining or Improving Accuracy


Machine-learning models can fail when they make predictions for people who were underrepresented in the datasets they were trained on.

For example, a model that predicts the best treatment option for someone with a chronic disease may be trained on a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.

To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
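To make the trade-off concrete, here is a minimal sketch of the balancing approach the article describes: downsample every subgroup to the size of the smallest one. The function and parameter names (`balance_by_subgroup`, `subgroup_of`) are illustrative, not from the research.

```python
import random
from collections import defaultdict

def balance_by_subgroup(examples, subgroup_of, seed=0):
    """Downsample every subgroup to the size of the smallest one.

    `examples` is a list of training points; `subgroup_of` maps an
    example to its subgroup label. Note that all data discarded here
    is lost to training, which is why balancing can hurt overall
    performance when one subgroup is much smaller than the rest.
    """
    groups = defaultdict(list)
    for ex in examples:
        groups[subgroup_of(ex)].append(ex)
    smallest = min(len(members) for members in groups.values())
    rng = random.Random(seed)
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, smallest))
    return balanced
```

If one subgroup has 10,000 examples and another has 100, this keeps only 200 points in total, which illustrates the data loss the MIT technique aims to avoid.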

MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer data points than other approaches, this method maintains the model's overall accuracy while improving its performance on underrepresented groups.
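The article does not give implementation details, but the general idea can be sketched as follows: assume each training point has an estimated influence score for how much it contributes to errors on the minority subgroup, then drop only the top-k offenders rather than rebalancing the whole dataset. The names (`remove_most_harmful`, `influence_scores`) and the scores themselves are assumptions for illustration; how to estimate influence is beyond this sketch.

```python
def remove_most_harmful(train_points, influence_scores, k):
    """Drop the k training points whose estimated contribution to
    worst-group error is largest.

    `influence_scores[i]` is assumed to be larger when point i
    contributes more to failures on the underrepresented subgroup.
    Everything else in the dataset is kept, which is the contrast
    with full dataset balancing.
    """
    ranked = sorted(range(len(train_points)),
                    key=lambda i: influence_scores[i],
                    reverse=True)
    drop = set(ranked[:k])
    return [p for i, p in enumerate(train_points) if i not in drop]
```

For example, with four points and scores `[0.1, 0.9, 0.5, 0.2]`, removing the top two discards only the second and third points and keeps the rest of the training data intact.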

In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more common than labeled data in many applications.

This approach could also be combined with other techniques to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren't misdiagnosed due to a biased AI model.

"Many other algorithms that attempt to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance," says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.

She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev.