Machine-learning models can fail when they attempt to make predictions for people who were underrepresented in the datasets they were trained on.
For instance, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model may make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
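As a minimal sketch of what such balancing looks like, the snippet below downsamples every subgroup to the size of the smallest one; the `groups` array of subgroup labels is a hypothetical input for illustration, not part of the authors' work:

```python
import numpy as np

def balance_by_subgroup(X, y, groups, seed=0):
    """Downsample every subgroup to the size of the smallest one.

    Illustrative only: `groups` is a hypothetical array of subgroup
    labels (e.g., patient sex). The quota rule shows why balancing
    can discard large amounts of data.
    """
    rng = np.random.default_rng(seed)
    labels = np.unique(groups)
    # The smallest subgroup sets the per-group quota.
    quota = min(int((groups == g).sum()) for g in labels)
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=quota, replace=False)
        for g in labels
    ])
    return X[keep], y[keep], groups[keep]
```

If one subgroup is tiny, the quota forces every other subgroup down to that size, which is exactly the data loss the article describes.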
MIT researchers developed a new technique that identifies and removes specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer datapoints than other methods, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.
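The article does not spell out the algorithm, but the general score-and-remove pattern it describes might look like the following sketch, assuming per-example harmfulness scores produced by some data-attribution method (the `scores` input is a placeholder, not the authors' estimator):

```python
import numpy as np

def remove_most_harmful(X, y, scores, k):
    """Drop the k training points whose scores mark them as most harmful.

    `scores` holds one value per training point, assumed to estimate
    that point's contribution to errors on the worst-performing
    subgroup (higher = more harmful). A generic sketch, not the
    authors' exact algorithm.
    """
    harmful = np.argsort(scores)[-k:]                 # indices of the k highest scores
    keep = np.setdiff1d(np.arange(len(y)), harmful)   # everything else stays
    return X[keep], y[keep]
```

Because only the points actually driving minority-subgroup errors are dropped, `k` can be far smaller than the amount of data full balancing discards.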
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This technique could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren't misdiagnosed due to a biased AI model.
"Many other algorithms that attempt to resolve this concern assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not real. There specify points in our dataset that are adding to this predisposition, and we can discover those information points, eliminate them, and improve efficiency," states Kimia Hamidieh, an electrical engineering and computer system science (EECS) graduate trainee at MIT and co-lead author of a paper on this method.
She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev