News Release 

Algorithm for preventing 'undesirable behavior' works in gender fairness and health tests

American Association for the Advancement of Science

A new framework for designing machine learning algorithms helps prevent intelligent machines from exhibiting undesirable behavior, researchers report. The framework, tested in both a fairness and a health context, achieves this by shifting the burden of avoiding undesirable behavior from the user (who is most often not a computer scientist) to the algorithm designer. "Well-behaved" algorithms built with the new framework would not only improve machine learning tools broadly but could also open new uses, particularly in applications where machine learning was previously considered too risky.

From medical diagnoses to financial predictions, machine learning (ML) algorithms are becoming an ever more ubiquitous tool. Given their increasing mastery of tasks that can directly affect quality of life, it is critical to ensure that they do not exhibit undesirable behavior, including behavior that could harm humans. Examples of algorithms exhibiting discriminatory biases or delaying medical diagnoses are already known. Standard ML approaches require users to specify and encode constraints on an algorithm to preclude unwanted behavior, yet many users lack the knowledge of ML and statistics required to do so. As a result, the safe and responsible use of ML can be difficult in some critical applications.

To address this problem, Philip Thomas and colleagues present a framework for designing ML algorithms that shifts this burden from the user to the designer. Instead of requiring the user to encode constraints on the algorithm's behavior, work that demands extensive domain knowledge or additional data analysis, the approach lets the user specify the undesirable behavior directly, without complex additional analysis. The authors demonstrate the benefits of their approach by designing ML algorithms and applying them to examples in gender fairness and diabetes management. Their algorithms precluded the dangerous behavior caused by standard machine learning algorithms in these settings, they show.
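To make the idea concrete, the sketch below shows one way such a framework can be structured in code: the training data is split into a candidate-selection set and a held-out safety set, and a trained model is returned only if a high-confidence statistical test (here a simple Hoeffding bound) certifies that the rate of the user-specified undesirable behavior stays below a tolerance. All names, the specific bound, and the default tolerance are illustrative assumptions, not the authors' actual implementation.

```python
import math
import random

def hoeffding_upper_bound(samples, delta):
    """(1 - delta)-confidence upper bound on the mean of samples in [0, 1],
    via Hoeffding's inequality."""
    n = len(samples)
    return sum(samples) / n + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

def safe_train(data, candidate_fn, violation_fn, delta=0.05, tolerance=0.1):
    """Illustrative safety-constrained training loop (names hypothetical).

    candidate_fn(subset)      -> fits and returns a candidate model.
    violation_fn(model, x)    -> value in [0, 1] indicating whether the
                                 model shows the undesirable behavior on x.
    Returns the model only if the safety test passes; otherwise returns
    the sentinel string "No Solution Found".
    """
    # Split the data: one half picks a candidate model, the other half
    # is held out exclusively for the safety test.
    shuffled = data[:]
    random.Random(0).shuffle(shuffled)
    half = len(shuffled) // 2
    candidate_data, safety_data = shuffled[:half], shuffled[half:]

    model = candidate_fn(candidate_data)

    # Safety test: with confidence 1 - delta, the rate of the undesirable
    # behavior must not exceed the user-chosen tolerance.
    violations = [violation_fn(model, x) for x in safety_data]
    if hoeffding_upper_bound(violations, delta) <= tolerance:
        return model
    return "No Solution Found"
```

The key property this sketch illustrates is that the user only supplies `violation_fn`, a plain description of what counts as bad behavior; the statistical machinery that guarantees the constraint holds with high probability lives entirely on the designer's side.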


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.