News Release

Open-environment machine learning

Peer-Reviewed Publication

Science China Press

Important factors in machine learning

image: The four important factors (feature, label, distribution, objective) of the machine learning process are shown in the figure. In conventional machine learning, all of these factors are assumed to remain invariant.

Credit: ©Science China Press

Machine learning (ML) is widely applied across a variety of domains. Despite its great success, ML techniques are commonly developed in closed-environment scenarios, where the important factors of the learning process are assumed to remain invariant. In many real-world applications, however, the learning environment is open, and various learning factors can change over time. It is therefore of great importance to develop new machine learning techniques that can adapt to open environments.

In a recent article, “Open-environment Machine Learning”, published in National Science Review, Prof. Zhi-Hua Zhou of Nanjing University defined the research scope of open-environment machine learning (“open ML” for short) and reviewed recent advances on the subject.

Specifically, the article specified four important challenges in open ML and introduced general principles and strategies for addressing them. The following examples consider the task of predicting forest diseases from sensor signals.

  1. Emerging new classes: classes may appear in the testing stage that were never encountered in the training data. For example, a novel forest disease may occur. In this case, one general strategy is to first detect the occurrence of the new class via anomaly detection, then refine and update the model to accommodate the new class through incremental learning.
  2. Decremental/incremental features: the feature space itself may change. For example, some sensors for forest disease prediction may stop sending signals due to battery exhaustion, while new sensors are deployed before the old ones are removed, leading to decremental or incremental features. To address this issue, one can learn the relationship between the different feature sets, which makes it possible to reuse a classifier trained on the old features in a task with new features.
  3. Changing data distributions: the training and testing data do not share an identical distribution. For example, the diseases that occur in a forest may differ between summer and winter. Such tasks are generally impossible without knowledge of how the data distribution changes. Fortunately, the current observation is usually closely related to recent observations; under this circumstance, one can handle distribution change with approaches based on sliding windows, forgetting, or ensemble mechanisms.
  4. Varied learning objectives: the learning objective may change as demands change. For example, sensors may initially be deployed to pursue high prediction accuracy, and once that is achieved, new sensors may be deployed to keep the system's energy consumption as low as possible. Studies have shown that different performance measures are usually related. Therefore, to adjust to a new optimization objective efficiently, one can learn a corresponding classifier that takes the original prediction as input.
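To make the first strategy concrete, here is a minimal sketch of detecting an emerging class and then absorbing it incrementally. The nearest-centroid model, the distance threshold, and the class names ("healthy", "blight", "new_disease") are illustrative assumptions, not details from the paper; real open-ML systems use far more sophisticated anomaly detectors.

```python
# Hypothetical sketch: flag samples far from all known classes as "novel"
# (anomaly detection), then accommodate the new class incrementally.
import math

class NearestCentroid:
    """Tiny nearest-centroid classifier that can absorb new classes."""
    def __init__(self, threshold=2.0):
        self.centroids = {}         # label -> (running mean vector, count)
        self.threshold = threshold  # distance beyond which a sample is "novel"

    def fit(self, X, y):
        for x, label in zip(X, y):
            self.partial_fit(x, label)

    def partial_fit(self, x, label):
        # Incrementally update the running mean for this label.
        mean, n = self.centroids.get(label, ([0.0] * len(x), 0))
        n += 1
        mean = [m + (xi - m) / n for m, xi in zip(mean, x)]
        self.centroids[label] = (mean, n)

    def predict(self, x):
        # Return the nearest known class, or "novel" if all centroids are far.
        best_label, best_dist = None, float("inf")
        for label, (mean, _) in self.centroids.items():
            d = math.dist(x, mean)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label if best_dist <= self.threshold else "novel"

clf = NearestCentroid(threshold=2.0)
clf.fit([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]],
        ["healthy", "healthy", "blight", "blight"])
print(clf.predict([0.1, 0.1]))    # near the "healthy" centroid
print(clf.predict([10.0, 10.0]))  # far from all centroids, flagged "novel"
# Once an expert labels the novelty, the model absorbs it incrementally:
clf.partial_fit([10.0, 10.0], "new_disease")
print(clf.predict([10.1, 9.9]))
```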
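For the second strategy, a minimal sketch of reusing an old classifier after the feature space changes: during the overlap period when old and new sensors both report, one can learn a mapping from new features to old features. The least-squares fit, the fixed decision threshold, and the toy readings are illustrative assumptions.

```python
# Hypothetical sketch: learn a linear map new_feature -> old_feature during
# the sensor overlap period, so the classifier trained on old features can
# still be used once the old sensors die.
def fit_linear(xs, ys):
    """Least-squares fit ys ~ a*xs + b (one new feature -> one old feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def old_classifier(old_feature):
    # Classifier trained on the old sensor's scale (threshold is illustrative).
    return "diseased" if old_feature > 10.0 else "healthy"

# Overlap period: both sensor generations report simultaneously.
new_readings = [1.0, 2.0, 3.0, 4.0]
old_readings = [5.0, 10.0, 15.0, 20.0]  # old sensor reads ~5x the new one
a, b = fit_linear(new_readings, old_readings)

# Old sensors are gone; translate new readings into the old feature space.
print(old_classifier(a * 1.5 + b))  # low reading
print(old_classifier(a * 3.0 + b))  # high reading
```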
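The sliding-window idea behind the third strategy can be sketched in a few lines: fit only on the most recent observations so that pre-shift data fall out of the model automatically. The window size and the toy data stream are illustrative assumptions.

```python
# Hypothetical sketch: handle a drifting distribution by predicting from a
# sliding window of recent observations only (a forgetting mechanism).
from collections import deque

class SlidingWindowMean:
    """Predict with the mean target of the last `window` observations."""
    def __init__(self, window=3):
        self.buffer = deque(maxlen=window)  # old samples fall off automatically

    def update(self, y):
        self.buffer.append(y)

    def predict(self):
        return sum(self.buffer) / len(self.buffer)

model = SlidingWindowMean(window=3)
stream = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]  # the distribution shifts midway
for y in stream:
    model.update(y)
print(model.predict())  # dominated by the recent, post-shift observations
```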
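Finally, a minimal sketch of the fourth strategy: instead of retraining from scratch when the objective changes, reuse the original model's scores as input to a lightweight adapter. Here the adapter is just a re-tuned decision threshold under a new cost-sensitive measure; the scores, labels, and cost ratio are toy assumptions.

```python
# Hypothetical sketch: adapt an existing classifier to a new objective by
# choosing the threshold on its original scores that minimizes the new cost.
def adapt_threshold(scores, labels, fn_cost=5.0, fp_cost=1.0):
    """Pick the score threshold minimizing the new cost-sensitive objective."""
    best_t, best_cost = 0.5, float("inf")
    for t in sorted(set(scores)):
        cost = sum(fn_cost if s < t and y == 1 else
                   fp_cost if s >= t and y == 0 else 0.0
                   for s, y in zip(scores, labels))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

scores = [0.1, 0.4, 0.45, 0.6, 0.9]  # outputs of the original classifier
labels = [0,   0,   1,    1,   1]    # ground truth under the new objective
print(adapt_threshold(scores, labels))  # threshold separating the classes
```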

“It is fundamentally important to enable machine learning models to achieve excellent performance in usual cases while keeping satisfactory performance no matter what unexpected unfortunate issues occur. This is crucial for achieving robust artificial intelligence, and it carries the desired properties of learnware,” said Prof. Zhi-Hua Zhou.

###

See the article:

Open-environment Machine Learning

https://doi.org/10.1093/nsr/nwac123


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.