Here we show how to develop a new metric, using `CustomMetric` as an example.

Create a new file `mmseg/evaluation/metrics/custom_metric.py`.
```python
from typing import List, Sequence

from mmengine.evaluator import BaseMetric

from mmseg.registry import METRICS


@METRICS.register_module()
class CustomMetric(BaseMetric):

    def __init__(self, arg1, arg2):
        """
        The metric first processes each batch of data_samples and predictions,
        and appends the processed results to the results list. Then it
        collects all results together from all ranks if distributed training
        is used. Finally, it computes the metrics of the entire dataset.
        """

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        pass

    def compute_metrics(self, results: list) -> dict:
        pass

    def evaluate(self, size: int) -> dict:
        pass
```
In the above example, `CustomMetric` is a subclass of `BaseMetric`. It has three methods: `process`, `compute_metrics` and `evaluate`.
`process()` processes one batch of data samples and predictions. The processed results are stored in `self.results`, which will be used to compute the metrics after all the data samples have been processed. Please refer to the MMEngine documentation for more details.
`compute_metrics()` is used to compute the metrics from the processed results.
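To make the accumulate-then-aggregate flow concrete, here is a minimal, self-contained sketch of a hypothetical pixel-accuracy metric. The stand-in base class and the `pred`/`label` sample keys are assumptions for illustration only; the real `BaseMetric` from MMEngine additionally collects `self.results` across ranks before calling `compute_metrics()`.

```python
from typing import Sequence


class _FakeBaseMetric:
    """Stand-in for mmengine.evaluator.BaseMetric (illustration only)."""

    def __init__(self):
        self.results = []

    def evaluate(self, size: int) -> dict:
        # The real BaseMetric also gathers results from all ranks here.
        return self.compute_metrics(self.results)


class PixelAccuracy(_FakeBaseMetric):
    """Hypothetical metric: fraction of correctly predicted pixels."""

    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        # Store only what compute_metrics() needs: one (correct, total)
        # pair per sample, appended to self.results.
        for sample in data_samples:
            pred = sample['pred']    # assumed: flat list of predicted labels
            label = sample['label']  # assumed: flat list of ground truth
            correct = sum(p == g for p, g in zip(pred, label))
            self.results.append((correct, len(label)))

    def compute_metrics(self, results: list) -> dict:
        # Aggregate over the whole dataset, not per batch.
        correct = sum(c for c, _ in results)
        total = sum(t for _, t in results)
        return {'pixel_accuracy': correct / total}


metric = PixelAccuracy()
metric.process({}, [{'pred': [0, 1, 1], 'label': [0, 1, 0]}])
metric.process({}, [{'pred': [1, 1], 'label': [1, 1]}])
print(metric.evaluate(size=5))  # {'pixel_accuracy': 0.8}
```

Note how `process()` stores per-sample summaries rather than raw predictions, so the memory cost stays small even on large datasets.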
`evaluate()` is an interface to compute the metrics and return the results. It will be called by `ValLoop` or `TestLoop` in the `Runner`. In most cases, you don't need to override this method, but you can override it if you want to do some extra work.
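If you do override `evaluate()`, the usual pattern is to call the parent implementation first and then attach extra information. The sketch below uses a hypothetical `_Base` class standing in for MMEngine's `BaseMetric` (whose real `evaluate()` also gathers results across ranks), so the example stays self-contained:

```python
class _Base:
    """Stand-in for mmengine.evaluator.BaseMetric (illustration only)."""

    def __init__(self):
        self.results = []

    def compute_metrics(self, results: list) -> dict:
        return {'num_results': len(results)}

    def evaluate(self, size: int) -> dict:
        metrics = self.compute_metrics(self.results)
        self.results.clear()  # reset for the next evaluation round
        return metrics


class CustomMetric(_Base):
    def evaluate(self, size: int) -> dict:
        # Do the standard computation first, then add extra bookkeeping.
        metrics = super().evaluate(size)
        metrics['dataset_size'] = size
        return metrics


metric = CustomMetric()
metric.results.extend(['sample_a', 'sample_b'])
print(metric.evaluate(size=2))  # {'num_results': 2, 'dataset_size': 2}
```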
Note: You can find the details of how the `evaluate()` method is called by the `Runner` here. The `Runner` is the executor of the training and testing process; you can find more details about it in the engine document.
Import the new metric in `mmseg/evaluation/metrics/__init__.py`.

```python
from .custom_metric import CustomMetric

__all__ = ['CustomMetric', ...]
```
Add the new metric to the config file.

```python
val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```
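Conceptually, the registry turns the `type` key of the config into the registered class and passes the remaining keys to its `__init__`. The following is a simplified stand-in for MMEngine's `Registry`, written only to illustrate that mapping (the real implementation also handles scopes and child registries):

```python
class Registry:
    """Simplified stand-in for mmengine's Registry (illustration only)."""

    def __init__(self):
        self._modules = {}

    def register_module(self):
        def wrap(cls):
            # Classes are looked up by their name, matching the 'type' key.
            self._modules[cls.__name__] = cls
            return cls
        return wrap

    def build(self, cfg: dict):
        cfg = dict(cfg)  # copy so the caller's config is not mutated
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)  # remaining keys become __init__ kwargs


METRICS = Registry()


@METRICS.register_module()
class CustomMetric:
    def __init__(self, arg1, arg2):
        self.arg1, self.arg2 = arg1, arg2


val_evaluator = dict(type='CustomMetric', arg1=1, arg2=2)
metric = METRICS.build(val_evaluator)
print(type(metric).__name__, metric.arg1, metric.arg2)  # CustomMetric 1 2
```

This is why the string in `type=` must exactly match the registered class name, and why every keyword in the evaluator dict must be accepted by the metric's `__init__`.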
The above example shows how to develop a new metric within the source code of MMSegmentation. If you want to develop a new metric against a released version of MMSegmentation, follow these steps.

Create a new file `/Path/to/metrics/custom_metric.py` and implement the `process`, `compute_metrics` and `evaluate` methods; overriding `evaluate` is optional.
Import the new metric in your code or config file.

```python
from path.to.metrics import CustomMetric
```

or

```python
custom_imports = dict(imports=['/Path/to/metrics'], allow_failed_imports=False)

val_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
test_evaluator = dict(type='CustomMetric', arg1=xxx, arg2=xxx)
```