Custom Metrics

Sextant supports custom metric algorithms: you just need to upload your metric algorithm to a GitHub repo and enter its URL into Sextant.

The algorithm should be written in Python.

Available Types of Indicators

Sextant supports two types of indicators, Float and Curve, and users can set the valid range of each metric.

Float

The output is a floating-point value, displayed on the front end in "name=value" style, e.g. mAP=0.75.

Curve

The output is two one-dimensional arrays named x and y. It is displayed as a curve on the front end, with the values of x and y corresponding to the horizontal and vertical coordinates of the points on the curve.
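For example, a precision-recall curve such as the pr entry in the return structure example at the end of this page could look like this (the values here are purely illustrative):

```python
# A Curve-type indicator: x and y must be lists of the same length.
{
    'pr': {
        'x': [0.0, 0.5, 1.0],  # horizontal axis, e.g. recall
        'y': [1.0, 0.8, 0.3],  # vertical axis, e.g. precision
    }
}
```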

Valid Range

Sextant supports setting the valid range of a metric to the data level or the dataset level.

For Data

For each evaluated data item, the metric returns a corresponding value, such as the average IoU of that item.

For Dataset

Each evaluation returns only one overall value, such as the mAP of the whole evaluation.
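In the return structure (see the example at the end of this page), this choice is reported through the scope field: 1 means the data level and 2 means the dataset level. A minimal illustration (the metric names and values are made up):

```python
# Data-level metric: one value returned per evaluated item.
{'scope': 1, 'overall': {'IoU': 0.83}}

# Dataset-level metric: one value returned for the whole evaluation.
{'scope': 2, 'overall': {'mAP': 0.75}}
```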

Rules

Your metric algorithm needs to comply with the following rules:

  1. If you need additional dependency packages in your code, create a requirements.txt in the root directory and list the dependencies in it (GPU operations for deep learning frameworks such as torch, tensorflow, etc. are not supported).

  2. You need to create an __init__.py file in the root of the repo so that, once cloned locally, the repo can be imported as a Python package exposing a class called Evaluator.

  3. There must be one and only one class named Evaluator in the package.

  4. The Evaluator needs a method that obtains the annotation scores of a single image: evaluate_one_data(input_source: dict, input_target: dict) -> dict. Please refer to Graviti's docs for the input_source and input_target values.

  5. The Evaluator needs a method that obtains the overall annotation scores: get_result() -> dict.

  6. The return values of the above two methods must comply with the return structure example at the end of this page. (Currently, only two types of data are supported: float and curve.)

Code Examples

Example of Directory Structure

```
(yourgithubrepo_root)
    -- __init__.py
    -- Evaluator.py
    -- requirements.txt
```

__init__.py Example

```python
from .Evaluator import Evaluator

__all__ = ["Evaluator"]
```

requirements.txt Example

```
numpy==1.21.0
```

Evaluator.py Example

```python
import numpy as np


class Evaluator:
    def __init__(self):
        """You can initialize your model here."""
        ...

    def evaluate_one_data(self, input_source: dict, input_target: dict) -> dict:
        """Do the evaluation job.

        :param input_source: Ground truth boxes in one image
        :param input_target: Target boxes in the same image
        :return: A dict containing the evaluation of one image and each category within it.
        """
        ...

    def get_result(self) -> dict:
        """Overall evaluation.

        Returns:
            A dict containing the overall evaluation of all images and all categories.
        """
        ...
```
Return Structure Example

```python
{
    'scope': 1,  # 1 = data level; 2 = dataset level
    'overall': {
        'mAP': 0.123,  # Float-type return
        'pr': {  # Curve-type return
            # x is rendered as the horizontal axis and y as the vertical axis
            # on the front end; both are lists and must have the same length.
            'x': [1, 0.5],
            'y': [1, 0.5],
        },
    },
    'categories': {
        '{your_category}': {
            'mAP': np.mean([1, 2, 3]).tolist(),
        },
    },
}
```
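For reference, here is a minimal, self-contained sketch of a concrete Evaluator that accumulates a per-image mean IoU and aggregates it in get_result. It assumes, purely for illustration, that input_source and input_target each carry a "boxes" list of [x1, y1, x2, y2] coordinates; the actual layout of these dicts is defined in Graviti's docs and may differ.

```python
import numpy as np


def _iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


class Evaluator:
    def __init__(self):
        self._scores = []  # per-image mean IoU values

    def evaluate_one_data(self, input_source: dict, input_target: dict) -> dict:
        # ASSUMPTION: both dicts carry a "boxes" list of [x1, y1, x2, y2].
        # Consult Graviti's docs for the actual input format.
        ious = [
            max((_iou(gt, pred) for pred in input_target["boxes"]), default=0.0)
            for gt in input_source["boxes"]
        ]
        mean_iou = float(np.mean(ious)) if ious else 0.0
        self._scores.append(mean_iou)
        return {"scope": 1, "overall": {"IoU": mean_iou}}  # data-level Float

    def get_result(self) -> dict:
        overall = float(np.mean(self._scores)) if self._scores else 0.0
        return {"scope": 2, "overall": {"IoU": overall}}  # dataset-level Float
```

In this sketch, evaluate_one_data returns a data-level Float indicator for each image, while get_result reports one dataset-level Float for the whole evaluation, matching the scope convention described above.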