Start to Evaluate


Last updated 3 years ago


Through Sextant, you can join an evaluation created by yourself or your team members. Sextant supports two modes of participation: using a TensorBay dataset or loading a model from GitHub.

If you do not have permission to use the benchmark data, please apply for access by following the prompts first.

Use a TensorBay Dataset to Start an Evaluation

  • Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Click Start to Evaluate in the upper right corner of the evaluation details page.

  • Click Choose a Dataset from TensorBay in the pop-up window, select the dataset to be evaluated, and then choose the dataset version. Click Confirm, and the evaluation will start automatically. Meanwhile, the system will automatically generate an evaluation record.

Load a Model from GitHub to Start an Evaluation

  • Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Click Start to Evaluate in the upper right corner of the evaluation details page.

  • Select Load a Model from GitHub and add the corresponding GitHub Repo URL, for instance, https://github.com/Graviti-AI/tensorbay-python-sdk.git. Click Confirm, and the evaluation will start automatically. Meanwhile, the system will automatically generate an evaluation record.

How to prepare a suitable algorithm model for Sextant

  1. First, prepare the algorithm to be used in the evaluation and verify that it works.

  2. Write Python code according to the following structure:

  • The Python library contains only one class, named Predictor.

  • The Predictor class contains a predict() method. Please refer to Graviti’s docs for its return value.

  • The model on which the algorithm depends must be accessible to the algorithm.

from typing import Any, Dict

class Predictor:
    def __init__(self):
        """
        You can initialize your model here
        """
        ...

    def predict(self, img_data: bytes) -> Dict[str, Any]:
        """
        Do the prediction job
        :param img_data: the binary data of one image file
        :return: the prediction result
        """
        ...

"""
Box2D Example

{
    "BOX2D": [
        {
            "box2d": { "xmin": 1, "ymin": 2, "xmax": 3, "ymax": 4 },
            "category": "cat"
        },
        {
            "box2d": { "xmin": 5, "ymin": 4, "xmax": 6, "ymax": 9},
            "category": "dog"
        }
    ]
}
"""

  3. For details, please see the example.

  4. Upload the code file to GitHub, then copy and paste the .git link into Sextant to start an evaluation.
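
The upload step can be sketched as follows. The repository name sextant-model and file name predictor.py are placeholders; substitute your own, and note that the remote URL shown is only a template.

```shell
# Sketch of publishing the Predictor code to GitHub; "sextant-model" and
# "predictor.py" are placeholder names.
mkdir sextant-model
cd sextant-model
git init -b main
touch predictor.py          # predictor.py holds the Predictor class above
git add predictor.py
git -c user.name=you -c user.email=you@example.com commit -m "Add Sextant Predictor"
# Create an empty repository on GitHub, then:
#   git remote add origin https://github.com/<your-name>/sextant-model.git
#   git push -u origin main
# Finally, paste the .git URL into Sextant to start the evaluation.
```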

If your code relies on a model, please ensure that the model can be accessed by the code successfully.

View Evaluation Logs

Sextant records the system logs during the evaluation process so that users can track the evaluation and resolve potential bugs in advance. The viewing steps are as follows:

  • Find the evaluation you want to view on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Find the record you want to view on the evaluation history page and click Log on the right side.

  • Select the specific log steps you want to view in the pop-up window, and the corresponding log information will be displayed on the right side.

The status of an evaluation is one of three types: in progress, completed, or failed. If an evaluation fails, please check its log to troubleshoot and retry. If you need help, please send us feedback.