Start to Evaluate

You can join evaluations created by yourself or your team members through Sextant. Sextant supports two modes of participation: using a TensorBay dataset or loading a model from GitHub.

If you do not have permission to use the benchmark data, please apply for access by following the prompts first.

Use a TensorBay Dataset to Start an Evaluation

  • Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Click Start to Evaluate in the upper right corner of the evaluation details page.

  • Click Choose a Dataset from TensorBay in the pop-up window, select the dataset to be evaluated, and then choose the dataset version (the sketch after this list shows one way to check the available versions). Click Confirm; the evaluation will start automatically and the system will generate an evaluation record.
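If you are not sure which version to choose, you can inspect the dataset's versions beforehand. The snippet below is a minimal sketch assuming the TensorBay Python SDK; the access key and the dataset name "MyDataset" are placeholders to replace with your own values.

from tensorbay import GAS

# Authenticate with your TensorBay access key (placeholder below).
gas = GAS("<your-access-key>")
dataset_client = gas.get_dataset("MyDataset")

# List the dataset's commits (versions) to decide which one to evaluate.
for commit in dataset_client.list_commits():
    print(commit.commit_id, commit.title)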

An evaluation can be in one of three states: in progress, completed, or failed. If an evaluation fails, please check its log to troubleshoot and then retry. If you need help, please send us feedback.

Load a Model from GitHub to Start an Evaluation

  • Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Click Start to Evaluate in the upper right corner of the evaluation details page.

  • Select Load a Model from GitHub and add the corresponding GitHub repo URL, for instance, https://github.com/Graviti-AI/tensorbay-python-sdk.git. Click Confirm; the evaluation will start automatically and the system will generate an evaluation record.

How to Prepare a Suitable Algorithm Model for Sextant

  1. First, prepare the algorithm to be used in the evaluation and verify that it works.

  2. Write Python code according to the following structure:

  • The Python library contains only one class, named Predictor.

  • The Predictor class has a predict() method. Please refer to Graviti’s docs for the expected return value.

  • Any model that the algorithm depends on must be accessible to the algorithm.

from typing import Any, Dict


class Predictor:
    def __init__(self):
        """
        You can initialize your model here
        """
        ...

    def predict(self, img_data: bytes) -> Dict[str, Any]:
        """
        Do the predict job
        :param img_data: the binary data of one image file
        :return: the predict result
        """
        ...

"""
Box2D Example

{
    "BOX2D": [
        {
            "box2d": { "xmin": 1, "ymin": 2, "xmax": 3, "ymax": 4 },
            "category": "cat"
        },
        {
            "box2d": { "xmin": 5, "ymin": 4, "xmax": 6, "ymax": 9},
            "category": "dog"
        }
    ]
}
"""

  3. For details, please see the example.

  4. Upload the code to GitHub, then copy and paste the .git link into Sextant to start an evaluation.

If your code relies on a model, please ensure that the code can access the model successfully, as sketched below.
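For example, if the model weights are committed to the same repository as the code, resolving their path relative to the code file rather than the working directory keeps loading robust after Sextant clones the repo. This is a minimal sketch; the file name model.pth is a made-up placeholder.

from pathlib import Path

# Resolve the weights path relative to this file, not the current working
# directory, so loading works wherever the repository is cloned.
WEIGHTS_PATH = Path(__file__).resolve().parent / "model.pth"

if not WEIGHTS_PATH.exists():
    raise FileNotFoundError(f"Model weights not found at {WEIGHTS_PATH}")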

View Evaluation Logs

Sextant records system logs during the evaluation process so that users can track its progress and resolve potential issues early. The viewing steps are as follows:

  • Find the evaluation you want to view on the evaluation list page and click View to enter the corresponding evaluation details page.

  • Find the record you want to view on the evaluation history page and click Log on the right side.

  • In the pop-up window, select the specific log steps you want to view; the corresponding log information will be displayed on the right side.
