Start to Evaluate
Through Sextant, you can join an evaluation created by you or your team members. Sextant supports two modes of participation: using a TensorBay dataset, or loading a model from GitHub.
If you do not have permission to use the benchmark data, first apply for access by following the prompts.
Using a TensorBay Dataset
1. Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.
2. Click Start to Evaluate in the upper right corner of the evaluation details page.
3. Click Choose a Dataset from TensorBay in the pop-up window, select the dataset to be evaluated, and choose the dataset version. Click Confirm; the evaluation will start automatically and the system will generate an evaluation record.
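Before starting, you may want to confirm that the dataset and version are reachable on TensorBay. The following is a minimal sketch using the TensorBay Python SDK; the access key, dataset name, and revision are placeholders to replace with your own.

```python
from tensorbay import GAS

# Placeholder credentials and dataset name; replace with your own.
gas = GAS("<your-AccessKey>")
dataset_client = gas.get_dataset("<DatasetName>")

# Check out the exact revision you plan to evaluate against
# (the revision name below is a placeholder).
dataset_client.checkout(revision="v1.0")
print("Dataset is reachable at revision v1.0")
```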
An evaluation can be in one of three states: in progress, completed, or failed. If an evaluation fails, check its log to troubleshoot, then retry. If you need help, please send us feedback.
Loading a Model from GitHub
1. Find the evaluation you want to join on the evaluation list page and click View to enter the corresponding evaluation details page.
2. Click Start to Evaluate in the upper right corner of the evaluation details page.
3. Select Load a Model from GitHub and enter the corresponding GitHub repo URL, for instance https://github.com/Graviti-AI/tensorbay-python-sdk.git. Click Confirm; the evaluation will start automatically and the system will generate an evaluation record.
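Before submitting the URL, you can smoke-test the repository locally. The sketch below only approximates Sextant's fetch step (an assumption); the repo URL, clone directory, and module name predictor are placeholders, and the Predictor class it imports is described in the next section.

```python
import importlib
import subprocess
import sys

# Placeholder URL and directory; replace with your own repository.
REPO_URL = "https://github.com/your-org/your-model-repo.git"
CLONE_DIR = "repo_under_test"

# Clone the repo, roughly as Sextant would fetch it.
subprocess.run(["git", "clone", REPO_URL, CLONE_DIR], check=True)

# Assumption: the Predictor class lives in a top-level predictor.py.
sys.path.insert(0, CLONE_DIR)
predictor_module = importlib.import_module("predictor")
model = predictor_module.Predictor()
print("Predictor imported and instantiated:", model)
```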
Preparing the Code
1. Prepare the algorithm to be used in the evaluation and verify that it works.
2. Write Python code with the following structure:
   - The Python library contains exactly one class, named Predictor.
   - The Predictor class has a predict() method. Refer to Graviti's docs for the required return value.
   - Any model the algorithm depends on must be accessible to the algorithm.
3. For details, see the example below.
4. Upload the code to GitHub and paste the repository's .git link into Sextant to start an evaluation.
If your code relies on a model, make sure the code can successfully access that model.
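The following is a minimal sketch of the expected structure. The model loader, input format, and the shape of predict()'s return value are assumptions here; consult Graviti's docs for the exact return format your task requires.

```python
class Predictor:
    """Entry point that Sextant loads to run the evaluation."""

    def __init__(self):
        # Load the model once at start-up so predict() stays fast.
        # Placeholder loader: swap in your framework's loading code,
        # e.g. torch.load("weights/model.pt").
        self.model = lambda data: []

    def predict(self, data):
        """Run inference on a single sample.

        Return predictions in the format required by the benchmark;
        for a detection task this might be a list of dicts such as
        {"category": "...", "bbox": [...], "score": 0.9} (an assumption).
        """
        return self.model(data)


if __name__ == "__main__":
    # Quick local check that the class can be instantiated and called.
    print(Predictor().predict({"image": None}))
```

Loading the model in __init__ rather than in predict() is a deliberate choice: predict() is presumably invoked once per sample, so the model should be loaded only once.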
Viewing Logs
Sextant records system logs during the evaluation so that you can track its progress and resolve potential bugs early. To view the logs:
1. Find the evaluation you want to view on the evaluation list page and click View to enter the corresponding evaluation details page.
2. Find the record you want to view on the evaluation history page and click Log on the right side.
3. In the pop-up window, select the specific steps of the log you want to view; the corresponding log information will be displayed on the right side.