Quick Evaluation Tools for Data and Models

Graviti’s Sextant is an efficient set of evaluation tools for data and models. Sextant supports visualizing evaluation data and filtering it by metrics. You can use it to perform quality assurance when preparing high-quality data, evaluate the accuracy of models, and launch various competitions.

Sextant currently supports only the Box2D annotation type and the mAP metric. More annotation types and metrics will be supported in the future; please stay tuned!
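mAP for Box2D labels is built on the Intersection-over-Union (IoU) between pairs of boxes. As a rough illustration of the underlying idea (not Sextant's internal implementation), IoU for two axis-aligned boxes can be computed like this:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax) tuples."""
    # Width and height of the intersection rectangle, clamped at zero.
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Two identical boxes score 1.0, disjoint boxes score 0.0, and everything else falls in between; mAP aggregates matches made at one or more IoU thresholds.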

Data Evaluation

Sextant supports comparing labels across two datasets, which helps you gain insight into the differences between them and quickly spot data quality defects. After filtering out the specified data, Sextant integrates seamlessly with GroundTruth Tools for quality assurance and relabeling, so you can instantly improve the quality of your datasets and train high-quality AI models.

Evaluation Process

  1. Upload the data that needs to be evaluated to TensorBay

  2. Create an evaluation and select suitable benchmark data

  3. Use another dataset to start an evaluation and obtain metrics

  4. Filter the data by metrics and spot data quality defects

  5. Save the filtering results as a new dataset and start data quality assurance and optimization
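Steps 4 and 5 boil down to computing a per-frame metric against the benchmark data and keeping the frames that fall below a threshold for relabeling. Here is a minimal sketch of that idea, using mean best-match IoU as a hypothetical per-frame score; the function names are illustrative, not Sextant's API:

```python
def box_iou(a, b):
    """IoU of two (xmin, ymin, xmax, ymax) boxes."""
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = w * h
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def frame_score(benchmark, candidate):
    """Mean IoU of each benchmark box against its greedily matched candidate box."""
    if not benchmark:
        return 1.0 if not candidate else 0.0
    used, total = set(), 0.0
    for b in benchmark:
        best, best_j = 0.0, None
        for j, c in enumerate(candidate):
            if j not in used:
                v = box_iou(b, c)
                if v > best:
                    best, best_j = v, j
        if best_j is not None:
            used.add(best_j)  # one-to-one matching: each candidate used once
        total += best
    return total / len(benchmark)

def frames_to_relabel(benchmark_labels, candidate_labels, threshold=0.5):
    """Frame ids whose score falls below the threshold: candidates for relabeling."""
    return sorted(
        frame for frame, boxes in benchmark_labels.items()
        if frame_score(boxes, candidate_labels.get(frame, [])) < threshold
    )
```

The flagged frames correspond to the filtering result you would save as a new dataset and route into GroundTruth Tools for quality assurance.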

Model Evaluation

Sextant supports evaluating model accuracy, which helps you quickly understand model performance, identify its weak scenarios, and gather references for debugging. It also supports comparing multiple models' evaluation results, enabling you to pick out the most suitable model for a specific application scenario.

Evaluation Process

  1. Upload the ground truth data to TensorBay

  2. Create an evaluation and select ground truth data as the benchmark data

  3. Upload a suitable model to GitHub and use the model to start an evaluation

  4. View metrics and comparison results
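To make step 4 concrete, here is one common way to compute average precision (AP) from a ranked list of detections for a single category; mAP is then the mean AP over all categories. This is a generic all-point-interpolation sketch, not necessarily the exact formula Sextant uses:

```python
def average_precision(detections, num_gt):
    """AP from (confidence, is_true_positive) pairs against num_gt ground-truth boxes."""
    ranked = sorted(detections, key=lambda d: -d[0])  # highest confidence first
    tp = fp = 0
    precisions, recalls = [], []
    for _conf, is_tp in ranked:
        tp += int(is_tp)
        fp += int(not is_tp)
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Precision envelope: make precision monotonically non-increasing.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Integrate precision over recall.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap
```

Comparing two models then amounts to placing their AP (or mAP) values side by side, which is what the comparison view surfaces.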
