Graviti’s Sextant is an efficient evaluation tool for data and models. Sextant supports visualizing evaluation data and filtering it by metrics. You can use it to run quality assurance for preparing high-quality data, evaluate the accuracy of models, and launch various competitions.
Sextant supports comparing labels across two datasets, which helps you gain insight into the differences between datasets and quickly spot data quality defects. After filtering out the specified data, Sextant integrates seamlessly with GroundTruth Tools for quality assurance and relabeling, so you can instantly improve the quality of your datasets and train high-quality AI.
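A minimal sketch of the kind of cross-dataset label comparison described above: computing IoU (intersection over union) between bounding boxes from two label sets and flagging frames where they diverge. The box format, frame ids, and the 0.5 threshold are illustrative assumptions, not Sextant's actual API.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def find_defects(labels_a, labels_b, threshold=0.5):
    """Pair labels by frame id; flag frames whose boxes disagree."""
    defects = []
    for frame_id, box_a in labels_a.items():
        box_b = labels_b.get(frame_id)
        if box_b is None or iou(box_a, box_b) < threshold:
            defects.append(frame_id)
    return sorted(defects)


# Hypothetical label sets from two versions of the same dataset.
labels_a = {"f1": (0, 0, 10, 10), "f2": (5, 5, 15, 15)}
labels_b = {"f1": (0, 0, 10, 10), "f2": (40, 40, 50, 50)}
print(find_defects(labels_a, labels_b))  # f2's boxes do not overlap
```

Frames surfaced this way are the candidates you would send on to GroundTruth Tools for relabeling.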
Upload the data that needs to be evaluated to TensorBay
Create an evaluation and select suitable benchmark data
Use another dataset to start an evaluation and obtain metrics
Filter the data by metrics and spot the data quality defects
Save the filtered data as a new dataset and start data quality assurance and optimization
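Steps 4 and 5 above can be sketched as follows: filter evaluated samples by a per-sample metric threshold and collect the low scorers for relabeling. The sample records and the "ap" metric name are made-up assumptions for illustration; in Sextant this filtering happens through the UI rather than this exact code.

```python
# Hypothetical evaluation output: one record per sample with its metrics.
samples = [
    {"path": "img_001.jpg", "metrics": {"ap": 0.92}},
    {"path": "img_002.jpg", "metrics": {"ap": 0.35}},
    {"path": "img_003.jpg", "metrics": {"ap": 0.58}},
]


def filter_by_metric(samples, metric, below):
    """Keep samples whose metric falls below the given threshold."""
    return [s for s in samples if s["metrics"][metric] < below]


suspect = filter_by_metric(samples, "ap", below=0.6)
new_dataset = [s["path"] for s in suspect]  # candidates for relabeling
print(new_dataset)  # → ['img_002.jpg', 'img_003.jpg']
```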
Sextant supports evaluating model accuracy, which helps you quickly understand model performance, identify its weak scenarios, and provides references for debugging. It also supports comparing multiple models' evaluation results, which enables you to pick the most suitable model for a specific application scenario.
Upload the ground truth data to TensorBay
Create an evaluation and select ground truth data as the benchmark data
Upload a suitable model to GitHub and use the model to start an evaluation
View metrics and comparison results
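The comparison in the last step can be sketched with standard accuracy metrics. The example below computes precision and recall from confusion counts and picks the model with the better F1 score; the model names and counts are invented for illustration, and Sextant's actual metrics may differ by task type.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


def f1(pr):
    """Harmonic mean of a (precision, recall) pair."""
    p, r = pr
    return 2 * p * r / (p + r) if p + r else 0.0


# Hypothetical results of two models evaluated against the same ground truth.
results = {
    "model_a": precision_recall(tp=80, fp=20, fn=10),
    "model_b": precision_recall(tp=70, fp=5, fn=25),
}

best = max(results, key=lambda name: f1(results[name]))
print(best)  # model_a has the higher F1 on these counts
```

Which model "wins" depends on the scenario: model_b above trades recall for precision, which may be preferable when false positives are costly.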