Artifacts
This section describes the artifacts that must be generated at the end of the dAIEdge-VLab Pipeline.
Benchmark Types
The generated artifacts depend on the benchmark type, which is given by the environment variable $BENCHMARK_TYPE. The following table states the differences between the benchmark types.
| BENCHMARK_TYPE | Description |
|---|---|
| TYPE1 | In this mode, the benchmark of the model is performed using randomly generated data. The number of inferences is either fixed by the benchmarking tool used or follows the variable $NB_INFERENCE. |
| TYPE2 | In this mode, the benchmark of the model is performed using the preprocessed dataset given by the user. The dataset is provided in a binary format, and the number of inferences performed is based on the size of the dataset. The dataset file name is available in the variable $DATASET_FILENAME, and the file is automatically placed in the root folder. |
| TYPE3 | In this mode, the target runs an on-device training experiment and extracts key metrics. The target receives as input a model, a test dataset, and a train dataset. The dataset file names are available in $TEST_DATASET_FILENAME and $TEST_DATASET_LABELS. |
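A benchmark script typically starts by dispatching on $BENCHMARK_TYPE. The sketch below illustrates this; only the environment variable names come from the table above, while the function name, return values, and the default inference count are hypothetical.

```python
import os

def select_benchmark():
    """Pick the benchmark mode from $BENCHMARK_TYPE (values from the table above)."""
    benchmark_type = os.environ.get("BENCHMARK_TYPE", "TYPE1")
    if benchmark_type == "TYPE1":
        # Random input data; inference count fixed by the tool or by $NB_INFERENCE.
        nb_inference = int(os.environ.get("NB_INFERENCE", "100"))
        return ("random", nb_inference)
    if benchmark_type == "TYPE2":
        # User-supplied preprocessed dataset, placed in the root folder.
        return ("dataset", os.environ["DATASET_FILENAME"])
    if benchmark_type == "TYPE3":
        # On-device training experiment with test and train datasets.
        return ("training", os.environ["TEST_DATASET_FILENAME"])
    raise ValueError(f"Unknown BENCHMARK_TYPE: {benchmark_type}")
```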
The following sections list the artifacts that must be generated for each benchmark type.
Benchmark Type 1
When the variable $BENCHMARK_TYPE is set to TYPE1, the following artifacts must be provided:
- Benchmark report : report.json
- Log files : user.log & error.log
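A TYPE1 run could emit its artifacts as in this minimal sketch. Only the file names (report.json, user.log, error.log) are mandated above; the report fields shown are hypothetical placeholders, not the actual pipeline schema.

```python
import json
from pathlib import Path

def write_type1_artifacts(out_dir, latencies_ms):
    """Write the mandatory TYPE1 artifacts: report.json, user.log, error.log."""
    out = Path(out_dir)
    # Hypothetical report fields -- adapt to the actual report.json specification.
    report = {
        "nb_inference": len(latencies_ms),
        "avg_latency_ms": sum(latencies_ms) / len(latencies_ms),
    }
    (out / "report.json").write_text(json.dumps(report, indent=2))
    (out / "user.log").write_text("benchmark completed\n")
    (out / "error.log").write_text("")  # empty when no error occurred
```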
Benchmark Type 2
When the variable $BENCHMARK_TYPE is set to TYPE2, the following artifacts must be provided:
- Benchmark report : report.json
- Log files : user.log & error.log
- Model output : raw_outpout.bin
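In TYPE2 the inference count follows from the dataset size, and the raw model outputs are collected into the mandated output file. The sketch below assumes fixed-size samples and an `infer_fn` callback returning raw output bytes; both the function name and the fixed-size-sample assumption are hypothetical.

```python
from pathlib import Path

def run_type2_benchmark(dataset_path, out_path, sample_size_bytes, infer_fn):
    """TYPE2 sketch: one inference per sample of the user-supplied binary dataset."""
    data = Path(dataset_path).read_bytes()
    # The number of inferences is derived directly from the dataset size.
    nb_inference = len(data) // sample_size_bytes
    outputs = bytearray()
    for i in range(nb_inference):
        sample = data[i * sample_size_bytes:(i + 1) * sample_size_bytes]
        outputs += infer_fn(sample)  # raw model output bytes for this sample
    # Concatenate all raw outputs into the model-output artifact.
    Path(out_path).write_bytes(bytes(outputs))
    return nb_inference
```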
Benchmark Type 3
When the variable $BENCHMARK_TYPE is set to TYPE3, the following artifacts must be provided:
- Benchmark report : report_odt.json
- Log files : user.log & error.log
- New Model : model.extension
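A TYPE3 run could write its artifacts as in this sketch. Only the artifact names (report_odt.json, the logs, and a retrained model file) come from the list above; the metric keys and the model extension passed in are hypothetical, and "model.extension" stands for whatever extension the target framework uses.

```python
import json
from pathlib import Path

def write_type3_artifacts(out_dir, model_bytes, model_extension, metrics):
    """Write the mandatory TYPE3 artifacts: report_odt.json, logs, retrained model."""
    out = Path(out_dir)
    # Hypothetical metric keys (e.g. accuracy before/after on-device training).
    (out / "report_odt.json").write_text(json.dumps(metrics, indent=2))
    (out / "user.log").write_text("on-device training completed\n")
    (out / "error.log").write_text("")
    # The retrained model keeps the framework's own extension ("model.extension").
    (out / f"model.{model_extension}").write_bytes(model_bytes)
```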