Deploy_Model
Details on each argument of the deploy_model function.
The data_dir argument specifies where CameraTrapDetectoR should look for images. Please specify the full path as a character string.
When using the R Shiny app, click the data_dir button to open your file directory, then use the black arrows to the left of the folders to navigate to your image directory. Once you navigate to the folder that contains all your images, click the text on the left pane to select that folder.
If you have organized your images in multiple folders, select the folder that contains all your separate image folders and set the recursive argument to TRUE so the model sees all your images. If you wish to only run images in a particular folder or sub-directory, make that selection carefully so you do not send extra images through the model.
Once you have set your data directory, you can run the model with all other arguments left at their default values. However, we encourage you to consider your research questions and thoughtfully set the remaining arguments before running the model.
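For R users, a minimal call might look like the sketch below. The directory path is a placeholder, and all arguments other than data_dir and recursive are left at their defaults.

```r
# Minimal sketch of a basic run; the path below is a placeholder for your own
# image directory.
library(CameraTrapDetectoR)

deploy_model(
  data_dir  = "C:/CameraTrap/MyProject/images",  # full path as a character string
  recursive = TRUE                               # also search sub-folders for images
)
```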
This argument (a drop-down menu in the R Shiny app) selects which model you would like to run. CameraTrapDetectoR supports four model types, listed below; a usage sketch follows the list.
- The general model predicts to the taxonomic class level of mammal and bird, and includes categories for vehicles, humans, and empty images. This model currently offers the best performance at identifying empty images. The current version of the general model is V2. More details about the model can be found here.
- The family model predicts to the taxonomic family level. The current version of the family model is V2. More details about the model can be found here.
- The species model predicts to the taxonomic species level. The current version of the species model is V2. Version 1 can be used by specifying "species_v1" in your model_type argument. More details about the model can be found here.
- The pig_only model predicts wild pigs and identifies all other detections as "not pig". The current version of the pig_only model is V2. More details about the model can be found here.
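In R, selecting a model is a matter of passing one of the names above to model_type, as in this sketch (the path is a placeholder):

```r
# Choosing the species model; swap in "general", "family", "pig_only",
# or "species_v1" as needed.
deploy_model(
  data_dir   = "C:/CameraTrap/MyProject/images",
  model_type = "species"
)
```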
This TRUE/FALSE argument (recursive) tells the program whether it should search for images within all folders inside your selected data_dir. If you have images stored in different folders that you wish to send to the model, keep the default value of TRUE.
CameraTrapDetectoR currently supports the following file types: .jpg, .png, .tif, and .pdf. The default is .jpg; additional options can be specified as a vector of character strings.
In the Shiny app, specify these file types by checking or unchecking the boxes on the Arguments menu.
The model will ignore any files inside your data_dir that are not one of the accepted or selected file types.
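In R, additional file types can be passed as a character vector. The argument name file_extensions in this sketch is an assumption (it is not spelled out on this page); check ?deploy_model for the exact name.

```r
# Accepting both .jpg and .png images; the argument name file_extensions is
# assumed here -- confirm it with ?deploy_model.
deploy_model(
  data_dir        = "C:/CameraTrap/MyProject/images",
  file_extensions = c(".jpg", ".png")
)
```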
This TRUE/FALSE argument makes copies of your images with boxes drawn around predictions in real time. The default is TRUE.
This TRUE/FALSE argument adds predicted class labels to the plotted image copies. The default is TRUE.
This argument determines where your model output is stored. The default is to leave it empty, which creates a folder inside your data_dir named after the model_type and the date-time the model began running.
This TRUE/FALSE argument will run the model on a random sample of 50 images in your data_dir; the default is FALSE. A large image data set will take a long time to run, so you may want to test various model arguments on a smaller sample before committing to a full model run.
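A trial run in R might look like the sketch below. The argument name sample50 is an assumption based on the description above; check ?deploy_model for the exact name.

```r
# Test arguments on a random sample of 50 images before a full run;
# sample50 is an assumed argument name (see ?deploy_model).
deploy_model(
  data_dir        = "C:/CameraTrap/MyProject/images",
  model_type      = "species",
  sample50        = TRUE,
  score_threshold = 0.7
)
```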
This TRUE/FALSE argument gives the user the option to return a .csv file in the output directory titled <model_type>_predicted_bboxes. This file has a separate row for each prediction, with the following fields: full image path; predicted class; confidence in prediction; total predictions per image; and bounding box coordinates. Bounding box coordinates are given as (XMin, YMin) and (XMax, YMax) points corresponding to the upper left and lower right corners of the bounding box, in proportion to image size (i.e. coordinates are in the range [0, 1]).
The file will contain all model predictions, even those below the chosen score_threshold.
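If you want to inspect the bounding-box file in R, something like the sketch below will work. The file path and the confidence column name are illustrative; match them to your actual output directory and to the fields described above.

```r
# Read the bounding-box output back into R and keep high-confidence rows.
# Path and column name are placeholders -- check names(bboxes) in your file.
bbox_file <- file.path("path/to/output_dir", "species_predicted_bboxes.csv")
bboxes    <- read.csv(bbox_file)

high_conf <- subset(bboxes, confidence >= 0.7)  # filter by your chosen threshold
```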
The model provides a confidence score for every prediction, indicating how confident it is in that prediction. CameraTrapDetectoR only reports predictions above the set score threshold. A lower score threshold may capture more true predictions but may also report more false predictions; a higher score threshold will reduce false predictions but may also fail to capture some true predictions. The optimal score threshold depends on your research questions and your images; you may want to run a small sample through the model at different score thresholds to determine which works best for your data. In the Shiny app, you can either type in a value between 0 and 0.99 or click the arrows to toggle in increments of 0.01.
Predicted bounding boxes on the same image may overlap. Overlapping boxes may be due to multiple individuals occupying close space, or they may be multiple predictions of the same individual. If you wish to assess overlapping boxes and combine those that overlap into a single prediction, set this argument to TRUE and choose a reasonable overlap_threshold. Whether to use this feature depends on your research question and your data.
If you use an overlap correction, this argument sets the minimum proportion of overlapping area between two boxes for them to be returned as a single detection. The optimal overlap threshold depends on your research questions and your images; you may want to run a small sample through the model at different overlap thresholds to determine which works best for your data. You can either type in a value between 0 and 0.99 or click the arrows to toggle in increments of 0.01.
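A hedged sketch of turning on the overlap correction in R is shown below. The toggle name overlap_correction is an assumption; only overlap_threshold is named on this page.

```r
# Merge boxes that overlap by at least 80% into a single detection;
# overlap_correction is an assumed argument name (see ?deploy_model).
deploy_model(
  data_dir           = "C:/CameraTrap/MyProject/images",
  overlap_correction = TRUE,
  overlap_threshold  = 0.8
)
```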
This TRUE/FALSE argument gives the user the option to write model predictions to image metadata. Four tags will be written, representing predicted class, predicted count, confidence in prediction, and review status. The tag naming convention is model-specific (e.g. CTD_<model_version>_PredictedClass) so that predictions from multiple models can be written to the metadata.
This TRUE/FALSE argument gives the user the option to extract common image metadata tags while running the model. This information will be included in the model_predictions.csv file.
Specify the number of images to run between saving a results checkpoint. Default is 10 images.
You can filter predictions from the species model by location. If all your images originate from the same location, you can enter the latitude and longitude (in degrees) here to filter out species whose ranges do not include that location. If the model predicts a species that does not occur at your location, CameraTrapDetectoR will review similar possible species and make an adjusted prediction. If no similar species exist among the prediction classes, CameraTrapDetectoR will label the detection as "Animal". All images with this prediction should be manually reviewed.
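In R, geographic filtering for the species model might look like the sketch below. The argument names latitude and longitude are assumptions based on the description above, and the coordinates are placeholders.

```r
# Filter species predictions by camera location (decimal degrees);
# latitude/longitude are assumed argument names (see ?deploy_model).
deploy_model(
  data_dir   = "C:/CameraTrap/MyProject/images",
  model_type = "species",
  latitude   = 40.01,
  longitude  = -105.27
)
```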
If you decide to plot your bounding boxes, CameraTrapDetectoR will automatically create image copies with the same dimensions as your original image. If you want to change those dimensions, enter pixel values for height and width in these arguments. Setting image dimensions to the model defaults,
If you decide to plot your bounding boxes, you can adjust the line type of the boxes using this argument. It accepts integers 1-6 corresponding to the following values: 1 = solid (default), 2 = dashed, 3 = dotted, 4 = dotdash, 5 = longdash, 6 = twodash.
If you decide to plot your bounding boxes, you can adjust the line thickness of the bounding boxes using this argument. It accepts numbers greater than 0; the default is 2.
If you decide to plot your bounding boxes, you can adjust the line color of the boxes using this argument. In the Shiny app, a drop-down menu provides various color options.
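A hedged sketch of adjusting the plotted boxes in R is shown below. The argument names make_plots, lty, lwd, and col are assumptions that mirror base R graphics conventions and the descriptions above; check ?deploy_model for the exact names.

```r
# Plot dashed, thick, red bounding boxes on the image copies;
# make_plots, lty, lwd, and col are assumed argument names.
deploy_model(
  data_dir   = "C:/CameraTrap/MyProject/images",
  make_plots = TRUE,
  lty        = 2,    # dashed
  lwd        = 3,    # thicker lines
  col        = "red"
)
```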