diff --git a/PW_FT_classification/README.md b/PW_FT_classification/README.md
index b370f6091..7797d4dba 100644
--- a/PW_FT_classification/README.md
+++ b/PW_FT_classification/README.md
@@ -66,7 +66,7 @@ The CSV file should have the previously mentioned structure. The code will then
 If you don't require data splitting, you can set the `split_data` parameter to `False` in the `config.yaml` file.
 
 ### Demo data
-You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_clf.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `dataset_root` entry in the [config file](./configs/config.yaml) is pointing to the data directory. The testing demo data also has ***an annotation example*** shows how the prefered annotation format looks like.
+You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_clf.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `dataset_root` entry in the [config file](./configs/config.yaml) is pointing to the data directory. The testing demo data also has ***an annotation example*** which shows what the preferred annotation format looks like.
 
 ## Configuration
diff --git a/PW_FT_detection/README.md b/PW_FT_detection/README.md
index 85f47a8e7..181ed786e 100644
--- a/PW_FT_detection/README.md
+++ b/PW_FT_detection/README.md
@@ -64,11 +64,11 @@ The `.data/data_example.yaml` file shows an example of the structure.
 The .txt files inside each folder of `./data/labels/` must be structured containing each object on a separate line, following the format: class x_center y_center width height. The coordinates for the bounding box should be normalized in the xywh format, with values ranging from 0 to 1.
 
 ### Demo data
-You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_det.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `data` and `test_data` entries in the [config file](./config.yaml) are pointing to the data directory. The testing demo data also has ***an annotation example*** shows how the prefered annotation format looks like.
+You can download some example [demo data](https://zenodo.org/records/15376499/files/demo_data_det.zip?download=1) to test the codebase. Before using the data, make sure to decompress the zip file following the [data directory structure](#data-structure), and check if the `data` and `test_data` entries in the [config file](./config.yaml) are pointing to the data directory. The testing demo data also has ***an annotation example*** which shows what the preferred annotation format looks like.
 
 ## Detection models available for Finetuning
 
-Below you find the models that you can use for fine-tuning, along with their respective names to use in the configuration file.
+Below you will find the models that you can use for fine-tuning, along with their respective names to use in the configuration file.
 
 |Model|Name|License|
 |---|---|---|
diff --git a/README.md b/README.md
index c9077e986..91b3e1333 100644
--- a/README.md
+++ b/README.md
@@ -37,13 +37,13 @@
 ## 👋 Welcome to Pytorch-Wildlife
 
-**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models.
-It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](https://microsoft.github.io/CameraTraps/model_zoo/megadetector/) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practicioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
+**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](https://microsoft.github.io/CameraTraps/model_zoo/megadetector/) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practitioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
 
 Explore the codebase, functionalities and user interfaces of **Pytorch-Wildlife** through our [documentation](https://microsoft.github.io/CameraTraps/), interactive [HuggingFace web app](https://huggingface.co/spaces/AndresHdzC/pytorch-wildlife) or local [demos and notebooks](./demo).
 
 ## 🚀 Quick Start
 
-👇 Here is a quick example on how to perform detection and classification on a single image using `PyTorch-wildlife`
+👇 Here is a quick example of how to perform detection and classification on a single image using `PyTorch-wildlife`
 
 ```python
 import numpy as np
 from PytorchWildlife.models import detection as pw_detection
@@ -68,7 +68,7 @@ pip install PytorchWildlife
 Please refer to our [installation guide](https://microsoft.github.io/CameraTraps/installation/) for more installation information.
 
 ## 📃 Documentation
-Please also go to our newly made dofumentation page for more information: [![](https://img.shields.io/badge/Docs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://microsoft.github.io/CameraTraps/)
+Please also go to our newly made documentation page for more information: [![](https://img.shields.io/badge/Docs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://microsoft.github.io/CameraTraps/)
 
 ## 🖼️ Examples
diff --git a/docs/core_features.md b/docs/core_features.md
index d116a2968..828c7e632 100644
--- a/docs/core_features.md
+++ b/docs/core_features.md
@@ -15,7 +15,7 @@ In the provided graph, boxes outlined in red represent elements that will be add
 
 ### 🚀 Inaugural Model:
 
-We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from `MegaDetector`. We have `MegaDetectorV5`, which is the same `MegaDetectorV5` model from the previous repository, and many different versions of `MegaDetectorV6` for different usecases.
+We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from `MegaDetector`. We have `MegaDetectorV5`, which is the same `MegaDetectorV5` model from the previous repository, and many different versions of `MegaDetectorV6` for different use cases.
 
 ### 📚 Expandable Repository:
diff --git a/docs/demo_and_ui/ecoassist.md b/docs/demo_and_ui/ecoassist.md
index a28fcc04b..7e7f96e07 100644
--- a/docs/demo_and_ui/ecoassist.md
+++ b/docs/demo_and_ui/ecoassist.md
@@ -1,2 +1,2 @@
-# Pytorch-Wildlife modelsa are available with AddaxAI (formerly EcoAssist)!
+# Pytorch-Wildlife models are available with AddaxAI (formerly EcoAssist)!
 We are thrilled to announce our collaboration with [AddaxAI](https://addaxdatascience.com/addaxai/#spp-models)---a powerful user interface software that enables users to directly load models from the PyTorch-Wildlife model zoo for image analysis on local computers. With AddaxAI, you can now utilize MegaDetectorV5 and the classification models---AI4GAmazonRainforest and AI4GOpossum---for automatic animal detection and identification, alongside a comprehensive suite of pre- and post-processing tools. This partnership aims to enhance the overall user experience with PyTorch-Wildlife models for a general audience. We will work closely to bring more features together for more efficient and effective wildlife analysis in the future. Please refer to their tutorials on how to use Pytorch-Wildlife models with AddaxAI.
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index eb5d80457..176803755 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -18,12 +18,12 @@
 ## 👋 Welcome to Pytorch-Wildlife
 
-**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](model_zoo/megadetector.md) for both animal detection and classification.
-In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practicioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
+**PyTorch-Wildlife** is an AI platform designed for the AI for Conservation community to create, modify, and share powerful AI conservation models. It allows users to directly load a variety of models including [MegaDetector](https://microsoft.github.io/CameraTraps/megadetector/), [DeepFaune](https://microsoft.github.io/CameraTraps/megadetector/), and [HerdNet](https://github.com/Alexandre-Delplanque/HerdNet) from our ever expanding [model zoo](model_zoo/megadetector.md) for both animal detection and classification. In the future, we will also include models that can be used for applications, including underwater images and bioacoustics. We want to provide a unified and straightforward experience for both practitioners and developers in the AI for conservation field. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.
 
 ## 🚀 Quick Start
 
-👇 Here is a brief example on how to perform detection and classification on a single image using `PyTorch-wildlife`
+👇 Here is a brief example of how to perform detection and classification on a single image using `PyTorch-wildlife`
 
 ```python
 import numpy as np
 from PytorchWildlife.models import detection as pw_detection
diff --git a/docs/installation.md b/docs/installation.md
index 7ccb74190..a93e2aa22 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -57,7 +57,7 @@ docker run -p 80:80 andreshdz/pytorchwildlife:1.0.2.3 python demo/gradio_demo.py
 4. If you want to run any code using the docker image, please use `docker run andreshdz/pytorchwildlife:1.0.2.3` followed by the command that you want to execute.
 
 ## Running the Demo
 
-Here is a brief example on how to perform detection and classification on a single image using `PyTorch-wildlife`:
+Here is a brief example of how to perform detection and classification on a single image using `PyTorch-wildlife`:
 
 ```python
 import numpy as np
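# ---------------------------------------------------------------------------
# Editor's addendum (not part of the diff above): the snippet here is truncated
# in this chunk. As a self-contained sketch of the label format described in
# PW_FT_detection/README.md ("class x_center y_center width height", with
# coordinates normalized to 0-1 in xywh format), the following shows how one
# such label line could be produced from a pixel-space bounding box. The
# function and variable names are illustrative only, not part of the
# PyTorch-Wildlife codebase.

def to_yolo_label(class_id: int, x_min: float, y_min: float,
                  x_max: float, y_max: float,
                  img_w: int, img_h: int) -> str:
    """Convert a pixel-space box to one normalized YOLO xywh label line."""
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Each object goes on its own line of the matching .txt file in ./data/labels/,
# e.g. a box spanning (100, 200)-(300, 400) in a 640x480 image:
print(to_yolo_label(0, 100, 200, 300, 400, 640, 480))
# -> 0 0.312500 0.625000 0.312500 0.416667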