Features:

- Write reinforcement learning algorithms in Python
- Train RL models for use with current RL Python packages
- Tune hyperparameters using Optuna
- Use wrappers to track model performance
- Create custom RL environments to use with RL and MARL packages
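To illustrate the custom-environment feature above, here is a minimal sketch of an environment following the Gymnasium-style `reset`/`step` interface that most RL and MARL packages expect. This is a hypothetical example (a toy 1-D corridor), not pyrl's actual base class or API:

```python
# Hypothetical custom environment following the Gymnasium-style API
# (reset/step signatures); not pyrl's actual environment base class.

class CorridorEnv:
    """A toy 1-D corridor: the agent starts at cell 0 and must reach
    cell `size - 1`. Actions: 0 = move left, 1 = move right.
    Reward: 1.0 on reaching the goal, 0.0 otherwise.
    """

    def __init__(self, size=5):
        self.size = size
        self.pos = 0

    def reset(self, seed=None):
        # Gymnasium-style reset returns (observation, info).
        self.pos = 0
        return self.pos, {}

    def step(self, action):
        # Move right (1) or left (0), clipped to the corridor bounds.
        delta = 1 if action == 1 else -1
        self.pos = max(0, min(self.size - 1, self.pos + delta))
        terminated = self.pos == self.size - 1
        reward = 1.0 if terminated else 0.0
        # Gymnasium-style step returns
        # (observation, reward, terminated, truncated, info).
        return self.pos, reward, terminated, False, {}


# Roll out one episode, always moving right.
env = CorridorEnv(size=3)
obs, info = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, truncated, info = env.step(1)
    total_reward += reward
# obs → 2, total_reward → 1.0
```

An environment shaped like this can then be wrapped (e.g. for performance tracking) or handed to a training loop, which is the pattern the wrapper and training features above rely on.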
The documentation is built automatically using Sphinx and deployed to GitHub Pages. You can find the latest documentation at:
https://bwhewe-13.github.io/pyrl/
To build the documentation locally:

```bash
# Install dependencies
python -m pip install -r docs/requirements.txt

# Build HTML docs
cd docs
make html
```

The output will be in docs/build/html.

To set up the development environment:
```bash
# Clone the repository
git clone https://github.com/bwhewe-13/pyrl.git
cd pyrl

# Install package with development dependencies
python -m pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install
```

To run the tests with coverage reporting:
```bash
pytest
```

Coverage reports will be generated in coverage_html/index.html and coverage.xml.
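Since a bare `pytest` produces both an HTML and an XML report, the coverage options are presumably preconfigured via pytest-cov. A typical configuration matching the paths above might look like the following; this is a sketch of a common setup, so check the repository's pyproject.toml for pyrl's actual settings:

```toml
# Hypothetical pytest-cov configuration producing the reports named above;
# pyrl's real options may differ.
[tool.pytest.ini_options]
addopts = "--cov=pyrl --cov-report=html:coverage_html --cov-report=xml:coverage.xml"
```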