Educational notebooks covering the reinforcement learning algorithms tabular Q-learning and DQN (deep Q-networks), applied to chemical engineering problems.
To get started, clone the repository from GitHub:
git clone https://github.com/MaximilianB2/chemengRL.git
cd chemengRL

You can use either Conda or the built-in Python venv.

Conda:
conda env create -f practical_rl.yml
conda activate prac_rl

macOS/Linux:
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Optional: make the venv available in Jupyter
python -m ipykernel install --user --name chemengrl-venv --display-name "Python (chemengRL)"

Windows (PowerShell):
py -3 -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r requirements.txt
# Optional: register the kernel for Jupyter
python -m ipykernel install --user --name chemengrl-venv --display-name "Python (chemengRL)"

Windows (Cmd):
py -3 -m venv .venv
.venv\Scripts\activate.bat
pip install -r requirements.txt

To deactivate the venv later, run deactivate.
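
Once the environment is active, launch Jupyter from the repository root to open the notebooks. This assumes Jupyter is already provided by practical_rl.yml or requirements.txt; if it is not, install it first with pip install jupyter.

jupyter notebook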
The notebooks for the discretised chemical reactor and the continuous reactor can be found in the src directory.
Start with the tabular setting (src/TabularQlearning.ipynb) first, then move on to the more complex DQN notebook (src/DQN.ipynb).
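
For orientation before opening the notebooks, the heart of tabular Q-learning is a single update applied to a table of state-action values. The snippet below is an illustrative sketch only, not code from the notebooks; the table sizes, learning rate, discount factor, and exploration rate are placeholder assumptions.

import numpy as np

# Placeholder discretisation; the notebook's reactor environment defines its own grids.
n_states, n_actions = 50, 5
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

Q = np.zeros((n_states, n_actions))      # one value estimate per (state, action) pair
rng = np.random.default_rng(0)

def select_action(state):
    """Epsilon-greedy choice over the discretised action set."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

The DQN notebook replaces the Q-table with a neural network and the one-step update with minibatch gradient steps on the same temporal-difference target.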
