Control Traffic lights intelligently with Reinforcement Learning!
Implementation of DDPG with NumPy only (without TensorFlow)
Saves replay buffer files to game-specific folders (like ShadowPlay).
Look Back When Surprised: Stabilizing Reverse Experience Replay for Neural Approximation
This repository contains the Double Deep Q-Learning algorithm implemented on the Atari Space Invaders game. The agent is trained entirely on Google Colab.
A simple numpy memmap replay buffer for RL and personal use-cases
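A memmap-backed replay buffer like the one described above can be sketched in a few lines of NumPy; the class, file naming, and fields below are illustrative assumptions, not code from that repository:

```python
import numpy as np

class MemmapReplayBuffer:
    """Minimal ring buffer backed by on-disk numpy memmaps (illustrative sketch)."""

    def __init__(self, path, capacity, obs_shape):
        self.capacity = capacity
        self.size = 0
        self.idx = 0
        # mode="w+" creates (or overwrites) the backing files on disk.
        self.obs = np.memmap(f"{path}_obs.dat", dtype=np.float32, mode="w+",
                             shape=(capacity, *obs_shape))
        self.actions = np.memmap(f"{path}_act.dat", dtype=np.int64, mode="w+",
                                 shape=(capacity,))
        self.rewards = np.memmap(f"{path}_rew.dat", dtype=np.float32, mode="w+",
                                 shape=(capacity,))

    def add(self, obs, action, reward):
        # Overwrite the oldest slot once the buffer is full.
        self.obs[self.idx] = obs
        self.actions[self.idx] = action
        self.rewards[self.idx] = reward
        self.idx = (self.idx + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        ix = np.random.randint(0, self.size, size=batch_size)
        return self.obs[ix], self.actions[ix], self.rewards[ix]
```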
Moves OBS replay buffer recordings into folders based on the active window.
Design and training of an RL agent to control a Quadcopter using an Actor-Critic RL method
Replay buffer implementations for reinforcement learning agents
An OBS Studio plugin that extends the built-in Replay Buffer. Save recent footage at different lengths (e.g., 30 seconds, 5 minutes), automatically trimming the replay buffer without re-encoding.
Restarts the replay buffer when saving a clip, preventing clips from overlapping, and plays audio
A simple buffer for experience replay in reinforcement learning, etc.
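For comparison with the memmap variant above, a simple in-memory experience replay buffer of the kind these projects provide is often just a deque plus random sampling; the names here are illustrative only:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal in-memory experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity=100_000):
        # Oldest transitions are dropped automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```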
Clean, modular DQN in PyTorch with Double/Dueling options and MLP/CNN/LSTM backbones—plug-and-play for Gymnasium environments.
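For reference, the Double DQN target such implementations typically compute decouples action selection (online network) from action evaluation (target network). This is a generic PyTorch sketch of that standard formulation; the function and argument names are assumptions, not the repository's API:

```python
import torch

@torch.no_grad()
def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Generic Double DQN target: pick next actions with the online net,
    evaluate them with the target net."""
    next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
    next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
    return rewards + gamma * (1.0 - dones.float()) * next_q
```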
A modular PyTorch Lightning implementation of a self-supervised DINO pretraining on ChestMNIST, followed by finetuning on Shenzhen and continual learning on Montgomery & MIAS using replay + EWC.
INSTANT REPLAYYYYYY with crossfades ✨