From 0ee205797d707db8732be1aa981eadcbd237b85c Mon Sep 17 00:00:00 2001
From: Elizabeth Fons
Date: Wed, 13 Feb 2019 00:27:32 +0000
Subject: [PATCH 1/2] Update README.md

---
 README.md | 47 ++++++++++++++++++-----------------------------
 1 file changed, 18 insertions(+), 29 deletions(-)

diff --git a/README.md b/README.md
index 393c13a..c6c5093 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,11 @@ For this project, you will work with the [Tennis](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Examples.md#tennis) environment.
 
 ![Trained Agent][image1]
 
-In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
+In this environment, two agents control rackets to bounce a ball over a net.
+
+### Environment
+
+If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.
 
 The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Each agent receives its own, local observation. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.
 
@@ -23,40 +27,25 @@ The task is episodic, and in order to solve the environment, your agents must get an average score of +0.5 (over 100 consecutive episodes, after taking the maximum over both agents).
 The environment is considered solved, when the average (over 100 episodes) of those **scores** is at least +0.5.
 
-### Getting Started
+### Installation
+
+1. Clone the repository and install the dependencies:
+
+* `git clone https://github.com/elifons/DeepRL-Multi-agent.git`
+* `cd DeepRL-Multi-agent`
+* `pip install -r requirements.txt`
 
-1. Download the environment from one of the links below. You need only select the environment that matches your operating system:
+Alternatively, follow the instructions at https://github.com/udacity/deep-reinforcement-learning#dependencies to set up a Python environment.
+
+2. Download the environment from one of the links below. You need only select the environment that matches your operating system:
     - Linux: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Linux.zip)
     - Mac OSX: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis.app.zip)
    - Windows (32-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Windows_x86.zip)
    - Windows (64-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Windows_x86_64.zip)
 
    (_For Windows users_) Check out [this link](https://support.microsoft.com/en-us/help/827218/how-to-determine-whether-a-computer-is-running-a-32-bit-version-or-64) if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
 
    (_For AWS_) If you'd like to train the agent on AWS (and have not [enabled a virtual screen](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-on-Amazon-Web-Service.md)), then please use [this link](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Tennis/Tennis_Linux_NoVis.zip) to obtain the "headless" version of the environment. You will **not** be able to watch the agent without enabling a virtual screen, but you will be able to train the agent.
   (_To watch the agent, you should follow the instructions to [enable a virtual screen](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-on-Amazon-Web-Service.md), and then download the environment for the **Linux** operating system above._)
 
-2. Place the file in the DRLND GitHub repository, in the `p3_collab-compet/` folder, and unzip (or decompress) the file.
-
-### Instructions
-
-Follow the instructions in `Tennis.ipynb` to get started with training your own agent!
-
-### (Optional) Challenge: Crawler Environment
-
-After you have successfully completed the project, you might like to solve the more difficult **Soccer** environment.
-
-![Soccer][image2]
-
-In this environment, the goal is to train a team of agents to play soccer.
-You can read more about this environment in the ML-Agents GitHub [here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Examples.md#soccer-twos). To solve this harder task, you'll need to download a new Unity environment. (**Note**: Udacity students should not submit a project with this new environment.)
+3. Place the file in the DRLND GitHub repository, in the `p3_collab-compet/` folder, and unzip (or decompress) the file.
 
-You need only select the environment that matches your operating system:
-- Linux: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Soccer/Soccer_Linux.zip)
-- Mac OSX: [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Soccer/Soccer.app.zip)
-- Windows (32-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Soccer/Soccer_Windows_x86.zip)
-- Windows (64-bit): [click here](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Soccer/Soccer_Windows_x86_64.zip)
+### Getting started
 
-Then, place the file in the `p3_collab-compet/` folder in the DRLND GitHub repository, and unzip (or decompress) the file. Next, open `Soccer.ipynb` and follow the instructions to learn how to use the Python API to control the agent.
+Follow the instructions in `Tennis.ipynb` to get started with training the agent!
 
-(_For AWS_) If you'd like to train the agents on AWS (and have not [enabled a virtual screen](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-on-Amazon-Web-Service.md)), then please use [this link](https://s3-us-west-1.amazonaws.com/udacity-drlnd/P3/Soccer/Soccer_Linux_NoVis.zip) to obtain the "headless" version of the environment. You will **not** be able to watch the agents without enabling a virtual screen, but you will be able to train the agents. (_To watch the agents, you should follow the instructions to [enable a virtual screen](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-on-Amazon-Web-Service.md), and then download the environment for the **Linux** operating system above._)
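For reference, the scoring rule added to the README above (sum each agent's undiscounted rewards per episode, take the maximum of the two totals, and average that over the last 100 episodes) can be sketched in a few lines of Python. This is a minimal illustration, not code from the repository; `episode_score` and `is_solved` are hypothetical names:

```python
import numpy as np

def episode_score(agent_rewards):
    """Per-episode score: sum each agent's undiscounted rewards over the
    episode, then take the maximum of the two agents' totals."""
    totals = np.asarray(agent_rewards).sum(axis=0)  # one total per agent
    return float(totals.max())

def is_solved(episode_scores, window=100, target=0.5):
    """Solved once the average of the last `window` episode scores
    reaches `target` (+0.5 over 100 episodes for Tennis)."""
    recent = episode_scores[-window:]
    return len(recent) == window and float(np.mean(recent)) >= target

# Example: rewards from a three-step episode, one row per time step,
# one column per agent.
rewards = [(0.0, 0.1), (0.1, 0.0), (0.0, -0.01)]
print(episode_score(rewards))  # 0.1 -- agent 0 totals 0.1, agent 1 totals 0.09
```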
From 7ed0f3307c1e925fd77980e6859d7c21770cc5b0 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Tue, 20 Apr 2021 17:53:28 +0000
Subject: [PATCH 2/2] Bump py from 1.7.0 to 1.10.0

Bumps [py](https://github.com/pytest-dev/py) from 1.7.0 to 1.10.0.
- [Release notes](https://github.com/pytest-dev/py/releases)
- [Changelog](https://github.com/pytest-dev/py/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/py/compare/1.7.0...1.10.0)

Signed-off-by: dependabot[bot]
---
 requirements.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/requirements.txt b/requirements.txt
index 9c4aba2..cb0234d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -62,7 +62,7 @@ prometheus-client==0.5.0
 prompt-toolkit==2.0.7
 protobuf==3.5.2
 ptyprocess==0.6.0
-py==1.7.0
+py==1.10.0
 pyglet==1.3.2
 Pygments==2.3.1
 PyOpenGL==3.1.0
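After reinstalling the pinned requirements (`pip install -r requirements.txt`), a quick sanity check that the bumped pin is what actually got installed; this snippet is illustrative and not part of the patch:

```python
# Confirm the py version pinned in requirements.txt is the one importable.
import py

assert py.__version__ == "1.10.0", f"unexpected py version: {py.__version__}"
print("py", py.__version__)
```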