diff --git a/pages/_meta.json b/pages/_meta.json index d73d95d..78bed31 100644 --- a/pages/_meta.json +++ b/pages/_meta.json @@ -6,6 +6,7 @@ } }, "multiplayer": "đŸ•šī¸ | Making Multiplayer", + "bots": "🤖 | Bots", "typesofgames": "🎮 | Types of Games", "setup": "🧩 | Setup", "usage": "📙 | Usage", diff --git a/pages/apidocs.mdx b/pages/apidocs.mdx index 4c999e7..783ffbc 100644 --- a/pages/apidocs.mdx +++ b/pages/apidocs.mdx @@ -21,6 +21,8 @@ await insertCoin(); | `allowGamepads` | boolean | false | If `true`, Playroom lets players play the game using gamepads connected to the stream device itself. This requires `streamMode` to also be `true`.

The gamepads need to be connected to the device where the stream screen is running. No phones are required in this mode, but they are optionally allowed as alternative controllers if there aren't enough physical gamepads in the room. Players who join via phone in this mode see an on-screen [Joystick](/multiplayer/joystick). | | `baseUrl` | string | *Current Page URL* | The base URL used for generating the room link that the host shares with other players. | | `avatars` | Array<string> | *Default Avatars* | An array of URLs to images that players can pick as their avatar. This will override the default avatars system that Playroom provides. | +| `enableBots` | boolean | false | If `true`, Playroom initializes bots using the provided `botOptions`. | +| `botOptions` | Object | undefined | An object containing parameters for bot instantiation and configuration. | ## `getState(key: string): any` diff --git a/pages/bots.mdx b/pages/bots.mdx new file mode 100644 index 0000000..5cf5ed0 --- /dev/null +++ b/pages/bots.mdx @@ -0,0 +1,17 @@ +import { Callout } from 'nextra-theme-docs' + +# Bots in Playroom + +Playroom lets you define bots for your game. These bots act the same as players: they have player state, but they can also hold custom logic to act within the game loop. + +Once you set up bots, players see a new `🤖 +` button which adds bots to their game room. + +### Why Should I Use Bots? + +Playing with friends is fun, but you can't always have your friends around. Having bots in your game increases engagement and retention. Bots can fill empty rooms in your game, or provide a single-player experience. + +It's time to dive into the specifics. Choose the path that best suits your development needs: + +**[Bot](/bots/bot)**: Explore the process of creating your own bot and integrating it into your game's unique mechanics. + +**[DQN Bot](/bots/dqnbot)**: Explore the realm of reinforcement learning with our advanced Deep Q-Network bot that learns and makes smart decisions.
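For quick reference, the two new `insertCoin` options documented above fit together as sketched below. This is only an illustrative sketch: `insertCoin` is mocked here so the snippet is self-contained, and `MyBot` is a hypothetical class standing in for a real bot (see the Bot page for the actual setup).

```javascript
// Mock of playroomkit's insertCoin, for illustration only:
// it simply records the options it receives.
async function insertCoin(options) {
  return options;
}

// Hypothetical bot class; a real one would extend playroomkit's Bot.
class MyBot {}

async function setupBots() {
  // enableBots turns bots on; botOptions says which class to instantiate
  // and which custom botParams to pass to its constructor.
  const options = await insertCoin({
    enableBots: true,
    botOptions: {
      botClass: MyBot,
      botParams: { health: 100 },
    },
  });
  return options;
}
```

With the real SDK, the same shape of options object is passed to the `insertCoin` imported from `playroomkit`.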
diff --git a/pages/bots/_meta.json b/pages/bots/_meta.json new file mode 100644 index 0000000..8e7119f --- /dev/null +++ b/pages/bots/_meta.json @@ -0,0 +1,4 @@ +{ + "bot":"Bot", + "dqnbot":"DQN Bot" +} \ No newline at end of file diff --git a/pages/bots/bot.mdx b/pages/bots/bot.mdx new file mode 100644 index 0000000..a5f42a1 --- /dev/null +++ b/pages/bots/bot.mdx @@ -0,0 +1,153 @@ +import { Callout } from 'nextra-theme-docs' + +# Bot +## Overview +With Playroom's SDK, you can easily create bots +tailored to your game's unique mechanics and dynamics. + +## Adding a Bot to Your Game + +### 1. Define the Bot Class +Start by importing the `Bot` class from Playroom: + +```js +import { insertCoin, isHost, onPlayerJoin, Bot } from 'playroomkit'; +``` + +Extend the `Bot` class to create your bot with your custom game logic. Bots are essentially players, so you can use all the methods from the [PlayerState API Reference](/apidocs#playerstate). Make sure to receive `botParams` in the constructor and pass it to the base class by calling `super(botParams)` in your bot's constructor. + +```js +class YourBot extends Bot { + // Implement your bot logic and methods here + + // Sample Bot Code + constructor(botParams) { + super(botParams); + this.setState("health", 100); + } + + // A simple method for the bot to take action based on some game state + decideAction() { + const gameState = this.getState("gameState"); + if (gameState.enemyNearby) { + return 'ATTACK'; + } + return 'MOVE_FORWARD'; + } + + // Receive damage and reduce health + takeDamage(damageAmount) { + let currentHealth = this.getState("health"); + this.setState("health", currentHealth - damageAmount); + } + + // Check if the bot is still alive + isAlive() { + return this.getState("health") > 0; + } +} +``` + +<Callout type="warning"> The constructor for your bot is called multiple times. If you want to set some state or perform an action only once, do it on the host using the condition `if (!isHost()) return`. An example is shown below. </Callout>
+ +```js + constructor(botParams) { + super(botParams); + if (!isHost()) return; + this.setState("health", 100); + } +``` + +### 2. Tell Playroom to Use Your Bot + +Once your bot class is ready, use the `insertCoin()` method +provided by the SDK to pass your bot type and its parameters. +This step allows the SDK to recognize your bot. + +```js +await insertCoin({ + ... other parameters, + + enableBots: true, // Tells Playroom to activate bots in your game + + botOptions: { + botClass: YourBot, // Specifies the bot class to be utilized by the SDK + + // OPTIONAL: You can define custom attributes in the botParams object if you need them during bot initialization. + // Sample botParams + botParams: { + health: 100 + } + }, +}); +``` + +The code above registers bots in your game and provides `botParams` to your `botClass`. Here's how you'd use `botParams` in your constructor: + +```js +class YourBot extends Bot { + // Implement your bot logic and methods here + + // Sample Bot with botParams Code + constructor(botParams) { + super(botParams); + this.setState("health", botParams.health); + } + + // Rest of your implementation +} +``` + +### 3. Try Your Bot in Playroom + +Once you have defined your bot, the game host sees a new `🤖 +` button to add bots to the game. Tap the button to add a bot to the room. + + + + +### 4. Integrate the Bot into the Game Loop + +After initialization, the `player.isBot()` method allows you to check +if a player is a bot or a human player. You can use this information to +integrate your bot's actions within the game loop, ensuring that it interacts +with the game environment as intended. + +```js +let players = []; + +onPlayerJoin(async (player) => { + // Custom logic for handling player join events.
+ + // Append the player to the players array so it can be accessed within the game loop + players.push(player); +}); + +function gameLoop() { + // Custom Logic + + for (const player of players) { + // Custom Logic + + // Bot usage + if (player.isBot()) { + // Logic to make the bot act within the game loop + + // Sample implementation: skip dead bots and move on to the next player + if (!player.bot.isAlive()) { continue; } + + // Call the methods defined in your bot class + const action = player.bot.decideAction(); + if (action === "ATTACK") { + // Attack Logic Here + } + + if (action === "MOVE_FORWARD") { + // Move Forward Logic Here + } + + // Apply the damage computed by your game logic + player.bot.takeDamage(damageAmount); + } + } +} +``` \ No newline at end of file diff --git a/pages/bots/dqnbot.mdx b/pages/bots/dqnbot.mdx new file mode 100644 index 0000000..a82b12c --- /dev/null +++ b/pages/bots/dqnbot.mdx @@ -0,0 +1,254 @@ +# DQN Bot +## Overview +The Playroom SDK seamlessly integrates with the DQN Bot, a smart computer player that learns and plays games effectively using a technique called Deep Q-Network (DQN). This integration offers a robust tool for game developers looking to use reinforcement learning in their games without starting from scratch. You can easily configure, train (if needed), and use the DQN Bot. + +We provide two options for the DQN Bot: +1. **[DQN Bot (Joystick)](#how-dqn-bot-joystick-integration-works)**: Designed for Playroom's Joystick users. +2. **[DQN Bot](#how-dqn-bot-integration-works)**: Suited for those who don't use Playroom's Joystick. + + +## How DQN Bot (Joystick) Integration Works + +### 1. Importing DQN Bot +Begin by importing `DQNBotJoystick` from Playroom: + +```js +import { insertCoin, isHost, onPlayerJoin, DQNBotJoystick } from 'playroomkit'; +``` + +### 2. Tell Playroom to Use DQN Bot + +Once you have imported the DQN bot, use the `insertCoin()` method +provided by the SDK to pass the bot and its parameters. +This step allows the SDK to recognize the DQN bot.
+ +```js +await insertCoin({ + ... other parameters, + + enableBots: true, // Activate the bot in the SDK + + botOptions: { + botClass: DQNBotJoystick, // Specify the DQN bot class + + botParams: { + numberOfStates: x, // Replace 'x' with the number of states to pass to the agent + + // Your joystick configuration here. Actions will be derived from this joystick config. + joystickConfig: { // Sample configuration + type: "dpad", + buttons: [ + {id: "jump", label: "Jump"} + ] + }, + + // OPTIONAL: Customize hyperparameters. The following values are used if not specified. + specifications: { + gamma: 0.75, // Discount factor for future rewards + epsilon: 0.1, // Exploration rate in epsilon-greedy strategy + alpha: 0.01, // Learning rate + experience_add_every: 25, // Frequency of adding experiences to the replay memory + experience_size: 5000, // Size of the replay memory + learning_steps_per_iteration: 10, // Number of learning steps per iteration + tderror_clamp: 1.0, // Clamp to prevent large TD errors + num_hidden_units: 100, // Number of neurons in the hidden layer + }, + + // OPTIONAL: Initialize with weights if you have pretrained weights or previous training data + weights: { + + // Serialized pretrained weights + modelWeights: "{\"nh\":100,\"ns\":11,\"na\":19,\"net\":{\"W1\":{\"n\":100,\"d\":11,\"w\":{\"0\":0.0013967478508977644,\"1\":-0.004024754355529853,\"2\":-0.005107000776689768,\"3\,....}}}}", + + iterations: previousIterationsCount, // Previous training iterations count + rewards: previousRewardValue, // Cumulative reward from previous sessions + elapsedTime: previousElapsedTime, // Elapsed time from previous training sessions + }, + + // OPTIONAL: Automatically store model weights to local storage and load them when the bot initializes. + // Defaults to true; weights won't be retrieved from local storage if they are given explicitly. + retrieveFromLocalStorage: true + }, + + // OPTIONAL: Enable training mode to visualize a graph of iteration/rewards.
+ trainingMode: true + }, +}); +``` + +You can find additional information about hyperparameters **[here](#dqn-bot-hyperparameters)**. + +### 3. Try Your Bot in Playroom + +Once you have defined your bot, the game host sees a new `🤖 +` button to add bots to the game. Tap the button to add a bot to the room. + + + + +### 4. Integrating DQN Bot into the Game Loop + +After initialization, the `player.isBot()` method allows you to check if a player is a bot or a human player. You can use this information to integrate your bot's actions within the game loop, ensuring that it interacts with the game environment as intended. + +```js +let players = []; + +onPlayerJoin(async (player) => { + // Custom logic for handling player join events. + + // Append the player to the players array so it can be accessed within the game loop + players.push(player); +}); + +function gameLoop() { + // Custom Logic + + for (const player of players) { + // Custom Logic + + // Bot usage + if (player.isBot()) { + const currentState = [/* array of x numbers representing the game state */]; + player.bot.setDQNBotState(currentState); // This sets the bot state and applies the predicted joystick action + } + + // Game Logic + + if (player.isBot()) { + + // If you're training the bot, provide a reward based on the outcome of the chosen action + const rewardValue = computeReward(); + player.bot.learn(rewardValue); + } + } +} +``` + +## How DQN Bot Integration Works + +### 1. Importing DQN Bot +Begin by importing `DQNBaseBot` from Playroom: + +```js +import { insertCoin, isHost, onPlayerJoin, DQNBaseBot } from 'playroomkit'; +``` + +### 2. Tell Playroom to Use DQN Bot + +Once you have imported the DQN bot, use the `insertCoin()` method +provided by the SDK to pass the bot and its parameters. +This step allows the SDK to recognize the DQN bot. + +```js +await insertCoin({ + ...
other parameters, + + enableBots: true, // Activate the bot in the SDK + + botOptions: { + botClass: DQNBaseBot, // Specify the DQN bot class + + botParams: { + numberOfStates: x, // Replace 'x' with the number of states to pass to the agent + numberOfActions: y, // Replace 'y' with the number of potential actions the bot can take + + // OPTIONAL: Customize hyperparameters. The following values are used if not specified. + specifications: { + gamma: 0.75, // Discount factor for future rewards + epsilon: 0.1, // Exploration rate in epsilon-greedy strategy + alpha: 0.01, // Learning rate + experience_add_every: 25, // Frequency of adding experiences to the replay memory + experience_size: 5000, // Size of the replay memory + learning_steps_per_iteration: 10, // Number of learning steps per iteration + tderror_clamp: 1.0, // Clamp to prevent large TD errors + num_hidden_units: 100, // Number of neurons in the hidden layer + }, + + // OPTIONAL: Initialize with weights if you have pretrained weights or previous training data + weights: { + + // Serialized pretrained weights + modelWeights: "{\"nh\":100,\"ns\":11,\"na\":19,\"net\":{\"W1\":{\"n\":100,\"d\":11,\"w\":{\"0\":0.0013967478508977644,\"1\":-0.004024754355529853,\"2\":-0.005107000776689768,\"3\,....}}}}", + + iterations: previousIterationsCount, // Previous training iterations count + rewards: previousRewardValue, // Cumulative reward from previous sessions + elapsedTime: previousElapsedTime, // Elapsed time from previous training sessions + }, + + // OPTIONAL: Automatically store model weights to local storage and load them when the bot initializes. + // Defaults to true; weights won't be retrieved from local storage if they are given explicitly. + retrieveFromLocalStorage: true + }, + + // OPTIONAL: Enable training mode to visualize a graph of iteration/rewards. + trainingMode: true + }, +}); +``` + +You can find additional information about hyperparameters **[here](#dqn-bot-hyperparameters)**. + +### 3.
Try Your Bot in Playroom + +Once you have defined your bot, the game host sees a new `🤖 +` button to add bots to the game. Tap the button to add a bot to the room. + + + + +### 4. Integrating DQN Bot into the Game Loop + +After initialization, the `player.isBot()` method allows you to check if a player is a bot or a human player. You can use this information to integrate your bot's actions within the game loop, ensuring that it interacts with the game environment as intended. + +```js +let players = []; +let chosenAction; + +onPlayerJoin(async (player) => { + // Custom logic for handling player join events. + + // Append the player to the players array so it can be accessed within the game loop + players.push(player); +}); + +function gameLoop() { + // Custom Logic + + for (const player of players) { + // Custom Logic + + // Bot usage + if (player.isBot()) { + const currentState = [/* array of x numbers or booleans representing the game state */]; + chosenAction = player.bot.act(currentState); // Get the bot's chosen action based on the current state + } + + // Game Logic - Implement the chosen action in the game + + if (player.isBot()) { + // If you're training the bot, provide a reward based on the outcome of the chosen action + const rewardValue = computeReward(chosenAction); + player.bot.learn(rewardValue); + } + } +} +``` + +To train the bot, continuously provide rewards based on its actions to refine its decision-making. If you want to use the bot without additional training, simply use the DQN bot's decisions without supplying reward feedback. + +## Additional Information +### DQN Bot Hyperparameters + +1. **Gamma (Discount Factor):** A number between 0 and 1 (e.g., 0.75) that determines how much the agent cares about future rewards. A higher value (e.g., 0.9) makes the agent consider long-term rewards more. + +2. **Epsilon (Exploration Rate):** A value between 0 and 1 (e.g., 0.1) that controls the agent's exploration.
A lower value (e.g., 0.05) makes the agent explore less and exploit known strategies more. + +3. **Alpha (Learning Rate):** A small value like 0.01 that influences how quickly the agent adapts to new information. Smaller values (e.g., 0.001) make learning more gradual. + +4. **Experience Replay:** The replay memory stores past experiences up to a maximum size (e.g., 5000), and new experiences are added every few steps (e.g., every 25 steps). This helps the agent learn from its history. + +5. **Learning Steps per Iteration:** The number of learning steps the agent takes in one iteration (e.g., 10). A higher value (e.g., 20) can speed up learning but might be computationally expensive. + +6. **TD Error Clamp:** Caps the TD error to prevent large updates. TD errors measure the agent's prediction errors; the value (e.g., 1.0) determines the maximum allowed error. + +7. **Number of Hidden Units:** The number of neurons in the hidden layer of the agent's neural network (e.g., 100). More units may capture complex patterns but can require more computational resources. + +These values can be adjusted based on your specific game and training requirements. \ No newline at end of file diff --git a/public/.DS_Store b/public/.DS_Store index 64bd4be..de8c0ad 100644 Binary files a/public/.DS_Store and b/public/.DS_Store differ diff --git a/public/images/add-bot.png b/public/images/add-bot.png new file mode 100644 index 0000000..5b895b7 Binary files /dev/null and b/public/images/add-bot.png differ
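To make the roles of `gamma`, `epsilon`, `alpha`, and `tderror_clamp` concrete, here is a toy *tabular* Q-learning sketch. This is not the DQN bot's implementation (which uses a neural network and experience replay); it only illustrates, under simplified assumptions, what each hyperparameter controls during action selection and learning.

```javascript
// Hyperparameter values matching the defaults listed above.
const gamma = 0.75;   // discount factor for future rewards
const epsilon = 0.1;  // exploration rate for epsilon-greedy action selection
const alpha = 0.01;   // learning rate
const clamp = 1.0;    // maximum allowed TD error magnitude
const numActions = 2;

const Q = {}; // Q[state] = array of estimated action values

function qValues(state) {
  if (!Q[state]) Q[state] = new Array(numActions).fill(0);
  return Q[state];
}

// Epsilon-greedy: explore a random action with probability epsilon,
// otherwise exploit the best-known action.
function chooseAction(state) {
  if (Math.random() < epsilon) {
    return Math.floor(Math.random() * numActions);
  }
  const q = qValues(state);
  return q.indexOf(Math.max(...q));
}

// TD update: move Q(s, a) toward reward + gamma * max Q(s', .) at rate alpha,
// with the TD error clamped to [-clamp, clamp] to prevent large updates.
function learn(state, action, reward, nextState) {
  const target = reward + gamma * Math.max(...qValues(nextState));
  let tdError = target - qValues(state)[action];
  tdError = Math.max(-clamp, Math.min(clamp, tdError));
  qValues(state)[action] += alpha * tdError;
}
```

A higher `gamma` weights the `max Q(s', .)` term more heavily (long-term rewards), a higher `epsilon` makes `chooseAction` pick random actions more often, and `alpha` scales how far each `learn` call moves the estimate.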