Add Instanced challenges #450
base: bl4ze/dev
Conversation
will fix (squash) commits soon
Pull request overview
This pull request adds support for instanced challenges: a feature that lets each user spawn their own isolated container instance of a challenge, rather than sharing a single deployed instance. This is particularly useful for challenges that require per-user isolation, such as pwn challenges or vulnerable web applications.
Changes:
- Adds Redis as a caching layer for managing dynamic port allocation and instance metadata
- Implements instance lifecycle management (spawn, extend, kill) with automatic expiration (see the sketch after this list)
- Introduces new database fields (Instanced, InstanceExpiration) to the Challenge model
- Adds comprehensive API endpoints for users and admins to manage instances
- Removes static port allocation from challenge configs in favor of dynamic allocation from port ranges
- Includes example challenges demonstrating both simple service and docker-compose based instancing
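As a sketch of the expiration bookkeeping mentioned above: the key layout and function names below are illustrative assumptions, not the PR's actual code, though the go-redis client matches the dependency the PR adds.

```go
package cache

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// SpawnInstanceRecord stores a user's instance id under a TTL so an
// expired instance can be found and reaped by a cleanup prober.
// The "instance:<user>:<challenge>" key layout is hypothetical.
func SpawnInstanceRecord(rdb *redis.Client, user, challenge, containerId string, ttl time.Duration) error {
	key := fmt.Sprintf("instance:%s:%s", user, challenge)
	return rdb.Set(context.Background(), key, containerId, ttl).Err()
}

// ExtendInstanceRecord pushes the expiration forward without touching
// the stored container id.
func ExtendInstanceRecord(rdb *redis.Client, user, challenge string, ttl time.Duration) error {
	key := fmt.Sprintf("instance:%s:%s", user, challenge)
	// Expire reports false when the key no longer exists; a real
	// implementation would surface that as "instance already expired".
	ok, err := rdb.Expire(context.Background(), key, ttl).Result()
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("no active instance of %s for %s", challenge, user)
	}
	return nil
}
```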
Reviewed changes
Copilot reviewed 40 out of 41 changed files in this pull request and generated 19 comments.
| File | Description |
|---|---|
| go.mod, go.sum | Adds redis client dependency and removes unused dependencies |
| core/config/config.go | Adds Redis config, instance config, and port range validation |
| core/config/challenge.go | Adds instanced metadata fields and simplifies port configuration |
| core/database/challenges.go | Adds Instanced and InstanceExpiration fields to Challenge model |
| core/cache/*.go | New package for Redis-based caching of instances and port allocations |
| core/manager/instance.go | New file implementing instance lifecycle management |
| core/manager/pipeline.go | Updates deployment to use dynamic port allocation |
| core/manager/health_check.go | Adds instance cleanup prober for expired instances |
| core/manager/challenge.go | Updates undeploy to kill active instances first |
| core/manager/utils.go | Updates port registration to use cache instead of static config |
| api/instance.go | New API handlers for instance management |
| api/router.go | Adds instance API routes |
| cmd/beast/*.go | Adds Redis initialization and cache management commands |
| utils/datatypes.go | Updates port mapping comment to reflect new usage |
| _examples/instanced-* | Example challenges demonstrating instancing feature |
Comments suppressed due to low confidence (1)
core/manager/pipeline.go:387
- The error check at line 382 is performed after registering ports (lines 375-380). If container creation failed (err != nil), the ports should not be registered, so the error check should be moved before port registration. Additionally, when container creation fails, the already-allocated ports should be freed.
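A minimal sketch of the reordering this comment suggests; `createChallengeContainer`, `freeHostPorts`, and the `PortMapping` struct are hypothetical stand-ins for the PR's actual names.

```go
// Hypothetical stand-in for the PR's port-mapping type.
type PortMapping struct{ HostPort, ContainerPort uint32 }

func deployInstance(host string, portMapping []PortMapping) error {
	// Fail fast: check the creation error before any port bookkeeping.
	containerId, err := createChallengeContainer() // hypothetical stand-in
	if err != nil {
		// The container never came up: return the host ports that
		// GetFreePort allocated instead of registering them.
		freeHostPorts(host, portMapping) // hypothetical stand-in
		return err
	}

	// Only a successfully created container gets its ports registered.
	for _, portMap := range portMapping {
		if rerr := cache.RegisterFreePort(host, containerId, portMap.HostPort); rerr != nil {
			return rerr
		}
	}
	return nil
}
```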
```toml
# DO NOT specify ports for instanced challenges!
# Instead, use default_port to indicate which container port to expose
```
Copilot AI, Feb 6, 2026
The comment "# DO NOT specify ports for instanced challenges!" conflicts with the validation logic. According to core/config/challenge.go line 308, either ports or default_port must be specified. The comment should be updated to clarify that you should use default_port instead of ports for instanced challenges, not that you should avoid specifying any port information.
Suggested change:
```diff
-# DO NOT specify ports for instanced challenges!
-# Instead, use default_port to indicate which container port to expose
+# Do not configure a ports list for instanced challenges.
+# Instead, you must use default_port to indicate which container port to expose.
```
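For context, a sketch of the either/or validation the comment cites; the struct fields here are assumptions inferred from the config keys shown in this PR, not the real core/config/challenge.go types.

```go
package config

import "fmt"

// challengePorts holds only the fields relevant to the check; the real
// challenge config struct has many more.
type challengePorts struct {
	Name        string
	Ports       []uint32
	DefaultPort uint32
}

// validatePorts mirrors the rule the comment refers to: a challenge
// must declare either a ports list or a default_port.
func validatePorts(c challengePorts) error {
	if len(c.Ports) == 0 && c.DefaultPort == 0 {
		return fmt.Errorf("challenge %s: either ports or default_port must be specified", c.Name)
	}
	return nil
}
```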
```go
_, err := Cache.Ping(context.Background()).Result()
if err != nil {
	return fmt.Errorf("failed to connected to redis")
```
Copilot AI, Feb 6, 2026
The error message has a typo: "failed to connected to redis" should be "failed to connect to redis".
Suggested change:
```diff
-	return fmt.Errorf("failed to connected to redis")
+	return fmt.Errorf("failed to connect to redis")
```
```toml
tags = ["web", "sql", "instanced"]
maxAttemptLimit = 100
instanced = true
instance_expiration = 12
```
Copilot AI, Feb 6, 2026
The instance_expiration value is set to 12 seconds, which is extremely short for a challenge instance. This seems like it's intended for testing purposes. For actual use, this should be a more reasonable duration (e.g., 600 seconds as mentioned in the README). Consider updating this to match the recommended value or adding a comment indicating this is a test value.
Suggested change:
```diff
-instance_expiration = 12
+instance_expiration = 600
```
```go
func RegisterFreePort(host string, containerId string, port uint32) error {
	CacheMutex.Lock()
	defer CacheMutex.Unlock()

	ctx := context.Background()
	instanceKey := utils.ContainerToKey(host, containerId)

	result, err := Cache.SAdd(ctx, instanceKey, port).Result()
	if err != nil {
		return err
	}

	if result == 1 {
		return nil
	}

	return fmt.Errorf("port: %v on host: %s is already registered to instance: %s", port, host, containerId)
}

func GetContainerPorts(host string, containerId string) ([]uint32, error) {
	CacheMutex.Lock()
	defer CacheMutex.Unlock()

	ctx := context.Background()
	instanceKey := utils.ContainerToKey(host, containerId)

	result, err := Cache.SMembers(ctx, instanceKey).Result()
	if err != nil {
		return nil, err
	}

	ports := make([]uint32, len(result))
	for i, s := range result {
		port, err := strconv.ParseUint(s, 10, 32)
		if err != nil {
			return nil, err
		}

		ports[i] = uint32(port)
	}

	return ports, nil
}

func FreeContainerPorts(host string, containerId string) error {
	CacheMutex.Lock()
	defer CacheMutex.Unlock()

	ctx := context.Background()
	hostKey := utils.HostToKey(host)
	instanceKey := utils.ContainerToKey(host, containerId)

	result, err := Cache.SMembers(ctx, instanceKey).Result()
	if err != nil {
		return err
	}

	ports := make([]uint32, len(result))
	for i, portString := range result {
		port, err := strconv.ParseUint(portString, 10, 32)
		if err != nil {
			return err
		}

		ports[i] = uint32(port)
		Cache.SRem(ctx, instanceKey, port)
	}

	for _, port := range ports {
		_, err = Cache.SRem(ctx, hostKey, port).Result()
		if err != nil {
			return err
		}
	}

	return nil
}
```
Copilot AI, Feb 6, 2026
There's a design flaw in the port tracking logic. RegisterFreePort is called with container ports (e.g., 80, 443) but GetFreePort allocates and tracks host ports (e.g., 30001, 30002). When FreeContainerPorts tries to free ports, it reads container ports from the container set and then tries to remove those numbers from the host set, but the host set contains different port numbers (the allocated host ports). This will cause ports to not be properly freed, leading to port exhaustion over time. RegisterFreePort should track the host port that was allocated, not the container port.
```go
	}

	for _, portMap := range portMapping {
		err = cache.RegisterFreePort(host, containerId, portMap.ContainerPort)
```
Copilot AI, Feb 6, 2026
RegisterFreePort should be called with the host port (portMap.HostPort), not the container port (portMap.ContainerPort). The host port is what was allocated by GetFreePort and needs to be tracked. Using the container port will cause the port freeing logic to fail since it won't match the port numbers stored in the host set.
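A minimal sketch of the corrected loop, assuming the PR's port-mapping struct exposes both `HostPort` and `ContainerPort`:

```go
// Register the host port handed out by GetFreePort, so that
// FreeContainerPorts later removes matching numbers from both the
// container set and the host set.
for _, portMap := range portMapping {
	err = cache.RegisterFreePort(host, containerId, portMap.HostPort)
	if err != nil {
		return err
	}
}
```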
```go
// Postgresql database for beast. The Db variable is the connection variable for the
// database, which is not closed after creating a connection here and can
```
Copilot AI, Feb 6, 2026
The comment says "Postgresql database" but this is for cache/Redis initialization. This comment was likely copied from the database package and should be updated to reflect that this is for Redis/cache initialization.
Suggested change:
```diff
-// Postgresql database for beast. The Db variable is the connection variable for the
-// database, which is not closed after creating a connection here and can
+// Redis cache for beast. The Cache variable is the client connection for the
+// cache, which is not closed after creating a connection here and can
```
```go
if Cache == nil {
	cacheError = ConnectRedis()
	if cacheError != nil {
		log.Error("Error while initializing the database.", cacheError)
```
Copilot AI, Feb 6, 2026
The error message says "Error while initializing the database" but this is actually initializing the cache. The message should say "Error while initializing the cache" to be accurate.
Suggested change:
```diff
-log.Error("Error while initializing the database.", cacheError)
+log.Error("Error while initializing the cache.", cacheError)
```
cmd/beast/init.go (outdated)
```go
_, err := cache.Ping(context.Background()).Result()
if err != nil {
	return fmt.Errorf("failed to connected to redis")
```
Copilot AI, Feb 6, 2026
The error message has a typo: "failed to connected to redis" should be "failed to connect to redis".
Suggested change:
```diff
-	return fmt.Errorf("failed to connected to redis")
+	return fmt.Errorf("failed to connect to redis")
```
cmd/beast/init.go (outdated)
```go
	})
} else {
	cache = redis.NewClient(&redis.Options{
		Addr: fmt.Sprint("%s:%s", redisConfig.Host, redisConfig.Port),
```
Copilot AI, Feb 6, 2026
fmt.Sprint should be fmt.Sprintf to format the string with the arguments. Currently this will concatenate the format string with the arguments without formatting.
Suggested change:
```diff
-Addr: fmt.Sprint("%s:%s", redisConfig.Host, redisConfig.Port),
+Addr: fmt.Sprintf("%s:%s", redisConfig.Host, redisConfig.Port),
```
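To illustrate the difference, a standalone snippet (not from the PR):

```go
package main

import "fmt"

func main() {
	// fmt.Sprint concatenates its operands (no spaces are inserted
	// between adjacent string operands), leaving the verbs unexpanded:
	fmt.Println(fmt.Sprint("%s:%s", "localhost", "6379")) // %s:%slocalhost6379
	// fmt.Sprintf treats the first argument as a format string:
	fmt.Println(fmt.Sprintf("%s:%s", "localhost", "6379")) // localhost:6379
}
```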
```go
	}

	/* both ports are inclusive */
	portRange := lastPort - firstPort - 1
```
Copilot AI, Feb 6, 2026
The port range calculation is incorrect. When both ports are inclusive, the formula should be portRange := lastPort - firstPort + 1, not portRange := lastPort - firstPort - 1. For example, if firstPort is 30000 and lastPort is 40000, the range should be 10001 ports (inclusive), not 9999.
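A minimal check of the corrected arithmetic:

```go
package main

import "fmt"

func main() {
	// Off-by-two in the original: for an inclusive range the count is
	// last - first + 1, not last - first - 1.
	firstPort, lastPort := uint32(30000), uint32(40000)
	fmt.Println(lastPort - firstPort - 1) // 9999 (wrong)
	fmt.Println(lastPort - firstPort + 1) // 10001 (correct)
}
```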
Force-pushed a77897f to 9786785, then 9786785 to 42be447.