
Conversation

@gqvz (Member) commented Feb 5, 2026

No description provided.


gqvz commented Feb 5, 2026

will fix (squash) commits soon


Copilot AI left a comment


Pull request overview

This pull request adds support for instanced challenges: a feature that lets each user spawn their own isolated container instance of a challenge, rather than sharing a single deployed instance. This is particularly useful for challenges that require per-user isolation, such as pwn challenges or vulnerable web applications.

Changes:

  • Adds Redis as a caching layer for managing dynamic port allocation and instance metadata
  • Implements instance lifecycle management (spawn, extend, kill) with automatic expiration
  • Introduces new database fields (Instanced, InstanceExpiration) to the Challenge model
  • Adds comprehensive API endpoints for users and admins to manage instances
  • Removes static port allocation from challenge configs in favor of dynamic allocation from port ranges
  • Includes example challenges demonstrating both simple service and docker-compose based instancing

Reviewed changes

Copilot reviewed 40 out of 41 changed files in this pull request and generated 19 comments.

Summary per file:

  • go.mod, go.sum: Adds the redis client dependency and removes unused dependencies
  • core/config/config.go: Adds Redis config, instance config, and port range validation
  • core/config/challenge.go: Adds instanced metadata fields and simplifies port configuration
  • core/database/challenges.go: Adds Instanced and InstanceExpiration fields to the Challenge model
  • core/cache/*.go: New package for Redis-based caching of instances and port allocations
  • core/manager/instance.go: New file implementing instance lifecycle management
  • core/manager/pipeline.go: Updates deployment to use dynamic port allocation
  • core/manager/health_check.go: Adds an instance cleanup prober for expired instances
  • core/manager/challenge.go: Updates undeploy to kill active instances first
  • core/manager/utils.go: Updates port registration to use the cache instead of static config
  • api/instance.go: New API handlers for instance management
  • api/router.go: Adds instance API routes
  • cmd/beast/*.go: Adds Redis initialization and cache management commands
  • utils/datatypes.go: Updates the port mapping comment to reflect new usage
  • _examples/instanced-*: Example challenges demonstrating the instancing feature
Comments suppressed due to low confidence (1)

core/manager/pipeline.go:387

  • The error check at line 382 is performed after registering ports (lines 375-380). If the container creation failed (err != nil), the ports should not be registered. The error check should be moved before port registration to avoid registering ports for a failed container creation. Additionally, if the container creation fails, the allocated ports should be freed.
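A minimal runnable sketch of the reordering this comment suggests. The names `deploy`, `createContainer`, `registerPorts`, and `freePorts` are hypothetical stand-ins, not the actual pipeline.go functions; the point is only the ordering: check the creation error before registering ports, and free the allocated ports on failure.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the real container/cache calls in pipeline.go.
var registered []uint32 // ports currently registered in the cache
var allocated []uint32  // ports allocated but not yet registered

func createContainer(fail bool) (string, error) {
	if fail {
		return "", errors.New("container creation failed")
	}
	return "abc123", nil
}

func registerPorts(ports []uint32) { registered = append(registered, ports...) }
func freePorts(ports []uint32)     { allocated = nil } // return ports to the pool

// deploy checks the creation error *before* registering ports, and frees
// the allocated host ports when creation fails, so nothing leaks.
func deploy(ports []uint32, fail bool) error {
	allocated = ports
	id, err := createContainer(fail)
	if err != nil {
		freePorts(ports)
		return err
	}
	registerPorts(ports)
	fmt.Println("deployed", id)
	return nil
}

func main() {
	_ = deploy([]uint32{30001}, true)
	fmt.Println(len(registered), len(allocated)) // 0 0: a failed deploy leaks nothing
}
```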


Comment on lines +21 to +22
# DO NOT specify ports for instanced challenges!
# Instead, use default_port to indicate which container port to expose

Copilot AI Feb 6, 2026


The comment "# DO NOT specify ports for instanced challenges!" conflicts with the validation logic. According to core/config/challenge.go line 308, either ports or default_port must be specified. The comment should be updated to clarify that you should use default_port instead of ports for instanced challenges, not that you should avoid specifying any port information.

Suggested change
# DO NOT specify ports for instanced challenges!
# Instead, use default_port to indicate which container port to expose
# Do not configure a ports list for instanced challenges.
# Instead, you must use default_port to indicate which container port to expose.


_, err := Cache.Ping(context.Background()).Result()
if err != nil {
return fmt.Errorf("failed to connected to redis")

Copilot AI Feb 6, 2026


The error message has a typo: "failed to connected to redis" should be "failed to connect to redis"

Suggested change
return fmt.Errorf("failed to connected to redis")
return fmt.Errorf("failed to connect to redis")

tags = ["web", "sql", "instanced"]
maxAttemptLimit = 100
instanced = true
instance_expiration = 12

Copilot AI Feb 6, 2026


The instance_expiration value is set to 12 seconds, which is extremely short for a challenge instance. This seems like it's intended for testing purposes. For actual use, this should be a more reasonable duration (e.g., 600 seconds as mentioned in the README). Consider updating this to match the recommended value or adding a comment indicating this is a test value.

Suggested change
instance_expiration = 12
instance_expiration = 600

Comment on lines +32 to +108
func RegisterFreePort(host string, containerId string, port uint32) error {
CacheMutex.Lock()
defer CacheMutex.Unlock()

ctx := context.Background()
instanceKey := utils.ContainerToKey(host, containerId)

result, err := Cache.SAdd(ctx, instanceKey, port).Result()
if err != nil {
return err
}

if result == 1 {
return nil
}

return fmt.Errorf("port: %v on host: %s is already registered to instance: %s", port, host, containerId)
}

func GetContainerPorts(host string, containerId string) ([]uint32, error) {
CacheMutex.Lock()
defer CacheMutex.Unlock()

ctx := context.Background()
instanceKey := utils.ContainerToKey(host, containerId)

result, err := Cache.SMembers(ctx, instanceKey).Result()
if err != nil {
return nil, err
}

ports := make([]uint32, len(result))
for i, s := range result {
port, err := strconv.ParseUint(s, 10, 32)
if err != nil {
return nil, err
}

ports[i] = uint32(port)
}

return ports, nil
}

func FreeContainerPorts(host string, containerId string) error {
CacheMutex.Lock()
defer CacheMutex.Unlock()

ctx := context.Background()
hostKey := utils.HostToKey(host)
instanceKey := utils.ContainerToKey(host, containerId)

result, err := Cache.SMembers(ctx, instanceKey).Result()
if err != nil {
return err
}

ports := make([]uint32, len(result))
for i, portString := range result {
port, err := strconv.ParseUint(portString, 10, 32)
if err != nil {
return err
}

ports[i] = uint32(port)
Cache.SRem(ctx, instanceKey, port)
}

for _, port := range ports {
_, err = Cache.SRem(ctx, hostKey, port).Result()
if err != nil {
return err
}
}

return nil
}

Copilot AI Feb 6, 2026


There's a design flaw in the port tracking logic. RegisterFreePort is called with container ports (e.g., 80, 443) but GetFreePort allocates and tracks host ports (e.g., 30001, 30002). When FreeContainerPorts tries to free ports, it reads container ports from the container set and then tries to remove those numbers from the host set, but the host set contains different port numbers (the allocated host ports). This will cause ports to not be properly freed, leading to port exhaustion over time. RegisterFreePort should track the host port that was allocated, not the container port.

}

for _, portMap := range portMapping {
err = cache.RegisterFreePort(host, containerId, portMap.ContainerPort)

Copilot AI Feb 6, 2026


RegisterFreePort should be called with the host port (portMap.HostPort), not the container port (portMap.ContainerPort). The host port is what was allocated by GetFreePort and needs to be tracked. Using the container port will cause the port freeing logic to fail since it won't match the port numbers stored in the host set.

Comment on lines +72 to +73
// Postgresql database for beast. The Db variable is the connection variable for the
// database, which is not closed after creating a connection here and can

Copilot AI Feb 6, 2026


The comment says "Postgresql database" but this is for cache/Redis initialization. This comment was likely copied from the database package and should be updated to reflect that this is for Redis/cache initialization.

Suggested change
// Postgresql database for beast. The Db variable is the connection variable for the
// database, which is not closed after creating a connection here and can
// Redis cache for beast. The Cache variable is the client connection for the
// cache, which is not closed after creating a connection here and can

if Cache == nil {
cacheError = ConnectRedis()
if cacheError != nil {
log.Error("Error while initializing the database.", cacheError)

Copilot AI Feb 6, 2026


The error message says "Error while initializing the database" but this is actually initializing the cache. The message should say "Error while initializing the cache" to be accurate.

Suggested change
log.Error("Error while initializing the database.", cacheError)
log.Error("Error while initializing the cache.", cacheError)


_, err := cache.Ping(context.Background()).Result()
if err != nil {
return fmt.Errorf("failed to connected to redis")

Copilot AI Feb 6, 2026


The error message has a typo: "failed to connected to redis" should be "failed to connect to redis"

Suggested change
return fmt.Errorf("failed to connected to redis")
return fmt.Errorf("failed to connect to redis")

})
} else {
cache = redis.NewClient(&redis.Options{
Addr: fmt.Sprint("%s:%s", redisConfig.Host, redisConfig.Port),

Copilot AI Feb 6, 2026


fmt.Sprint should be fmt.Sprintf to format the string with the arguments. Currently this will concatenate the format string with the arguments without formatting.

Suggested change
Addr: fmt.Sprint("%s:%s", redisConfig.Host, redisConfig.Port),
Addr: fmt.Sprintf("%s:%s", redisConfig.Host, redisConfig.Port),

}

/* both ports are inclusive */
portRange := lastPort - firstPort - 1

Copilot AI Feb 6, 2026


The port range calculation is incorrect. When both ports are inclusive, the formula should be portRange := lastPort - firstPort + 1, not portRange := lastPort - firstPort - 1. For example, if firstPort is 30000 and lastPort is 40000, the range should be 10001 ports (inclusive), not 9999.
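A one-function sketch of the inclusive-range arithmetic; `portRangeSize` is an illustrative name, not the PR's code:

```go
package main

import "fmt"

// portRangeSize counts ports in an inclusive range: last - first + 1.
func portRangeSize(first, last uint32) uint32 {
	return last - first + 1
}

func main() {
	fmt.Println(portRangeSize(30000, 40000)) // 10001
	fmt.Println(portRangeSize(30000, 30000)) // 1: a single-port range
}
```

The original `lastPort - firstPort - 1` undercounts by two, and would even yield a huge unsigned wraparound for a single-port range.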

@kunrex kunrex force-pushed the redis-instance branch 2 times, most recently from a77897f to 9786785 Compare February 9, 2026 14:34