
Conversation

@StarrryNight StarrryNight commented Nov 29, 2025

Description

Testing Done

Resolved Issues

Length Justification and Key Files to Review

Review Checklist

It is the reviewer's responsibility to also make sure every item here has been covered.

  • Function & Class comments: All function definitions (usually in the .h file) should have a javadoc style comment at the start of them. For examples, see the functions defined in thunderbots/software/geom. Similarly, all classes should have an associated Javadoc comment explaining the purpose of the class.
  • Remove all commented out code
  • Remove extra print statements: for example, those just used for testing
  • Resolve all TODOs: All TODO (or similar) statements should either be completed or associated with a GitHub issue

        S_pred = 0.7f * direction_component + 0.3f * speed_component;
    }

    return {0.4f * S_geo, 0.25f * S_pos, 0.1f * S_angle, 0.1f * S_vis, 0.15f * S_pred};
Contributor


In general, there are a lot of magic numbers. Unless these are universal constants, we should parameterize them as constants, or perhaps as a set within the EnemyThreat class, so that we can more easily modify them. Perhaps we should make it possible to initialize multiple instances of an EnemyThreat object with different weightings on the enemy threat components?

I'm not completely sure what this is for, but it may help for assigning robots to different tasks. E.g. a goalie may prioritize threats differently from a receiver.

Comment on lines +212 to +218
for (EnemyThreat a : threats)
{
    std::vector<float> temp = getThreatScore(a, field);
    LOG(INFO) << std::endl
              << a.robot.id() << " : " << temp[0] << " " << temp[1] << " "
              << temp[2] << " " << temp[3] << " " << temp[4];
}
Member

@Apeiros-46B Apeiros-46B Jan 10, 2026


Is this still necessary? Also, from a log reader's perspective, it is not very clear what is being logged.

Comment on lines +227 to +260
    float S_geo = expf(-1.0 * (to_goal.length()) / 4.5);

    // Distance from goal
    float num_pass = enemy.num_passes_to_get_possession;
    float S_pos = expf(-num_pass);

    // Angle to goal
    float S_angle = enemy.goal_angle.toRadians();
    // Visibility
    float S_vis = 1.0f;
    if (enemy.best_shot_angle.has_value() && enemy.goal_angle.toRadians() > 0.0f)
    {
        float block_frac =
            enemy.best_shot_angle->toRadians() / enemy.goal_angle.toRadians();
        S_vis = 1.0f - block_frac;
    }

    // predictive motion
    float S_pred = 0.0f;
    Vector velocity = enemy.robot.velocity();
    float velocity_length = velocity.length();

    // Only calculate if robot is moving and goal direction is valid
    if (velocity_length > 0.0f && to_goal.length() > 0.0f)
    {
        float vel_dot = velocity.normalize().dot(to_goal.normalize());
        float direction_component =
            std::max(0.0f, vel_dot);  // Only positive (toward goal)
        float max_speed = enemy.robot.robotConstants().robot_max_speed_m_per_s;
        float speed_component = std::min(1.0f, velocity_length / max_speed);
        S_pred = 0.7f * direction_component + 0.3f * speed_component;
    }

    return {0.4f * S_geo, 0.25f * S_pos, 0.1f * S_angle, 0.1f * S_vis, 0.15f * S_pred};
Member

@Apeiros-46B Apeiros-46B Jan 10, 2026


A lot of magic numbers in this section; they should be parameterized, or at least explained (probably still a good idea to move them into constants).

@Apeiros-46B Apeiros-46B self-requested a review January 10, 2026 01:58
@StarrryNight
Contributor Author

I want some opinions because I am getting mixed feelings from this ticket. This remake is supposed to make the enemy evaluation more nuanced by turning it into a cost function, from its original form of an if-statement tree.

After making the cost function, I compared the performance of the two evaluation methods. Both methods passed the original and new test cases. I further evaluated them by constantly logging enemy threat rankings, randomly stopping matches, and checking whether the logs match the current scenario in the simulation. This might sound like a subjective testing method, but in practice it isn't, because the ideal ranking is usually obvious.

The result of this testing is that the cost function evaluation method SLIGHTLY wins over if-trees, especially for more complex scenarios like the opposition overloading one side while they have a runner on the opposite side (so the runner is free of marking, fast moving, and thus the most threatening). HOWEVER, the cost function method is also a bit more inconsistent, and I don't really think changing it into a cost function will bring a big improvement to defensive gameplay.

In my opinion the old system is more robust, but the new one is more flexible and better at handling complex scenarios. Is there a way to combine the advantages of both? Or any other thoughts?

@GrayHoang
Contributor

This might be something worth testing with real robots, just to see if the added control can help us tune the gameplay further.


Contributor

@annieisawesome2 annieisawesome2 left a comment


In my opinion, it would make sense to keep the if-tree implementation if it proves to be more consistent. With real robots, the new approach may end up being even more finicky. I previously reviewed the enemy threat logic in detail for an earlier ticket and felt that the original implementation handled the problem effectively. While there is certainly room for improvement, I agree with Grayson that it would be best to evaluate both versions on real robots before deciding which one to keep.

Contributor

@a-png129 a-png129 left a comment


I'm generally in favour of using the cost function because I think the flexibility creates more room for refinement and long-term improvement. It would be helpful to understand in which specific scenarios this evaluation is less consistent or performs worse than the if-statement tree. If we can identify when and why that happens, we can reinforce the cost evaluation to be more robust. For example, is there flickering for near-ties now that the evaluation is more nuanced? As Grayson and Annie said, this would just require a lot of testing.

On the other hand, what I remember hearing is that our defense is already really good and we want to focus on improving offense. So if you don't think this would be a big improvement to defensive play, maybe it's not worth the effort of testing and refining right now?
