Enemy threats#3373 #3534
Conversation
```cpp
    S_pred = 0.7f * direction_component + 0.3f * speed_component;
}

return {0.4f * S_geo, 0.25f * S_pos, 0.1f * S_angle, 0.1f * S_vis, 0.15f * S_pred};
```
In general, there are a lot of magic numbers here. Unless these are universal constants, we should parameterize them, either as named constants or as a set within the EnemyThreat class, so that we can modify them more easily. Perhaps we could even allow initializing multiple EnemyThreat instances with different weightings on the threat components?
I'm not completely sure what this would be for, but it may help when assigning robots to different tasks, e.g. a goalie may prioritize threats differently from a receiver.
```cpp
for (EnemyThreat a : threats)
{
    std::vector<float> temp = getThreatScore(a, field);
    LOG(INFO) << std::endl
              << a.robot.id() << " : " << temp[0] << " " << temp[1] << " " << temp[2]
              << " " << temp[3] << " " << temp[4];
}
```
Is this still necessary? Also, from the log reader's perspective, it is not very clear what is being logged.
```cpp
// Distance from goal
float S_geo = expf(-1.0 * (to_goal.length()) / 4.5);

// Number of passes to gain possession
float num_pass = enemy.num_passes_to_get_possession;
float S_pos = expf(-num_pass);

// Angle to goal
float S_angle = enemy.goal_angle.toRadians();

// Visibility
float S_vis = 1.0f;
if (enemy.best_shot_angle.has_value() && enemy.goal_angle.toRadians() > 0.0f)
{
    float block_frac =
        enemy.best_shot_angle->toRadians() / enemy.goal_angle.toRadians();
    S_vis = 1.0f - block_frac;
}

// Predictive motion
float S_pred = 0.0f;
Vector velocity = enemy.robot.velocity();
float velocity_length = velocity.length();

// Only calculate if robot is moving and goal direction is valid
if (velocity_length > 0.0f && to_goal.length() > 0.0f)
{
    float vel_dot = velocity.normalize().dot(to_goal.normalize());
    float direction_component =
        std::max(0.0f, vel_dot);  // Only positive (toward goal)
    float max_speed = enemy.robot.robotConstants().robot_max_speed_m_per_s;
    float speed_component = std::min(1.0f, velocity_length / max_speed);
    S_pred = 0.7f * direction_component + 0.3f * speed_component;
}

return {0.4f * S_geo, 0.25f * S_pos, 0.1f * S_angle, 0.1f * S_vis, 0.15f * S_pred};
```
A lot of magic numbers in this section, should be parameterized or at least explained (probably still a good idea to move them into constants)
I want some opinions because I am getting mixed feelings about this ticket. This remake is supposed to make the enemy evaluation more nuanced by turning it into a cost function, from its original form as an if-statement tree. After implementing the cost function, I compared the performance of the two evaluation methods. Both passed the original and new test cases. I further evaluated them by constantly logging enemy threat rankings, randomly stopping matches, and checking whether the logs matched the current scenario in the simulation. This might sound too subjective for a testing method, but in practice it isn't, because the ideal ranking is usually obvious. The result of this testing is that the cost function evaluation method SLIGHTLY wins over the if-tree, especially in more complex scenarios like the opposition overloading one side while they have a runner on the opposite side (so the runner is free of marking, fast moving, and thus the most threatening). HOWEVER, the cost function method is also a bit more inconsistent. In my opinion the old system is more robust, but the new one is more flexible and better at handling complex scenarios. Is there a way to combine the advantages of both? Or any other thoughts?
This might be something worth testing with real robots, just to see if the added control can help us tune the gameplay further.
annieisawesome2
left a comment
In my opinion, it would make sense to keep the if-tree implementation if it proves to be more consistent. With real robots, the new approach may end up being even more finicky. I previously reviewed the enemy threat logic in detail for an earlier ticket and felt that the original implementation handled the problem effectively. While there is certainly room for improvement, I agree with Grayson that it would be best to evaluate both versions on real robots before deciding which one to keep.
a-png129
left a comment
I'm generally in favour of using the cost function because I think the flexibility creates more room for refinement and long-term improvement. It would be helpful to understand in which specific scenarios this evaluation is less consistent or performs worse than the if-statement tree. If we can identify when and why that happens, then we can reinforce the cost evaluation to be more robust. For example, is there flickering for near-ties now that the evaluation is more nuanced? This would just require a lot of testing, like Grayson and Annie said.
On the other hand, what I remember hearing is that our defense is really good already, and we want to focus on improving offense. So if you don't think this would be a big improvement to defensive play, maybe right now it's not worth the effort of testing and refining?
Description
Testing Done
Resolved Issues
Length Justification and Key Files to Review
Review Checklist
It is the reviewer's responsibility to also make sure every item here has been covered.
- Function definitions (in the .h file) should have a javadoc-style comment at the start of them. For examples, see the functions defined in thunderbots/software/geom. Similarly, all classes should have an associated Javadoc comment explaining the purpose of the class.
- TODO (or similar) statements should either be completed or associated with a github issue.