Add failure element to failed test cases #9
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
The failure reason should be reported in a `message` attribute on the `<failure>` element, taken from the text after Unity's FAIL string.
Pull request overview
This PR adds support for including failure messages in the JUnit XML output for failed test cases. Some tools require the failure element with message attributes to properly display test failures, rather than relying solely on the summary statistics.
Changes:
- Added parsing and storage of failure reasons from Unity test output
- Added generation of `<failure>` elements with message attributes for failed tests in the JUnit XML output
- Added a test case to verify correct generation of failure messages in the XML output
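To make the mapping concrete, here is a minimal sketch of how a Unity result line could be turned into a JUnit testcase element. The helper name and the `file:line:name:RESULT[: reason]` line format are assumptions for illustration; this is not the PR's actual code.

```python
import xml.etree.ElementTree as ET

def unity_line_to_testcase(line):
    # Unity result lines look like: file:line:test_name:RESULT[: reason]
    parts = line.strip().split(":", 4)
    file_name, _lineno, name, result = parts[0], parts[1], parts[2], parts[3]
    reason = parts[4].strip() if len(parts) > 4 else None
    case = ET.Element("testcase", name=name, classname=file_name, time="0.0")
    if result == "FAIL":
        # The failure message is the text after the FAIL token
        ET.SubElement(case, "failure", message=reason or "No reason")
    elif result == "SKIP":
        ET.SubElement(case, "skipped")
    return case

xml = ET.tostring(
    unity_line_to_testcase("utest_Init.c:42:test_Foo:FAIL: Expected 1 Was 0"),
    encoding="unicode")
```

With the sample line above, `xml` contains a `<failure>` child whose `message` attribute carries the text after `FAIL`, while passing tests would get no child element at all.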
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| mlx/unity2junit/unity2junit.py | Added logic to capture failure reasons during parsing and generate failure/skipped XML elements in the output |
| tests/unity_parsing_test.py | Added new test to verify failure messages are correctly included in generated XML |
| tests/test_in/utest_Failed_Runner.xml | Added expected XML output fixture showing failure element with message attribute |
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Capture optional failure reasons from FAIL results and include them as

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
mlx/unity2junit/unity2junit.py (1)
88-92: Incorrect `failures` and `skipped` attributes for SKIP test cases.

When `case["result"]` is `"SKIP"`, the testsuite element will have `failures="1"` and `skipped="0"`, which is incorrect. A skipped test should have `failures="0"` and `skipped="1"`.

🐛 Proposed fix

```diff
 testsuite = ET.SubElement(
     testsuites,
     "testsuite",
     name=case["classname"],
     timestamp=timestamp,
     time="0.0",
     errors="0",
     tests="1",
-    failures="1" if case["result"] != "PASS" else "0",
-    skipped="0"
+    failures="1" if case["result"] == "FAIL" else "0",
+    skipped="1" if case["result"] == "SKIP" else "0"
 )
```
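As a quick sanity check of the corrected expressions, the attribute values for each result type can be tabulated in plain Python (a standalone sketch, independent of the PR's actual code):

```python
def suite_attrs(result):
    # Corrected mapping: only a FAIL counts as a failure, only a SKIP as skipped
    return {
        "failures": "1" if result == "FAIL" else "0",
        "skipped": "1" if result == "SKIP" else "0",
    }

assert suite_attrs("PASS") == {"failures": "0", "skipped": "0"}
assert suite_attrs("FAIL") == {"failures": "1", "skipped": "0"}
assert suite_attrs("SKIP") == {"failures": "0", "skipped": "1"}
```

Under the original `result != "PASS"` condition, the `"SKIP"` case would have produced `failures="1"`, which is what the fix above corrects.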
♻️ Duplicate comments (1)

tests/unity_parsing_test.py (1)

126-144: Remove debug print statement.

Line 143 contains a debug `print(generated_xml)` statement that should be removed before merging to avoid unnecessary test output.

♻️ Proposed fix

```diff
 converter.convert()
 tmp_output_file.seek(0)
 generated_xml = tmp_output_file.readlines()
-print(generated_xml)
 self.assertListEqual(generated_xml, expected_xml)
```
🧹 Nitpick comments (1)

tests/unity_parsing_test.py (1)

88-124: Consider adding an assertion for the `reason` field on failed test cases.

This test validates that the `result` is correctly set to `"FAIL"` but doesn't verify that the `reason` field is captured from the Unity output. Adding an assertion would improve coverage for the new failure reason extraction feature.

💡 Suggested enhancement

```diff
 expected_test_cases_Failed_Runner['result'] = ['PASS', 'PASS', 'FAIL', 'PASS', 'PASS']
+expected_test_cases_Failed_Runner['reason'] = [None, None, 'Function Blah_SecondFunction. Called more times than expected.', None, None]
 for tc in test_cases:
     # Find some smart way to check the test case class, name and line number
     self.assertEqual(tc['classname'], expected_test_cases_Failed_Runner['classname'].pop(0))
     self.assertEqual(tc['line'], expected_test_cases_Failed_Runner['line'].pop(0))
     self.assertEqual(tc['name'], expected_test_cases_Failed_Runner['name'].pop(0))
     self.assertEqual(tc['result'], expected_test_cases_Failed_Runner['result'].pop(0))
+    expected_reason = expected_test_cases_Failed_Runner['reason'].pop(0)
+    if expected_reason:
+        self.assertEqual(tc.get('reason'), expected_reason)
     self.assertEqual(tc['file'], 'unit_test/utest_Init.c')
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- mlx/unity2junit/unity2junit.py
- tests/test_in/utest_Failed_Runner.xml
- tests/unity_parsing_test.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unity_parsing_test.py (1)
mlx/unity2junit/unity2junit.py (2)
`Unity2Junit` (15-115), `convert` (112-115)
🔇 Additional comments (3)
tests/test_in/utest_Failed_Runner.xml (1)
1-21: LGTM!

The test fixture correctly represents the expected JUnit XML output with the new `<failure message="...">` element. The structure properly includes the failure message within the failing test case while keeping passing tests without any failure elements.

mlx/unity2junit/unity2junit.py (2)
67-68: LGTM!

The reason extraction logic correctly handles the optional reason field by stripping the leading colon and whitespace.
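A minimal illustration of that extraction step, mirroring the behavior described above. The helper name is hypothetical; this sketch is not a quotation of the PR's code.

```python
def extract_reason(rest_after_fail):
    # The text following "FAIL" starts with ":"; strip it and surrounding
    # whitespace, and treat an empty remainder as "no reason given"
    return rest_after_fail.lstrip(":").strip() or None

reason = extract_reason(": Called more times than expected.")
no_reason = extract_reason("")
```

Here `reason` becomes `"Called more times than expected."` and `no_reason` becomes `None`, which is where a fallback message such as "No reason" would apply.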
94-106: LGTM!

The testcase element creation and conditional addition of `<failure>` and `<skipped>` child elements is implemented correctly. The default "No reason" message provides a reasonable fallback.
Left from the development. Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Some tools require a failure item when a test case fails and do not rely only on the summary statistics at the top. This adds a `message` attribute to the `failure` XML element.