Conversation

@Letme (Member) commented Jan 15, 2026

Some tools require a failure item when a test case fails and do not rely solely on the statistics at the top. This adds a message attribute to the failure XML element.

Summary by CodeRabbit

  • New Features

    • Failure reasons are now included in generated JUnit XML failure messages; skipped tests emit dedicated elements.
  • Tests

    • Added end-to-end test and a sample test-results fixture to validate failure-reason handling and skipped-test output.


Copilot AI (Contributor) left a comment

Pull request overview

This PR adds support for including failure messages in the JUnit XML output for failed test cases. Some tools require the failure element with message attributes to properly display test failures, rather than relying solely on the summary statistics.

Changes:

  • Added parsing and storage of failure reasons from Unity test output
  • Added generation of <failure> elements with message attributes for failed tests in the JUnit XML output (a sketch follows this list)
  • Added test case to verify correct generation of failure messages in XML output
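
As a rough illustration of the output-generation change described above (a sketch only, not the actual unity2junit implementation; the helper name build_testcase and the dictionary keys are assumptions), the <failure> and <skipped> children could be attached with xml.etree.ElementTree like this:

    import xml.etree.ElementTree as ET

    def build_testcase(testsuite, case):
        """Sketch: emit a <testcase> with optional <failure>/<skipped> children."""
        testcase_element = ET.SubElement(
            testsuite, "testcase",
            classname=case["classname"],
            name=case["name"],
            time="0.0",
        )
        if case["result"] == "FAIL":
            # Fall back to a placeholder when no reason was captured from the log.
            ET.SubElement(testcase_element, "failure",
                          message=case.get("reason") or "No reason")
        elif case["result"] == "SKIP":
            ET.SubElement(testcase_element, "skipped")
        return testcase_element

Serialized, a failing case then nests <failure message="..."/> inside its <testcase>, which is the shape the new fixture asserts.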

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

  • mlx/unity2junit/unity2junit.py: added logic to capture failure reasons during parsing and to generate failure/skipped XML elements in the output
  • tests/unity_parsing_test.py: added a new test to verify that failure messages are correctly included in the generated XML
  • tests/test_in/utest_Failed_Runner.xml: added an expected XML output fixture showing the failure element with a message attribute


coderabbitai bot commented Jan 15, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Capture optional failure reasons from FAIL results and include them as <failure> elements with a message attribute in generated JUnit XML; emit <skipped> elements for SKIP results. Add a test and fixture validating exact XML output for a failed runner log at a fixed timestamp.

Changes

  • Core Logic (mlx/unity2junit/unity2junit.py): parse and store an optional failure reason for tests with result == "FAIL", stripping the leading colon and surrounding whitespace. During XML generation, assign the test case element to testcase_element and emit <failure message="..."> for FAIL (defaulting to "No reason" if absent) and <skipped> for SKIP. A parsing sketch follows this list.
  • Tests / Fixtures (tests/test_in/utest_Failed_Runner.xml, tests/unity_parsing_test.py): add the fixture tests/test_in/utest_Failed_Runner.xml and test_failed_runner_output(), which mocks datetime to a fixed timestamp, runs the converter on utest_Failed_Runner.log, and asserts line-for-line equality with the expected XML output (including failure-reason and skipped handling).
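
A minimal sketch of that parsing step, assuming Unity's usual file:line:name:RESULT[: message] output format; the regex, function name, and key names here are illustrative, not the converter's actual code:

    import re

    RESULT_LINE = re.compile(
        r'^(?P<file>[^:]+):(?P<line>\d+):(?P<name>[^:]+):(?P<result>[A-Z]+)(?P<reason>:.*)?$'
    )

    def parse_result_line(line):
        """Sketch: split a Unity result line and capture an optional failure reason."""
        match = RESULT_LINE.match(line.strip())
        if match is None:
            return None
        case = match.groupdict()
        if case["result"] == "FAIL" and case["reason"]:
            # Strip the leading colon and surrounding whitespace from the reason text.
            case["reason"] = case["reason"].lstrip(':').strip()
        return case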

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇 I hop through logs by lantern light,

I tease the reasons out of night.
Failures whisper, skips hum near,
I tuck their tales in XML cheer.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'Add failure element to failed test cases' directly and clearly describes the main change: adding a <failure> element to failed test cases in the JUnit XML output.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.



🧹 Recent nitpick comments
tests/unity_parsing_test.py (1)

119-121: Consider also verifying absence of reason for passing tests.

The current logic only validates the reason when one is expected. For more complete coverage, consider also asserting that passing tests don't have an unexpected reason set.

♻️ Suggested improvement
                 expected_reason = expected_test_cases_Failed_Runner['reason'].pop(0)
                 if expected_reason:
                     self.assertEqual(tc.get('reason'), expected_reason)
+                else:
+                    self.assertIsNone(tc.get('reason'))

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 500a7c4 and 5cc205c.

📒 Files selected for processing (1)
  • tests/unity_parsing_test.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unity_parsing_test.py (1)
mlx/unity2junit/unity2junit.py (2)
  • Unity2Junit (15-115)
  • convert (112-115)
🔇 Additional comments (2)
tests/unity_parsing_test.py (2)

109-111: LGTM!

The expected reason list correctly pairs with the test case results, providing None for passing tests and the expected failure message for the failing test.


132-149: LGTM!

The test correctly follows the established pattern from test_init_runner_output, using datetime patching to ensure reproducible timestamps and comparing the full XML output line-by-line. This provides good end-to-end verification of the failure message inclusion.

Minor note: The expected_xml = '' initialization on line 137 is unnecessary since it's immediately overwritten, but this is purely cosmetic.
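
For reference, the datetime-patching pattern mentioned above might look like the sketch below; the patch target assumes the converter module imports datetime directly, and the fixed timestamp and constructor call are placeholders:

    from datetime import datetime
    from unittest import mock

    @mock.patch('mlx.unity2junit.unity2junit.datetime')
    def run_with_fixed_timestamp(mock_datetime):
        """Sketch: pin the converter's timestamp so the XML output is reproducible."""
        mock_datetime.now.return_value = datetime(2026, 1, 15, 13, 27, 0)
        # converter = Unity2Junit(input_log, output_xml)  # hypothetical call
        # converter.convert()
        # The generated XML then carries a deterministic timestamp and can be
        # compared line-for-line against the expected fixture.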




coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
mlx/unity2junit/unity2junit.py (1)

88-92: Incorrect failures and skipped attributes for SKIP test cases.

When case["result"] is "SKIP", the testsuite element will have failures="1" and skipped="0", which is incorrect. A skipped test should have failures="0" and skipped="1".

🐛 Proposed fix
             testsuite = ET.SubElement(
                 testsuites, "testsuite",
                 name=case["classname"],
                 timestamp=timestamp,
                 time="0.0",
                 errors="0",
                 tests="1",
-                failures="1" if case["result"] != "PASS" else "0",
-                skipped="0"
+                failures="1" if case["result"] == "FAIL" else "0",
+                skipped="1" if case["result"] == "SKIP" else "0"
             )
♻️ Duplicate comments (1)
tests/unity_parsing_test.py (1)

126-144: Remove debug print statement.

Line 143 contains a debug print(generated_xml) statement that should be removed before merging to avoid unnecessary test output.

♻️ Proposed fix
                 converter.convert()
                 tmp_output_file.seek(0)
                 generated_xml = tmp_output_file.readlines()
-                print(generated_xml)
                 self.assertListEqual(generated_xml, expected_xml)
🧹 Nitpick comments (1)
tests/unity_parsing_test.py (1)

88-124: Consider adding assertion for the reason field on failed test cases.

This test validates that the result is correctly set to "FAIL" but doesn't verify that the reason field is captured from the Unity output. Adding an assertion would improve coverage for the new failure reason extraction feature.

💡 Suggested enhancement
             expected_test_cases_Failed_Runner['result'] = ['PASS', 'PASS', 'FAIL', 'PASS', 'PASS']
+            expected_test_cases_Failed_Runner['reason'] = [None, None, 'Function Blah_SecondFunction.  Called more times than expected.', None, None]

             for tc in test_cases:
                 # Find some smart way to check the test case class, name and line number
                 self.assertEqual(tc['classname'], expected_test_cases_Failed_Runner['classname'].pop(0))
                 self.assertEqual(tc['line'], expected_test_cases_Failed_Runner['line'].pop(0))
                 self.assertEqual(tc['name'], expected_test_cases_Failed_Runner['name'].pop(0))
                 self.assertEqual(tc['result'], expected_test_cases_Failed_Runner['result'].pop(0))
+                expected_reason = expected_test_cases_Failed_Runner['reason'].pop(0)
+                if expected_reason:
+                    self.assertEqual(tc.get('reason'), expected_reason)

                 self.assertEqual(tc['file'], 'unit_test/utest_Init.c')
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc83535 and 74838f0.

📒 Files selected for processing (3)
  • mlx/unity2junit/unity2junit.py
  • tests/test_in/utest_Failed_Runner.xml
  • tests/unity_parsing_test.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/unity_parsing_test.py (1)
mlx/unity2junit/unity2junit.py (2)
  • Unity2Junit (15-115)
  • convert (112-115)
🔇 Additional comments (3)
tests/test_in/utest_Failed_Runner.xml (1)

1-21: LGTM!

The test fixture correctly represents the expected JUnit XML output with the new <failure message="..."> element. The structure properly includes the failure message within the failing test case while keeping passing tests without any failure elements.

mlx/unity2junit/unity2junit.py (2)

67-68: LGTM!

The reason extraction logic correctly handles the optional reason field by stripping the leading colon and whitespace.


94-106: LGTM!

The testcase element creation and conditional addition of <failure> and <skipped> child elements is implemented correctly. The default "No reason" message provides a reasonable fallback.


Letme and others added 2 commits January 15, 2026 13:27
Left from the development.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Copilot AI (Contributor) left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

Letme merged commit a8afecf into main on Jan 15, 2026
5 checks passed
Letme deleted the add_failure_test_case branch on January 15, 2026 at 13:08