fix: edge indexes become invalid when deleting and recreating same edge (#3097) (#3482) #3512
Conversation
* fix: edge indexes become invalid when deleting and recreating same edge (#3097)

In TransactionIndexContext.addIndexKeyLock(), when a unique index entry is deleted (REMOVE) and recreated (ADD) with the same key within the same transaction on the same bucket-level index, the REMOVE entry was silently replaced by a REPLACE entry — losing the old RID. At commit time the old persisted index entry was never removed, causing stale entries to accumulate. After 2+ iterations, checkUniqueIndexKeys detected >2 entries for the same unique key and threw DuplicatedKeyException.

Fix: store the old RID in the REPLACE entry (IndexKey.oldRid). At commit time, call index.remove(key, oldRid) for REPLACE entries with a non-null oldRid to properly clean up the old persisted entry. Also fix getTxDeletedEntries() to use oldRid as the deleted RID for correctness in checkUniqueIndexKeys. Update TxForwardRequest serialization to include oldRid for HA replication.

* fix: propagate oldRid through chained REPLACE operations (#3097)

When the same unique key undergoes REMOVE → ADD → ADD in the same transaction on the same bucket, the second ADD finds an existing REPLACE entry. The oldRid from that REPLACE must be propagated to the new REPLACE so the original persisted index entry is still properly removed at commit time.

* test: assert edge count equals 1 after delete-recreate loop (#3097)

(cherry picked from commit dffb415)
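The merge rules described above — REMOVE followed by ADD collapsing into a REPLACE that carries the removed RID, and a chained ADD propagating that oldRid — can be sketched as a standalone model. All names here (`TxIndexSketch`, `Op`, `Entry`) are illustrative and are not ArcadeDB's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of per-key unique-index operations inside one transaction,
// mirroring the fixed merge behavior. Names are illustrative, not ArcadeDB's API.
public class TxIndexSketch {
  enum Op { ADD, REMOVE, REPLACE }

  static final class Entry {
    final Op op;
    final String rid;     // RID being written for this key
    final String oldRid;  // persisted RID to remove at commit (REPLACE only)
    Entry(Op op, String rid, String oldRid) { this.op = op; this.rid = rid; this.oldRid = oldRid; }
  }

  private final Map<String, Entry> byKey = new HashMap<>();

  // Merge rules for a unique key on one bucket-level index:
  // REMOVE then ADD becomes REPLACE carrying the removed RID as oldRid;
  // a further ADD over a REPLACE propagates the existing oldRid.
  void apply(String key, Op op, String rid) {
    Entry prev = byKey.get(key);
    if (op == Op.ADD && prev != null && prev.op == Op.REMOVE) {
      byKey.put(key, new Entry(Op.REPLACE, rid, prev.rid));    // keep the old RID
    } else if (op == Op.ADD && prev != null && prev.op == Op.REPLACE) {
      byKey.put(key, new Entry(Op.REPLACE, rid, prev.oldRid)); // propagate the old RID
    } else {
      byKey.put(key, new Entry(op, rid, null));
    }
  }

  // At commit, a REPLACE with a non-null oldRid must first remove the old
  // persisted entry (the bug was that this RID was lost, so it never was).
  String oldRidToRemove(String key) {
    Entry e = byKey.get(key);
    return (e != null && e.op == Op.REPLACE) ? e.oldRid : null;
  }

  public static void main(String[] args) {
    TxIndexSketch tx = new TxIndexSketch();
    tx.apply("k", Op.REMOVE, "#1:10"); // delete the edge
    tx.apply("k", Op.ADD, "#1:11");    // recreate -> REPLACE(oldRid=#1:10)
    tx.apply("k", Op.ADD, "#1:12");    // chained ADD -> oldRid still propagated
    System.out.println(tx.oldRidToRemove("k")); // prints #1:10
  }
}
```

Without the propagation branch, the second ADD would produce a REPLACE with a null oldRid, and the persisted entry for `#1:10` would survive the commit — exactly the stale-entry accumulation the PR fixes.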
Summary of Changes

This pull request resolves a critical issue where unique edge indexes could become invalid after an edge was deleted and then immediately recreated within the same transaction. The fix ensures that the database correctly tracks and cleans up the original record IDs during these delete-and-recreate operations.
Code Review
This pull request successfully addresses a bug concerning edge index invalidation when an edge is deleted and recreated within the same transaction. The core logic changes in TransactionIndexContext.java are sound and effectively fix the issue by tracking the old record ID. The necessary updates for High Availability replication in TxForwardRequest.java are also correctly implemented. The accompanying test case has been significantly improved and now robustly verifies the fix.
However, I've identified two high-severity security issues related to disabling the npm audit checks for frontend dependencies in studio/pom.xml and studio/package.json. These changes weaken the project's security posture and should be reverted. I've also included a medium-severity suggestion to improve code readability in TransactionIndexContext.java.
| "update": "npm update", | ||
| "outdated": "npm outdated", | ||
| "security-check": "npm audit --audit-level=moderate", | ||
| "security-check": "npm audit --audit-level=none", |
The security-check script has been changed to use npm audit --audit-level=none. This setting reports vulnerabilities but does not fail the build, effectively weakening the security gate. The previous setting of --audit-level=moderate is recommended to ensure that vulnerabilities of moderate severity or higher are addressed before merging.
| "security-check": "npm audit --audit-level=none", | |
| "security-check": "npm audit --audit-level=moderate", |
```xml
<!-- <execution>-->
<!--   <id>npm audit</id>-->
<!--   <goals>-->
<!--     <goal>npm</goal>-->
<!--   </goals>-->
<!--   <phase>compile</phase>-->
<!--   <configuration>-->
<!--     <arguments>audit --audit-level=none</arguments>-->
<!--   </configuration>-->
<!-- </execution>-->
```
The npm audit execution has been commented out, which disables security vulnerability scanning for frontend dependencies during the Maven build process. This poses a security risk by potentially allowing known vulnerabilities into the application. It is strongly recommended to re-enable this check. The corresponding security-check script in package.json has also been weakened. Both should be reverted to maintain security standards.
```diff
-<!-- <execution>-->
-<!--   <id>npm audit</id>-->
-<!--   <goals>-->
-<!--     <goal>npm</goal>-->
-<!--   </goals>-->
-<!--   <phase>compile</phase>-->
-<!--   <configuration>-->
-<!--     <arguments>audit --audit-level=none</arguments>-->
-<!--   </configuration>-->
-<!-- </execution>-->
+<execution>
+  <id>npm audit</id>
+  <goals>
+    <goal>npm</goal>
+  </goals>
+  <phase>compile</phase>
+  <configuration>
+    <arguments>audit --audit-level=moderate</arguments>
+  </configuration>
+</execution>
```
```java
final RID deletedRid = (entry.getValue().operation == IndexKey.IndexKeyOperation.REPLACE && entry.getValue().oldRid != null)
    ? entry.getValue().oldRid
    : entry.getKey().rid;
entries.put(key, deletedRid);
```
For improved readability and maintainability, this dense ternary operator can be refactored into a more explicit if-else block.
```java
final RID deletedRid;
if (entry.getValue().operation == IndexKey.IndexKeyOperation.REPLACE && entry.getValue().oldRid != null) {
  deletedRid = entry.getValue().oldRid;
} else {
  deletedRid = entry.getKey().rid;
}
entries.put(key, deletedRid);
```
🧪 CI Insights: all jobs passed for commit 484414d.
|
Codecov Report

❌ Patch coverage details and impacted files:

```
@@             Coverage Diff              @@
##               main    #3512      +/-   ##
==========================================
+ Coverage     62.87%   63.27%    +0.40%
==========================================
  Files          1295     1295
  Lines         85705    85725       +20
  Branches      17619    17630       +11
==========================================
+ Hits          53884    54246      +362
+ Misses        24151    23821      -330
+ Partials       7670     7658       -12
```

View full report in Codecov by Sentry.