Description
Currently, concurrent executions of `task` can cause data loss due to the non-atomic read-modify-write cycle in `delete` and `edit`. I implemented a lockfile mechanism to serialize access to the database.
If you run `task done 1` and `task add "Buy Milk"` at the exact same time in two terminal tabs or a script:

1. Process A (`done`) reads the file.
2. Process B (`add`) appends "Buy Milk" to the file.
3. Process A (`done`) finishes processing and overwrites the file with its stale data (which doesn't have "Buy Milk").

Result: "Buy Milk" is lost forever.
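The lost update can be reproduced deterministically by simulating the two processes sequentially in one script. This is a minimal sketch: it assumes the database is a JSON list of task objects, and the helper names (`read_tasks`, `write_tasks`) are hypothetical, not the project's actual API.

```python
import json
import os
import tempfile

def read_tasks(path):
    # Read the whole task database into memory (the "read" step).
    with open(path) as f:
        return json.load(f)

def write_tasks(path, tasks):
    # Overwrite the database with this process's in-memory view (the "write" step).
    with open(path, "w") as f:
        json.dump(tasks, f)

path = os.path.join(tempfile.mkdtemp(), "tasks.json")
write_tasks(path, [{"id": 1, "title": "Old task", "done": False}])

# Process A (`task done 1`) reads the file first ...
tasks_a = read_tasks(path)

# ... then Process B (`task add "Buy Milk"`) reads and writes while A is still working.
tasks_b = read_tasks(path)
tasks_b.append({"id": 2, "title": "Buy Milk", "done": False})
write_tasks(path, tasks_b)

# Process A finishes and overwrites the file with its stale snapshot.
tasks_a[0]["done"] = True
write_tasks(path, tasks_a)

# "Buy Milk" is gone: only Process A's view survives.
print([t["title"] for t in read_tasks(path)])  # → ['Old task']
```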
Fix: implement a file lock. This ensures that only one instance of `task` can touch the file at a time; the second instance will wait (or fail) until the first one finishes.
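One way to serialize the whole read-modify-write cycle is an advisory lock on a sidecar lockfile via `fcntl.flock` (POSIX only). This is a sketch under assumptions, not the actual patch: the JSON layout and the `locked_db` / `add_task` names are hypothetical.

```python
import fcntl
import json
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def locked_db(path):
    # Acquire an exclusive advisory lock on "<db>.lock" before touching the
    # database. flock() blocks until the previous holder releases it, so only
    # one process can be inside the read-modify-write cycle at a time.
    with open(path + ".lock", "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)

def add_task(path, title):
    with locked_db(path):
        # The entire cycle runs under the lock, so a concurrent
        # `task done` can no longer interleave and clobber this write.
        tasks = json.load(open(path)) if os.path.exists(path) else []
        tasks.append({"id": len(tasks) + 1, "title": title, "done": False})
        with open(path, "w") as f:
            json.dump(tasks, f)

path = os.path.join(tempfile.mkdtemp(), "tasks.json")
add_task(path, "Buy Milk")
print(json.load(open(path)))
```

A non-blocking variant (`fcntl.LOCK_EX | fcntl.LOCK_NB`) would instead raise `BlockingIOError` immediately, which is the "fail" option mentioned above.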
@anikchand461 @abhirajadhikary06 I already have the solution; should I work on this issue?