# Hash Nimbus

Hash Nimbus is a simple distributed key-value store that uses the Raft consensus algorithm for leader election, log replication, and persistence. It provides strong consistency and fault tolerance across multiple nodes.

## Features
- Leader Election: Uses the Raft algorithm to elect a leader.
- Log Replication: Ensures data consistency across nodes.
- Persistence: Stores logs for recovery after crashes.
- HTTP API: Simple endpoints for storing and retrieving key-value pairs.
## Building

Ensure you have Go installed. Then, build the project:

```sh
cd cmd/kvapi
go build
```

## Running a Cluster

Open three terminals and start three nodes:
```sh
./kvapi --node 0 --http :2021 --cluster "1,:3031;2,:3032;3,:3033"
./kvapi --node 1 --http :2022 --cluster "1,:3031;2,:3032;3,:3033"
./kvapi --node 2 --http :2023 --cluster "1,:3031;2,:3032;3,:3033"
```

## Usage

Send a SET request to store data in the cluster:
```sh
curl "http://localhost:2021/set?key=test&value=success"
```

You can retrieve the value from any node:
```sh
curl "http://localhost:2022/get?key=test"
```

## Architecture

Hash Nimbus consists of multiple nodes, one of which is elected as the leader. The leader handles all write operations, while followers replicate its log. If the leader fails, a new leader is elected.
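Both committing a write and winning an election in Raft hinge on a majority quorum: more than half the cluster must agree. A quick sketch of the arithmetic:

```go
package main

import "fmt"

// majority returns the number of votes (or replication acks) needed
// in a cluster of n nodes: strictly more than half.
func majority(n int) int {
	return n/2 + 1
}

func main() {
	fmt.Println(majority(3)) // 2 — a 3-node cluster tolerates one failure
	fmt.Println(majority(5)) // 3 — a 5-node cluster tolerates two failures
}
```

This is why clusters are usually run with an odd number of nodes: a 4-node cluster needs 3 acks and still tolerates only one failure, no better than 3 nodes.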
- A client sends a `SET` request to any node.
- If the node is a follower, it forwards the request to the leader.
- The leader logs the operation and replicates it to followers.
- Once a majority acknowledges, the leader commits the change and responds.
- Any node can handle `GET` requests since the data is consistent across the cluster.
## Future Improvements

- Snapshotting: Compact the log to prevent unbounded growth.
- Automatic Rebalancing: Handle node joins and exits dynamically.
- Optimized Storage: Use an embedded persistent database such as BoltDB or BadgerDB.
## Contributing

Feel free to open issues or submit pull requests!
## License

MIT License