Developed a fault-tolerant, distributed consensus system
from scratch using the Raft algorithm; automatic leader
election recovers from node crashes in under 80ms,
preserving strong consistency, high availability, and zero
data loss across the cluster.
Implemented the core Raft mechanisms, leader election and
log replication, over gRPC, and integrated the Pebble KV
store with Write-Ahead Logging (WAL) for durable state
persistence.
Distributed Key-Value Store
Engineered a horizontally scalable, distributed key-value
store from the ground up, implementing replication and
sharding strategies to ensure high availability and fault
tolerance.
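A common way to implement the sharding strategy mentioned above is hash-based key placement; a minimal Go sketch, assuming FNV hashing modulo the shard count (the project's actual scheme is not shown here):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor deterministically maps a key to one of n shards by
// hashing it. Illustrative sketch; function name is hypothetical.
func shardFor(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() % n
}

func main() {
	// The same key always lands on the same shard.
	fmt.Println(shardFor("user:42", 4) == shardFor("user:42", 4)) // true
	fmt.Println(shardFor("user:42", 4) < 4)                       // true
}
```

Deterministic placement lets any node route a request without coordination, which is what enables horizontal scaling of reads and writes.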
Implemented a self-healing system with a gRPC control plane
that automates leader failover and state recovery in under 4
seconds, using write-ahead logging and quorum-based
replication to ensure strict data consistency.
Operating System Kernel Modules
Developed a CPU scheduler with five distinct policies and
engineered core concurrency primitives including semaphores,
shared memory (IPC), and a user-level thread library.
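The semantics of the counting semaphore listed above can be sketched with a buffered channel in Go; the kernel modules themselves would use real synchronization primitives, and the names here are hypothetical:

```go
package main

import "fmt"

// Semaphore sketches a counting semaphore: a buffered channel whose
// capacity is the number of permits. Acquire blocks when all permits
// are held; Release returns a permit.
type Semaphore chan struct{}

func NewSemaphore(permits int) Semaphore {
	return make(Semaphore, permits)
}

func (s Semaphore) Acquire() { s <- struct{}{} }
func (s Semaphore) Release() { <-s }

func main() {
	s := NewSemaphore(2)
	s.Acquire()
	s.Acquire()
	fmt.Println(len(s)) // 2: both permits held; a third Acquire would block
	s.Release()
	fmt.Println(len(s)) // 1
}
```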
Implemented a multi-level feedback queue (MLFQ) scheduler,
reducing average response time for I/O-bound processes by
73% versus FIFO and enabling complex multi-threaded
applications to run efficiently.
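The MLFQ policy that produces that response-time win can be sketched as follows: jobs start in the top queue, and a job that burns its full quantum is demoted, so I/O-bound jobs that yield early keep high priority. This is an illustrative Go sketch with hypothetical names, not the kernel code:

```go
package main

import "fmt"

// MLFQ holds priority queues; queues[0] is the highest priority.
type MLFQ struct {
	queues [][]string
}

func NewMLFQ(levels int) *MLFQ {
	return &MLFQ{queues: make([][]string, levels)}
}

// Add enqueues a new job at the highest priority.
func (m *MLFQ) Add(job string) {
	m.queues[0] = append(m.queues[0], job)
}

// Step runs the front job of the highest non-empty queue. If the job
// used its whole quantum it is demoted one level; otherwise it stays,
// which keeps interactive, I/O-bound jobs responsive.
func (m *MLFQ) Step(usedFullQuantum bool) (job string, level int) {
	for i, q := range m.queues {
		if len(q) == 0 {
			continue
		}
		job = q[0]
		m.queues[i] = q[1:]
		next := i
		if usedFullQuantum && i < len(m.queues)-1 {
			next = i + 1
		}
		m.queues[next] = append(m.queues[next], job)
		return job, next
	}
	return "", -1
}

func main() {
	m := NewMLFQ(3)
	m.Add("cpu-bound")
	j, lvl := m.Step(true) // burns its quantum: demoted to level 1
	fmt.Println(j, lvl)    // cpu-bound 1
	j, lvl = m.Step(false) // yields early: stays at level 1
	fmt.Println(j, lvl)    // cpu-bound 1
}
```

Compared with FIFO, this means a short I/O-bound job never waits behind a long CPU-bound one at the same priority, which is the mechanism behind the reported response-time improvement.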