In-Memory Datastore in Go
Lightweight, sharded, TTL-enabled, and LRU-based datastore for high-performance caching.
Kache is a Go-based in-memory datastore with sharding, TTL, and LRU eviction policies. Perfect for building fast and scalable systems.
```bash
go get github.com/vr-varad/kache
```

Built for performance and scalability with modern Go practices.
Kache uses a sharded map data structure where keys are distributed across multiple shards using consistent hashing. This approach reduces lock contention and improves concurrent access performance by allowing multiple goroutines to operate on different shards simultaneously.
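As a rough illustration of this idea (a hypothetical sketch, not Kache's actual internals), the example below spreads keys over a fixed set of shards using an FNV hash, giving each shard its own lock so goroutines touching different shards never contend for the same mutex:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 16 // hypothetical shard count for illustration

// shard owns one slice of the keyspace behind its own lock.
type shard struct {
	mu   sync.RWMutex
	data map[string]string
}

type shardedMap struct {
	shards [shardCount]*shard
}

func newShardedMap() *shardedMap {
	m := &shardedMap{}
	for i := range m.shards {
		m.shards[i] = &shard{data: make(map[string]string)}
	}
	return m
}

// shardFor hashes the key and maps it onto one of the shards.
func (m *shardedMap) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return m.shards[h.Sum32()%shardCount]
}

func (m *shardedMap) Set(key, value string) {
	s := m.shardFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
}

func (m *shardedMap) Get(key string) (string, bool) {
	s := m.shardFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	m := newShardedMap()
	m.Set("foo", "bar")
	fmt.Println(m.Get("foo")) // bar true
}
```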
Each key can have an optional Time-To-Live (TTL) value. A background janitor goroutine periodically scans for expired keys and removes them automatically, ensuring memory efficiency and preventing stale data from accumulating.
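One common way to build such a janitor (an illustrative sketch, not Kache's source) is to store an absolute expiry time with each entry and have a background goroutine sweep the map on a ticker:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry pairs a value with an absolute expiry time (zero time = no TTL).
type entry struct {
	value    string
	expireAt time.Time
}

type ttlStore struct {
	mu   sync.Mutex
	data map[string]entry
}

// startJanitor launches a background goroutine that periodically
// scans the store and deletes keys whose TTL has already passed.
func (s *ttlStore) startJanitor(interval time.Duration, stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				now := time.Now()
				s.mu.Lock()
				for k, e := range s.data {
					if !e.expireAt.IsZero() && now.After(e.expireAt) {
						delete(s.data, k)
					}
				}
				s.mu.Unlock()
			case <-stop:
				return
			}
		}
	}()
}

func main() {
	s := &ttlStore{data: map[string]entry{
		"foo": {value: "bar", expireAt: time.Now().Add(time.Second)},
	}}
	stop := make(chan struct{})
	defer close(stop)

	s.startJanitor(500*time.Millisecond, stop)
	time.Sleep(2 * time.Second) // give the janitor time to sweep

	s.mu.Lock()
	_, ok := s.data["foo"]
	s.mu.Unlock()
	fmt.Println("foo still present:", ok) // false: the janitor removed it
}
```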
The datastore enforces a configurable maximum size limit (default: 1000 keys). When the limit is reached, the LRU eviction policy automatically removes the least recently used keys to make space for new entries.
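As a usage sketch of that behavior (assuming the default 1000-key limit and the API shown in the quick start below; the exact eviction order depends on access patterns):

```go
cache := kache.NewKache()

// Insert more keys than the default limit of 1000.
for i := 0; i < 1500; i++ {
	cache.Set(fmt.Sprintf("key-%d", i), "value", kache.Options{})
}

// With LRU eviction and no intervening reads, the oldest keys are the
// ones expected to have been evicted to make room for the newest.
fmt.Println(cache.Exists("key-0"))    // expected: false (evicted)
fmt.Println(cache.Exists("key-1499")) // expected: true
```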
Kache implements a Least Recently Used (LRU) eviction policy that tracks access patterns and removes the oldest unused entries when memory pressure occurs. This ensures frequently accessed data remains available while optimizing memory usage.
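For intuition, here is a minimal, self-contained LRU sketch (illustrative only, not Kache's implementation) built on Go's container/list: a doubly linked list tracks recency, a map gives O(1) lookup, and the element at the back of the list is evicted once capacity is exceeded:

```go
package main

import (
	"container/list"
	"fmt"
)

type lruEntry struct {
	key, value string
}

// lruCache keeps entries ordered from most to least recently used.
type lruCache struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element in order
}

func newLRUCache(capacity int) *lruCache {
	return &lruCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // a read marks the entry as recently used
	return el.Value.(*lruEntry).value, true
}

func (c *lruCache) Set(key, value string) {
	if el, ok := c.items[key]; ok {
		el.Value.(*lruEntry).value = value
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&lruEntry{key: key, value: value})

	// Over capacity: drop the least recently used entry (back of the list).
	if c.order.Len() > c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*lruEntry).key)
	}
}

func main() {
	c := newLRUCache(2)
	c.Set("a", "1")
	c.Set("b", "2")
	c.Get("a")      // "a" is now the most recently used entry
	c.Set("c", "3") // capacity exceeded: "b" (least recently used) is evicted
	_, ok := c.Get("b")
	fmt.Println("b still cached:", ok) // false
}
```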
Get started with Kache in your Go project
```bash
go get github.com/vr-varad/kache
```

```go
package main

import (
	"fmt"

	"github.com/vr-varad/kache"
)

func main() {
	// Create a new Kache instance
	cache := kache.NewKache()

	// Set a value with TTL (10 seconds)
	cache.Set("foo", "bar", kache.Options{TTL: 10})

	// Get the value
	value, exists := cache.Get("foo")
	if exists {
		fmt.Println("Value:", value) // Output: bar
	}

	// Check if key exists
	if cache.Exists("foo") {
		fmt.Println("Key 'foo' exists")
	}

	// Delete a key
	cache.Delete("foo")

	// Flush all keys
	cache.Flush()
}
```

Set and get a key:

```go
cache.Set("key1", "value1", kache.Options{})
value, exists := cache.Get("key1")
// exists = true, value = "value1"
```

Get a missing key:

```go
value, exists := cache.Get("nonexistent")
// exists = false, value = nil
```

Delete a key:

```go
cache.Set("temp", "data", kache.Options{})
cache.Delete("temp")
_, exists := cache.Get("temp")
// exists = false
```

Check whether a key exists:

```go
cache.Set("check", "me", kache.Options{})
exists := cache.Exists("check")
// exists = true
```

Flush all keys:

```go
cache.Set("key1", "val1", kache.Options{})
cache.Set("key2", "val2", kache.Options{})
cache.Flush()
// All keys are removed
```

TTL expiration:

```go
cache.Set("expire", "soon", kache.Options{TTL: 1})
time.Sleep(2 * time.Second)
_, exists := cache.Get("expire")
// exists = false (expired)
```

Automatic cleanup by the janitor:

```go
// Janitor runs automatically
cache.Set("auto", "clean", kache.Options{TTL: 5})
// After 5+ seconds, the janitor removes the expired key
// No manual intervention required
```

LRU eviction:

```go
// When the cache reaches its max size (1000 keys),
// the least recently used keys are automatically
// evicted to make room for new entries
```