DLHT

DLHT is a high-performance, lock-free hash table for highly concurrent workloads, based on DLHT: A Non-blocking Resizable Hashtable with Fast Deletes and Memory-awareness (https://arxiv.org/abs/2406.09986). It provides lock-free, linearizable Get, Insert, Delete, and Put operations, careful memory ordering, and cooperative resizing.

Features

  • Lock-free operations: All operations (Get, Insert, Delete, Put) are lock-free and linearizable
  • High concurrency: Scales well with multiple threads and cores
  • Automatic resizing: Cooperative resize algorithm that doesn't block operations
  • Memory efficient: Cache-line optimized data structures with bounded overflow
  • Type safe: Generic implementation supporting any comparable key type and any value type

Quick Start

package main

import (
    "fmt"
    "github.com/jeremiah-masters/dlht"
)

func main() {
    // Create a new DLHT Map
    m := dlht.New[string, int](dlht.Options{InitialSize: 64})

    // Insert key-value pairs
    m.Insert("apple", 5)
    m.Insert("banana", 3)

    // Get values
    if value, found := m.Get("apple"); found {
        fmt.Printf("Found: apple = %d\n", value)
    }

    // Update values atomically
    if oldValue, updated := m.Put("apple", 10); updated {
        fmt.Printf("Updated apple: %d -> %d\n", oldValue, 10)
    }

    // Delete keys
    if oldValue, deleted := m.Delete("banana"); deleted {
        fmt.Printf("Deleted banana (old value: %d)\n", oldValue)
    }

    // Iterate every (key, value) currently in the map.
    m.Range(func(k string, v int) bool {
        fmt.Printf("%s = %d\n", k, v)
        return true // return false to stop early
    })

    // Approximate entry count
    size := m.Size()
    fmt.Printf("Size: %d\n", size)

    // Get statistics
    stats := m.Stats()
    fmt.Printf("Load factor: %.3f\n", stats.LoadFactor)
}

API Reference

Types

  • Map[K Key, V any]: The main hash table type
  • Options: Configuration options for creating a new map
  • Stats: Statistics about the hash table's current state
  • Key: Type constraint for valid key types (uint64 or string)

API

  • New[K Key, V any](opts Options) *Map[K, V]: Create a new DLHT
  • Get(key K) (V, bool): Get a value by key
  • Insert(key K, value V) (V, bool): Insert a new key-value pair
  • Put(key K, value V) (V, bool): Update an existing key atomically
  • Delete(key K) (V, bool): Delete a key and return the old value
  • Size() uint64: Approximate count of valid entries
  • Stats() Stats: Get current statistics
  • Range(yield func(K, V) bool): Iterate every (key, value) pair; return false from yield to stop early
  • All() iter.Seq2[K, V]: Range adapter for for k, v := range m.All()
  • Keys() iter.Seq[K]: Range over keys only
  • Values() iter.Seq[V]: Range over values only

Implementation Details

This package provides a clean public API that wraps the allocator-based implementation. The project contains two lock-free variants built on the same core design:

  • allocator/: Generic implementation used by dlht.New; each slot stores a hash plus a pointer to a heap-allocated entry, supporting arbitrary comparable keys and values.
  • inline/: Specialized implementation exposed by dlht.NewInline; each slot stores the value inline for uint64 keys and integer values, avoiding per-entry allocation in that path.
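In the allocator variant, publishing a heap-allocated entry into an empty slot can be done with a single compare-and-swap. The sketch below (with hypothetical names; it is not DLHT's actual code) shows that lock-free publish pattern using sync/atomic:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// entry is a heap-allocated key/value pair, the kind of object each
// slot in an allocator-based design points at. Field names are illustrative.
type entry struct {
	key   string
	value int
}

// slot holds an atomically published pointer to an immutable entry.
// A nil pointer means the slot is empty.
type slot struct {
	p atomic.Pointer[entry]
}

// tryInsert publishes e only if the slot is still empty, using one
// compare-and-swap; a losing thread sees false and probes elsewhere.
func (s *slot) tryInsert(e *entry) bool {
	return s.p.CompareAndSwap(nil, e)
}

func main() {
	var s slot
	first := s.tryInsert(&entry{key: "apple", value: 5})
	second := s.tryInsert(&entry{key: "banana", value: 3}) // loses: slot taken
	fmt.Println(first, second) // true false
}
```

Because the entry is immutable after publication, readers that load the pointer never observe a half-written pair.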

Performance

DLHT is designed for high-performance concurrent workloads:

  • Lock-free: No blocking between operations
  • Cache-optimized: 64-byte buckets fit exactly in cache lines
  • Bounded overflow: Maximum 15 slots per bucket for predictable performance
  • Cooperative resizing: Non-blocking resize with work-stealing

Benchmarks

The charts below compare DLHT against sync.Map and other popular concurrent map implementations across four representative workloads as core count scales. Benchmarks were run on an Intel Xeon Platinum 8260 (2.40GHz); raw data is in benchmarks/results/results.txt.

Throughput Scaling

Throughput scaling across workloads

Latency Scaling

Latency scaling across workloads

Panels cover read-only reads, insert/delete churn, a put-heavy mix, and rapid growth (80% inserts). See benchmarks/workload.go for the full workload definitions and benchmarks/throughput_test.go for the harness.

License

Licensed under the Apache License, Version 2.0.
