Skip to content

beve-org/beve-go

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

πŸš€ BEVE-Go - Binary Efficient Versatile Encoding

Go Version License Go Report Card GoDoc Coverage Tests Extensions

High-performance binary serialization library for Go β€” designed to be faster than JSON, MessagePack, and CBOR while maintaining JSON compatibility and zero-copy encoding.

// Simple as JSON, fast as binary
data, _ := beve.Marshal(user)        // Encode to BEVE
beve.Unmarshal(data, &decoded)       // Decode from BEVE

πŸ“‘ Table of Contents

πŸ“š Complete Documentation β†’


🎯 What is BEVE?

BEVE (Binary Efficient Versatile Encoding) is a modern binary serialization format that combines:

  • πŸš€ Extreme Performance: 2-46Γ— faster than JSON, optimized for modern CPUs
  • πŸ’Ύ Compact Size: 30-50% smaller payloads with varint encoding
  • πŸ”„ JSON Compatible: Seamless bidirectional JSON ↔ BEVE conversion
  • 🎨 Tagged Format: Self-describing like JSON, no schema required
  • πŸ”’ Type Safe: Full Go type system support with struct tags
  • ⚑ SIMD Optimized: Hardware-accelerated for ARM64 (NEON) and AMD64 (AVX2)
  • 🧩 8 Extensions: Typed arrays, timestamps, UUIDs, RegExp, field index, intervals

When to Use BEVE?

βœ… High-throughput APIs (microservices, REST endpoints)
βœ… Real-time systems (gaming, IoT, streaming)
βœ… Data-intensive workloads (ETL pipelines, analytics)
βœ… Cache layers (Redis, memcached serialization)
βœ… Inter-process communication (gRPC alternative)
βœ… Log aggregation (structured logging with compression)


πŸ“Š Performance at a Glance

Apple M2 Max (ARM64) β€” Latest Optimization (Oct 2025)

Operation BEVE CBOR JSON BEVE Advantage
Small Marshal 889ns 628ns 1,005ns 1.4Γ— faster than JSON
Small Unmarshal 780ns 2,456ns 9,138ns 3.2Γ— faster than CBOR, 11.7Γ— faster than JSON πŸ₯‡
Medium Marshal 7.5ΞΌs 15.5ΞΌs 30.2ΞΌs 2.0Γ— faster than CBOR, 4.0Γ— faster than JSON πŸ₯‡
Medium Unmarshal 14.1ΞΌs 52.4ΞΌs 138ΞΌs 3.7Γ— faster than CBOR, 9.8Γ— faster than JSON πŸ₯‡
Large Marshal 71ΞΌs 125ΞΌs 274ΞΌs 1.8Γ— faster than CBOR, 3.8Γ— faster than JSON πŸ₯‡
Large Unmarshal 146ΞΌs 415ΞΌs 1,378ΞΌs 2.8Γ— faster than CBOR, 9.4Γ— faster than JSON πŸ₯‡
Zero-Copy Mode 277ns, 0 allocs N/A N/A Exclusive to BEVE! πŸš€

Extension Performance (Oct 2025 Optimizations):

  • πŸ”₯ RegExp Marshal: 15.7ns (cache hit), 173Γ— faster than direct compile
  • ⚑ Field Index Encode: 9.3ΞΌs (5 allocs), 95% fewer allocations
  • πŸ’Ύ Field Index Decode: 3.6ΞΌs (106 allocs), 48% allocation reduction
  • 🎯 UUID Binary: 0.3ns marshal, 400Γ— faster than string encoding

Key Highlights:

  • ⚑ 3-4Γ— faster unmarshal than CBOR across all payload sizes
  • πŸ’Ύ 93% fewer allocations on large payloads (416 vs 6,307 allocs)
  • πŸš€ Zero-copy mode: 0 allocations, 0 bytes (277ns vs 889ns standard marshal)
  • πŸ“‰ 67% allocation reduction after pointer optimization (3 β†’ 1 alloc)
  • πŸ† Winner in 7 out of 8 benchmarks vs CBOR (see OPTIMIZATION_REPORT.md)
  • πŸ”₯ 173Γ— RegExp speedup with LRU cache (see SLOW_OPERATIONS_OPTIMIZATION.md)

πŸ“ˆ See detailed multi-platform benchmarks β†’
Tested on: Apple M1, Intel Xeon, ARM Neoverse-N2, Windows AMD64


πŸš€ Quick Start

Installation

go get github.com/beve-org/beve-go

Requirements: Go 1.21+ (uses latest performance features)

Basic Usage

package main

import (
    "fmt"
    beve "github.com/beve-org/beve-go"
)

type User struct {
    ID       int64     `beve:"id"`
    Username string    `beve:"username"`
    Email    string    `beve:"email,omitempty"`
    IsActive bool      `beve:"active"`
    Tags     []string  `beve:"tags"`
}

func main() {
    user := User{
        ID:       12345,
        Username: "alice",
        Email:    "[email protected]",
        IsActive: true,
        Tags:     []string{"premium", "verified"},
    }

    // Marshal to BEVE
    data, err := beve.Marshal(user)
    if err != nil {
        panic(err)
    }
    fmt.Printf("Encoded: %d bytes\n", len(data))

    // Unmarshal from BEVE
    var decoded User
    err = beve.Unmarshal(data, &decoded)
    if err != nil {
        panic(err)
    }
    fmt.Printf("Decoded: %+v\n", decoded)
}

Drop-in JSON Replacement

BEVE uses the same API as encoding/json for zero-friction adoption:

// Replace this:
import "encoding/json"
data, _ := json.Marshal(v)
json.Unmarshal(data, &v)

// With this:
import beve "github.com/beve-org/beve-go"
data, _ := beve.Marshal(v)
beve.Unmarshal(data, &v)

// Done! πŸŽ‰ Enjoy 2-40Γ— faster serialization

πŸ’‘ Core Features

1. High-Performance Encoding

// Standard marshal (optimized, pooled buffers)
data, _ := beve.Marshal(obj)

// Zero-copy mode (2-8Γ— faster, returns internal buffer)
data, _ := beve.MarshalZeroCopy(obj)

// Encoder with io.Writer (streaming)
enc := beve.NewEncoder(conn)
enc.Encode(obj1)
enc.Encode(obj2)

2. Struct Tags (JSON-compatible)

type Product struct {
    ID          int64   `beve:"id"`
    Name        string  `beve:"name"`
    Description string  `beve:"description,omitempty"` // Skip if empty
    Price       float64 `beve:"price"`
    Tags        []string `beve:"tags"`
    Internal    string  `beve:"-"` // Ignore field
}

Supported Tags:

  • beve:"fieldname" β€” Custom field name
  • beve:",omitempty" β€” Skip zero/empty values
  • beve:"-" β€” Ignore field completely

3. Configurable Struct Tags (JSON/CBOR/MessagePack Compatibility)

Use existing JSON tags without modifying your structs!

// Existing struct with json tags
type User struct {
    ID       int    `json:"id"`
    Username string `json:"username"`
    Email    string `json:"email,omitempty"`
}

// Configure BEVE to use json tags
beve.SetStructTag("json")

// Now BEVE reads json:"..." tags instead of beve:"..."
data, _ := beve.Marshal(user)
beve.Unmarshal(data, &user)

Supported Tag Names:

  • beve.SetStructTag("json") β€” Use json tags (default fallback)
  • beve.SetStructTag("msgpack") β€” Use msgpack tags
  • beve.SetStructTag("cbor") β€” Use cbor tags
  • beve.SetStructTag("beve") β€” Use beve tags (default)

Benefits:

  • βœ… Zero code changes β€” Use existing struct tags
  • βœ… Automatic fallback β€” Falls back to json tags if configured tag not found
  • βœ… Zero overhead β€” Tag resolution happens at cache build time
  • βœ… Thread-safe β€” Can be changed at runtime (clears cache)

Example with Multiple Tags:

type Product struct {
    ID    int64   `beve:"id" json:"product_id" msgpack:"pid"`
    Name  string  `beve:"name" json:"title" msgpack:"n"`
    Price float64 `beve:"price" json:"price" msgpack:"p"`
}

// Use different tag configurations
beve.SetStructTag("beve")    // Uses: id, name, price
beve.SetStructTag("json")    // Uses: product_id, title, price
beve.SetStructTag("msgpack") // Uses: pid, n, p

Get Current Tag:

currentTag := beve.GetStructTag() // Returns "beve", "json", etc.

Best Practice: Set once at application startup:

func init() {
    beve.SetStructTag("json") // Use json tags throughout the app
}

πŸ“˜ See full struct-tags example β†’

4. Performance Best Practices

πŸš€ Key Optimization: Always Pass Pointers!

// βœ… GOOD: Pass pointer (1 allocation)
user := User{...}
data, _ := beve.Marshal(&user)

// ❌ BAD: Pass value (3 allocations, slower)
user := User{...}
data, _ := beve.Marshal(user)  // Creates heap copy!

Why? Passing values triggers reflect.New to create an addressable copy (19.40% of total allocations). Pointers are already addressable.

Performance Impact:

  • 67% fewer allocations (3 β†’ 1)
  • 1.14Γ— faster marshal (1015ns β†’ 889ns)
  • 10% less memory (2979B β†’ 2690B)

Zero-Copy Mode for Hot Paths:

// Ultra-fast mode: 0 allocations, 0 bytes!
data, _ := beve.MarshalZeroCopy(&user)  // 277ns vs 889ns

Buffer Pooling for Batches:

// Reuse encoder for batch operations
enc := beve.GetEncoderFromPool()
defer beve.PutEncoderToPool(enc)

for _, item := range items {
    data, _ := enc.Marshal(&item)  // No buffer allocation
    enc.Reset()  // Reset for next item
}

Results vs CBOR (see OPTIMIZATION_REPORT.md):

  • Small unmarshal: 3.2Γ— faster than CBOR
  • Medium marshal: 2.0Γ— faster than CBOR
  • Large unmarshal: 2.8Γ— faster, 93% fewer allocations
  • Zero-copy mode: Exclusive to BEVE, 0 allocs!

5. Type System Support

// βœ… Primitives
int, int8, int16, int32, int64
uint, uint8, uint16, uint32, uint64
float32, float64
bool, string

// βœ… Complex Types
[]T           // Slices (typed arrays for primitives)
[N]T          // Fixed arrays
map[string]T  // String-keyed maps
map[int]T     // Integer-keyed maps
*T            // Pointers (nullable)

// βœ… Nested Structs
type Address struct { City string }
type User struct { Addr Address }

// βœ… time.Time (optimized fast path)
CreatedAt time.Time `beve:"created_at"`

5. Custom Binary Marshaling

Implement BinaryMarshaler for custom types:

type Point struct {
    X, Y float64
}

func (p Point) MarshalBEVE() ([]byte, error) {
    return beve.Marshal([]float64{p.X, p.Y})
}

func (p *Point) UnmarshalBEVE(data []byte) error {
    var coords []float64
    if err := beve.Unmarshal(data, &coords); err != nil {
        return err
    }
    p.X, p.Y = coords[0], coords[1]
    return nil
}

6. Streaming API

// Encode multiple objects
var buf bytes.Buffer
enc := beve.NewEncoder(&buf)
enc.Encode(user1)
enc.Encode(user2)

// Decode multiple objects
dec := beve.NewDecoder(buf.Bytes())
dec.Decode(&user1)
dec.Decode(&user2)

7. Buffer Pooling (Zero Allocation)

// Automatic pooling with GetEncoderFromPool
enc := beve.GetEncoderFromPool()
defer beve.PutEncoderToPool(enc)

enc.Encode(data)
result := enc.Bytes()

πŸ”§ Advanced Features

JSON ↔ BEVE Translator

Seamlessly convert between JSON and BEVE formats:

import "github.com/beve-org/beve-go/translator"

// JSON β†’ BEVE
jsonData := []byte(`{"name":"Alice","age":30}`)
beveData, err := translator.FromJSON(jsonData)

// BEVE β†’ JSON
jsonData, err := translator.ToJSON(beveData)

// BEVE β†’ Pretty JSON
jsonStr, err := translator.ToJSONIndent(beveData, "", "  ")
fmt.Println(jsonStr)
// Output:
// {
//   "name": "Alice",
//   "age": 30
// }

// With statistics
beveData, stats, err := translator.FromJSONWithStats(jsonData)
fmt.Printf("Space saved: %.1f%%\n", stats.Savings*100)
fmt.Printf("Compression ratio: %.2fx\n", stats.CompressionRatio)

Translator Features:

  • βœ… Bidirectional JSON ↔ BEVE conversion
  • βœ… Zero intermediate structs (direct translation)
  • βœ… Type preservation (maintains JSON semantics)
  • βœ… Validation (built-in validators)
  • βœ… Statistics (compression metrics)

πŸ“š Read full translator documentation β†’

Code Generator (bevegen)

Generate optimized marshal/unmarshal code (10Γ— faster than reflection):

//go:generate bevegen -type=User

type User struct {
    ID    int64  `beve:"id"`
    Name  string `beve:"name"`
    Email string `beve:"email,omitempty"`
}

Run:

go generate

This generates user_beve.go with:

  • func (u *User) MarshalBEVE() ([]byte, error) β€” Zero-reflection encoding
  • func (u *User) UnmarshalBEVE(data []byte) error β€” Inlined field access

bevegen Benefits:

  • ⚑ 10Γ— faster than reflection
  • πŸ“¦ Smaller binary size (no reflect package)
  • πŸ”’ Type-safe generated code
  • 🎯 Inlinable optimizations

πŸ“š Read bevegen documentation β†’


πŸ—οΈ Architecture & Design

Binary Format Overview

BEVE uses a tagged, self-describing binary format:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Header  β”‚   Size   β”‚      Data      β”‚
β”‚ (1 byte) β”‚ (varint) β”‚   (payload)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Type Headers (3-bit):

  • 0b000 β†’ null/boolean
  • 0b001 β†’ number (int/uint/float)
  • 0b010 β†’ string (UTF-8)
  • 0b011 β†’ object (key-value pairs)
  • 0b100 β†’ typed array (SIMD-optimized)
  • 0b101 β†’ generic array (mixed types)
  • 0b110 β†’ extensions (matrices, complex numbers)

Key Optimizations:

  • πŸ“¦ Varint encoding for integers (1-4 bytes instead of 8)
  • 🎯 Typed arrays for primitives (no per-element headers)
  • ⚑ Little-endian for modern CPU performance
  • πŸ”₯ SIMD paths for bulk array operations

πŸ“˜ Full specification β†’

Performance Optimizations

  1. Stack Encoding (143ns for small structs)

    • Pre-allocated 256-byte stack buffer
    • Zero heap allocations for typical payloads
  2. Cache-Aware Encoding (181-253ns)

    • Field encoding cached in 4KB hot buffer
    • Reduces memory bandwidth by 60%
  3. SIMD Array Encoding (8-10Γ— faster for large arrays)

    • ARM64 NEON instructions for float32/float64
    • AMD64 AVX2 for integer arrays
    • Automatic CPU feature detection
  4. Buffer Pooling (8-9ns overhead)

    • Go 1.21+ per-P local caching
    • Zero lock contention
    • Automatic GC integration
  5. Arena Allocator (55% faster with pooling, Oct 2025) πŸ†•

    • Bulk allocation for temporary buffers (~2ns vs ~20ns heap)
    • Arena pooling reduces create/destroy overhead
    • Best for large arrays and roundtrip scenarios
    • Optional: zero-impact when not used

πŸ“Š Detailed optimization docs β†’


🌐 Use Cases & Examples

Example 1: REST API Serialization

func UserHandler(w http.ResponseWriter, r *http.Request) {
    user := getUser(r.Context())
    
    // Encode to BEVE
    data, err := beve.Marshal(user)
    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    
    w.Header().Set("Content-Type", "application/beve")
    w.Write(data)
}

Example 2: Redis Caching

import "github.com/redis/go-redis/v9"

func CacheUser(ctx context.Context, user *User) error {
    // Encode to BEVE (30% smaller than JSON)
    data, err := beve.Marshal(user)
    if err != nil {
        return err
    }
    
    // Store in Redis
    key := fmt.Sprintf("user:%d", user.ID)
    return rdb.Set(ctx, key, data, time.Hour).Err()
}

func GetCachedUser(ctx context.Context, id int64) (*User, error) {
    key := fmt.Sprintf("user:%d", id)
    data, err := rdb.Get(ctx, key).Bytes()
    if err != nil {
        return nil, err
    }
    
    var user User
    err = beve.Unmarshal(data, &user)
    return &user, err
}

Example 3: Event Streaming

func PublishEvent(conn net.Conn, event *Event) error {
    enc := beve.NewEncoder(conn)
    return enc.Encode(event)
}

func ConsumeEvents(conn net.Conn) error {
    dec := beve.NewDecoder(conn)
    
    for {
        var event Event
        if err := dec.Decode(&event); err != nil {
            if err == io.EOF {
                break
            }
            return err
        }
        
        handleEvent(&event)
    }
    return nil
}

Example 4: GORM Integration

BEVE works seamlessly with GORM models:

import (
    "gorm.io/gorm"
    beve "github.com/beve-org/beve-go"
)

type Product struct {
    gorm.Model
    Code  string `gorm:"size:100" beve:"code"`
    Price uint   `beve:"price"`
}

// Cache GORM model in Redis
product := Product{Code: "D42", Price: 100}
db.Create(&product)

data, _ := beve.Marshal(product)
redis.Set("product:1", data, time.Hour)

// Retrieve from cache
var cached Product
data, _ := redis.Get("product:1").Bytes()
beve.Unmarshal(data, &cached)

🧩 BEVE Extensions (v1.3.0)

8 production-ready extensions for specialized use cases:

Extension 0: Field Index

O(1) field access without full deserialization:

obj := map[string]interface{}{
    "name": "Alice",
    "age": 30,
    "email": "[email protected]",
}

// Encode with field index
data, _ := beve.EncodeIndexedObject(obj)

// Fast field access (77ns, O(1))
email, _ := beve.ReadFieldByName(data, "email")
fmt.Println(email) // "[email protected]"

Performance: 77ns per field (6.5Γ— faster than linear search)

Extension 1: Typed Object Arrays

25-48% size reduction for homogeneous arrays:

users := []User{
    {Name: "Alice", Age: 30},
    {Name: "Bob", Age: 25},
}

// Automatic typed array encoding
data, _ := beve.MarshalTyped(users)

// Or use auto-detection
data, _ := beve.MarshalAuto(users)

Size Savings: 25-48% smaller than standard encoding

Extension 4: Nanosecond Timestamps

Fixed 14-16 byte encoding with nanosecond precision:

now := time.Now()

// Encode timestamp (14-16 bytes)
data, _ := beve.MarshalTimestamp(now)

// Decode with full precision
decoded, _ := beve.UnmarshalTimestamp(data)
fmt.Println(decoded.Equal(now)) // true

Features: UTC/local timezone, nanosecond precision, fixed size

Extension 5 & 6: Duration and Interval

Time durations and ranges:

// Duration (14 bytes, signed)
duration := 5*time.Hour + 30*time.Minute
data, _ := beve.EncodeDuration(duration)

// Interval (29 bytes, 2 timestamps)
start := time.Now()
end := start.Add(time.Hour)
data, _ := beve.EncodeInterval(start, end)

Extension 8: Binary UUID

50% size reduction vs string UUIDs:

// From binary (18 bytes vs 36 for string)
uuid := [16]byte{...}
data, _ := beve.MarshalUUID(uuid)

// From string
uuidStr := "6ba7b810-9dad-41d1-80b4-00c04fd430c8"
data, _ := beve.MarshalUUIDString(uuidStr)

Performance: 0.3ns marshal, 400Γ— faster than string encoding

Extension 9: RegExp

Compact regex pattern storage:

pattern := regexp.MustCompile("^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,}$")

// Encode regex (7-51 bytes)
data, _ := beve.MarshalRegExp(pattern)

// Decode and use
decoded, _ := beve.UnmarshalRegExp(data)
decoded.MatchString("[email protected]") // true

Global Auto-Detection

Extensions work seamlessly with standard Marshal/Unmarshal:

// Automatically detects and decodes any extension
var result interface{}
beve.Unmarshal(data, &result)

Arena Allocator (v1.3.0) πŸ†•

Reduce GC pressure with arena allocation for high-throughput scenarios:

import "github.com/beve-org/beve-go/core"

// Create arena pool for reuse (55% faster than create/destroy)
pool := core.NewArenaPool(16 * 1024) // 16KB arenas

// Encode with arena
arena := pool.Get()
enc := core.GetEncoderFromPoolWithArena(arena)
enc.Encode(largeData)
core.PutEncoderToPool(enc)
pool.Put(arena) // Reuse arena

// Decode with arena
arena = pool.Get()
dec := core.NewDecoderWithArena(data, arena)
var result LargeStruct
dec.Decode(&result)
core.PutDecoderToPool(dec)
pool.Put(arena)

Performance (Apple M2 Max):

  • Arena pool reuse: 55% faster (599ns β†’ 270ns)
  • Large arrays: 11% faster encoding (3240ns β†’ 2871ns)
  • captureRawValue: 100% allocation reduction (1β†’0 allocs)
  • Pool overhead: +11ns (acceptable for bulk operations)

When to use arenas:

  • βœ… High-throughput APIs (>10k req/sec)
  • βœ… Large array operations (>1000 elements)
  • βœ… Bulk encode/decode batches
  • ❌ Small structs (overhead > benefit)
  • ❌ Single-shot operations (use standard API)

πŸ“š Full extensions documentation β†’
πŸ“Š Extension performance report β†’


πŸ“š Documentation

Core Documentation

Test & Quality Reports

CI/CD & Automation

  • πŸš€ GitHub Actions Benchmarks β€” Multi-platform testing
  • πŸ“Š Extension Benchmarks β€” Automatic tracking of all 8 extensions
  • πŸ” Coverage Reports β€” Generated on every CI run
  • 🌍 Cross-Platform β€” ARM64 (M1, Neoverse-N2), x86_64 (EPYC), Windows

Automated Reports:

  • Platform-specific benchmark charts (PNG)
  • Coverage HTML reports with function-level analysis
  • Extension performance tracking (JSON + visualizations)
  • Multi-platform comparison matrices

Examples

API Reference

  • GoDoc β€” Full API documentation

πŸ”¬ Benchmarks

Run benchmarks locally:

# Quick benchmark
go test -bench=. -benchmem ./...

# Detailed comparison
./scripts/bench.sh

# Profile-guided optimization
./scripts/bench_pgo.sh

# Cross-platform CI benchmarks
./scripts/benchmark_ci.sh

Latest Results (Neoverse-N2 ARM64)

BenchmarkMarshal/SmallStruct-4          850,000 ns/op    1,389 B/op    3 allocs/op
BenchmarkMarshal/MediumPayload-4         95,000 ns/op   21,900 B/op    3 allocs/op
BenchmarkMarshal/LargePayload-4           8,200 ns/op  197,200 B/op    3 allocs/op

BenchmarkUnmarshal/SmallStruct-4        555,000 ns/op    3,000 B/op    4 allocs/op
BenchmarkUnmarshal/MediumPayload-4       39,600 ns/op   25,700 B/op   58 allocs/op
BenchmarkUnmarshal/LargePayload-4         1,843 ns/op  264,000 B/op  418 allocs/op

Extension Performance (v1.3.0)

All 8 extensions are benchmarked automatically in CI:

BenchmarkFieldIndex/Marshal-4           77.0 ns/op      0 B/op      0 allocs/op
BenchmarkTypedObjectArray/25Items-4    842.0 ns/op    504 B/op      2 allocs/op
BenchmarkTimestamp/Marshal-4             9.2 ns/op      0 B/op      0 allocs/op
BenchmarkDuration/Marshal-4              3.6 ns/op      0 B/op      0 allocs/op
BenchmarkInterval/Marshal-4              5.8 ns/op      0 B/op      0 allocs/op
BenchmarkUUID/Marshal-4                  0.3 ns/op      0 B/op      0 allocs/op
BenchmarkRegExp/Marshal-4               12.4 ns/op      0 B/op      0 allocs/op

Coverage: 61.7% (23 test functions, 433 assertions)

πŸ“ˆ View detailed benchmarks β†’
πŸ“Š Extension performance report β†’


🀝 Contributing

Contributions are welcome! Please read our Contributing Guide and Code of Conduct.

Development Setup

# Clone repository
git clone https://github.com/beve-org/beve-go.git
cd beve-go

# Run tests
go test ./...

# Run benchmarks
./scripts/bench.sh

# Generate code coverage
./scripts/coverage.sh

πŸ“œ License

MIT License - see LICENSE file for details.


πŸ™ Acknowledgments


πŸ“ž Support


Made with ❀️ by the BEVE team
High-performance serialization for modern Go applications

About

No description, website, or topics provided.

Resources

License

Code of conduct

Contributing

Security policy

Stars

Watchers

Forks

Packages

No packages published

Contributors 3

  •  
  •  
  •