
Compute Units

Compute Units (CUs) are Thru’s mechanism for metering computational work and preventing infinite loops or resource exhaustion attacks. Every smart contract execution consumes compute units, and transactions have a limited budget that must be specified upfront.

How Compute Units Work

When you submit a transaction, you specify the maximum number of compute units your program is allowed to consume via the req_compute_units field. If your program exceeds this limit during execution, it terminates with a TN_VM_SYSCALL_ERR_COMPUTE_UNITS_EXCEEDED error and all of its changes are reverted.
The CU charge model is simple: 1 CU per byte of instruction or data processed. The cost of an instruction is therefore directly proportional to its size in bytes, and memory operations are charged based on the amount of data accessed.
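To make the charge model concrete, here is a minimal sketch of a CU meter in Rust. The ComputeMeter type and its methods are illustrative only (not part of any Thru SDK); they simply mirror the 1-CU-per-byte charge and the out-of-budget error described above:
// Illustrative model of CU metering: 1 CU per byte processed, with the
// budget taken from the transaction's req_compute_units field.
struct ComputeMeter {
    limit: u64,    // req_compute_units supplied with the transaction
    consumed: u64, // CUs charged so far
}

#[derive(Debug)]
struct ComputeUnitsExceeded; // stands in for TN_VM_SYSCALL_ERR_COMPUTE_UNITS_EXCEEDED

impl ComputeMeter {
    fn new(req_compute_units: u64) -> Self {
        Self { limit: req_compute_units, consumed: 0 }
    }

    // Charge 1 CU per byte of instruction or data processed.
    fn charge_bytes(&mut self, bytes: u64) -> Result<(), ComputeUnitsExceeded> {
        self.consumed += bytes;
        if self.consumed > self.limit { Err(ComputeUnitsExceeded) } else { Ok(()) }
    }
}

fn main() {
    let mut meter = ComputeMeter::new(10_000);
    meter.charge_bytes(4).unwrap(); // one 32-bit instruction: 4 CUs
    meter.charge_bytes(8).unwrap(); // an 8-byte memory access: 8 CUs
    assert_eq!(meter.consumed, 12);
}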

Instruction Costs

Different types of operations consume different amounts of compute units:

Basic Instructions

The cost of an instruction is determined by its size:
  • Regular (32-bit) Instructions: 4 CUs each
  • Compressed (16-bit) Instructions: 2 CUs each
This includes:
  • Arithmetic operations (add, sub, mul, div)
  • Logical operations (and, or, xor, shl, shr)
  • Branch and jump instructions (beq, bne, jal, jalr)
Examples:
add x1, x2, x3     # 4 CUs (32-bit instruction)
c.add x1, x2       # 2 CUs (16-bit compressed instruction)

System Calls

Base Syscall Cost: 512 CUs
Every system call has a base cost of 512 compute units, representing the overhead of saving and restoring VM state (32 registers × 8 bytes × 2 operations). Additional costs may apply based on the specific syscall operation:
  • Memory allocation: Variable cost based on pages allocated
  • Account operations: Additional costs for data processing
  • Cross-program invocations: Base cost + target program execution
See the syscalls documentation for detailed costs of each system call.
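For rough budgeting, the base overhead follows directly from the register save/restore described above. The helper below is a sketch, not an SDK function, and extra_cus stands in for whatever the specific syscall charges on top of the base (see the syscalls documentation):
// Base syscall overhead: 32 registers × 8 bytes, saved and restored (× 2),
// at 1 CU per byte.
const SYSCALL_BASE_CUS: u64 = 32 * 8 * 2; // = 512

// Rough total for one syscall; extra_cus is the syscall-specific surcharge.
fn syscall_cost(extra_cus: u64) -> u64 {
    SYSCALL_BASE_CUS + extra_cus
}

fn main() {
    // A syscall with no additional data processing costs at least 512 CUs.
    assert_eq!(syscall_cost(0), 512);
}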

Memory Operations

Memory Access: 1 CU per byte
The total cost of a memory access instruction is the sum of the instruction's base cost plus the cost of the bytes being accessed (1 CU per byte).
  • Load/Store Data Costs:
    • lb / sb (byte): 1 CU
    • lh / sh (half-word): 2 CUs
    • lw / sw (word): 4 CUs
    • ld / sd (double-word): 8 CUs
Total Cost Example: A standard lw (load word) instruction is 32 bits wide, so its base cost is 4 CUs. It also loads a 4-byte word from memory, which costs an additional 4 CUs.
  • lw instruction (4 CUs) + data access (4 CUs) = 8 CUs total
Similarly, a standard sd (store double-word) instruction costs:
  • sd instruction (4 CUs) + data access (8 CUs) = 12 CUs total
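The same arithmetic can be captured in a small helper (an illustrative sketch, not an SDK function); the compressed case follows the same 1-CU-per-byte rule:
// Total CU cost of a memory-access instruction: the instruction's size in
// bytes (its base cost) plus 1 CU per byte of data accessed.
fn mem_access_cost(instruction_bytes: u64, data_bytes: u64) -> u64 {
    instruction_bytes + data_bytes
}

fn main() {
    assert_eq!(mem_access_cost(4, 4), 8);  // lw: 4-byte instruction + 4-byte load
    assert_eq!(mem_access_cost(4, 8), 12); // sd: 4-byte instruction + 8-byte store
    assert_eq!(mem_access_cost(2, 4), 6);  // c.lw: 2-byte compressed + 4-byte load
}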
Page Faults: 4,096 CUs
When your program accesses memory that hasn't been allocated or loaded, a page fault occurs. Each page fault costs exactly 4,096 compute units (equal to the page size).
Pre-allocate memory segments that you know you'll need, to avoid expensive page faults during critical execution paths.
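To see why pre-allocation matters, compare the fault overhead alone. The constant and function below are illustrative, with the cost taken from the 4,096-CU figure above:
// Each page fault costs 4,096 CUs, equal to the 4 KB page size.
const PAGE_FAULT_CUS: u64 = 4_096;

// Extra CUs incurred by faulting in `pages` previously unallocated pages.
fn page_fault_overhead(pages: u64) -> u64 {
    pages * PAGE_FAULT_CUS
}

fn main() {
    // Touching four unallocated pages in a hot path adds 16,384 CUs on top of
    // the normal instruction and data charges; pre-allocated pages add none.
    assert_eq!(page_fault_overhead(4), 16_384);
}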
Compute unit costs are deterministic and consistent across all executions of the same program with the same inputs, making them suitable for predictable fee estimation.

Memory Units

Memory Units (MUs) represent the scratch space allocation budget for your transaction. Each memory unit corresponds to 4,096 bytes (4KB) of memory that can be used for temporary storage during program execution.

How Memory Units Work

When you submit a transaction, you specify the maximum number of memory units your program is allowed to consume via the req_memory_units field. Unlike compute units, memory units can be allocated and released during execution as your program’s memory needs change.
Memory units are charged for the peak usage during transaction execution, not the total allocated over time. If you allocate 10 MUs, release 5 MUs, then allocate 3 more MUs, you are charged for 10 MUs (the peak), not the 13 MUs allocated in total.
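Peak-based charging can be modeled with a small tracker. MemoryMeter and its methods are hypothetical and shown only to illustrate how the charge is derived:
// Illustrative model of MU accounting: the charge is the peak number of MUs
// in use at any point, not the total ever allocated.
struct MemoryMeter {
    in_use: u64,
    peak: u64,
}

impl MemoryMeter {
    fn new() -> Self { Self { in_use: 0, peak: 0 } }

    fn allocate(&mut self, mus: u64) {
        self.in_use += mus;
        self.peak = self.peak.max(self.in_use);
    }

    fn release(&mut self, mus: u64) {
        self.in_use -= mus;
    }

    // The transaction is charged for peak usage.
    fn charged(&self) -> u64 { self.peak }
}

fn main() {
    let mut meter = MemoryMeter::new();
    meter.allocate(10); // in use: 10, peak: 10
    meter.release(5);   // in use:  5, peak: 10
    meter.allocate(3);  // in use:  8, peak: 10
    assert_eq!(meter.charged(), 10); // charged for the peak, not the 13 MUs allocated in total
}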

Memory Unit Consumption

Memory units are consumed through various operations that require scratch space:

Anonymous Segment Operations

Growing Anonymous Segments
  • Stack allocation: Growing the stack segment consumes memory units
  • Heap allocation: Growing heap segments for dynamic memory
  • Each page (4KB) of growth consumes 1 memory unit
Examples:
# Allocating 8KB of stack space
# Consumes 2 memory units (8KB ÷ 4KB = 2 MUs)
Shrinking Anonymous Segments
  • Shrinking segments releases memory units back to your budget
  • Released memory units can be reused for other allocations
  • This allows efficient memory management patterns
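Because each 4 KB page of growth costs one MU, the MU cost of growing a segment is just the requested byte size rounded up to a page boundary. The helper below is an illustrative sketch:
const PAGE_SIZE: u64 = 4_096;

// Memory units needed to grow an anonymous segment (stack or heap) by `bytes`,
// rounded up to the next 4 KB page.
fn growth_mus(bytes: u64) -> u64 {
    bytes.div_ceil(PAGE_SIZE)
}

fn main() {
    assert_eq!(growth_mus(8 * 1024), 2);  // 8 KB of stack -> 2 MUs
    assert_eq!(growth_mus(5_000), 2);     // partial pages round up
    assert_eq!(growth_mus(16 * 1024), 4); // 16 KB of heap -> 4 MUs
}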

Account Operations

Account Data Growth
  • Resizing account data to a larger size consumes memory units.
  • Each additional 4KB page requires 1 memory unit
  • Growth is rounded up to the nearest page boundary
Account Data Shrinkage
  • Reducing account data size releases memory units if the backing page of the account was dirty.
  • Released units become available for other operations
  • Helps optimize memory usage across account operations
Account Data Access and Copy-on-Write (CoW)
  • Read access: Reading account data does not consume memory units
  • Write access: First write to an account page triggers Copy-on-Write (CoW)
  • CoW allocation: When writing to a page for the first time, a private copy is created
  • Memory consumption: Each CoW operation consumes 1 memory unit (4KB page)
  • Subsequent writes: Additional writes to the same page do not consume more memory units
CoW Memory Pattern:
// Example: Writing to account data
account_data[0] = 1;      // First write to page 0: Consumes 1 MU (CoW)
account_data[100] = 2;    // Same page 0: No additional MU
account_data[4096] = 3;   // First write to page 1: Consumes 1 MU (CoW)
CoW allocation happens on the first write to each 4KB page of account data. Plan your memory budget to account for all pages you intend to modify, not just account growth.
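One way to budget for CoW is to count the distinct 4 KB pages your writes will touch. The sketch below does exactly that for a list of byte offsets and is illustrative only:
use std::collections::HashSet;

const PAGE_SIZE: u64 = 4_096;

// Memory units consumed by copy-on-write, given the byte offsets of every
// write into an account's data: one MU per distinct 4 KB page touched.
fn cow_mus(write_offsets: &[u64]) -> u64 {
    let pages: HashSet<u64> = write_offsets.iter().map(|&off| off / PAGE_SIZE).collect();
    pages.len() as u64
}

fn main() {
    // Mirrors the pattern above: offsets 0 and 100 share page 0, offset 4096
    // starts page 1, so two CoW pages -> 2 MUs.
    assert_eq!(cow_mus(&[0, 100, 4_096]), 2);
}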
Account Creation
  • Creating new accounts does not consume memory units (yet).
  • Ephemeral accounts follow the same memory unit rules
  • Account deletion releases all associated memory units

Event Emission

Event Buffer Growth
  • Emitting events grows the event buffer segment
  • Each 4KB of event data consumes 1 memory unit
  • Events accumulate throughout transaction execution
Event Memory Pattern:
// Example: Emitting multiple events
emit_event(small_event);    // May not consume an MU if it fits in the existing buffer
emit_event(large_event);    // Grows buffer, consumes additional MUs
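Because the event buffer grows in 4 KB pages, its MU cost can be estimated from the total bytes of event data you emit. The helper below is a rough sketch and ignores any per-event framing overhead the runtime may add:
const PAGE_SIZE: u64 = 4_096;

// Rough MU estimate for the event buffer: total event payload bytes rounded
// up to 4 KB pages (runtime framing overhead, if any, is not modeled).
fn event_buffer_mus(total_event_bytes: u64) -> u64 {
    total_event_bytes.div_ceil(PAGE_SIZE)
}

fn main() {
    // 100 small events of ~50 bytes each: about 5,000 bytes -> ~2 MUs.
    assert_eq!(event_buffer_mus(100 * 50), 2);
}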

Memory Management Best Practices

1. Estimate peak memory usage

Calculate the maximum simultaneous memory allocation your program will need. Consider all active segments, account growth, and event buffers at their peak.
// Conservative estimate for a complex operation
let memory_units = base_stack_pages + max_account_growth_pages + event_buffer_pages;
2. Use memory efficiently

  • Release early: Shrink segments when no longer needed
  • Reuse space: Take advantage of released memory units
  • Batch operations: Group memory allocations to minimize peak usage
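The shape of the release-early pattern looks like this. grow_scratch and shrink_scratch are hypothetical placeholders, not real Thru syscalls; substitute whatever segment-management calls your program actually uses:
// Illustrative allocate -> use -> release lifecycle. Releasing before the next
// phase lets later allocations reuse the same MUs instead of raising the peak.
fn process_batch(items: &[u64]) -> u64 {
    let scratch_pages: u64 = 4;
    grow_scratch(scratch_pages);     // peak usage starts here
    let result = items.iter().sum(); // do the memory-hungry work
    shrink_scratch(scratch_pages);   // release as soon as the scratch space is done
    result
}

// Stubs standing in for real segment-management calls (hypothetical).
fn grow_scratch(_pages: u64) {}
fn shrink_scratch(_pages: u64) {}

fn main() {
    assert_eq!(process_batch(&[1, 2, 3]), 6);
}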
3. Handle allocation failures

Monitor available memory units before large allocations:
// Check available memory before growing segments
if available_memory_units() < required_pages {
    return Err(ProgramError::InsufficientMemory);
}

Memory Unit Scenarios

Simple Program Execution

~2-4 MUs
  • Base stack allocation: ~1-2 MUs
  • Small local variables: ~1 MU
  • Minimal event emission: ~1 MU

Account Creation & Growth

Variable (depends on data size)
  • New 8KB account: ~2 MUs
  • Growing existing account by 16KB: ~4 MUs
  • Multiple account operations: Sum of individual needs

Heavy Event Emission

Variable (depends on event size)
  • 100 small events (~50 bytes each): ~2 MUs
  • Large structured events: ~1 MU per 4KB
  • Event-heavy programs: Plan accordingly

Dynamic Memory Usage

Peak-based Charging
  • Allocate 20KB (5 MUs), release 12KB (3 MUs), allocate 8KB (2 MUs)
  • Charged: 5 MUs (peak usage)
  • Not: 7 MUs (the total allocated across both allocations)
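Traced step by step (a minimal sketch of the peak calculation, illustrative only):
fn main() {
    let (mut in_use, mut peak) = (0u64, 0u64);

    in_use += 5; peak = peak.max(in_use); // allocate 20 KB (5 MUs): in use 5, peak 5
    in_use -= 3;                          // release 12 KB (3 MUs):  in use 2, peak 5
    in_use += 2; peak = peak.max(in_use); // allocate 8 KB (2 MUs):  in use 4, peak 5

    assert_eq!(peak, 5);  // charged: 5 MUs (the peak)
    assert_eq!(5 + 2, 7); // total allocated across both allocations: 7 MUs
}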
Unlike compute units, memory units encourage efficient memory management through the release mechanism. Design your programs to shrink segments when possible to maximize available memory for subsequent operations.

State Units

This section of the specification is changing frequently. Check back soon for more details.