
Memoization and WeakMap

#js

WeakMap

Recently, I came across a data structure called WeakMap in JavaScript.

When I used to code in Swift, I remember the concept of weak self, which seems somewhat similar to what WeakMap provides. A Map holds strong references to its keys and values, which means garbage collection (GC) cannot reclaim them while the map is alive. In contrast, a WeakMap holds its keys weakly: once a key object has no other references, GC is free to reclaim it together with its associated value.

In terms of memory footprint, I’ve heard that using a WeakMap can sometimes be a better choice than using a Map.

Let’s explore the differences and similarities between Map and WeakMap.

Map vs WeakMap

1. Differences Between Map and WeakMap

  • Map strongly references both keys and values, meaning data is retained in memory until it’s explicitly removed.
  • WeakMap weakly references keys, allowing them to be garbage-collected if no other references exist. In other words, once an object key is no longer referenced elsewhere, it is automatically removed from the WeakMap.
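To make the weak-key behavior concrete, here is a minimal sketch (the `sessions` map and `user` object are made-up names for illustration):

```typescript
// A WeakMap keyed by objects; an entry disappears once its key object
// becomes unreachable (the exact timing is up to the garbage collector).
const sessions = new WeakMap<object, string>();

let user: { id: number } | null = { id: 1 };
sessions.set(user, "session-abc");

console.log(sessions.has(user)); // true while `user` is still referenced

user = null;
// There is now no other reference to the key object, so the entry is
// eligible for garbage collection. Note that WeakMap is not iterable and
// has no `size`, precisely because its contents can vanish at any time.
```

This also explains why WeakMap deliberately omits iteration: exposing its contents would let you observe when GC runs.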

2. Advantages of WeakMap

  • Memory Management: Since WeakMap weakly references keys, keys that are no longer in use can be garbage-collected, helping manage memory more efficiently, especially when you have many object keys in memory-sensitive applications.

3. Drawbacks of WeakMap

  • Cannot Use Primitive Keys: In WeakMap, keys must be objects. Primitives like strings and numbers cannot be used as keys.
  • Limited Use Cases for Caching: WeakMap isn’t ideal for functions that rely on primitive arguments (like a Fibonacci function taking a number), because numbers cannot be WeakMap keys.

This shows that WeakMap differs significantly from Map in terms of key types (only objects are allowed), which is useful for cases where we only need object-based keys that can be garbage-collected when no longer in use.
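A quick sketch of that key-type restriction (the variable names are illustrative):

```typescript
const wm = new WeakMap<object, number>();

wm.set({ tag: "ok" }, 1); // fine: keys must be objects

// A primitive key is a compile error in TypeScript, and a TypeError
// at runtime in plain JavaScript:
// wm.set("fib-10", 55); // => TypeError: Invalid value used as weak map key

// A regular Map has no such restriction:
const m = new Map<string, number>();
m.set("fib-10", 55);
console.log(m.get("fib-10")); // 55
```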

Other data structures like Set and WeakSet also exist, but I’ll cover those later.

Memoization Function

The idea of WeakMap came up as I was thinking about creating a memoization function. Memoization is an in-memory caching technique that can improve performance by avoiding redundant calculations. However, caching too much data can eventually become a performance burden.

Here’s an example of a non-optimized Fibonacci function:

function fibonacci(n: number): number {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

In this example, performance degrades exponentially as n grows (roughly O(2^n) calls), because the function recomputes the same subproblems over and over.

We can optimize it by applying a memoization function, which stores already computed values in a cache, reducing redundant calculations.

Here’s an example of a memoization function:

function memoize<T extends (...args: any[]) => any>(fn: T): T {
    const cache = new Map<string, ReturnType<T>>();
    return function (...args: Parameters<T>): ReturnType<T> {
        // Serialize the arguments into a string key (works for JSON-safe args)
        const key = JSON.stringify(args);
        if (cache.has(key)) {
            return cache.get(key) as ReturnType<T>;
        }
        const result = fn(...args);
        cache.set(key, result);
        return result;
    } as T;
}

This memoization function works by:

  • Converting function arguments into a string key for storage in cache.
  • Checking if the cache contains the value for the given arguments and returning it if it exists.
  • If the value doesn’t exist, the function computes it, stores it in the cache, and then returns the result.

Using this function, we can optimize the Fibonacci function:

const fibonacci = memoize((n: number): number =>  {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
});

console.log(fibonacci(40)); // Slower on first calculation
console.log(fibonacci(40)); // Fast on subsequent calls due to caching

However, as more values are stored in the cache, performance issues may still arise due to excessive memory usage. LRU (Least Recently Used) Caching can help mitigate this.
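This is also where WeakMap comes back into the picture. Since numbers can’t be WeakMap keys, a WeakMap-backed memoizer only works for functions that take a single object argument, but in exchange its cache entries are released together with their argument objects. The `memoizeWeak` name and the `norm` example below are mine, a sketch rather than a standard API:

```typescript
// A memoizer backed by WeakMap: limited to functions taking one object
// argument, but cached entries can be garbage-collected as soon as the
// argument object itself becomes unreachable.
function memoizeWeak<A extends object, R>(fn: (arg: A) => R): (arg: A) => R {
    const cache = new WeakMap<A, R>();
    return (arg: A): R => {
        if (cache.has(arg)) {
            return cache.get(arg) as R;
        }
        const result = fn(arg);
        cache.set(arg, result);
        return result;
    };
}

// Usage: an (assumed) expensive computation over an object
interface Point { x: number; y: number }
const norm = memoizeWeak((p: Point) => Math.sqrt(p.x * p.x + p.y * p.y));

const p = { x: 3, y: 4 };
console.log(norm(p)); // 5 (computed)
console.log(norm(p)); // 5 (cached; no recomputation)
// When `p` becomes unreachable, its cache entry can be reclaimed by GC.
```

Unlike the JSON.stringify version, this needs no manual eviction policy for object-keyed caches: GC is the eviction policy. For primitive arguments, a bounded cache such as the LRU below is the usual answer.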

LRU (Least Recently Used) Cache

1. Concept of LRU Caching

LRU caching keeps the most recently used items in memory and evicts the least recently used ones when the cache is full. This ensures that recently accessed data remains available in the cache, while stale, long-untouched data is removed.

Example applications:

  • Pagination-heavy web applications.
  • Apps loading large amounts of data where only recent data should be cached.

2. Implementing LRU Cache

Generally, an LRU cache combines a doubly linked list with a hash map so that two essential operations run in O(1):

  • Moving the most recently accessed items to the front.
  • Easily removing the least recently used items.

Because JavaScript’s Map preserves insertion order, it can play both roles at once, letting us implement an LRU cache as follows:

class LRUCache<K, V> {
    private capacity: number;
    private cache: Map<K, V>;

    constructor(capacity: number) {
        this.capacity = capacity;
        this.cache = new Map();
    }

    get(key: K): V | undefined {
        if (!this.cache.has(key)) {
            return undefined;
        }
        // Re-insert the entry so it becomes the most recently used one
        // (Map preserves insertion order).
        const value = this.cache.get(key) as V;
        this.cache.delete(key);
        this.cache.set(key, value);
        return value;
    }

    put(key: K, value: V): void {
        if (this.cache.has(key)) {
            this.cache.delete(key);
        }
        if (this.cache.size >= this.capacity) {
            // The first key in iteration order is the least recently used.
            const oldestKey = this.cache.keys().next().value;
            if (oldestKey !== undefined) {
                this.cache.delete(oldestKey);
            }
        }
        this.cache.set(key, value);
    }
}

// Usage
const lruCache = new LRUCache<string, number>(3);
lruCache.put("a", 1);
lruCache.put("b", 2);
lruCache.put("c", 3);
console.log(lruCache.get("a")); // 1
lruCache.put("d", 4); // "b" is removed as it is the least recently used
console.log(lruCache.get("b")); // undefined

3. Pros and Cons of LRU Caching

  • Pros:
      • Prioritizes recently accessed items, which improves hit rates for typical access patterns.
      • Limits memory usage by removing old items, helping prevent memory leaks.
  • Cons:
      • Has a fixed capacity, so adding new items beyond the limit evicts previously cached entries.
      • Implementing an efficient LRU cache can increase complexity.

4. Other Cache Management Strategies

Other caching strategies include:

  • LFU (Least Frequently Used): Removes the least frequently accessed items.
  • FIFO (First In, First Out): Removes items based on their insertion order.
  • TTL (Time-To-Live): Removes items after a set time, useful when only the latest data is needed.
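As a small illustration of the TTL idea, here is a minimal sketch (the `TTLCache` class is my own illustration, not a standard API); entries remember when they expire and are dropped lazily on the next read:

```typescript
// Minimal TTL cache sketch: each entry stores its expiry timestamp and
// is evicted lazily on the first read after that moment.
class TTLCache<K, V> {
    private store = new Map<K, { value: V; expiresAt: number }>();

    constructor(private ttlMs: number) {}

    set(key: K, value: V): void {
        this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    }

    get(key: K): V | undefined {
        const entry = this.store.get(key);
        if (!entry) return undefined;
        if (Date.now() > entry.expiresAt) {
            this.store.delete(key); // expired: evict lazily
            return undefined;
        }
        return entry.value;
    }
}

const cache = new TTLCache<string, number>(1000); // entries live ~1 second
cache.set("a", 1);
console.log(cache.get("a")); // 1 (still within the TTL window)
```

Lazy eviction keeps the implementation simple; a production cache might also sweep expired entries on a timer so memory is reclaimed even without reads.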

Set vs Map

1. Purpose and Structure

  • Set: A collection of unique values. Adding the same value multiple times will result in only one instance.
  • Map: A collection of key-value pairs, where keys are unique and each key maps to a single value.

2. Data Storage

  • Set: Stores only values. Each value must be unique, and there is no key-value pair structure.
  • Map: Stores data as key-value pairs, where each key is unique and maps to a value.

// Set Example
const set = new Set();
set.add("apple");
set.add("banana");
set.add("apple"); // Duplicate ignored
console.log(set); // Set { "apple", "banana" }

// Map Example
const map = new Map();
map.set("name", "Alice");
map.set("age", 30);
map.set("name", "Bob"); // Overrides existing key
console.log(map); // Map { "name" => "Bob", "age" => 30 }

3. Data Access

  • Set: No index or key-based access. Accessed through iteration.
  • Map: Direct access to values by key, using map.get(key).

// Set iteration
set.forEach(value => console.log(value));

// Map key access
console.log(map.get("name")); // "Bob"
console.log(map.get("age")); // 30

Summary Table

Feature           | Set                          | Map
Structure         | Collection of unique values  | Collection of key-value pairs
Access            | Iteration                    | Direct key-based access
Duplicates        | Not allowed                  | Keys unique, values can duplicate
Example Use Cases | Unique value lists           | Key-value pair storage