Design and implement a data structure for a Least Frequently Used (LFU) cache.

Implement the  LFUCache  class:

  • LFUCache(int capacity) Initializes the object with the  capacity  of the data structure.
  • int get(int key)  Gets the value of the  key  if the  key  exists in the cache. Otherwise, returns  -1 .
  • void put(int key, int value) Updates the value of the  key  if present, or inserts the  key  if not already present. When the cache reaches its  capacity , it should invalidate and remove the least frequently used key before inserting a new item. For this problem, when there is a tie (i.e., two or more keys with the same frequency), the least recently used  key  is invalidated.

To determine the least frequently used key, a use counter is maintained for each key in the cache. The key with the smallest use counter is the least frequently used key.

When a key is first inserted into the cache, its use counter is set to  1  (due to the  put  operation). The use counter for a key in the cache is incremented whenever a  get  or  put  operation is called on it.

Example 1:

["LFUCache", "put", "put", "get", "put", "get", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [3], [4, 4], [1], [3], [4]]
[null, null, null, 1, null, -1, 3, null, -1, 3, 4]


// cnt(x) = the use counter for key x

// cache=[] will show the last used order for tiebreakers (leftmost element is most recent)

LFUCache lfu = new LFUCache(2);

lfu.put(1, 1);

// cache=[1,_], cnt(1)=1 lfu.put(2, 2); /

/ cache=[2,1], cnt(2)=1, cnt(1)=1 lfu.get(1);

// return 1

// cache=[1,2], cnt(2)=1, cnt(1)=2 lfu.put(3, 3);

// 2 is the LFU key because cnt(2)=1 is the smallest, invalidate 2.

// cache=[3,1], cnt(3)=1, cnt(1)=2 lfu.get(2);

// return -1 (not found) lfu.get(3);

// return 3

// cache=[3,1], cnt(3)=2, cnt(1)=2 lfu.put(4, 4);

// Both 1 and 3 have the same cnt, but 1 is LRU, invalidate 1.

// cache=[4,3], cnt(4)=1, cnt(3)=2 lfu.get(1);

// return -1 (not found) lfu.get(3);

// return 3

// cache=[3,4], cnt(4)=1, cnt(3)=3 lfu.get(4);

// return 4

// cache=[3,4], cnt(4)=2, cnt(3)=3


  • 0 <= capacity, key, value <= 10^4
  • At most  10^5  calls will be made to  get  and  put .

Follow up: Could you do both operations in  O(1)  time complexity?


What is my data structure?

1. A Doubly linked Node

class Node:
+ key: int
+ value: int
+ freq: int
+ prev: Node
+ next: Node

2. A Doubly Linked List

Note: This part could be replaced by  OrderedDict , I implemented it by hand for clarity

class DLinkedList:
- sentinel: Node
+ size: int
+ append(node: Node) -> None
+ pop(node: Node) -> Node

3. Our LFUCache

class LFUCache:
- node: dict[key: int, node: Node]
- freq: dict[freq: int, lst: DLinkedList]
- minfreq: int
+ get(key: int) -> int
+ put(key: int, value: int) -> None
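As the note above says, the hand-rolled  DLinkedList  could be replaced by  OrderedDict . For a sense of what that variant looks like, here is a minimal sketch (the class name  LFUCacheOD  and the helper  _bump  are my own naming, not part of the solution below): each frequency maps to an  OrderedDict  of key -> value, and the first entry of a bucket is the least recently used key at that frequency.

```python
from collections import defaultdict, OrderedDict


class LFUCacheOD:
    """Sketch of the same design, with OrderedDict standing in for DLinkedList.

    Each frequency bucket is an OrderedDict of key -> value; the first entry
    in a bucket is the least recently used key at that frequency.
    """

    def __init__(self, capacity):
        self._capacity = capacity
        self._key_freq = {}                    # key -> its current frequency
        self._freq = defaultdict(OrderedDict)  # freq -> OrderedDict(key -> value)
        self._minfreq = 0

    def _bump(self, key):
        # Move `key` from its current frequency bucket to the next one.
        f = self._key_freq[key]
        value = self._freq[f].pop(key)
        if self._minfreq == f and not self._freq[f]:
            self._minfreq = f + 1
        self._freq[f + 1][key] = value         # inserted at the end = most recent
        self._key_freq[key] = f + 1
        return value

    def get(self, key):
        if key not in self._key_freq:
            return -1
        return self._bump(key)

    def put(self, key, value):
        if self._capacity == 0:
            return
        if key in self._key_freq:
            self._bump(key)
            self._freq[self._key_freq[key]][key] = value
            return
        if len(self._key_freq) == self._capacity:
            # Evict the least recently used key in the min-frequency bucket.
            evicted, _ = self._freq[self._minfreq].popitem(last=False)
            del self._key_freq[evicted]
        self._freq[1][key] = value
        self._key_freq[key] = 1
        self._minfreq = 1
```

The trade-off: less code, but you lean on OrderedDict's internals instead of seeing the pointer surgery yourself, which is why the write-up below keeps the explicit doubly linked list.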




Each key maps to the corresponding node ( self._node ), so we can retrieve the node in  O(1)  time.

Each frequency  freq  is mapped to a Doubly Linked List ( self._freq ), where all nodes in the  DLinkedList  have the same frequency,  freq . Moreover, each node is always inserted at the head (indicating most recently used).

A minimum frequency  self._minfreq  is maintained to keep track of the minimum frequency across all nodes in this cache, so that the  DLinkedList  with the minimum frequency can always be retrieved in  O(1)  time.

Here is how the algorithm works.

get(key)

  1. query the  node  by calling  self._node[key]
  2. find the node's frequency  node.freq , call it  f , and query the  DLinkedList  this node is in, by calling  self._freq[f]
  3. pop this node from that list
  4. update the node's frequency, and append the node to the  DLinkedList  with frequency  f+1
  5. if the old  DLinkedList  is now empty and  self._minfreq == f , update  self._minfreq  to  f+1
  6. return  node.val

put(key, value)

  • If key is already in cache, do the same thing as  get(key) , and update  node.val  to  value
  • Otherwise:
    1. if the cache is full, pop the least frequently used element (*)
    2. add new node to  self._node
    3. add new node to  self._freq[1]
    4. reset  self._minfreq  to 1

(*) The least frequently used element is the tail element in the DLinkedList with frequency self._minfreq


Below is the implementation with detailed comments as well.

import collections


class Node:
    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.freq = 1
        self.prev = self.next = None


class DLinkedList:
    """An implementation of a doubly linked list.

    Two APIs provided:

    append(node): append the node to the head of the linked list.
    pop(node=None): remove the referenced node.
                    If None is given, remove the one from the tail,
                    which is the least recently used.

    Both operations, apparently, are in O(1) complexity.
    """
    def __init__(self):
        self._sentinel = Node(None, None)  # dummy node
        self._sentinel.next = self._sentinel.prev = self._sentinel
        self._size = 0

    def __len__(self):
        return self._size

    def append(self, node):
        node.next = self._sentinel.next
        node.prev = self._sentinel
        node.next.prev = node
        self._sentinel.next = node
        self._size += 1

    def pop(self, node=None):
        if self._size == 0:
            return

        if not node:
            node = self._sentinel.prev

        node.prev.next = node.next
        node.next.prev = node.prev
        self._size -= 1

        return node


class LFUCache:
    def __init__(self, capacity):
        """
        :type capacity: int

        Three things to maintain:

        1. a dict, named `self._node`, holding the reference of all nodes given key.
           That is, O(1) time to retrieve a node given a key.

        2. Each frequency has a doubly linked list, stored in `self._freq`, where key
           is the frequency, and value is an object of `DLinkedList`.

        3. The min frequency through all nodes. We can maintain this in O(1) time, taking
           advantage of the fact that the frequency can only increment by 1. Use the
           following two rules:

           Rule 1: Whenever we see the size of the DLinkedList of the current min
                   frequency is 0, the min frequency must increment by 1.

           Rule 2: Whenever we put in a new (key, value), the min frequency must
                   be 1 (the new node).
        """
        self._size = 0
        self._capacity = capacity

        self._node = dict()  # key: Node
        self._freq = collections.defaultdict(DLinkedList)
        self._minfreq = 0

    def _update(self, node):
        """
        This is a helper function used in the following two cases:

            1. when `get(key)` is called; and
            2. when `put(key, value)` is called and the key exists.

        The common point of these two cases is that:

            1. no new node comes in, and
            2. the node is visited one more time -> node.freq changes ->
               thus the place of this node changes.

        The logic of this function is:

            1. pop the node from the old DLinkedList (with freq `f`)
            2. if the old DLinkedList has size 0 and self._minfreq is `f`,
               update self._minfreq to `f+1`
            3. append the node to the new DLinkedList (with freq `f+1`)

        All of the above operations take O(1) time.
        """
        freq = node.freq

        self._freq[freq].pop(node)
        if self._minfreq == freq and not self._freq[freq]:
            self._minfreq += 1

        node.freq += 1
        freq = node.freq
        self._freq[freq].append(node)

    def get(self, key):
        """
        Through checking self._node[key], we can get the node in O(1) time.
        Just perform self._update, then we can return the value of the node.

        :type key: int
        :rtype: int
        """
        if key not in self._node:
            return -1

        node = self._node[key]
        self._update(node)
        return node.val

    def put(self, key, value):
        """
        If `key` already exists in self._node, we do the same operations as `get`,
        except updating node.val to the new value.

        Otherwise, the following logic is performed:

        1. if the cache reaches its capacity, pop the least frequently used item (*)
        2. add the new node to self._node
        3. add the new node to the DLinkedList with frequency 1
        4. reset self._minfreq to 1

        (*) How to pop the least frequently used item? Two facts:

        1. we maintain self._minfreq, the minimum possible frequency in the cache.
        2. all nodes with the same frequency are stored as a DLinkedList, in
           recently used order (always append at head)

        Consequence? ==> The tail of the DLinkedList with self._minfreq is the
                         least recently used one, pop it...

        :type key: int
        :type value: int
        :rtype: void
        """
        if self._capacity == 0:
            return

        if key in self._node:
            node = self._node[key]
            self._update(node)
            node.val = value
        else:
            if self._size == self._capacity:
                node = self._freq[self._minfreq].pop()
                del self._node[node.key]
                self._size -= 1

            node = Node(key, value)
            self._node[key] = node
            self._freq[1].append(node)
            self._minfreq = 1
            self._size += 1
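Before trusting the O(1) version, it helps to cross-check Example 1 against a deliberately naive reference. The  NaiveLFU  class below is my own illustration (not part of the solution above): each key stores a (value, freq, last_used) triple, and eviction scans for the minimum (freq, last_used) pair. That makes put O(n) per call, but the tie-break semantics are easy to verify by eye.

```python
class NaiveLFU:
    """O(n)-eviction reference LFU: each key stores [value, freq, last_used]."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tick = 0               # logical clock for recency tie-breaking
        self.data = {}              # key -> [value, freq, last_used]

    def get(self, key):
        self.tick += 1
        if key not in self.data:
            return -1
        entry = self.data[key]
        entry[1] += 1               # bump use counter
        entry[2] = self.tick        # mark as most recently used
        return entry[0]

    def put(self, key, value):
        self.tick += 1
        if self.capacity == 0:
            return
        if key in self.data:
            entry = self.data[key]
            entry[0] = value
            entry[1] += 1
            entry[2] = self.tick
            return
        if len(self.data) == self.capacity:
            # Evict the smallest (freq, last_used): least frequent,
            # breaking ties by least recently used.
            victim = min(self.data, key=lambda k: (self.data[k][1], self.data[k][2]))
            del self.data[victim]
        self.data[key] = [value, 1, self.tick]
```

Replaying the call sequence from Example 1 on this reference produces [1, -1, 3, -1, 3, 4] for the get calls, matching the expected output, so any O(1) implementation should agree with it.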
