LRU comes from the family of cache replacement policies. It is one of the cache eviction strategies, very practical and with a fairly concise implementation, which makes it a perfect interview question: hard enough, yet simple enough.
Explain
Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put.
get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reaches its capacity, it should invalidate the least recently used item before inserting a new item.
The cache is initialized with a positive capacity.
Follow up
Could you do both operations in O(1) time complexity?
Example
LRUCache cache = new LRUCache( 2 /* capacity */ );
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);    // returns 1
cache.put(3, 3); // evicts key 2
cache.get(2);    // returns -1 (not found)
cache.put(4, 4); // evicts key 1
cache.get(1);    // returns -1 (not found)
cache.get(3);    // returns 3
cache.get(4);    // returns 4
Solve
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.dic = {}  # key -> node, for O(1) lookup
        # Sentinel nodes: dummy_head.next is the least recently used node,
        # dummy_tail.prev is the most recently used node.
        self.dummy_head = Node(0, 0)
        self.dummy_tail = Node(0, 0)
        self.dummy_head.next = self.dummy_tail
        self.dummy_tail.prev = self.dummy_head

    def get(self, key: int) -> int:
        if key not in self.dic:
            return -1
        node = self.dic[key]
        # Move the node to the tail to mark it as most recently used.
        self.remove(node)
        self.append(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        if key in self.dic:
            self.remove(self.dic[key])
        node = Node(key, value)
        self.append(node)
        self.dic[key] = node
        if len(self.dic) > self.capacity:
            # Evict the least recently used node, right after dummy_head.
            head = self.dummy_head.next
            self.remove(head)
            del self.dic[head.key]

    def remove(self, node):
        # Unlink the node from the list in O(1).
        p = node.prev
        n = node.next
        p.next = n
        n.prev = p

    def append(self, node):
        # Insert the node right before dummy_tail (the MRU position).
        tail = self.dummy_tail.prev
        tail.next = node
        self.dummy_tail.prev = node
        node.prev = tail
        node.next = self.dummy_tail


class Node:
    def __init__(self, k, v):
        self.key = k
        self.val = v
        self.prev = None
        self.next = None


# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
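As a quick sanity check, the example sequence from above can be replayed against this implementation (a small test sketch I added; it assumes the LRUCache and Node classes are defined as shown):

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1
cache.put(3, 3)            # evicts key 2
assert cache.get(2) == -1  # not found
cache.put(4, 4)            # evicts key 1
assert cache.get(1) == -1  # not found
assert cache.get(3) == 3
assert cache.get(4) == 4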
This mainly follows this answer. Its idea is the same as the other solutions: a doubly linked list handles O(1) insertion and removal of nodes, and a HashMap handles O(1) lookup of nodes. What makes that answer stand out is its use of the two variable names dummy_head and dummy_tail, which made me immediately understand that head and tail never hold any real data; they are always just the head and the tail.
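To make the value of the sentinels concrete, here is a hypothetical sketch (my own illustration, not from the referenced answer) of what remove would have to look like if the list had no dummy nodes and instead tracked real self.head / self.tail pointers:

def remove_without_sentinels(self, node):
    # Hypothetical variant without dummy nodes: node.prev or node.next
    # may be None at the boundaries, so every unlink needs edge cases.
    if node.prev is not None:
        node.prev.next = node.next
    else:
        self.head = node.next  # node was the real head
    if node.next is not None:
        node.next.prev = node.prev
    else:
        self.tail = node.prev  # node was the real tail

With dummy_head and dummy_tail in place, node.prev and node.next always exist, so remove collapses to the four unconditional pointer assignments in the solution above.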
Performance
Runtime: 244 ms, faster than 48.31% of Python3 online submissions for LRU Cache.
Memory Usage: 23.2 MB, less than 6.06% of Python3 online submissions for LRU Cache.
The performance numbers are not great; the main thing is that, in Python, you can also use OrderedDict from the collections module. A solution using OrderedDict is as follows:
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.ordered_dict = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.ordered_dict:
            return -1
        # Mark the key as most recently used.
        self.ordered_dict.move_to_end(key)
        return self.ordered_dict[key]

    def put(self, key: int, value: int) -> None:
        if key in self.ordered_dict:
            del self.ordered_dict[key]
        elif len(self.ordered_dict) >= self.capacity:
            # popitem(last=False) pops in FIFO order, i.e. the least
            # recently used entry.
            self.ordered_dict.popitem(last=False)
        self.ordered_dict[key] = value
It is even more concise. In an interview, though, what the interviewer most likely wants to see is still the Doubly Linked List + HashMap approach.
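For reference, the two OrderedDict methods the solution leans on, move_to_end and popitem(last=False), behave like this (a small illustrative snippet, not part of the original solution):

from collections import OrderedDict

od = OrderedDict([(1, 'a'), (2, 'b'), (3, 'c')])
od.move_to_end(1)                # order is now 2, 3, 1 (1 is most recent)
oldest = od.popitem(last=False)  # pops (2, 'b'), the oldest entry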