2016-12-16 21 views
4

Remove the oldest objects from a HashMap to keep it at a certain size? I have a HashMap in Java whose size I need to limit (to on the order of 50,000 entries), but I should only ever remove the oldest items. Each item's timestamp is stored in a field of its value object:

Map<String, MyModel> snapshot = new HashMap<>(); 

public class MyModel { 
    private ZonedDateTime createdAt; 
    // other fields... 
} 

I also insert them into the map in the order of that timestamp.

What is the most efficient way to remove the oldest entries? Note that the time "threshold" is not known, only the desired final size of the Map.

+0

Are you adding items to the map in timestamp order? –

+0

@TJCrowder Yes, I am –

+1

Then I believe [Boris' answer](http://stackoverflow.com/a/41185016/157247) is the most efficient way to do it, or at least the `LinkedHashMap` he points to is, whether you use removeEldestEntry or remove entries directly (it can tell you what the oldest key is). –
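
For illustration only (this sketch is mine, not part of the comments): evicting the eldest entries of a LinkedHashMap "by hand", without overriding removeEldestEntry. The String keys/values and the limit of 3 are made up for the demo.

import java.util.LinkedHashMap;
import java.util.Map;

public class ManualEviction {
    public static void main(String[] args) {
        // Iteration order of a LinkedHashMap (with accessOrder = false) is insertion order,
        // so the first key returned by the iterator is always the eldest one.
        Map<String, String> snapshot = new LinkedHashMap<>();
        int maxSize = 3; // stand-in for the 50,000 limit in the question

        for (String key : new String[] {"A", "B", "C", "D", "E"}) {
            snapshot.put(key, key);
            while (snapshot.size() > maxSize) {
                String eldest = snapshot.keySet().iterator().next();
                snapshot.remove(eldest);
            }
        }
        System.out.println(snapshot); // {C=C, D=D, E=E}
    }
}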

Answers

13

A HashMap has no notion of "oldest", it has no "first"; it has no order.

A LinkedHashMap, on the other hand, is designed for exactly this: it maintains a doubly-linked list running through its entries, so it keeps them in insertion order, and it also provides a removeEldestEntry method:

public static void main(final String args[]) throws Exception { 
    final int maxSize = 4; 
    final LinkedHashMap<String, String> cache = new LinkedHashMap<String, String>() { 
     @Override 
      protected boolean removeEldestEntry(final Map.Entry<String, String> eldest) { 
      return size() > maxSize; 
     } 
    }; 

    cache.put("A", "A"); 
    System.out.println(cache); 
    cache.put("B", "A"); 
    System.out.println(cache); 
    cache.put("C", "A"); 
    System.out.println(cache); 
    cache.put("D", "A"); 
    System.out.println(cache); 
    cache.put("E", "A"); 
    System.out.println(cache); 
    cache.put("F", "A"); 
    System.out.println(cache); 
    cache.put("G", "A"); 
} 

Output:

{A=A} 
{A=A, B=A} 
{A=A, B=A, C=A} 
{A=A, B=A, C=A, D=A} 
{B=A, C=A, D=A, E=A} 
{C=A, D=A, E=A, F=A} 

One big caveat:

Note that this implementation is not synchronized. If multiple threads access a linked hash map concurrently, and at least one of the threads modifies the map structurally, it must be externally synchronized. This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:

Map m = Collections.synchronizedMap(new LinkedHashMap(...));

LinkedHashMap JavaDoc
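
Putting the two pieces together for the question's types is left implicit here; a minimal sketch (mine, not from the answer), assuming the MyModel class and the 50,000 limit from the question, might look like this:

import java.time.ZonedDateTime;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedSnapshot {
    private static final int MAX_SIZE = 50_000; // limit taken from the question

    // Insertion-ordered map that drops its eldest entry once the limit is exceeded,
    // wrapped at creation time as the JavaDoc above recommends.
    private final Map<String, MyModel> snapshot = Collections.synchronizedMap(
            new LinkedHashMap<String, MyModel>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, MyModel> eldest) {
                    return size() > MAX_SIZE;
                }
            });

    public void add(String key, MyModel model) {
        snapshot.put(key, model);
    }

    // Minimal stand-in for the model class in the question.
    public static class MyModel {
        ZonedDateTime createdAt;
    }
}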

+1

Hmm, it's definitely on the objects, but I've asked the OP to clarify whether they're also inserted in that order... –

+1

I wonder about a 'SortedMap' for this. It seems no more (possibly less) complicated than maintaining a separate list. I'd probably peek inside 'TreeMap' to see how efficient it is, but this seems like a good idea. –

+1

The OP has confirmed that they **are** inserting in timestamp order! –

0

Simply put: a HashMap won't do that for you. Apart from the obvious approach: iterating over all the values, checking that property, and then deciding which keys you intend to remove.

In other words: a HashMap has exactly one responsibility: mapping keys to values. It doesn't care about insertion order, insertion time, or how often a key is accessed. In that sense: you should look into using other implementations of the Map interface.

An alternative would be to use a TreeSet with a custom comparator that sorts the entries by those timestamps automatically.
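
A rough sketch of that idea (mine, not code from this answer): keep MyModel objects in a TreeSet ordered by the createdAt field from the question and evict the oldest once a size limit is exceeded. The key field here is an assumption, used only as a tie-breaker.

import java.time.ZonedDateTime;
import java.util.Comparator;
import java.util.TreeSet;

public class TimestampBoundedSet {
    private static final int MAX_SIZE = 50_000; // limit taken from the question

    // Oldest element first; the key acts as a tie-breaker so models with equal
    // timestamps are not collapsed into a single entry.
    private final TreeSet<MyModel> models = new TreeSet<>(
            Comparator.comparing((MyModel m) -> m.createdAt)
                      .thenComparing((MyModel m) -> m.key));

    public void add(MyModel model) {
        models.add(model);
        while (models.size() > MAX_SIZE) {
            models.pollFirst(); // evict the oldest entry
        }
    }

    // Minimal stand-in for the model class in the question.
    public static class MyModel {
        final String key;
        final ZonedDateTime createdAt;

        MyModel(String key, ZonedDateTime createdAt) {
            this.key = key;
            this.createdAt = createdAt;
        }
    }
}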

But keep in mind: there are only 2 hard things in computer science:

  1. naming things
  2. cache invalidation
+0

The values in his map contain the time. He doesn't need the map to take care of insertion times etc. – marstran

+0

Like this - http://stackoverflow.com/a/1953516/6348498 – GurV

0

It would probably be simplest to just add the String keys to a list whenever you put something into the map. Then you can do:

while(map.size()>50000){ 
    map.remove(list.get(0)); 
    list.remove(0); 
} 

This works because you don't actually care about the time, only the order.

A Queue would be better than a list here, since you don't need anything more than access to, and removal of, the first element.
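
A minimal sketch of that queue variant (mine, not from this answer); String values stand in for the question's MyModel, and MAX_SIZE for the 50,000 limit:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class QueueBackedEviction {
    private static final int MAX_SIZE = 50_000; // limit taken from the question

    private final Map<String, String> snapshot = new HashMap<>();
    private final Deque<String> insertionOrder = new ArrayDeque<>(); // eldest key at the head

    public void put(String key, String value) {
        if (snapshot.put(key, value) == null) {
            insertionOrder.addLast(key); // track only keys that are new to the map
        }
        while (snapshot.size() > MAX_SIZE) {
            snapshot.remove(insertionOrder.removeFirst());
        }
    }
}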

0

I've modified the LruCache class from the Android framework to do this.

Here is the complete code.

import java.util.LinkedHashMap; 
import java.util.Map; 

public class RemoveOldHashMap<K, V> { 
    private final LinkedHashMap<K, V> map; 
    /** Size of this cache in units. Not necessarily the number of elements. */ 
    private int size; 
    private int maxSize; 
    private int putCount; 
    private int createCount; 
    private int evictionCount; 
    private int hitCount; 
    private int missCount; 
    /** 
    * @param maxSize for caches that do not override {@link #sizeOf}, this is 
    *  the maximum number of entries in the cache. For all other caches, 
    *  this is the maximum sum of the sizes of the entries in this cache. 
    */ 
    public RemoveOldHashMap(int maxSize) { 
     if (maxSize <= 0) { 
      throw new IllegalArgumentException("maxSize <= 0"); 
     } 
     this.maxSize = maxSize; 
     this.map = new LinkedHashMap<K, V>(0, 0.75f, false); // accessOrder = false, so iteration follows insertion order 
    } 
    /** 
    * Returns the value for {@code key} if it exists in the cache or can be 
    * created by {@code #create}. If a value was returned, it is moved to the 
    * head of the queue. This returns null if a value is not cached and cannot 
    * be created. 
    */ 
    public synchronized final V get(K key) { 
     if (key == null) { 
      throw new NullPointerException("key == null"); 
     } 

     for (K k : map.keySet()) { 
      System.out.println("k = " + k); 
     } 

     V result = map.get(key); 

     for (K k : map.keySet()) { 
      System.out.println("k = " + k); 
     } 

     if (result != null) { 
      hitCount++; 
      return result; 
     } 
     missCount++; 
     // TODO: release the lock while calling this potentially slow user code 
     result = create(key); 
     if (result != null) { 
      createCount++; 
      size += safeSizeOf(key, result); 
      map.put(key, result); 
      trimToSize(maxSize); 
     } 
     return result; 
    } 
    /** 
    * Caches {@code value} for {@code key}. The value is moved to the head of 
    * the queue. 
    * 
    * @return the previous value mapped by {@code key}. Although that entry is 
    *  no longer cached, it has not been passed to {@link #entryEvicted}. 
    */ 
    public synchronized final V put(K key, V value) { 
     if (key == null || value == null) { 
      throw new NullPointerException("key == null || value == null"); 
     } 
     putCount++; 
     size += safeSizeOf(key, value); 
     V previous = map.put(key, value); 
     if (previous != null) { 
      size -= safeSizeOf(key, previous); 
     } 
     trimToSize(maxSize); 
     return previous; 
    } 
    private void trimToSize(int maxSize) { 
     while (size > maxSize && !map.isEmpty()) { 
      Map.Entry<K, V> toEvict = map.entrySet().iterator().next(); 
      if (toEvict == null) { 
       break; // map is empty; if size is not 0 then throw an error below 
      } 
      K key = toEvict.getKey(); 
      V value = toEvict.getValue(); 
      map.remove(key); 
      size -= safeSizeOf(key, value); 
      evictionCount++; 
      // TODO: release the lock while calling this potentially slow user code 
      entryEvicted(key, value); 
     } 
     if (size < 0 || (map.isEmpty() && size != 0)) { 
      throw new IllegalStateException(getClass().getName() 
        + ".sizeOf() is reporting inconsistent results!"); 
     } 
    } 
    /** 
    * Removes the entry for {@code key} if it exists. 
    * 
    * @return the previous value mapped by {@code key}. Although that entry is 
    *  no longer cached, it has not been passed to {@link #entryEvicted}. 
    */ 
    public synchronized final V remove(K key) { 
     if (key == null) { 
      throw new NullPointerException("key == null"); 
     } 
     V previous = map.remove(key); 
     if (previous != null) { 
      size -= safeSizeOf(key, previous); 
     } 
     return previous; 
    } 
    /** 
    * Called for entries that have reached the tail of the least recently used 
    * queue and are about to be removed. The default implementation does nothing. 
    */ 
    protected void entryEvicted(K key, V value) {} 
    /** 
    * Called after a cache miss to compute a value for the corresponding key. 
    * Returns the computed value or null if no value can be computed. The 
    * default implementation returns null. 
    */ 
    protected V create(K key) { 
     return null; 
    } 
    private int safeSizeOf(K key, V value) { 
     int result = sizeOf(key, value); 
     if (result < 0) { 
      throw new IllegalStateException("Negative size: " + key + "=" + value); 
     } 
     return result; 
    } 
    /** 
    * Returns the size of the entry for {@code key} and {@code value} in 
    * user-defined units. The default implementation returns 1 so that size 
    * is the number of entries and max size is the maximum number of entries. 
    * 
    * <p>An entry's size must not change while it is in the cache. 
    */ 
    protected int sizeOf(K key, V value) { 
     return 1; 
    } 
    /** 
    * Clear the cache, calling {@link #entryEvicted} on each removed entry. 
    */ 
    public synchronized final void evictAll() { 
     trimToSize(-1); // -1 will evict 0-sized elements 
    } 
    /** 
    * For caches that do not override {@link #sizeOf}, this returns the number 
    * of entries in the cache. For all other caches, this returns the sum of 
    * the sizes of the entries in this cache. 
    */ 
    public synchronized final int size() { 
     return size; 
    } 
    /** 
    * For caches that do not override {@link #sizeOf}, this returns the maximum 
    * number of entries in the cache. For all other caches, this returns the 
    * maximum sum of the sizes of the entries in this cache. 
    */ 
    public synchronized final int maxSize() { 
     return maxSize; 
    } 
    /** 
    * Returns the number of times {@link #get} returned a value. 
    */ 
    public synchronized final int hitCount() { 
     return hitCount; 
    } 
    /** 
    * Returns the number of times {@link #get} returned null or required a new 
    * value to be created. 
    */ 
    public synchronized final int missCount() { 
     return missCount; 
    } 
    /** 
    * Returns the number of times {@link #create(Object)} returned a value. 
    */ 
    public synchronized final int createCount() { 
     return createCount; 
    } 
    /** 
    * Returns the number of times {@link #put} was called. 
    */ 
    public synchronized final int putCount() { 
     return putCount; 
    } 
    /** 
    * Returns the number of values that have been evicted. 
    */ 
    public synchronized final int evictionCount() { 
     return evictionCount; 
    } 
    /** 
    * Returns a copy of the current contents of the cache, ordered from least 
    * recently accessed to most recently accessed. 
    */ 
    public synchronized final Map<K, V> snapshot() { 
     return new LinkedHashMap<K, V>(map); 
    } 
    @Override public synchronized final String toString() { 
     int accesses = hitCount + missCount; 
     int hitPercent = accesses != 0 ? (100 * hitCount/accesses) : 0; 
     return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]", 
       maxSize, hitCount, missCount, hitPercent); 
    } 
} 

How to use it: in my example, I map String values under Integer keys. The limit is 2 objects, but you should change that to meet your goal.

RemoveOldHashMap<Integer, String> hash = new RemoveOldHashMap<Integer, String>(2 /* the maximum size the internal counter may reach */) { 
    // Override to tell how your object is measured 
    @Override 
    protected int sizeOf(Integer key, String value) { 
     return 1; // the size of your object 
    } 
}; 

Reference: LruCache
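
For completeness, a quick hypothetical usage check of the class above (the keys and values are made up). Because the backing LinkedHashMap is created with accessOrder = false, iteration follows insertion order, so the first key inserted is the first one evicted:

public class RemoveOldHashMapDemo {
    public static void main(String[] args) {
        RemoveOldHashMap<Integer, String> hash = new RemoveOldHashMap<>(2);
        hash.put(1, "one");
        hash.put(2, "two");
        hash.put(3, "three"); // exceeds maxSize = 2, so the oldest entry (key 1) is evicted
        System.out.println(hash.snapshot()); // {2=two, 3=three}
    }
}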