Application performance depends heavily on how data is loaded into memory and how it is reused during processing. If an application runs a database query for every piece of static client data that rarely changes, it is worth putting that data in an in-memory cache; reusing it from memory improves performance drastically. Plenty of open source cache implementations are available, such as Ehcache, OSCache, Terracotta, and the Guava library, but in some cases they are more complicated than the application requires. Since the concurrent collections arrived in Java 1.5, it has become quite easy to write your own cache. For a static data cache we want to control the memory footprint, and the LRU (Least Recently Used) idiom is the usual way to do it: we fix the size of the cache and evict the oldest entries once the cache grows beyond that size. Here is an example of an LRU cache backed by a ConcurrentHashMap, with eviction order tracked by a ConcurrentLinkedQueue.
Caching Implementation without concurrency using wait() and notify()
A cache simply means loading data by some key and keeping it in memory. If the application runs in clustered mode, one available implementation is Coherence, which can isolate interaction with the database and provide failover and synchronization between the caches of multiple application instances.
Here is an example of how we can build an LRU cache in Java using the concurrent package. The implementation would have been fragile and blocking had we used the old synchronized idiom. Cache data is backed by a ConcurrentHashMap, so reads from the cache are not blocked and writes are concurrent: only the affected buckets are locked during a write operation.
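The original code listing is not included here, so the following is a minimal sketch of the approach described above: a `ConcurrentHashMap` holds the data, and a `ConcurrentLinkedQueue` tracks recency of use (head = least recently used). The class name `LruCache` and its method names are illustrative, not the author's.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LruCache<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final ConcurrentLinkedQueue<K> queue = new ConcurrentLinkedQueue<>();

    public LruCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        V value = map.get(key);
        if (value != null) {
            // Move the key to the tail so it counts as most recently used.
            queue.remove(key);
            queue.add(key);
        }
        return value;
    }

    public void put(K key, V value) {
        V previous = map.put(key, value);
        if (previous != null) {
            queue.remove(key); // existing key: refresh its recency below
        }
        queue.add(key);
        while (map.size() > capacity) {
            K oldest = queue.poll(); // head of the queue = least recently used
            if (oldest != null) {
                map.remove(oldest);
            }
        }
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");       // "a" becomes most recently used
        cache.put("c", "3");  // evicts "b", the least recently used
        System.out.println(cache.get("b")); // prints null
        System.out.println(cache.get("a")); // prints 1
    }
}
```

Note that `queue.remove(key)` is a linear scan, and the two collections are not updated atomically, so this is a best-effort sketch rather than a strictly consistent LRU under heavy contention.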
Here the queue is used just to check the size, right? If so, we could use a LinkedHashMap, which can be given a fixed size.
Yes, we can, but the issue is that LinkedHashMap is not thread safe. Ideally we could use ConcurrentLinkedHashMap [https://code.google.com/p/concurrentlinkedhashmap/], which is provided as a Google library.
Can we not use a LinkedHashMap alone? The ConcurrentLinkedQueue may be creating extra objects.
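For reference, the LinkedHashMap-only approach suggested in this comment can be sketched as follows: `LinkedHashMap` with access order enabled evicts via `removeEldestEntry`, and `Collections.synchronizedMap` compensates for its lack of thread safety (at the cost of locking every operation). The `create` helper and class name are illustrative.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedLruCache {
    public static <K, V> Map<K, V> create(final int capacity) {
        // accessOrder = true makes iteration order least- to most-recently used.
        LinkedHashMap<K, V> lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the eldest (least recently used) entry once over capacity.
                return size() > capacity;
            }
        };
        return Collections.synchronizedMap(lru);
    }

    public static void main(String[] args) {
        Map<String, String> cache = create(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");       // access refreshes "a"
        cache.put("c", "3");  // evicts "b"
        System.out.println(cache.containsKey("b")); // prints false
    }
}
```

Unlike the ConcurrentHashMap version, every read and write here takes the same lock, so this trades concurrency for simplicity.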
Nowadays the Guava cache is the most popular. With the Google Guava library you can build a cache in a few easy steps; see a full example:
http://www.javaproficiency.com/2015/05/guava-cache-memory-example.html
It's not an LRU cache: LRU means least recently *used*, not least recently put. Also, I can't see how your "concurrent" version is thread safe. You can get very odd behavior if multiple threads put a value for the same key.
Please note: there's a serious bug in JDK 8 that leads to memory issues and high CPU usage, because threads get stuck in ConcurrentLinkedQueue#remove.
https://bugs.openjdk.java.net/browse/JDK-8054446
My workaround is to use a ConcurrentLinkedDeque instead.
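The workaround in this comment can be illustrated as a drop-in swap: `ConcurrentLinkedDeque` implements `Deque` (and hence `Queue`), so code that only uses `add`, `poll`, and `remove` can switch implementations without other changes. The snippet below is a minimal illustration, not from the original post.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedDeque;

public class DequeWorkaround {
    public static void main(String[] args) {
        // Same Queue-typed variable; only the constructed class changes.
        Queue<String> queue = new ConcurrentLinkedDeque<>();
        queue.add("a");
        queue.add("b");
        // remove(Object) is the call affected by JDK-8054446 on
        // ConcurrentLinkedQueue; the deque is not subject to that bug.
        queue.remove("a");
        System.out.println(queue.poll()); // prints b
    }
}
```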