Most Important topic to understand for Java Interviews

It is amazing how much interest interviewers take in the HashMap implementation. Whatever other topics we cover in this blog, they are of little use if we do not include the HashMap implementation. In almost every one of my recent Java interviews I was asked about it. Below are the questions most commonly asked of me in recent Java interviews.

1) How is HashMap implemented?
Answer : HashMap is a collection that stores key-value pairs. It is implemented on the basis of a hashing algorithm. HashMap creates an array of buckets of a specified size. Each time a key-value pair is stored in the HashMap, it computes the hashcode of the key, uses that hashcode to figure out the specific bucket for the key, and stores the key-value pair in that bucket.
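The bucket-selection step can be sketched as below. This is a simplified illustration, not the actual java.util.HashMap source (the class name and helper method here are made up for the example), but the real implementation uses a similar trick: spread the hashcode's bits, then mask it down to a bucket index when the capacity is a power of two.

```java
import java.util.HashMap;
import java.util.Map;

public class BucketDemo {
    // Simplified sketch of how a hash map picks a bucket for a key:
    // spread the hashcode, then mask it to an index in [0, capacity).
    static int bucketIndex(Object key, int capacity) {
        int h = key.hashCode();
        h = h ^ (h >>> 16);          // mix high bits into low bits
        return h & (capacity - 1);   // assumes capacity is a power of two
    }

    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("John", 30);        // key hashed -> bucket chosen -> entry stored
        // For a 16-bucket table the index is always in the range 0..15:
        System.out.println(bucketIndex("John", 16));
    }
}
```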

2) How does retrieval work in HashMap?
Answer : HashMap retrieval is based on the hashcode of the key. Whenever we retrieve a value from a HashMap, we pass the key to get the associated value. HashMap calls the key's hashCode() method, and from the resulting hashcode it locates the bucket where the entry is stored. Once it has found the bucket, it finds the exact entry for that key within the bucket and returns the associated value. Here we can see why overriding the hashCode() method is so important for any application-specific key object.
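A quick usage example of the retrieval path described above (the map contents here are just illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class RetrievalDemo {
    public static void main(String[] args) {
        Map<String, String> capitals = new HashMap<>();
        capitals.put("France", "Paris");
        // get() hashes "France", locates its bucket, then compares keys
        // with equals() to find the matching entry and return its value.
        String value = capitals.get("France");
        System.out.println(value); // Paris
        // A key that was never stored yields null:
        System.out.println(capitals.get("Spain")); // null
    }
}
```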

3) What if two keys return the same hashcode from hashCode()? How are they stored, and how are their values retrieved?
Answer : This is a very interesting question, and if you can answer it, it means you are very clear about HashMap internals. Let's take an example where the keys 'John Smith' and 'Sandra Dee' return the same hashcode.

Let's understand how keys and values are stored in a HashMap:
  In a HashMap, data is stored as Map.Entry objects, which is what allows the HashMap to return the value associated with a key. Here are the methods defined in the Map.Entry interface; each of them is key to understanding HashMap internals.


Interface Map.Entry - return type : method() : description

boolean : equals(Object o) : Compares the specified object with this entry for equality.
K : getKey() : Returns the key corresponding to this entry.
V : getValue() : Returns the value corresponding to this entry.
int : hashCode() : Returns the hash code value for this map entry.
V : setValue(V value) : Replaces the value corresponding to this entry with the specified value (optional operation).

In the example above, the Map.Entry objects for keys with the same hashcode land in the same bucket, and within that bucket they are stored as a LinkedList. All keys that share a hashcode end up in this LinkedList. This situation, where we store key-value pairs whose keys have the same hashcode, is called a hash collision.

Let's see how values are retrieved from a HashMap when keys share the same hashcode. Once we understand that colliding entries are stored in a LinkedList, retrieval is easy to explain: the right entry is found using the key's equals() method. When we pass such a key, the HashMap figures out the bucket and then walks the LinkedList in that bucket, using the key's equals() method against each stored key. It returns the value of the Map.Entry whose key's equals() returns true. This is why it is very important to implement a proper equals() method in the key object.
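The collision scenario above can be demonstrated with a small hypothetical key class whose hashCode() deliberately returns the same value for every instance, forcing all entries into one bucket. The class name and phone-book values here are made up for the example; equals() alone still resolves the right entry:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Hypothetical key class: every instance returns the same hashcode,
    // so all entries collide into a single bucket's list.
    static final class Name {
        final String value;
        Name(String value) { this.value = value; }

        @Override public int hashCode() { return 42; } // deliberate collision
        @Override public boolean equals(Object o) {
            return o instanceof Name && ((Name) o).value.equals(this.value);
        }
    }

    public static void main(String[] args) {
        Map<Name, String> phoneBook = new HashMap<>();
        phoneBook.put(new Name("John Smith"), "521-1234");
        phoneBook.put(new Name("Sandra Dee"), "521-9655");
        // Same bucket, but equals() picks out the correct entry:
        System.out.println(phoneBook.get(new Name("Sandra Dee"))); // 521-9655
    }
}
```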

4) Why is a proper hashCode() implementation important for HashMap performance?
Answer : As we saw in the previous question, keys that return the same hashcode are stored in a LinkedList in the same bucket, and on retrieval we traverse that LinkedList, comparing keys with equals(). In that case the performance of HashMap degrades to O(n), and we miss the O(1) lookup that HashMap normally gives us when keys are evenly distributed across buckets.

5) Recently I came across this interesting question : we have a HashMap of fixed size 1000 in our application, during peak load the number of key-value pairs grows to 10,000, and we start seeing the HashMap's performance degrade. Please explain the possible reason for this.
Answer : Here we have to understand another pair of HashMap concepts: initial capacity and load factor. Below are the 3 most commonly used constructors for HashMap. In the default constructor, where neither the initialCapacity nor the load factor is specified, these values are set to 16 and 0.75. That means we have 16 bucket locations to store key-value pairs, and the HashMap resizes itself when the number of entries reaches 0.75 times the capacity. So once we go past 12 entries (16 * 0.75), the HashMap doubles its capacity to 32. In this process it recreates the buckets and rehashes every key-value pair, one by one, into the resized table. This is a resource-consuming process, and done repeatedly it will definitely degrade HashMap performance.


HashMap()
     default initial capacity (16) and the default load factor (0.75).
HashMap(int initialCapacity)
    specified initial capacity and the default load factor (0.75).
HashMap(int initialCapacity, float loadFactor)
    specified initial capacity and load factor.

To overcome this issue we can choose a proper initialCapacity for the HashMap, so that it never needs to resize.
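A sketch of that fix: size the capacity from the expected entry count and the default load factor, so the resize threshold is never crossed. The 10,000-entry figure matches the scenario in the question; the sizing formula is the standard capacity >= expected / loadFactor rule.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        // Expecting ~10,000 entries with the default load factor of 0.75:
        // pick a capacity large enough that 10,000 entries never trigger a resize.
        int expectedEntries = 10_000;
        int initialCapacity = (int) (expectedEntries / 0.75f) + 1;

        Map<Integer, String> map = new HashMap<>(initialCapacity);
        for (int i = 0; i < expectedEntries; i++) {
            map.put(i, "value-" + i);   // no rehashing along the way
        }
        System.out.println(map.size()); // 10000
    }
}
```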

6) Why is it important to override the hashCode() and equals() methods of a HashMap key?
Answer : In HashMap, the key's hashCode() method is used to place the key-value pair in a specific bucket, and it is used again during retrieval. The equals() method matters when several key objects return the same hashcode: it is then used to pick out the exact key object in the bucket's LinkedList, as described in the previous question.
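A minimal sketch of a well-behaved key class (the EmployeeId name and fields are hypothetical): equals() and hashCode() are overridden consistently, so two logically equal keys always hash to the same bucket and lookups with a fresh instance still succeed.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class KeyDemo {
    // Hypothetical key class with a consistent equals()/hashCode() pair:
    // equal objects are guaranteed to produce equal hashcodes.
    static final class EmployeeId {
        final String department;
        final int number;

        EmployeeId(String department, int number) {
            this.department = department;
            this.number = number;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof EmployeeId)) return false;
            EmployeeId other = (EmployeeId) o;
            return number == other.number && department.equals(other.department);
        }

        @Override public int hashCode() {
            return Objects.hash(department, number);
        }
    }

    public static void main(String[] args) {
        Map<EmployeeId, String> names = new HashMap<>();
        names.put(new EmployeeId("IT", 7), "John Smith");
        // A logically equal key (a brand-new instance) still finds the entry:
        System.out.println(names.get(new EmployeeId("IT", 7))); // John Smith
    }
}
```

If either method were left un-overridden, the default identity-based equals()/hashCode() from Object would make a freshly constructed but logically equal key miss the stored entry.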

7) What is the difference between HashMap and Hashtable?
Answer : The main differences between the two are: HashMap is not thread-safe, and HashMap allows a null key and null values. We use HashMap in applications where we are not worried about concurrent updates to the map.
   Hashtable is fine where we do not have frequent concurrent reads and writes. Hashtable can become a bottleneck in an application because the whole table is blocked whenever we read from or write to it: Hashtable holds a single lock on the entire data structure. This problem was addressed in Java 1.5, which introduced a concurrent implementation, ConcurrentHashMap. It does not lock the whole map; instead it uses lock striping, where locks are held on individual groups of buckets.
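A short usage example of ConcurrentHashMap under concurrent writes (the counter scenario is illustrative): two threads update the same key safely using the atomic merge() method, with no map-wide lock.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentDemo {
    public static void main(String[] args) throws InterruptedException {
        // ConcurrentHashMap allows concurrent reads and fine-grained
        // locking on writes, unlike Hashtable's single map-wide lock.
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic per-key update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counts.get("hits")); // 2000
    }
}
```

With a plain HashMap the same two-thread increment would lose updates (or worse, corrupt the map during a resize), which is exactly the race-condition hazard mentioned above.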


  • 7 comments:

    1. "And they are stored as LinkedList. All keys which are having same hashcode will be stored in this LinkedList and This concept is called Hash collision"

      The collision-handling strategy is known as "separate chaining".

      PS: One of my favourite interview questions: when can a race condition occur while using HashMap? :)
    2. Yes, there are potential race conditions with HashMap:

      - When two threads resize a HashMap at the same time.

      - When a collision happens. A collision can occur when two keys map to the same bucket, even if they have different hashcodes. During conflict resolution there can be a race condition, and a key-value pair added by one thread could be overwritten by a pair inserted by another thread.