Heap size issue in Hive Metastore

Category: Hive

Sometimes a running job fails with heap-size errors, and the cause can be the Metastore heap: the Hive Metastore is hitting OutOfMemory errors, or its configured heap is known to be too small for the cluster workload.
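
Typically this surfaces in the Metastore log, or in the failing job's error output, as the standard JVM heap-exhaustion message, for example:

java.lang.OutOfMemoryError: Java heap space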

Resolution: To fix this issue, increase the heap size for the Metastore in the hive-env.sh (or hive-env.cmd) file.

  • If the cluster is managed by Ambari, edit the configuration for hive-env.sh (or hive-env.cmd) on the Configuration tab for the Hive service.
  • If the cluster is not managed by Ambari, edit the file directly and distribute the updated copy to every node in the cluster (see the sketch after these steps).
  • Locate the line that looks like this:

export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m $HADOOP_CLIENT_OPTS ${HIVEMETASTORE_JMX_OPTS}"

  • Change this line (the one that sets HADOOP_CLIENT_OPTS) to declare the requested heap size. For example, to allocate up to 2048 MB of heap, change it to this:

export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS ${HIVEMETASTORE_JMX_OPTS} -Xmx2048m"
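
If you are distributing the edited file by hand, here is a minimal sketch; the config path /etc/hive/conf/hive-env.sh and the cluster_hosts.txt host list are assumptions, so adjust them for your layout:

# Copy the edited hive-env.sh to every node listed in cluster_hosts.txt (hypothetical host list)
for host in $(cat cluster_hosts.txt); do
  scp /etc/hive/conf/hive-env.sh "$host":/etc/hive/conf/hive-env.sh
done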
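
To confirm the change took effect, restart the Metastore and check the flags on its JVM command line. A minimal sketch, assuming the standard HiveMetaStore process name (restart commands vary by distribution):

# Print the effective -Xmx flags of the running Metastore process
ps -ef | grep '[H]iveMetaStore' | grep -o -e '-Xmx[0-9]*[mMgG]'

Because the JVM honors the last -Xmx it sees, the new value is appended at the end of the line so it overrides any earlier -Xmx in $HADOOP_CLIENT_OPTS.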

 

