Category Archives: atlas


Hive metastore critical alerts with ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf'

When you install and configure Atlas, you may see the following alert in the Ambari Hive service.

Once you check the alert details, you will see the following error:

Metastore on m1.hdp22 failed (Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/", line 200, in execute
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 155, in __init__
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 124, in run_action
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf ; hive --hiveconf hive.metastore.uris=thrift://m1.hdp22:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e 'show databases;'' returned 1. log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/
Exception in thread "main" java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(
at org.apache.atlas.hive.hook.HiveHook.initialize(
at org.apache.atlas.hive.hook.HiveHook.<init>(
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
at java.lang.reflect.Constructor.newInstance(
at java.lang.Class.newInstance(
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(
at org.apache.hadoop.hive.ql.Driver.getHooks(
at org.apache.hadoop.hive.ql.Driver.getHooks(
at org.apache.hadoop.hive.ql.Driver.execute(
at org.apache.hadoop.hive.ql.Driver.runInternal(
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(
at org.apache.hadoop.hive.cli.CliDriver.processCmd(
at org.apache.hadoop.hive.cli.CliDriver.processLine(
at org.apache.hadoop.hive.cli.CliDriver.processLine(
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(
at org.apache.hadoop.hive.cli.CliDriver.main(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.apache.hadoop.util.RunJar.main(
Caused by: java.lang.NullPointerException
at org.apache.atlas.hook.AtlasHook.<clinit>(
… 29 more

Root Cause: This happens when Atlas is installed on a server that does not have the Hive client, while org.apache.atlas.hive.hook.HiveHook is still configured in the Hive hook property (hive.exec.post.hooks). The hook class cannot initialize without the client libraries.

Solution: To get rid of this alert, you could remove the hook parameter from the Hive configuration, but since we are using Atlas we cannot delete it. The other option is to install the Hive client on the same server where the Atlas server runs.
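As a quick check before touching anything (a sketch; that `hive` is on the PATH after a standard HDP client install is an assumption), you can confirm whether the Hive client is actually present on the Atlas host:

```shell
# Check for the Hive client on this host; finding "hive" on the PATH is an
# assumption that holds for a standard HDP client install.
if command -v hive >/dev/null 2>&1; then
  hive_status="present"
else
  hive_status="missing"
fi
echo "hive client ${hive_status} on $(hostname)"
# If missing, add the Hive Client to this host from Ambari
# (Hosts -> <atlas host> -> +Add -> Hive Client), then re-check the alert.
```

If the client is missing, installing it from Ambari keeps the component tracked and upgradable, which is preferable to copying libraries by hand.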


Please feel free to give your valuable feedback to improve articles.


Sqoop import is failing after enabling atlas with ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration

When you run a Sqoop import against Teradata, MySQL, or Oracle, it might fail after you install and enable Atlas in your cluster, with the following error.
17/08/10 04:31:56 ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]
17/08/10 04:31:58 INFO checking on the exit code
17/08/10 04:31:58 ERROR:Error with sqoop command :17/08/10 04:31:56 ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]

Root Cause:

This issue is caused by Atlas authentication settings when Kerberos is not enabled in the cluster. The following parameters are set to true, causing the problem; you can check them in /etc/sqoop/

[s0998dnz@m1.hdp22 ~]$ cat /etc/sqoop/
# Generated by Apache Ambari. Tue Aug 22 06:00:47 2017


Solution: If you do not have Kerberos enabled in your cluster, set these parameters to false. Sometimes setting them to false does not work; in that case you have to delete the properties from Ambari using the method below.

Option 1: Manually edit the file and change the above-mentioned properties to false.
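Option 1 can be scripted with `sed`. The block below is a sketch demonstrated against a scratch copy so the edit can be verified safely; on a real host, point `f` at the Sqoop copy of the Atlas application properties shown above and keep a backup (e.g. `sed -i.bak`):

```shell
# Demo on a scratch file; on a real host set "f" to the Sqoop copy of the
# Atlas application properties (under /etc/sqoop/conf) and back it up first.
f=$(mktemp)
cat > "$f" <<'EOF'
atlas.jaas.KafkaClient.option.renewTicket=true
atlas.jaas.KafkaClient.option.useTicketCache=true
EOF
# Flip both Kafka JAAS options to false, leaving other lines untouched.
sed -i \
  -e 's/^\(atlas\.jaas\.KafkaClient\.option\.renewTicket\)=.*/\1=false/' \
  -e 's/^\(atlas\.jaas\.KafkaClient\.option\.useTicketCache\)=.*/\1=false/' \
  "$f"
cat "$f"
```

Note that Ambari may regenerate this file on service restart, which is why Option 2 (removing the properties from Ambari itself) is the durable fix.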


If it still fails after that, remove these properties from Ambari as shown below:

Option 2: Log in to the Ambari server and remove both parameters by running the commands below:

/var/lib/ambari-server/resources/scripts/ -u admin -p admin delete localhost <Your cluster Name> atlas.jaas.KafkaClient.option.renewTicket 

/var/lib/ambari-server/resources/scripts/ -u admin -p admin delete localhost <Your cluster Name> atlas.jaas.KafkaClient.option.useTicketCache


/usr/hdp/ is failing with Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes

When you have installed Atlas on top of your cluster and you want to sync your Hive data to Atlas using the following method, you may see the following error after your command has been running for some time (~20-30 minutes).

[hive@m1.hdp22 ~]$ export HADOOP_CLASSPATH=`hadoop classpath`
[hive@m1.hdp22 ~]$ export HIVE_CONF_DIR=/etc/hive/conf
[hive@m1.hdp22 ~]$ /usr/hdp/
Using Hive configuration directory [/etc/hive/conf]
Log file for import is /usr/hdp/
Enter username for atlas :- saurkuma
Enter password for atlas :-

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes
at org.apache.hadoop.hive.hbase.HBaseSerDe.parseColumnsMapping(
at org.apache.hadoop.hive.hbase.HBaseSerDeParameters.<init>(
at org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(
at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(
at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(
at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(
at org.apache.hadoop.hive.ql.metadata.Table.getCols(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.createOrUpdateTableInstance(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.createTableInstance(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerTable(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTable(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTables(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Bytes
at java.lang.ClassLoader.loadClass(
at sun.misc.Launcher$AppClassLoader.loadClass(
at java.lang.ClassLoader.loadClass(
… 19 more
Failed to import Hive Data Model!!!


Root Cause: This issue appears to be a bug, so you need to apply a hot fix on the Hive side.

Resolution: To apply the hot fix, download the attached jar file (hive-metastore-1.2.1000. from the given URL, then follow the steps below to replace the jar.

Steps to apply this hot fix:
1. On the HiveServer2 and Hive Metastore hosts, back up the existing hive-metastore jar from /usr/hdp/ to another location.
2. Download the fixed jar and copy it to the same location.
3. Restart Hive Metastore and HiveServer2.
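The three steps can be sketched in shell. Every path and jar name below is a stand-in (the block defaults to scratch directories so the flow itself is safe to try anywhere); on a real host, set hive_lib to the actual Hive lib directory for your HDP version and hotfix_jar to the downloaded jar:

```shell
# Stand-in setup so the flow is runnable anywhere (demo only); on a cluster
# host, set hive_lib and hotfix_jar to the real locations instead.
hive_lib=${hive_lib:-$(mktemp -d)}
hotfix_jar=${hotfix_jar:-$(mktemp)}
touch "$hive_lib/hive-metastore-old.jar"                # stands in for the current jar

backup=$(mktemp -d)
mv "$hive_lib"/hive-metastore-*.jar "$backup"/          # 1. back up the current jar
cp "$hotfix_jar" "$hive_lib/hive-metastore-hotfix.jar"  # 2. put the fixed jar in place
# 3. restart Hive Metastore and HiveServer2 (e.g. from the Ambari UI)
```

Moving (rather than deleting) the old jar matters: if the hot fix misbehaves you can roll back by restoring the backup and restarting the services again.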


Please feel free to give your valuable feedback or suggestion to improve article.


Atlas Metadata Server error HTTP 503 response from http://localhost:21000/api/atlas/admin/status in 0.000s (HTTP Error 503: Service Unavailable)

If you are not able to access your Atlas portal, or you see the following error in your browser or logs:

HTTP 503 response from http://localhost:21000/api/atlas/admin/status in 0.000s (HTTP Error 503: Service Unavailable)

Then check the application.log file in /var/log/atlas. If you see the following error in the logs, do not worry; follow the given steps and you will resolve it easily.

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userService': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.apache.atlas.web.dao.UserDao org.apache.atlas.web.service.UserService.userDao; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userDao': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.atlas.AtlasException: /usr/hdp/current/atlas-server/conf/ not found in file system or as class loader resource


/usr/hdp/current/atlas-server/conf/policy-store.txt not found in file system or as class loader resource


Step 1: Log in as the atlas user (or sudo to atlas), go to the /usr/hdp/current/atlas-server/conf/ directory, and create these files.

[s0998dnz@m1 ~]$ sudo su - atlas

[atlas@m1 ~]$ cd /usr/hdp/current/atlas-server/conf/

[atlas@m1 conf]$ touch

[atlas@m1 conf]$ touch policy-store.txt

Step 2: Now update the files according to your requirements. The format of each line is "username=group::sha256-password".
For example, in my case I have the following:



Note: The password is encoded with the sha256 method and can be generated using a Unix tool.

For e.g.

echo -n "Password" | sha256sum
e7cf3ef4f17c3999a94f2c6f612e8a888e5b1026878e4e19398b23bd38ec221a –
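Putting the pieces together, a full credentials line can be generated in one go. The username admin and group ROLE_ADMIN below are illustrative values, not from this article; only the hash of "Password" comes from the example above:

```shell
# Build one "username=group::sha256-password" line for
# admin / ROLE_ADMIN are example values - substitute your own.
user=admin
group=ROLE_ADMIN
hash=$(echo -n "Password" | sha256sum | awk '{print $1}')
echo "${user}=${group}::${hash}"
```

Append the resulting line to and keep the file readable only by the atlas user, since it holds password hashes.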

And policy-store.txt should have the following values.

The policy store file format is as follows:

e.g. my policy file:


Now restart Atlas, and you should be good to go.