Category Archives: atlas


Hive metastore critical alerts with ExecutionFailed: Execution of 'export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf

When you install and configure Atlas, you may see the following alert in the Ambari Hive service.

Once you check the alert details, you will see the following error:

Metastore on m1.hdp22 failed (Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_metastore.py", line 200, in execute
timeout_kill_strategy=TerminateStrategy.KILL_PROCESS_TREE,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'export HIVE_CONF_DIR='/usr/hdp/current/hive-metastore/conf' ; hive --hiveconf hive.metastore.uris=thrift://m1.hdp22:9083 --hiveconf hive.metastore.client.connect.retry.delay=1 --hiveconf hive.metastore.failure.retries=1 --hiveconf hive.metastore.connect.retries=1 --hiveconf hive.metastore.client.socket.timeout=14 --hiveconf hive.execution.engine=mr -e 'show databases;'' returned 1. log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.1.0-129/0/hive-log4j.properties
Exception in thread "main" java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.atlas.hive.hook.HiveHook.initialize(HiveHook.java:71)
at org.apache.atlas.hive.hook.HiveHook.<init>(HiveHook.java:41)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1386)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1370)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1598)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1291)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1158)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1148)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:217)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:315)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:712)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.NullPointerException
at org.apache.atlas.hook.AtlasHook.<clinit>(AtlasHook.java:74)
… 29 more
)

Root Cause: This happens when Atlas is installed on a server that does not have the Hive client, while the hive.exec.post.hooks Hive property still points to org.apache.atlas.hive.hook.HiveHook.

Solution: To get rid of this alert we could remove org.apache.atlas.hive.hook.HiveHook from the hive.exec.post.hooks property, but since we are using Atlas we cannot delete it. The other option is to install the Hive client on the same server that hosts the Atlas server.
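To confirm this on the Atlas host, you can check whether the hook is configured and whether a Hive client binary is present. A minimal sketch (paths assume the standard HDP layout used in this article; the prompt is illustrative):

[root@m1 ~]# grep -A1 hive.exec.post.hooks /etc/hive/conf/hive-site.xml
[root@m1 ~]# which hive || echo "hive client is not installed on this host"

If the second command reports that the client is missing, add the Hive Client component to this host from the Ambari Hosts page, and the alert should clear after the next check.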


Please feel free to give your valuable feedback to improve these articles.



Sqoop import fails after enabling Atlas with ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration

When you run a Sqoop import against Teradata, MySQL, or Oracle, it may fail with the following error after Atlas has been installed and enabled in your cluster.
17/08/10 04:31:56 ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]
17/08/10 04:31:58 INFO checking on the exit code
17/08/10 04:31:58 ERROR:Error with sqoop command :17/08/10 04:31:56 ERROR security.InMemoryJAASConfiguration: Unable to add JAAS configuration for client [KafkaClient] as it is missing param [atlas.jaas.KafkaClient.loginModuleName]. Skipping JAAS config for [KafkaClient]

Root Cause:

This issue is caused by Atlas authentication settings when you have not enabled Kerberos. The following parameters are set to true and cause the problem; you can check them in /etc/sqoop/2.6.1.0-129/0/atlas-application.properties:
atlas.jaas.KafkaClient.option.renewTicket=true
atlas.jaas.KafkaClient.option.useTicketCache=true

[s0998dnz@m1.hdp22 ~]$ cat /etc/sqoop/2.6.1.0-129/0/atlas-application.properties
# Generated by Apache Ambari. Tue Aug 22 06:00:47 2017

atlas.authentication.method.kerberos=False
atlas.cluster.name=HDPPROD
atlas.jaas.KafkaClient.option.renewTicket=true
atlas.jaas.KafkaClient.option.useTicketCache=true
atlas.kafka.bootstrap.servers=m2.hdp22:6667
atlas.kafka.hook.group.id=atlas
atlas.kafka.security.protocol=PLAINTEXT
atlas.kafka.zookeeper.connect=m1.hdp22:2181,m2.hdp22:2181,m3.hdp22:2181
atlas.kafka.zookeeper.connection.timeout.ms=30000
atlas.kafka.zookeeper.session.timeout.ms=60000
atlas.kafka.zookeeper.sync.time.ms=20
atlas.notification.create.topics=True
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.rest.address=http://m1.hdp22:21000

Solution: If you do not have Kerberos enabled in your cluster, set these properties to false. If setting them to false does not help, you have to delete the properties from Ambari using the method below.

Option 1: Manually edit the atlas-application.properties file and change the above-mentioned properties to false.

atlas.jaas.KafkaClient.option.renewTicket=false
atlas.jaas.KafkaClient.option.useTicketCache=false
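If you want to script this change instead of editing by hand, a minimal sed sketch (back up the file first; the path contains your HDP version, so adjust it to your environment):

cp /etc/sqoop/2.6.1.0-129/0/atlas-application.properties /tmp/atlas-application.properties.bak
sed -i 's/^atlas.jaas.KafkaClient.option.renewTicket=true$/atlas.jaas.KafkaClient.option.renewTicket=false/' /etc/sqoop/2.6.1.0-129/0/atlas-application.properties
sed -i 's/^atlas.jaas.KafkaClient.option.useTicketCache=true$/atlas.jaas.KafkaClient.option.useTicketCache=false/' /etc/sqoop/2.6.1.0-129/0/atlas-application.properties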

If it still fails, remove these properties from Ambari as shown below:

Option 2: Log in to the Ambari server and remove both parameters by running the commands below:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin delete localhost <Your cluster Name> sqoop-atlas-application.properties atlas.jaas.KafkaClient.option.renewTicket 

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin delete localhost <Your cluster Name> sqoop-atlas-application.properties atlas.jaas.KafkaClient.option.useTicketCache
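Afterwards you can verify that both parameters are gone; configs.sh also supports a get action:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost <Your cluster Name> sqoop-atlas-application.properties | grep KafkaClient

No KafkaClient option lines in the output means the removal worked. Then restart the affected components from Ambari so the updated config is pushed out to the client hosts.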


/usr/hdp/2.6.1.0-129/atlas/hook-bin/import-hive.sh is failing with Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes

When you have installed Atlas on top of your cluster and you sync your existing Hive data to Atlas using the following method, you may see the error below after the command has been running for a while (~20-30 minutes).

[hive@m1.hdp22 ~]$ export HADOOP_CLASSPATH=`hadoop classpath`
[hive@m1.hdp22 ~]$ export HIVE_CONF_DIR=/etc/hive/conf
[hive@m1.hdp22 ~]$ /usr/hdp/2.6.1.0-129/atlas/hook-bin/import-hive.sh
Using Hive configuration directory [/etc/hive/conf]
Log file for import is /usr/hdp/2.6.1.0-129/atlas/logs/import-hive.log
Enter username for atlas :- saurkuma
Enter password for atlas :-

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Bytes
at org.apache.hadoop.hive.hbase.HBaseSerDe.parseColumnsMapping(HBaseSerDe.java:184)
at org.apache.hadoop.hive.hbase.HBaseSerDeParameters.<init>(HBaseSerDeParameters.java:73)
at org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:117)
at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54)
at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:521)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:410)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:397)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:278)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:260)
at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:630)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:613)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.createOrUpdateTableInstance(HiveMetaStoreBridge.java:488)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.createTableInstance(HiveMetaStoreBridge.java:424)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerTable(HiveMetaStoreBridge.java:505)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTable(HiveMetaStoreBridge.java:289)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importTables(HiveMetaStoreBridge.java:272)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:143)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:134)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:647)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Bytes
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
… 19 more
Failed to import Hive Data Model!!!


Root Cause: This issue appears to be a bug, so you need to apply a hotfix on the Hive side.

Resolution: To apply the hotfix, download the attached jar file (hive-metastore-1.2.1000.2.6.1.0-129.jar) from the URL below, then follow the steps to replace the jar.

https://github.com/hadoopBrogrammers/hadoop-commander/blob/master/hive-metastore-1.2.1000.2.6.1.0-129.jar

Steps to apply this hotfix (see the sketch after this list):
1. Back up the hive-metastore jar from /usr/hdp/2.6.1.0-129/hive/lib to a safe location on the HiveServer2 and Hive Metastore servers.
2. Download the new jar and copy it to the same location.
3. Restart Hive Metastore and HiveServer2.
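On each HiveServer2 / Hive Metastore host the commands look roughly like this (a sketch; /tmp/hive-metastore-1.2.1000.2.6.1.0-129.jar as the download location is an assumption):

cp /usr/hdp/2.6.1.0-129/hive/lib/hive-metastore-1.2.1000.2.6.1.0-129.jar /root/hive-metastore-1.2.1000.2.6.1.0-129.jar.bak
cp /tmp/hive-metastore-1.2.1000.2.6.1.0-129.jar /usr/hdp/2.6.1.0-129/hive/lib/

Then restart Hive Metastore and HiveServer2 from Ambari.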


Please feel free to give your valuable feedback or suggestions to improve this article.



Atlas Metadata Server error HTTP 503 response from http://localhost:21000/api/atlas/admin/status in 0.000s (HTTP Error 503: Service Unavailable)

If you are not able to access your Atlas portal, or you see the following error in your browser or logs:

HTTP 503 response from http://localhost:21000/api/atlas/admin/status in 0.000s (HTTP Error 503: Service Unavailable)

Then check the application.log file under /var/log/atlas. If you see the following error in the logs, do not worry; follow the given steps and you will resolve it easily.

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userService': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.apache.atlas.web.dao.UserDao org.apache.atlas.web.service.UserService.userDao; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userDao': Invocation of init method failed; nested exception is java.lang.RuntimeException: org.apache.atlas.AtlasException: /usr/hdp/current/atlas-server/conf/users-credentials.properties not found in file system or as class loader resource

or

/usr/hdp/current/atlas-server/conf/policy-store.txt not found in file system or as class loader resource
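You can quickly confirm that the two files are really missing before creating them:

[atlas@m1 ~]$ ls -l /usr/hdp/current/atlas-server/conf/users-credentials.properties /usr/hdp/current/atlas-server/conf/policy-store.txt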

Resolution: 

Step 1: Log in as the atlas user (or sudo to atlas), go to the /usr/hdp/current/atlas-server/conf/ directory, and create these files.

[s0998dnz@m1 ~]$ sudo su - atlas

[atlas@m1 ~]$ cd /usr/hdp/current/atlas-server/conf/

[atlas@m1 conf]$ touch users-credentials.properties

[atlas@m1 conf]$ touch policy-store.txt

Step 2: Now update the users-credentials.properties file according to your requirements. The format is "username=group::sha256-password".
For example, in my case I have the following:

admin=ADMIN::e7cf3ef4f17c3999a94f2c6f612e8a888e5b1026878e4e19398b23bd38ec221a

The user's group can be ADMIN, DATA_STEWARD, or DATA_SCIENTIST.

Note: the password is hashed with SHA-256 and can be generated using a standard Unix tool.

For example:

echo -n "Password" | sha256sum
e7cf3ef4f17c3999a94f2c6f612e8a888e5b1026878e4e19398b23bd38ec221a  -
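As a complete, hypothetical example, adding a new DATA_STEWARD user named steward1 would look like the following; the password and the resulting <hash> are placeholders:

[atlas@m1 conf]$ echo -n "MyStewardPass" | sha256sum
[atlas@m1 conf]$ echo "steward1=DATA_STEWARD::<hash>" >> users-credentials.properties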

And policy-store.txt should have the following values.

The policy store file format is as follows:
Policy_Name;;User_Name:Operations_Allowed;;Group_Name:Operations_Allowed;;Resource_Type:Resource_Name

An example from my policy file:

adminPolicy;;admin:rwud;;ROLE_ADMIN:rwud;;type:*,entity:*,operation:*,taxonomy:*,term:*
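As an illustration only, a hypothetical read-only policy for a user named guest (r for read, following the same rwud operation letters as above) could look like:

readPolicy;;guest:r;;ROLE_GUEST:r;;type:*,entity:*,operation:*,taxonomy:*,term:*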

Now restart Atlas and you should be good to go.
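Once Atlas is back up, you can verify the same endpoint that was returning HTTP 503, using the admin credentials created above:

[atlas@m1 ~]$ curl -s -u admin:Password http://localhost:21000/api/atlas/admin/status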