

Insert overwrite query Failed with exception Unable to move source

If you have explicitly set hive.exec.stagingdir to a location such as /tmp/ (or some other location outside the table directory), then whenever you run an INSERT OVERWRITE statement you will get the following error.

ERROR exec.Task (SessionState.java:printError(989)) – Failed with exception Unable to move source hdfs://clustername/apps/finance/nest/nest_audit_log_final/
.hive-staging_hive_2017-12-12_19-15-30_008_33149322272174981-1/-ext-10000 to
destination hdfs://clustername/apps/finance/nest/nest_audit_log_final

Example: 

INSERT OVERWRITE TABLE nest.nest_audit_log_final
SELECT
project_name
, application
, module_seq_num
, module_name
, script_seq_num
, script_name
, run_session_id
, load_ts
, max_posted_date
, currency
, processor
, load_date
FROM nest.nest_audit_log_final;

Then you will get the following error:

INFO common.FileUtils (FileUtils.java:mkdir(519)) - 
Creating directory if it doesn't exist: 
hdfs://clustername/apps/finance/nest/nest_audit_log_final
2017-12-12 19:21:42,508 ERROR hdfs.KeyProviderCache (KeyProviderCache.java:createKeyProviderURI(87)) 
- Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
2017-12-12 19:21:42,525 ERROR exec.Task (SessionState.java:printError(989)) 
- Failed with exception Unable to move source 
hdfs://clustername/apps/finance/nest/nest_audit_log_final/
.hive-staging_hive_2017-12-12_19-15-30_008_33149322272174981-1/-ext-10000 to 
destination hdfs://clustername/apps/finance/nest/nest_audit_log_final
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source 
hdfs://clustername/apps/finance/nest/nest_audit_log_final/
.hive-staging_hive_2017-12-12_19-15-30_008_33149322272174981-1/-ext-10000 
to destination hdfs://clustername/apps/finance/nest/nest_audit_log_final
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2900)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3140)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1727)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:353)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1745)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1491)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1289)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1146)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:217)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:169)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:380)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:315)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:429)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:718)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:685)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.io.IOException: rename for src path: 
hdfs://clustername/apps/finance/nest/nest_audit_log_final/
.hive-staging_hive_2017-12-12_19-15-30_008_33149322272174981-1/-ext-10000/000000_0 
to dest:hdfs://clustername/apps/finance/nest/nest_audit_log_final returned false
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2849)

Root Cause: This is caused by a known bug (HIVE-17063).

Workaround:

set hive.exec.stagingdir=.hive-staging;
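As a minimal sketch, the workaround can also be applied per run from the command line (the table and column list are the ones from the example above; note that if SQL-standard authorization is enabled, hive.exec.stagingdir may need to be whitelisted via hive.security.authorization.sqlstd.confwhitelist before it can be set at runtime):

hive -e "
set hive.exec.stagingdir=.hive-staging;
INSERT OVERWRITE TABLE nest.nest_audit_log_final
SELECT project_name, application, module_seq_num, module_name, script_seq_num,
       script_name, run_session_id, load_ts, max_posted_date, currency, processor, load_date
FROM nest.nest_audit_log_final;"

To make the change permanent, set hive.exec.stagingdir=.hive-staging in hive-site.xml (for example via Ambari > Hive > Configs > Custom hive-site) and restart the Hive services.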


last access time of a table is showing zero

If you have many hundreds or thousands of tables and you want to know when a Hive table was last accessed, you can run the following query against the hive database in MySQL (the Hive metastore database).

mysql> use hive;

mysql> select TBL_NAME,LAST_ACCESS_TIME from TBLS where DB_ID=<db_id>;
+----------------------+------------------+
| TBL_NAME             | LAST_ACCESS_TIME |
+----------------------+------------------+
| df_nov_4             |                0 |
| google_feed_20151111 |                0 |
| null_recos           |                0 |
| taxonomy_20160107    |                0 |

Note: Unfortunately, due to bug HIVE-2526 (https://issues.apache.org/jira/browse/HIVE-2526), the last access time is not updated by default, so you cannot get meaningful values without first making the following configuration changes.

1. From Ambari > Hive > Advanced > Custom hive-site, edit (if it exists) or add a new property:
hive.security.authorization.sqlstd.confwhitelist=hive\.exec\.pre\.hooks

2. From Ambari > Hive > Advanced > General, edit hive.exec.pre.hooks property and append the following to the end of the field value (comma separated):
org.apache.hadoop.hive.ql.hooks.UpdateInputAccessTimeHook$PreExec

3. Restart the affected Hive services after saving the config changes in Ambari.

4. After the restart, try running the query (select TBL_NAME,LAST_ACCESS_TIME from TBLS where DB_ID=<db_id>;) again, and this time you should be able to see the last access time (a readable-timestamp variant of the query is sketched below).
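Note that LAST_ACCESS_TIME is stored as Unix epoch seconds, so 0 simply means "never recorded". As a small convenience sketch (assuming a MySQL metastore database named hive; replace <db_id> with your database's ID), you can also print a readable timestamp:

mysql -u root -p hive -e "SELECT DB_ID, NAME FROM DBS;"        # find your <db_id> first
mysql -u root -p hive -e "
SELECT TBL_NAME, LAST_ACCESS_TIME,
       FROM_UNIXTIME(LAST_ACCESS_TIME) AS LAST_ACCESS_TS
FROM TBLS WHERE DB_ID = <db_id>;"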

 

I hope this helps you get your work done; feel free to give your valuable feedback.



Kill a hive query when the application id was not created


Sometimes when you run a Hive query, it does not launch a YARN application or it hangs due to resource constraints or some other reason.

In this case you have to kill the query before resubmitting it. Use the following steps to kill the Hive query itself.

 

  • hive> select * from table1;
    Query ID = mapr_201804547_2ad87f0f5627
    Total jobs = 1
    Launching Job 1 out of 1
  • Use the "KILL QUERY" command, which is available from HDP 2.6.3 onwards (a usage sketch follows this list):
    KILL QUERY <queryid1>
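A minimal usage sketch from a fresh beeline session connected to HiveServer2 (the JDBC URL is a placeholder and the query ID is just the one from the example above; the ID usually has to be passed as a quoted string, and killing another user's query may require admin privileges):

beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e 'KILL QUERY "mapr_201804547_2ad87f0f5627";'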

 

Please feel free to give your feedback.



Purging history/old data in oozie database

After some period of time your Oozie database will grow large, and it may start throwing space issues or cause slowness when the Oozie UI loads. There are properties that help purge Oozie data automatically, but sometimes the Oozie purge service does not function as expected. The result is a huge Oozie database, which slows down the Oozie UI.

To reduce the size of the tables, you can run the queries below to delete old historical records:

  • Back up the database (highly recommended).
    mysqldump -u root -p oozie > /tmp/oozie.sql 
  • Log in to the oozie database.
    mysql -u root -p<password> oozie
  • Run the queries below to clean up the historical records older than a specific date (adjust the date accordingly):
DELETE FROM OOZIE.WF_ACTIONS where WF_ID IN (SELECT ID from OOZIE.WF_JOBS where end_time < timestamp('2015-09-01 00:00:00'));
DELETE FROM oozie.wf_jobs where end_time < timestamp('2015-09-01 00:00:00');

DELETE from oozie.coord_actions where JOB_ID in (select ID from oozie.coord_jobs where END_TIME < timestamp('2015-09-01 00:00:00'));

DELETE from oozie.coord_jobs where END_TIME < timestamp('2015-09-01 00:00:00'); 
  • If you are using MySQL, then run the following command to reduce the database size (a combined script covering these steps is sketched after this list):
    mysqlcheck -u root -p<password> -o oozie
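Putting the steps above together, here is a minimal sketch as a single script, assuming a MySQL database named oozie, root credentials, and a cutoff date of 2015-09-01 (adjust the date, user, and database name for your environment; you will be prompted for the MySQL password at each step):

CUTOFF='2015-09-01 00:00:00'
mysqldump -u root -p oozie > /tmp/oozie_backup_$(date +%F).sql      # backup first
mysql -u root -p oozie <<SQL
DELETE FROM WF_ACTIONS    WHERE WF_ID  IN (SELECT ID FROM WF_JOBS    WHERE END_TIME < TIMESTAMP('$CUTOFF'));
DELETE FROM WF_JOBS       WHERE END_TIME < TIMESTAMP('$CUTOFF');
DELETE FROM COORD_ACTIONS WHERE JOB_ID IN (SELECT ID FROM COORD_JOBS WHERE END_TIME < TIMESTAMP('$CUTOFF'));
DELETE FROM COORD_JOBS    WHERE END_TIME < TIMESTAMP('$CUTOFF');
SQL
mysqlcheck -u root -p -o oozie                                      # reclaim the freed space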

You should now be happy, as you have purged the old data from the Oozie database.

You can also update the Oozie configuration to purge old data automatically:

oozie.service.PurgeService.coord.older.than = 7
oozie.service.PurgeService.bundle.older.than = 7
oozie.service.PurgeService.purge.limit = 100
oozie.service.PurgeService.older.than = 7
oozie.service.PurgeService.purge.interval = 3600
oozie.service.PurgeService.purge.old.coord.action = true
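These properties belong in oozie-site.xml (for example via Ambari > Oozie > Configs > Custom oozie-site) and take effect after an Oozie restart. As a quick sketch to verify which values the server is actually running with (the URL is a placeholder; adjust the host and port):

oozie admin -oozie http://<oozie-host>:11000/oozie -configuration | grep -i purge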

I hope this article helped you, please feel free to give your valuable feedback.



Attempt to add *.jar multiple times to the distributed cache

When we submit a Spark2 action via Oozie, we may see the following exception in the logs and the job will fail:

exception: Attempt to add (hdfs://m1:8020/user/oozie/share/lib/lib_20171129113304/oozie/aws-java-sdk-core-1.10.6.jar) multiple times to the distributed cache.

java.lang.IllegalArgumentException: Attempt to add (hdfs://m1:8020/user/oozie/share/lib/lib_20171129113304/oozie/aws-java-sdk-core-1.10.6.jar) multiple times to the distributed cache.

The above error occurs because the same jar files exist in both locations (/user/oozie/share/lib/lib_20171129113304/oozie/ and /user/oozie/share/lib/lib_20171129113304/spark2/).

Solution:

You need to delete the duplicate jars from the spark2 directory so that only one copy is left, in the oozie directory.

  1. Identify the Oozie sharelib by running the command:
    hdfs dfs -ls /user/oozie/share/lib/
  2. Use the following command to list all jar files in the oozie directory:
    hdfs dfs -ls /user/oozie/share/lib/lib_<timestamp>/oozie | awk -F \/ '{print $8}' > /tmp/list
  3. Use the following command to delete the jar files in the spark2 directory that match those in the oozie directory (a combined sketch with a safety check follows this list):
    for f in $(cat /tmp/list);do echo $f; hdfs dfs -rm -skipTrash /user/oozie/share/lib/lib_<timestamp>/spark2/$f;done
  4. Restart Oozie Service.
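Here is the combined sketch of steps 2 and 3 with a small safety check: a jar is removed from spark2 only if a file with exactly the same name exists in the oozie directory (the sharelib path with <timestamp> is a placeholder; adjust it for your cluster):

SHARELIB=/user/oozie/share/lib/lib_<timestamp>
hdfs dfs -ls "$SHARELIB/oozie" | awk -F/ '{print $NF}' | grep '\.jar$' > /tmp/oozie_jars
for f in $(cat /tmp/oozie_jars); do
  if hdfs dfs -test -e "$SHARELIB/spark2/$f"; then
    echo "removing duplicate: $f"
    hdfs dfs -rm -skipTrash "$SHARELIB/spark2/$f"
  fi
done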

Thanks for visiting this blog, please feel free to give your valuable feedback.



hive jdbc in zeppelin throwing permission error to anonymous user

When users run a Hive query in Zeppelin via the JDBC interpreter, the query is submitted as the anonymous user rather than the actual user.

INFO [2017-11-02 03:18:20,405] ({pool-2-thread-2} RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:546) – Push local angular object registry from ZeppelinServer to remote interpreter group 2CNQZ1ES5:shared_process
WARN [2017-11-02 03:18:21,825] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2058) – Job 20171031-075630_2029577092 is finished, status: ERROR, exception: null, result: %text org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Unable to fetch table ushi_gl. org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=EXECUTE, inode=”/apps/hive/warehouse/adodb.db/ushi_gl”:hive:hdfs:drwxr-x— 
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:381)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:338)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:109)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4111)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1137)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:854)

 

Root Cause: It is a bug in Zeppelin 0.7.0.2 and is going to be fixed in a newer version of Zeppelin.

Resolution: Add your username and password under the Credentials option in Zeppelin so that the JDBC interpreter authenticates as your user instead of anonymous.

 



Namenode may keep crashing due to excessive logging

The NameNode may keep crashing even if you restart all services and have enough heap. You will see errors like the following in the logs:

java.io.IOException: IPC’s epoch 197 is less than the last promised epoch 198

or

2017-09-28 09:16:11,371 INFO ha.ZKFailoverController (ZKFailoverController.java:setLastHealthState(851)) – Local service NameNode at m1.hdp22 entered state: SERVICE_NOT_RESPONDING 

Root Cause: In my case it was because too much logging was happening in the NameNode for BlockStateChange and hdfs.StateChange. If this logging occurs constantly and nonstop, the NameNode takes longer to respond to other RPC requests. Hence we need to raise the NameNode log level for these classes (from INFO to a higher threshold) to take some load off the NameNode.

Solution: Raise the log level for the two classes by adding the lines below to the HDFS log4j configuration in Ambari (Ambari UI > HDFS > Configs > Advanced hdfs-log4j):

log4j.logger.BlockStateChange=ERROR
log4j.logger.org.apache.hadoop.hdfs.StateChange=ERROR
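If you want to relieve the NameNode immediately without a restart, the log level can also be changed in memory with the daemonlog command; a sketch, assuming the active NameNode's HTTP address is <nn-host>:50070 (this change reverts on restart, so still make the log4j change above permanent):

hadoop daemonlog -setlevel <nn-host>:50070 BlockStateChange ERROR
hadoop daemonlog -setlevel <nn-host>:50070 org.apache.hadoop.hdfs.StateChange ERROR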



ERROR : Failed with exception org.apache.hadoop.security.AccessControlException: Permission denied. user=user1 is not the owner of inode=test_copy_1

Users may complain that they are not able to load data into Hive tables via beeline. While loading data into a Hive table using load data inpath '/tmp/test' into table sampledb.sample1, they get the following error:
load data inpath '/tmp/test' into table adodevdb.sample1;
INFO : Loading data to table adodevdb.sample1 from hdfs://m1.hdp22/tmp/test
ERROR : Failed with exception org.apache.hadoop.security.AccessControlException: Permission denied. user=user1 is not the owner of inode=test_copy_1
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:250)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:381)
at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:338)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1939)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1908)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1824)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:821)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:464)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)

 

Root Cause: The rollback() never happens in case of failure, so this problem has existed since the start. BUG-62311 was raised for the same issue and unfortunately there is no fix for now.

Workaround: You can work around it as follows:

Set hive.mv.files.thread=0 (zero) in hive-site.xml.
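The cluster-wide way is to add hive.mv.files.thread=0 under Ambari > Hive > Configs > Custom hive-site and restart HiveServer2. Depending on your version and whitelist settings it may also work per session; a sketch (the JDBC URL, user, and table are placeholders/examples):

beeline -u "jdbc:hive2://<hs2-host>:10000/default" -n user1 \
  -e "set hive.mv.files.thread=0; load data inpath '/tmp/test' into table sampledb.sample1;"

If the set command is rejected, add hive.mv.files.thread to hive.security.authorization.sqlstd.confwhitelist.append first.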



Select does not return any row in mr execution engine but returns in tez via beeline

When I ran a SELECT statement after setting set hive.execution.engine=mr;, select * from the table did not return any rows in beeline, but when I ran it with the Tez engine it returned results.

0: jdbc:hive2://m1.hdp22:10001/default> select * from test_db.table1 limit 25;

+---------+----------+----------+------------+------------+------------+----------+----------+----------+----------------+----------+--+
| cus_id  | prx_nme  | fir_nme  | mid_1_nme  | mid_2_nme  | mid_3_nme  | lst_nme  | sfx_nme  | gen_nme  | lic_st_abr_id  | dsd_idc  |
+---------+----------+----------+------------+------------+------------+----------+----------+----------+----------------+----------+--+
+---------+----------+----------+------------+------------+------------+----------+----------+----------+----------------+----------+--+

No rows selected (0.108 seconds)

If you check the HiveServer2 logs, you will see the following traces:

2017-08-31 09:02:02,239 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: parse.ParseDriver (ParseDriver.java:parse(185)) – Parsing command: select * from table1 limit 25

2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) – 3: get_table : db=test_db tbl=table1

2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) – ugi=saurkumaip=unknown-ip-addrcmd=get_table : db=test_db tbl=table1

2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) – 3: get_table : db=test_db tbl=table1

2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) – ugi=saurkumaip=unknown-ip-addrcmd=get_table : db=test_db tbl=table1

2017-08-31 09:02:02,269 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: ql.Driver (Driver.java:getSchema(253)) – Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:table1.cus_id, type:int, comment:null), FieldSchema(name:table1.prx_nme, type:char(15), comment:null), FieldSchema(name:table1.fir_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_1_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_2_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_3_nme, type:char(15), comment:null), FieldSchema(name:table1.lst_nme, type:char(30), comment:null), FieldSchema(name:table1.sfx_nme, type:char(5), comment:null), FieldSchema(name:table1.gen_nme, type:char(10), comment:null), FieldSchema(name:table1.lic_st_abr_id, type:char(2), comment:null), FieldSchema(name:table1.dsd_idc, type:char(1), comment:null)], properties:null)

2017-08-31 09:02:02,271 INFO [HiveServer2-Background-Pool: Thread-161143]: ql.Driver (Driver.java:execute(1411)) – Starting command(queryId=hive_20170831090202_3dbbdf1c-c061-4289-b4dd-a2934cbec04d): select * from table1 limit 25

2017-08-31 09:02:02,278 INFO [Atlas Logger 2]: hook.HiveHook (HiveHook.java:registerProcess(697)) – Skipped query select * from table1 limit 25 for processing since it is a select query

Root Cause: We had run an INSERT OVERWRITE that replaced the part files with directories of the same names and created the data files under those directories. The MR execution engine does not search the table location recursively by default, which is why it returned no result, whereas Tez did.

[s0998dnz@m1.hdp22 ~]$ hadoop fs -ls hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/
Found 50 items
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000000_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000001_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000002_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000003_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000004_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000005_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000006_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000007_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000008_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000009_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000010_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000011_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000012_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000013_0

Resolution: There are two solutions to resolve this issue:

  1. You can fix the file structure by removing the subdirectories and placing the data files directly under the table directory.
  2. Or you can set the property SET mapred.input.dir.recursive=true; and then run the SQL. This property tells the engine to search the input directories recursively (a usage sketch follows this list).
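A minimal sketch of the second option from beeline (the connection URL is the one from the example above; the property names are as in Hive 1.x on HDP 2.x, and depending on your version hive.mapred.supports.subdirectories may also be needed):

beeline -u "jdbc:hive2://m1.hdp22:10001/default" -e "
set hive.execution.engine=mr;
set mapred.input.dir.recursive=true;
set hive.mapred.supports.subdirectories=true;
select * from test_db.table1 limit 25;"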

Please feel free to give your valuable suggestion or feedback.



Knox does not start, failing with error Gateway SSL Certificate is Expired

If Knox fails to start with the following error, don't worry; this article will help you solve the problem.

INFO hadoop.gateway (JettySSLService.java: logAndValidateCertificate(122)) – The Gateway SSL certificate is valid between:  FATAL hadoop.gateway (GatewayServer.java:main (120)) – Failed to start gateway: org.apache.hadoop.gateway.services. ServiceLifecycleException: Gateway SSL Certificate is Expired.

 

Root cause: The certificate in your gateway.jks file has expired (or the file is corrupted).

Resolution: To solve this issue, follow these steps:

  • On the Knox gateway, locate the gateway.jks file; it is usually at /var/lib/knox/data*/security/keystores/gateway.jks

[knox@m1.hdp22 ~]$ ls -ltrh /var/lib/knox/data-2.6.1.0-129/security/keystores/*
-rw-r--r-- 1 knox knox 32 Aug 28 05:42 /var/lib/knox/data-2.6.1.0-129/security/keystores/__gateway-credentials.jceks
-rw-r--r-- 1 knox knox 1.4K Aug 28 05:42 /var/lib/knox/data-2.6.1.0-129/security/keystores/gateway.jks
-rw-r--r-- 1 knox knox 511 Aug 28 08:53 /var/lib/knox/data-2.6.1.0-129/security/keystores/default-credentials.jceks

  • Move the original gateway.jks file to another directory as a backup copy
  • Restart the Knox server; on startup Knox will recreate gateway.jks with a new self-signed certificate (a command sketch follows this list)
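A minimal sketch of the steps above, assuming the default HDP paths shown earlier and a self-signed gateway identity (if you use a CA-signed certificate, re-import it instead of letting Knox regenerate one; the keystore password is normally the Knox master secret):

keytool -list -v -keystore /var/lib/knox/data-2.6.1.0-129/security/keystores/gateway.jks \
  -alias gateway-identity | grep -i valid          # confirm the certificate has expired
mv /var/lib/knox/data-2.6.1.0-129/security/keystores/gateway.jks /tmp/gateway.jks.bak
# restart Knox from Ambari, or manually:
su - knox -c '/usr/hdp/current/knox-server/bin/gateway.sh stop'
su - knox -c '/usr/hdp/current/knox-server/bin/gateway.sh start'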