SELECT returns no rows with the MR execution engine but returns rows with Tez via Beeline
Category : Hive
When I set hive.execution.engine=mr and ran a SELECT statement through Beeline, select * from the table returned no rows, but the same query run with the Tez engine returned results.
0: jdbc:hive2://m1.hdp22:10001/default> select * from test_db.table1 limit 25;
+--------+---------+---------+-----------+-----------+-----------+---------+---------+---------+---------------+---------+
| cus_id | prx_nme | fir_nme | mid_1_nme | mid_2_nme | mid_3_nme | lst_nme | sfx_nme | gen_nme | lic_st_abr_id | dsd_idc |
+--------+---------+---------+-----------+-----------+-----------+---------+---------+---------+---------------+---------+
+--------+---------+---------+-----------+-----------+-----------+---------+---------+---------+---------------+---------+
No rows selected (0.108 seconds)
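For comparison, switching the engine to Tez in the same Beeline session returned rows as expected. A minimal reproduction sketch (output omitted here):

0: jdbc:hive2://m1.hdp22:10001/default> set hive.execution.engine=tez;
0: jdbc:hive2://m1.hdp22:10001/default> select * from test_db.table1 limit 25;
-- rows are returned as expected under Tez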
If you check the HiveServer2 logs, you will see the following traces:
2017-08-31 09:02:02,239 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select * from table1 limit 25
2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) - 3: get_table : db=test_db tbl=table1
2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) - ugi=saurkuma ip=unknown-ip-addr cmd=get_table : db=test_db tbl=table1
2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) - 3: get_table : db=test_db tbl=table1
2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) - ugi=saurkuma ip=unknown-ip-addr cmd=get_table : db=test_db tbl=table1
2017-08-31 09:02:02,269 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: ql.Driver (Driver.java:getSchema(253)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:table1.cus_id, type:int, comment:null), FieldSchema(name:table1.prx_nme, type:char(15), comment:null), FieldSchema(name:table1.fir_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_1_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_2_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_3_nme, type:char(15), comment:null), FieldSchema(name:table1.lst_nme, type:char(30), comment:null), FieldSchema(name:table1.sfx_nme, type:char(5), comment:null), FieldSchema(name:table1.gen_nme, type:char(10), comment:null), FieldSchema(name:table1.lic_st_abr_id, type:char(2), comment:null), FieldSchema(name:table1.dsd_idc, type:char(1), comment:null)], properties:null)
2017-08-31 09:02:02,271 INFO [HiveServer2-Background-Pool: Thread-161143]: ql.Driver (Driver.java:execute(1411)) - Starting command(queryId=hive_20170831090202_3dbbdf1c-c061-4289-b4dd-a2934cbec04d): select * from table1 limit 25
2017-08-31 09:02:02,278 INFO [Atlas Logger 2]: hook.HiveHook (HiveHook.java:registerProcess(697)) - Skipped query select * from table1 limit 25 for processing since it is a select query
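If you want to pull these traces on your own cluster, you can grep the HiveServer2 log for the query ID printed in the ql.Driver line above (the log path below is only the usual HDP default and may differ in your environment):

grep 'hive_20170831090202_3dbbdf1c-c061-4289-b4dd-a2934cbec04d' /var/log/hive/hiveserver2.log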
Root Cause: An earlier INSERT OVERWRITE had written its output into subdirectories under the table location, i.e. directories carrying the usual part-file names (000000_0, 000001_0, ...) with the actual data files placed inside them. The MR execution engine does not read input directories recursively by default, so it returned no results, while Tez handled the nested layout and returned the rows.
[s0998dnz@m1.hdp22 ~]$ hadoop fs -ls hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/
Found 50 items
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000000_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000001_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000002_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000003_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000004_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000005_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000006_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000007_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000008_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000009_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000010_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000011_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000012_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000013_0
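To confirm that the data files really sit one level below these directories, a recursive listing is a quick check (same table path as above; your warehouse location may of course differ):

[s0998dnz@m1.hdp22 ~]$ hadoop fs -ls -R hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/

Each 00000N_0 entry should show up as a directory with one or more data files listed beneath it.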
Resolution: There are two ways to resolve this issue:
- You can change the file layout by removing the subdirectories and placing the data files directly under the table directory.
- Or you can set the following property before running the SQL: SET mapred.input.dir.recursive=true; This tells the execution engine to search the input directories recursively (see the sketch after this list).
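A minimal sketch of the second option in the same Beeline session. Note that the extra hive.mapred.supports.subdirectories property is needed on some Hive versions to let Hive accept subdirectories as input; treat it as an optional addition and verify it against your release:

0: jdbc:hive2://m1.hdp22:10001/default> set hive.execution.engine=mr;
0: jdbc:hive2://m1.hdp22:10001/default> SET mapred.input.dir.recursive=true;
0: jdbc:hive2://m1.hdp22:10001/default> SET hive.mapred.supports.subdirectories=true;
0: jdbc:hive2://m1.hdp22:10001/default> select * from test_db.table1 limit 25;

If you want the behaviour without setting it per session, the same properties can be set in hive-site.xml (or through Ambari), keeping in mind that they will then apply to all queries on the cluster.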
Please feel free to share your valuable suggestions or feedback.