Monthly Archives: October 2016


Map side join in Hive

Category : Hive

Many times we face a situation where we have very small tables in Hive, yet queries that join against them still take a long time.

Here I am going to explain map side join and its advantages over the normal join operation in Hive. But before that, we should first understand the concept of 'join' and what happens internally when we perform a join in Hive.

Join is a clause that combines the records of two tables (or data sets).
Assume that we have two tables, A and B. When we perform a join operation on them, it returns records that combine the columns of A and B according to the join condition.

Mapjoin is a little-known feature of Hive. It allows a table to be loaded into memory so that a (very fast) join can be performed entirely within a mapper, without a reduce step. If your queries frequently join against small tables (e.g. cities or countries), you might see a very substantial speed-up from using mapjoins.

There are two ways to enable it. The first is a hint, which looks like /*+ MAPJOIN(aliasname), MAPJOIN(anothertable) */. This C-style comment should be placed immediately after the SELECT. It directs Hive to load aliasname (a table or alias in the query) into memory.

SELECT /*+ MAPJOIN(c) */ * FROM orders o JOIN cities c ON (o.city_id = c.id);

Another (better, in my opinion) way to turn on mapjoins is to let Hive do it automatically. Simply set hive.auto.convert.join to true in your config, and Hive will automatically use mapjoins for any tables smaller than hive.mapjoin.smalltable.filesize (default is 25MB).
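
If you prefer to enable this per session rather than in hive-site.xml, the same properties can be set from the Hive prompt; the threshold below is just the 25 MB default expressed in bytes:

hive> set hive.auto.convert.join=true;
hive> set hive.mapjoin.smalltable.filesize=25000000;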

Assume that we have two tables, one of which is small. When we submit the query, a MapReduce local task is created before the original join MapReduce job; this local task reads the small table's data from HDFS and stores it in an in-memory hash table, which it then serializes to a hash table file.

In the next stage, when the original join MapReduce task runs, the hash table file is moved to the Hadoop distributed cache, which pushes it to each mapper's local disk. Each mapper can then load the hash table file back into memory and do the join as before. With this optimization, the small table needs to be read just once, and if multiple mappers are running on the same machine, the distributed cache only needs to push one copy of the hash table file to that machine.

Let us create two sample tables, insert some data into them, and then perform a map join.
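
For reference, the two delimited files loaded below are assumed to contain the comma-separated rows that the later SELECT statements return, roughly:

[root@m1 ~]# cat /root/emp.txt
Saurabh,AA,1
Babu,AA,2
Nach,BB,2
Jeba,CC,1
Abhijit,DD,1

[root@m1 ~]# cat /root/dept.txt
IT,1
Infra,2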

hive> create table emp(name string,address string, deptid bigint) row format delimited fields terminated by ',';

OK

Time taken: 20.218 seconds

hive> load data local inpath '/root/emp.txt' overwrite into table emp;

Loading data to table test.emp

Table test.emp stats: [numFiles=1, numRows=0, totalSize=56, rawDataSize=0]

OK

Time taken: 0.713 seconds

hive> select * from emp;

OK

Saurabh AA 1

Babu AA 2

Nach BB 2

Jeba CC 1

Abhijit DD 1

Time taken: 20.105 seconds, Fetched: 5 row(s)

hive> create table dept(deptname string, deptid bigint) row format delimited fields terminated by ',';

OK

Time taken: 20.192 seconds

hive> load data local inpath '/root/dept.txt' overwrite into table dept;

Loading data to table test.dept

Table test.dept stats: [numFiles=1, numRows=0, totalSize=13, rawDataSize=0]

OK

Time taken: 20.705 seconds

hive> select * from dept;

OK

IT 1

Infra 2

Time taken: 0.081 seconds, Fetched: 2 row(s)

 

Without Map join: 

hive> select emp.name,dept.deptname from emp join dept on emp.deptid=dept.deptid;

Query ID = root_20161018080320_198dcd9e-7e47-440f-871d-5da56522fced

Total jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

  set hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

  set hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

  set mapreduce.job.reduces=<number>

Starting Job = job_1476772397810_0003, Tracking URL = http://m1.hdp22:8088/proxy/application_1476772397810_0003/

Kill Command = /usr/hdp/2.3.0.0-2557/hadoop/bin/hadoop job  -kill job_1476772397810_0003

Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1

2016-10-18 08:04:53,713 Stage-1 map = 0%,  reduce = 0%

2016-10-18 08:05:52,017 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.08 sec

2016-10-18 08:06:39,906 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.37 sec

MapReduce Total cumulative CPU time: 4 seconds 370 msec

Ended Job = job_1476772397810_0003

MapReduce Jobs Launched:

Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 4.37 sec   HDFS Read: 13044 HDFS Write: 52 SUCCESS

Total MapReduce CPU Time Spent: 4 seconds 370 msec

OK

Abhijit IT

Jeba IT

Saurabh IT

Nach Infra

Babu Infra

Time taken: 201.293 seconds, Fetched: 5 row(s)

 

With Map Join :

hive> select /*+ MAPJOIN(dept) */ emp.name,dept.deptname from emp join dept on emp.deptid=dept.deptid;

Query ID = root_20161018075509_476aa0ce-704a-4e3b-91c5-c2a3444a9fd7

Total jobs = 1

WARNING: Use "yarn jar" to launch YARN applications.

Execution log at: /tmp/root/root_20161018075509_476aa0ce-704a-4e3b-91c5-c2a3444a9fd7.log

2016-10-18 07:55:37 Starting to launch local task to process map join; maximum memory = 1065484288

2016-10-18 07:55:59 Dump the side-table for tag: 1 with group count: 2 into file: file:/tmp/root/f9baf1b2-42f4-4f89-9ac8-a48f5e8b0170/hive_2016-10-18_07-55-09_410_1173820569043720345-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile11--.hashtable

2016-10-18 07:55:59 Uploaded 1 File to: file:/tmp/root/f9baf1b2-42f4-4f89-9ac8-a48f5e8b0170/hive_2016-10-18_07-55-09_410_1173820569043720345-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile11--.hashtable (307 bytes)

2016-10-18 07:55:59 End of local task; Time Taken: 22.621 sec.

Execution completed successfully

MapredLocal task succeeded

Launching Job 1 out of 1

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1476772397810_0002, Tracking URL = http://m1.hdp22:8088/proxy/application_1476772397810_0002/

Kill Command = /usr/hdp/2.3.0.0-2557/hadoop/bin/hadoop job  -kill job_1476772397810_0002

Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0

2016-10-18 07:57:09,279 Stage-3 map = 0%,  reduce = 0%

2016-10-18 07:57:57,324 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.6 sec

MapReduce Total cumulative CPU time: 1 seconds 600 msec

Ended Job = job_1476772397810_0002

MapReduce Jobs Launched:

Stage-Stage-3: Map: 1   Cumulative CPU: 1.6 sec   HDFS Read: 6415 HDFS Write: 52 SUCCESS

Total MapReduce CPU Time Spent: 1 seconds 600 msec

OK

Saurabh IT

Babu Infra

Nach Infra

Jeba IT

Abhijit IT

Time taken: 169.01 seconds, Fetched: 5 row(s)

I hope this helps you understand map joins.



SQL Workbench connection to Hadoop

Category : Hive

Many times we do not want to run our Hive queries through Beeline or the Hive CLI, for any number of reasons. I am not going to debate those reasons here; instead, this article explains the steps to connect SQL Workbench to our Hadoop cluster. I have used the Hortonworks HDP 2.3 stack.

Step 1: Download SQL Workbench from the following URL and run it. 

http://www.sql-workbench.net/downloads.html

Step 2: Download the following jars from your Hadoop client node to your local machine.

FOR HDP 2.2.x

From /usr/hdp/current/hive-client/lib/, sftp or scp to your local desktop

  • hive-*-0.14*.jar
  • httpclient-4.2.5.jar
  • httpcore-4.2.5.jar
  • libthrift-0.9.0.jar
  • libfb303-0.9.0.jar
  • commons-logging-1.1.3.jar

FOR HDP 2.3.x

From /usr/hdp/current/hive-client/lib/ sftp or scp to your local desktop

  • hive-jdbc.jar (this may be a symlink, so download the actual jar it points to)

FOR BOTH

From /usr/hdp/current/hadoop-client

  • hadoop-common.jar (this may be a symlink, so download the actual jar it points to)
  • hadoop-auth.jar

From /usr/hdp/current/hadoop-client/lib

  • log4j-1.2.17.jar
  • slf4j-*.jar

Step 3: Go to the File menu and open Manage Drivers to configure your driver:

[Screenshot: Manage Drivers window]

 

Step 4: Create a new driver with any name (like Hadoop or HiveServer2), add all the required .jar files, and then click OK.

[Screenshot: creating a new driver and adding the required jar files]

Step 5: Now go to the File menu and click on "Connection Window", select the driver you created, and fill in the required parameters (JDBC URL, username, password, etc.):

[Screenshot: connection window with driver selection and JDBC parameters]
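
For a plain (non-Kerberized) HiveServer2, the JDBC URL typically looks like the example below; the host name and database are placeholders, and 10000 is the default HiveServer2 port:

jdbc:hive2://<hiveserver2-host>:10000/default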

Step 6: Click the Test button to test the connection, then click OK to save your configuration.

[Screenshot: testing and saving the connection]

Now you can run your commands or queries through SQL Workbench. I hope this helps you connect to HDP via SQL Workbench.



Hive Actions with Oozie

Category : Hive

One of my friends was trying to run a Hive .hql script in an Oozie workflow and was getting errors. I decided to replicate it on my cluster and, after a few retries, finally got it working.

If you have the same requirement, running Hive SQL via Oozie, then this article will help you get the job done.

Step 1: First create directories in HDFS (under your home directory) so that all the scripts live in one place and can be run from there:

[hdfs@m1 ~]$ hadoop fs -mkdir -p /user/ambari-qa/tutorial/hive-oozie

[root@m1 ]# hadoop fs -mkdir -p /user/ambari-qa/tutorial/hive-input

Step 2: Now create your workflow.xml and job.properties: 

[root@m1 hive_oozie_demo]# cat workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.4" name="hive-wf">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <job-xml>hive-site.xml</job-xml>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <script>script.hql</script>
            <param>INPUT_PATH=${inputPath}</param>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

[root@m1 hive_oozie_demo]# cat job.properties

nameNode=hdfs://HDPINF
jobTracker=192.168.56.42:50300
queueName=default
exampleRoot=tutorial
oozie.use.system.libpath=true
oozie.libpath=/user/oozie/share/lib
oozie.wf.application.path=${nameNode}/user/ambari-qa/${exampleRoot}/hive-oozie
inputPath=${nameNode}/user/ambari-qa/${exampleRoot}/hive-input/*

Step 3: Now create a Hive table in your Hive database:

hive> create table demo(id int, name string);
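
Note that script.hql in the next step reads from a second table, test.demo1, which is assumed to already exist and hold some data. If it does not, a quick way to create and populate it (Hive 0.14+ supports INSERT ... VALUES) would be:

hive> create table test.demo1(id int, name string);
hive> insert into table test.demo1 values (1,'saurabh');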

Step 4: Now create your Hive script:

[root@m1 hive_oozie_demo]# cat script.hql

insert into test.demo select * from test.demo1;

Step 5: Now you need to set up your Oozie workflow application folder. One very important file for executing a Hive action through Oozie is hive-site.xml: when Oozie executes a Hive action, it needs Hive's configuration file. You can provide multiple configuration files in a single action. You can find your Hive configuration file at /etc/hive/conf.dist/hive-site.xml (default location). Copy that file and put it inside your workflow application path in HDFS.

[root@m1 hive_oozie_demo]# hadoop fs -put /etc/hive/conf/hive-site.xml /user/ambari-qa/tutorial/hive-oozie/

[root@m1 hive_oozie_demo]# hadoop fs -put script.hql /user/ambari-qa/tutorial/hive-oozie/

[root@m1 hive_oozie_demo]# hadoop fs -put workflow.xml /user/ambari-qa/tutorial/hive-oozie/

[root@m1 hive_oozie_demo]# hadoop fs -lsr /user/ambari-qa/tutorial/hive-oozie

lsr: DEPRECATED: Please use 'ls -R' instead.

-rw-r--r--   3 root hdfs      19542 2016-10-08 04:36 /user/ambari-qa/tutorial/hive-oozie/hive-site.xml

-rw-r--r--   3 root hdfs         65 2016-10-08 04:36 /user/ambari-qa/tutorial/hive-oozie/script.hql

-rw-r--r--   3 root hdfs        878 2016-10-08 04:38 /user/ambari-qa/tutorial/hive-oozie/workflow.xml

Look at the <job-xml> tag: since I am putting hive-site.xml in my application path, I pass just the file name, not the full location. If you want to keep that file somewhere else in HDFS, you can pass the full HDFS path there instead. In older versions of Hive, users had to provide the hive-default.xml file via the oozie.hive.defaults property when running an Oozie Hive action, but from Hive 0.8 onwards this is no longer required.

Step 6: Now submit the Oozie job to run it:

[ambari-qa@m1 ~]$ oozie job -oozie http://m2.hdp22:11000/oozie -config job.properties -run

job: 0000004-161008041417432-oozie-oozi-W

Now you can check your Oozie workflow status via the Oozie web UI or the command line:

[ambari-qa@m1 ~]$ oozie job -oozie http://m2.hdp22:11000/oozie -info 0000004-161008041417432-oozie-oozi-W

Job ID : 0000004-161008041417432-oozie-oozi-W

————————————————————————————————————————————

Workflow Name : hive-wf

App Path      : hdfs://HDPINF/user/ambari-qa/tutorial/hive-oozie

Status        : SUCCEEDED

Run           : 0

User          : ambari-qa

Group         : –

Created       : 2016-10-08 11:02 GMT

Started       : 2016-10-08 11:02 GMT

Last Modified : 2016-10-08 11:02 GMT

Ended         : 2016-10-08 11:02 GMT

CoordAction ID: –

Actions

————————————————————————————————————————————

ID                                                                            Status    Ext ID                 Ext Status Err Code

————————————————————————————————————————————

0000004-161008041417432-oozie-oozi-W@:start:                                  OK                              OK                  

————————————————————————————————————————————

0000004-161008041417432-oozie-oozi-W@hive-node                                OK        job_1475917713796_0007 SUCCEEDED           

————————————————————————————————————————————

0000004-161008041417432-oozie-oozi-W@end                                      OK                              OK                  

————————————————————————————————————————————

If it succeeds, you can check your table; it should now contain the data.

hive> select * from demo;

OK

1 saurabh

Time taken: 0.328 seconds, Fetched: 1 row(s)

I hope this article helps you run your Hive SQL in an Oozie workflow. Please feel free to reach out to me with any suggestions or doubts.

Common issues:

Issue 1: You may see a NameNode error if you have hard-coded a NameNode URI in your job.properties and that NameNode happens to be in standby.

[ambari-qa@m1 ~]$ oozie job -oozie http://m2.hdp22:11000 -config job.properties -run

Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of retries = 1. Exception = Could not authenticate, Authentication failed, status: 404, message: Not Found

[ambari-qa@m1 ~]$ oozie job -oozie http://m2.hdp22:11000/oozie -config job.properties -run

Error: E0501 : E0501: Could not perform authorization operation, Operation category READ is not supported in state standby  at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)  at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1786)  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1305)  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3851)  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2081)  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2077)  at java.security.AccessController.doPrivileged(Native Method)  at javax.security.auth.Subject.doAs(Subject.java:415)  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2075)

Resolution: Use your HA nameservice ID instead of a hard-coded NameNode URI in job.properties.

[root@m1 hive_oozie_demo]# cat job.properties

nameNode=hdfs://HDPINF

Issue 2: If you see an exit code [40000] error with the following message, then you need to look at your job.properties.
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: Unable to determine if hdfs://HDPINF/apps/hive/warehouse/test.db/demo is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://HDPINF/apps/hive/warehouse/test.db/demo, expected: hdfs://HDPINF:8020
Intercepting System.exit(40000)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [40000]

Resolution: Change your NameNode URI, i.e. remove the port from the end of the URI.
Use nameNode=hdfs://HDPINF instead of nameNode=hdfs://HDPINF:8020.

Issue 3: If you see the following error, then you may need to change your Hive SQL or your approach, because this issue appears to be unresolved.

FAILED: SemanticException [Error 10028]: Line 1:17 Path is not legal ''hdfs://HDPINF:8020/user/ambari-qa/tutorial/hive-input/*'': 
Move from: hdfs://HDPINF:8020/user/ambari-qa/tutorial/hive-input/* to: hdfs://HDPINF/apps/hive/warehouse/test.db/demo is not valid.
 Please check that values for params "default.fs.name" and "hive.metastore.warehouse.dir" do not conflict.
Intercepting System.exit(10028)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10028]

Resolution : https://issues.apache.org/jira/browse/HIVE-8147


Process an XML file via Apache Pig

Category : Pig

If you want to work with XML in Pig, the Piggybank library (a user-contributed library of useful Pig code) contains an XMLLoader. It captures all of the content between a start and end tag and supplies it as a single bytearray field in a Pig tuple.

Pig is a tool used to analyze large amounts of data by representing them as data flows. Using the Pig Latin scripting language, operations like ETL (Extract, Transform and Load), ad hoc data analysis and iterative processing can be easily achieved. Pig is an abstraction over MapReduce; in other words, all Pig scripts are internally converted into map and reduce tasks to get the work done. Pig was built to make programming MapReduce applications easier. Before Pig, Java was the only way to process data stored in HDFS. Pig was first built at Yahoo! and later became a top-level Apache project.

Sample input file: hadoop_books.xml

<CATALOG>
<BOOK>
<TITLE>Hadoop Defnitive Guide</TITLE>
<AUTHOR>Tom White</AUTHOR>
<COUNTRY>US</COUNTRY>
<COMPANY>CLOUDERA</COMPANY>
<PRICE>24.90</PRICE>
<YEAR>2012</YEAR>
</BOOK>
<BOOK>
<TITLE>Programming Pig</TITLE>
<AUTHOR>Alan Gates</AUTHOR>
<COUNTRY>USA</COUNTRY>
<COMPANY>Horton Works</COMPANY>
<PRICE>30.90</PRICE>
<YEAR>2013</YEAR>
</BOOK>
</CATALOG>
There are two approaches to parse an XML file in Pig:
1. Using a regular expression
2. Using XPath

Let's discuss them one by one.

1. Using a regular expression: Here we use XMLLoader() from the Piggybank UDF library to load the XML, so ensure the Piggybank jar is registered. Then a regular expression is used to parse out the fields.

REGISTER piggybank.jar
A = LOAD '/user/test/hadoop_books.xml' using org.apache.pig.piggybank.storage.XMLLoader('BOOK') as (x:chararray);
B = foreach A GENERATE FLATTEN(REGEX_EXTRACT_ALL(x,'<BOOK>\\s*<TITLE>(.*)</TITLE>\\s*<AUTHOR>(.*)</AUTHOR>\\s*<COUNTRY>(.*)</COUNTRY>\\s*<COMPANY>(.*)</COMPANY>\\s*<PRICE>(.*)</PRICE>\\s*<YEAR>(.*)</YEAR>\\s*</BOOK>'));
dump B;
Once you run this Pig script, you will see the following output on your console.
(Hadoop Defnitive Guide,Tom White,US,CLOUDERA,24.90,2012)
(Programming Pig,Alan Gates,USA,Horton Works,30.90,2013)

2. Using XPath: This is the second approach to parsing XML with Pig. XPath is a function that allows text extraction from XML. Starting with Pig 0.13, the Piggybank UDF library comes with XPath support, which eases XML parsing in Pig scripts.

A sample script using XPath is as shown below.

REGISTER piggybank.jar
DEFINE XPath org.apache.pig.piggybank.evaluation.xml.XPath();
A = LOAD '/user/test/hadoop_books.xml' using org.apache.pig.piggybank.storage.XMLLoader('BOOK') as (x:chararray);
B = FOREACH A GENERATE XPath(x, 'BOOK/AUTHOR'), XPath(x, 'BOOK/PRICE');
dump B;
Output:
(Tom White,24.90)
(Alan Gates,30.90)


Process an XML file via MapReduce

Category : YARN

When you have a requirement to process data via Hadoop that is not in a default input format, this article will help you. Hadoop provides default input formats like TextInputFormat, NLineInputFormat, KeyValueInputFormat, etc.; when you get a different type of file to process, you have to create your own custom input format for your MapReduce job. Here I am going to show you how to process XML files in a MapReduce job by creating a custom XmlInputFormat.

For example, if you have the following XML input file and you want to process it, you can do so with the help of the following steps.

<CATALOG>
<BOOK>
<TITLE>Hadoop Defnitive Guide</TITLE>
<AUTHOR>Tom White</AUTHOR>
<COUNTRY>US</COUNTRY>
<COMPANY>CLOUDERA</COMPANY>
<PRICE>24.90</PRICE>
<YEAR>2012</YEAR>
</BOOK>
<BOOK>
<TITLE>Programming Pig</TITLE>
<AUTHOR>Alan Gates</AUTHOR>
<COUNTRY>USA</COUNTRY>
<COMPANY>Horton Works</COMPANY>
<PRICE>30.90</PRICE>
<YEAR>2013</YEAR>
</BOOK>
</CATALOG>

 

Step 1: Create XmlInputFormat.java (note that this input format splits records on <employee>…</employee> tags; to process the book catalog above you would use <BOOK> and </BOOK> instead):

package xmlparsing.demo;

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class XmlInputFormat extends TextInputFormat {
public static final String START_TAG_KEY = "<employee>";
public static final String END_TAG_KEY = "</employee>";

@Override
public RecordReader<LongWritable, Text> createRecordReader(
InputSplit split, TaskAttemptContext context) {
return new XmlRecordReader();
}

public static class XmlRecordReader extends
RecordReader<LongWritable, Text> {
private byte[] startTag;
private byte[] endTag;
private long start;
private long end;
private FSDataInputStream fsin;
private DataOutputBuffer buffer = new DataOutputBuffer();
private LongWritable key = new LongWritable();
private Text value = new Text();

@Override
public void initialize(InputSplit is, TaskAttemptContext tac)
throws IOException, InterruptedException {
FileSplit fileSplit = (FileSplit) is;
// read the record-delimiting tags from the job configuration (set by the driver),
// falling back to the <employee> defaults declared above
startTag = tac.getConfiguration().get("START_TAG_KEY", START_TAG_KEY).getBytes("utf-8");
endTag = tac.getConfiguration().get("END_TAG_KEY", END_TAG_KEY).getBytes("utf-8");

start = fileSplit.getStart();
end = start + fileSplit.getLength();
Path file = fileSplit.getPath();

FileSystem fs = file.getFileSystem(tac.getConfiguration());
fsin = fs.open(fileSplit.getPath());
fsin.seek(start);

}

@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
if (fsin.getPos() < end) {
if (readUntilMatch(startTag, false)) {
try {
buffer.write(startTag);
if (readUntilMatch(endTag, true)) {

value.set(buffer.getData(), 0, buffer.getLength());
key.set(fsin.getPos());
return true;
}
} finally {
buffer.reset();
}
}
}
return false;
}

@Override
public LongWritable getCurrentKey() throws IOException,
InterruptedException {
return key;
}

@Override
public Text getCurrentValue() throws IOException, InterruptedException {
return value;

}

@Override
public float getProgress() throws IOException, InterruptedException {
return (fsin.getPos() - start) / (float) (end - start);
}

@Override
public void close() throws IOException {
fsin.close();
}

private boolean readUntilMatch(byte[] match, boolean withinBlock)
throws IOException {
int i = 0;
while (true) {
int b = fsin.read();

if (b == -1)
return false;

if (withinBlock)
buffer.write(b);

if (b == match[i]) {
i++;
if (i >= match.length)
return true;
} else
i = 0;

if (!withinBlock && i == 0 && fsin.getPos() >= end)
return false;
}
}

}

}

Step 2: Create the driver XMLDriver.java:

package xmlparsing.demo;

import javax.xml.stream.XMLInputFactory;
//import mrdp.logging.LogWriter;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class XMLDriver {

public static void main(String[] args) {
try {
Configuration conf = new Configuration();
String[] arg = new GenericOptionsParser(conf, args).getRemainingArgs();
// make the record-delimiting tags available to XmlInputFormat via the job configuration
conf.set("START_TAG_KEY", "<employee>");
conf.set("END_TAG_KEY", "</employee>");
Job job = new Job(conf, "XML Processing");
job.setJarByClass(XMLDriver.class);
job.setMapperClass(MyMapper.class);
job.setNumReduceTasks(0);
job.setInputFormatClass(XmlInputFormat.class);
// MyMapper emits Text keys and NullWritable values
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(NullWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
FileInputFormat.addInputPath(job, new Path(arg[0]));
FileOutputFormat.setOutputPath(job, new Path(arg[1]));
job.waitForCompletion(true);
} catch (Exception e) {
System.out.println("Driver error: " + e.getMessage());
}
}
}

Step 3: Create MyMapper.java:

package xmlparsing.demo;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
//import mrdp.logging.LogWriter;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class MyMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

private static final Log LOG = LogFactory.getLog(MyMapper.class);
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

try {

InputStream is = new ByteArrayInputStream(value.toString().getBytes());
DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(is);

doc.getDocumentElement().normalize();

NodeList nList = doc.getElementsByTagName("employee");

for (int temp = 0; temp < nList.getLength(); temp++) {

Node nNode = nList.item(temp);

if (nNode.getNodeType() == Node.ELEMENT_NODE) {

Element eElement = (Element) nNode;

String id = eElement.getElementsByTagName("id").item(0).getTextContent();
String name = eElement.getElementsByTagName("name").item(0).getTextContent();
String gender = eElement.getElementsByTagName("gender").item(0).getTextContent();

// System.out.println(id + "," + name + "," + gender);
context.write(new Text(id + "," + name + "," + gender), NullWritable.get());

}
}
} catch (Exception e) {
// LogWriter is not imported here, so use the mapper's commons-logging LOG instead
LOG.error(e.getMessage(), e);
}

}

}
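
Once the three classes are compiled and packaged into a jar (the jar name and HDFS paths below are only placeholders for illustration), the job can be launched in the usual way:

[root@m1 ~]# hadoop jar xmlparsing-demo.jar xmlparsing.demo.XMLDriver /user/test/employees.xml /user/test/xml-output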

ref: thinkbigdataanalytics.com


How to use a Hive query result in a variable for another query

Category : Hive

Many times we want to store one query's result in a variable and then use that variable in another query. This is possible in your favorite Hadoop ecosystem tool, i.e. Hive.

With the help of this article you can achieve it.

[root@m1 etc]# hive

16/10/04 02:40:45 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist

16/10/04 02:40:45 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist

16/10/04 02:40:45 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist

16/10/04 02:40:45 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist

Logging initialized using configuration in file:/etc/hive/conf/hive-log4j.properties

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hdp/2.2.0.0-2041/hive/lib/hive-jdbc-0.14.0.2.2.0.0-2041-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Step1: Check table test1 description:
hive> desc test1;
OK
columnA date
Time taken: 0.452 seconds, Fetched: 1 row(s)

Step2: Table test2 description:
hive> desc test2;
OK
columnB date
Time taken: 0.516 seconds, Fetched: 1 row(s)

Step3: Assigning subquery to variable:
hive> set hivevar:var1=(select columnB from test2 where columnB='2016-09-19');

Step4: Running main query with subquery:
hive> select * from test1 where columnA IN ${hivevar:var1};
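
As a side note, the same substitution can also be driven from the shell by passing the variable with --hivevar when launching Hive; the value below is just the subquery from Step 3, reused for illustration:

hive --hivevar var1="(select columnB from test2 where columnB='2016-09-19')" -e 'select * from test1 where columnA IN ${hivevar:var1};'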