
Ssh action with oozie

When you want to run your shell script via Oozie, the following article will help you get the job done easily.

Follow these steps to set up an Oozie workflow using the ssh action:

1. Configure job.properties
Example:

[s0998dnz@m1.hdp22 oozie_ssh_action]$ cat job.properties
#*************************************************
#  job.properties
#oozie-action for ssh
#*************************************************
nameNode=hdfs://m1.hdp22:8020
jobTracker=m2.hdp22:8050
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
oozieProjectRoot=${nameNode}/user/${user.name}/ooziesshaction
appPath=${oozieProjectRoot}
oozie.wf.application.path=${appPath}
focusNodeLogin=s0998dnz@m1.hdp22
shellScriptPath=~/oozie_ssh_action/sampletest.sh

2. Configure workflow.xml

Example:


<!--******************************************-->
<!--workflow.xml -->
<!--******************************************-->
<workflow-app name="WorkFlowForSshAction" xmlns="uri:oozie:workflow:0.1">
 <start to="sshAction"/>
 <action name="sshAction">
 <ssh xmlns="uri:oozie:ssh-action:0.1">
 <host>${focusNodeLogin}</host>
 <command>${shellScriptPath}</command>
 <capture-output/>
 </ssh>
 <ok to="end"/>
 <error to="killAction"/>
 </action>
<!-- <action name="sendEmail">
 <email xmlns="uri:oozie:email-action:0.1">
 <to>${emailToAddress}</to>
 <subject>Output of workflow ${wf:id()}</subject>
 <body>Status of the file move: ${wf:actionData('sshAction')['STATUS']}</body>
 </email>
 <ok to="end"/>
 <error to="end"/>
 </action>
 --> <kill name="killAction">
 <message>"Killed job due to error"</message>
 </kill>
 <end name="end"/>
</workflow-app>

3. Write sample sampletest.sh script

Example:

[s0998dnz@m1.hdp22 oozie_ssh_action]$ cat sampletest.sh 
#!/bin/bash
hadoop fs -ls / > /home/s0998dnz/oozie_ssh_action/output.txt

4. Upload workflow.xml to ${appPath} defined in job.properties

[s0998dnz@m1.hdp22 oozie_ssh_action]$ hadoop fs -put workflow.xml /user/s0998dnz/ooziesshaction/
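
To confirm the file landed in the application path, a quick listing works (same paths as above):

[s0998dnz@m1.hdp22 oozie_ssh_action]$ hadoop fs -ls /user/s0998dnz/ooziesshaction/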

5. Log in to the Oozie host as the “oozie” user.

[oozie@m2.hdp22 ~]$

6. Generate a key pair, if one doesn’t already exist, using the ‘ssh-keygen’ command:

[oozie@m2.hdp22 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oozie/.ssh/id_rsa): 
Created directory '/home/oozie/.ssh'
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oozie/.ssh/id_rsa.
Your public key has been saved in /home/oozie/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:EW8WSDG3QnVjGf65znS8bP0AeOrgoQuteYl3hIunO8c oozie@m1.hdp22
The key's randomart image is also printed (randomart output omitted here).

7. On the Oozie Server node, copy the contents of ~/.ssh/id_rsa.pub and append them to the remote node’s (focus node’s) ~/.ssh/authorized_keys file.

8. Test password-less ssh from oozie@oozie-host to <username>@<remote-host>
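
A minimal sketch of steps 7 and 8, assuming ssh-copy-id is available and using the focus-node login from job.properties (adjust users and hosts to your environment):

[oozie@m2.hdp22 ~]$ ssh-copy-id s0998dnz@m1.hdp22   # appends ~/.ssh/id_rsa.pub to the remote authorized_keys
[oozie@m2.hdp22 ~]$ ssh s0998dnz@m1.hdp22 hostname  # should print the hostname without prompting for a password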

9. Use the following command to run the Oozie workflow:

oozie job -oozie http://<oozie-server-hostname>:11000/oozie -config /$PATH/job.properties -run
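
Once submitted, Oozie prints a job ID; you can check progress or logs with the same CLI (the job ID below is only a placeholder):

oozie job -oozie http://<oozie-server-hostname>:11000/oozie -info <job-id>
oozie job -oozie http://<oozie-server-hostname>:11000/oozie -log <job-id>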

I hope this helped you get your job done quickly. Please feel free to give your valuable feedback or suggestions.



How to remove header from csv during loading to hive

Sometimes our data file contains a header row that we do not want loaded into the Hive table. If you need to ignore the header, this article will help you.

[saurkuma@m1 ~]$ cat sampledata.csv

id,Name

1,Saurabh

2,Vishal

3,Jeba

4,Sonu

Step 1: Create a table with table properties to ignore it.

hive> create table test(id int,name string) row format delimited fields terminated by ',' tblproperties("skip.header.line.count"="1");

OK

Time taken: 0.233 seconds

hive> show tables;

OK

salesdata01

table1

table2

test

tmp

Time taken: 0.335 seconds, Fetched: 5 row(s)

hive> load data local inpath '/home/saurkuma/sampledata.csv' overwrite into table test;

Loading data to table demo.test

Table demo.test stats: [numFiles=1, totalSize=41]

OK

Time taken: 0.979 seconds

hive> select * from test;

OK

1 Saurabh

2 Vishal

3 Jeba

4 Sonu

Time taken: 0.111 seconds, Fetched: 4 row(s)
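
If the table already exists, the same property can be added afterwards; a minimal sketch using the hive CLI (same table name as above):

[saurkuma@m1 ~]$ hive -e "alter table test set tblproperties('skip.header.line.count'='1');"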

To remove the header in Pig:

A = load 'sampledata.csv' using PigStorage(',');
B = FILTER A BY $0 != 'id'; -- drop the header row by filtering out the column-name value

I hope this helped you get the job done easily. Please feel free to give your valuable suggestions or feedback.



Insert date into hive tables shows null during select

When we create a table on top of a file (CSV or any other format) and load data into the Hive table, we may find that select queries return NULL values.

(Screenshot: Hive select query returning NULL values)

You can solve it in the following ways:

[saurkuma@m1 ~]$ ll

total 584

-rw-r--r-- 1 saurkuma saurkuma 591414 Mar 16 02:31 SalesData01.csv

[saurkuma@m1 ~]$ hive

WARNING: Use “yarn jar” to launch YARN applications.

ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,file:/usr/hdp/2.3.4.0-3485/hadoop/lib/hadoop-lzo-0.6.0.2.3.4.0-3485-sources.jar!/ivysettings.xml will be used

Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties

hive> show databases;

OK

default

demo

testhive

Time taken: 3.341 seconds, Fetched: 3 row(s)

hive> use demo;

OK

Time taken: 1.24 seconds

hive> create table salesdata01 (Row_ID INT, Order_ID INT, Order_date String, Order_Priority STRING, Order_Quantity FLOAT, Sales FLOAT, Discount FLOAT, Shipping_Mode STRING, Profit FLOAT, Unit_Price FLOAT) row format delimited fields terminated by ',';

OK

Time taken: 0.782 seconds

hive> select * from salesdata01;

OK

Time taken: 0.721 seconds

hive> load data local inpath '/home/saurkuma/SalesData01.csv' overwrite into table salesdata01;

Loading data to table demo.salesdata01

Table demo.salesdata01 stats: [numFiles=1, totalSize=591414]

OK

Time taken: 1.921 seconds

hive> select * from salesdata01 limit 10;

OK

1 3 13-10-2010 Low 6.0 261.54 0.04 Regular Air -213.25 38.94

49 293 01-10-2012 High 49.0 10123.02 0.07 Delivery Truck 457.81 208.16

50 293 01-10-2012 High 27.0 244.57 0.01 Regular Air 46.71 8.69

80 483 10-07-2011 High 30.0 4965.7593 0.08 Regular Air 1198.97 195.99

85 515 28-08-2010 Not Specified 19.0 394.27 0.08 Regular Air 30.94 21.78

86 515 28-08-2010 Not Specified 21.0 146.69 0.05 Regular Air 4.43 6.64

97 613 17-06-2011 High 12.0 93.54 0.03 Regular Air -54.04 7.3

98 613 17-06-2011 High 22.0 905.08 0.09 Regular Air 127.7 42.76

103 643 24-03-2011 High 21.0 2781.82 0.07 Express Air -695.26 138.14

107 678 26-02-2010 Low 44.0 228.41 0.07 Regular Air -226.36 4.98

Time taken: 0.143 seconds, Fetched: 10 row(s)

hive> select * from salesdata01 where Order_date='01-10-2012' limit 10;

OK

49 293 01-10-2012 High 49.0 10123.02 0.07 Delivery Truck 457.81 208.16

50 293 01-10-2012 High 27.0 244.57 0.01 Regular Air 46.71 8.69

3204 22980 01-10-2012 Not Specified 17.0 224.09 0.0 Regular Air -27.92 12.44

3205 22980 01-10-2012 Not Specified 10.0 56.05 0.06 Regular Air -27.73 4.98

2857 20579 01-10-2012 Medium 16.0 1434.086 0.1 Regular Air -26.25 110.99

145 929 01-10-2012 High 21.0 227.66 0.04 Regular Air -100.16 10.97

146 929 01-10-2012 High 39.0 84.33 0.04 Regular Air -64.29 2.08

859 6150 01-10-2012 Critical 38.0 191.14 0.06 Regular Air 82.65 4.98

Time taken: 0.506 seconds, Fetched: 8 row(s)

hive> select Row_ID, cast(to_date(from_unixtime(unix_timestamp(Order_date, 'dd-MM-yyyy'))) as date) from salesdata01 limit 10;

OK

1 2010-10-13

49 2012-10-01

50 2012-10-01

80 2011-07-10

85 2010-08-28

86 2010-08-28

97 2011-06-17

98 2011-06-17

103 2011-03-24

107 2010-02-26

hive> select Row_ID, from_unixtime(unix_timestamp(Order_date, 'dd-MM-yyyy'),'yyyy-MM-dd') from salesdata01 limit 10;

OK

1 2010-10-13

49 2012-10-01

50 2012-10-01

80 2011-07-10

85 2010-08-28

86 2010-08-28

97 2011-06-17

98 2011-06-17

103 2011-03-24

107 2010-02-26

Time taken: 0.157 seconds, Fetched: 10 row(s)

hive> select Row_ID, from_unixtime(unix_timestamp(Order_date, 'dd-MM-yyyy')) from salesdata01 limit 10;

OK

1 2010-10-13 00:00:00

49 2012-10-01 00:00:00

50 2012-10-01 00:00:00

80 2011-07-10 00:00:00

85 2010-08-28 00:00:00

86 2010-08-28 00:00:00

97 2011-06-17 00:00:00

98 2011-06-17 00:00:00

103 2011-03-24 00:00:00

107 2010-02-26 00:00:00

Time taken: 0.09 seconds, Fetched: 10 row(s)

hive> select Row_ID, from_unixtime(unix_timestamp(Order_date, 'dd-MM-yyyy'),'dd-MM-yyyy') from salesdata01 limit 10;

OK

1 13-10-2010

49 01-10-2012

50 01-10-2012

80 10-07-2011

85 28-08-2010

86 28-08-2010

97 17-06-2011

98 17-06-2011

103 24-03-2011

107 26-02-2010
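
To persist the converted values, one option (a sketch; the target table name salesdata01_dates and the column alias order_dt are hypothetical) is to create a table with a proper DATE column from the conversion shown above:

[saurkuma@m1 ~]$ hive -e "use demo; create table salesdata01_dates as select Row_ID, cast(to_date(from_unixtime(unix_timestamp(Order_date,'dd-MM-yyyy'))) as date) as order_dt from salesdata01;"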

Another example:

Another case: you may be trying to store date and timestamp values in a timestamp column in Hive, where the source file contains plain dates in some rows and full timestamps in others.

Sample Data:

[saurkuma@m1 ~]$ cat sample.txt

1,2015-04-15 00:00:00

2,2015-04-16 00:00:00

3,2015-04-17

hive> create table table1 (id int,tsstr string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

OK

Time taken: 0.241 seconds

hive> LOAD DATA LOCAL INPATH '/home/saurkuma/sample.txt' INTO TABLE table1;

Loading data to table demo.table1

Table demo.table1 stats: [numFiles=1, totalSize=57]

OK

Time taken: 0.855 seconds

hive> select * from table1;

OK

1 2015-04-15 00:00:00

2 2015-04-16 00:00:00

3 2015-04-17

Time taken: 0.097 seconds, Fetched: 3 row(s)

hive> create table table2 (id int,mytimestamp timestamp) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

OK

Time taken: 0.24 seconds

hive> INSERT INTO TABLE table2 select id,if(length(tsstr) > 10, tsstr, concat(tsstr,' 00:00:00')) from table1;

Query ID = saurkuma_20170316032711_63d9129a-38c1-4ae8-89f4-e158218d2587

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there’s no reduce operator

Starting Job = job_1489644687414_0001, Tracking URL = http://m2.hdp22:8088/proxy/application_1489644687414_0001/

Kill Command = /usr/hdp/2.3.4.0-3485/hadoop/bin/hadoop job  -kill job_1489644687414_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0

2017-03-16 03:27:36,290 Stage-1 map = 0%,  reduce = 0%

2017-03-16 03:27:55,806 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.89 sec

MapReduce Total cumulative CPU time: 1 seconds 890 msec

Ended Job = job_1489644687414_0001

Stage-4 is selected by condition resolver.

Stage-3 is filtered out by condition resolver.

Stage-5 is filtered out by condition resolver.

Moving data to: hdfs://TESTHA/apps/hive/warehouse/demo.db/table2/.hive-staging_hive_2017-03-16_03-27-11_740_404528501642205352-1/-ext-10000

Loading data to table demo.table2

Table demo.table2 stats: [numFiles=1, numRows=3, totalSize=66, rawDataSize=63]

MapReduce Jobs Launched:

Stage-Stage-1: Map: 1   Cumulative CPU: 1.89 sec   HDFS Read: 4318 HDFS Write: 133 SUCCESS

Total MapReduce CPU Time Spent: 1 seconds 890 msec

OK

Time taken: 47.687 seconds

hive> select * from table2;

OK

1 2015-04-15 00:00:00

2 2015-04-16 00:00:00

3 2015-04-17 00:00:00

Time taken: 0.119 seconds, Fetched: 3 row(s)

I hope this helped you solve your problem. Please feel free to give your valuable feedback or suggestions.



Unix useful commands

Sometimes we need a user who can do everything on our server, just as root does. We can achieve this in two ways:

  1. Create a new user with the same privileges as root
  2. Grant the same privileges to an existing user

Case 1: Let’s say we need to add a new user and grant him root privileges.

Use the following commands to create the new user temp, grant him the same privileges as root, and set his password:

[root@m1 ~]# useradd -ou 0 -g 0 temp

[root@m1 ~]# passwd temp

Changing password for user temp.

New password:

BAD PASSWORD: it is based on a dictionary word

BAD PASSWORD: is too simple

Retype new password:

passwd: all authentication tokens updated successfully.

We’ve just created the user temp, with UID 0 and GID 0, so he is in the same group and has the same permissions as root.
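
A quick check of the new entry (the home directory and shell depend on your useradd defaults):

[root@m1 ~]# grep '^temp:' /etc/passwd
temp:x:0:0::/home/temp:/bin/bash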

Case 2: Grant root privileges to an existing user:
Perhaps you already have a normal user, temp1, and you would like to give it root permissions.

[root@m1 ~]# grep temp1 /etc/passwd

temp1:x:1006:1006::/home/temp1:/bin/bash

Solution 1: Edit the /etc/passwd file and grant root permissions to the user temp1 by changing the User and Group IDs to UID 0 and GID 0.

Solution 2: Create a group, assign the existing user to that group, and grant the group sudo access.

[root@m1 ~]# groupadd test

[root@m1 ~]# usermod -g test temp1

[temp2@m1 ~]$ id temp1

uid=1006(temp1) gid=1007(test) groups=1007(test)

Edit the /etc/sudoers file (ideally with visudo) and add the line %test ALL=(ALL)       NOPASSWD: ALL.

[root@m1 ~]# grep -C4 test /etc/sudoers

# %wheel ALL=(ALL) ALL

## Same thing without a password

%wheel ALL=(ALL) NOPASSWD: ALL

%test ALL=(ALL)       NOPASSWD: ALL

[root@m1 ~]# su temp1

[temp1@m1 ~]$ sudo su – hdfs

[hdfs@m1 ~]$ exit

logout

[temp1@m1 ~]$ sudo su – root

[root@m1 ~]# exit

logout

Delete a user account with UID 0: You won’t be able to delete a second root user that shares UID 0 with the userdel command directly.
[root@m1 ~]# userdel temp
userdel: user temp is currently used by process 1

To delete user temp with UID 0, open the /etc/passwd file and change temp’s UID (for example, to 1111):
[root@m1 ~]# vi /etc/passwd
[root@m1 ~]# grep temp /etc/passwd
temp:x:1111:0::/home/temp:/bin/sh

Now you’ll be able to delete user temp with the userdel command:
[root@m1 ~]# userdel temp
[root@m1 ~]# id temp

id: temp: No such user

 

How to make sure /etc/resolv.conf never gets updated by the DHCP client on CentOS 6:

I am using GNU/Linux with the Internet Systems Consortium DHCP client. It updates my /etc/resolv.conf file each time my laptop connects to a different network or the machine restarts. I would like to keep my existing nameservers. How do I skip the /etc/resolv.conf update on a Linux-based system?

The DHCP protocol allows a host to contact a central server which maintains a list of IP addresses that may be assigned on one or more subnets. This protocol reduces the system administration workload, allowing devices to be added to the network with little or no manual configuration. There are various methods to fix this issue, but I prefer the following one.

We have to modify the interface configuration file, for example /etc/sysconfig/network-scripts/ifcfg-eth0, and set the following options:

 

[root@m1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=dhcp

HWADDR=08:00:27:90:1E:98

DEFROUTE=yes

PEERDNS=no ## changed from yes to no; set the DNS entries below accordingly.

DNS1=192.168.56.104

DNS2=168.244.212.13

DNS3=168.244.217.13

PEERROUTES=yes

IPV4_FAILURE_FATAL=yes

IPV6INIT=no

NAME="System eth0"

Save and close the file. Where,

1. PEERDNS=yes|no – Modify /etc/resolv.conf if peer uses msdns extension (PPP only) or DNS{1,2} are set, or if using dhclient. default to “yes”.

2. DNS{1,2}=<ip address> – Provide DNS addresses that are dropped into the resolv.conf file if PEERDNS is not set to “no”.
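
After saving the changes, restart the network service so they take effect (a typical sequence on CentOS 6, assuming the interface is eth0):

[root@m1 ~]# service network restart
[root@m1 ~]# cat /etc/resolv.conf   ## the nameservers should now stay as configured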

 

I hope this will help you. Please feel free to give your valuable suggestions or feedback.



Oozie server failing with error “cannot load JDBC driver class ‘com.mysql.jdbc.Driver'”

Issue: The Oozie server is failing with the following error:

FATAL Services:514 – SERVER[m2.hdp22] E0103: Could not load service classes, Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’
at org.apache.oozie.service.Services.loadServices(Services.java:309)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4210)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4709)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:802)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:676)
at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:602)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:503)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1068)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1060)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:759)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: <openjpa-2.2.2-r422266:1468616 fatal general error> org.apache.openjpa.persistence.PersistenceException: Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’
at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:102)
at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)
at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1518)
at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:531)
at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:456)
at org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:120)
at org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)
at org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)
at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)
at org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:644)
at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:203)
at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:156)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:227)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:154)
at org.apache.openjpa.persistence.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:60)
at org.apache.oozie.service.JPAService.getEntityManager(JPAService.java:500)
at org.apache.oozie.service.JPAService.init(JPAService.java:201)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
… 26 more
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class ‘com.mysql.jdbc.Driver’
at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1429)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1371)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.apache.openjpa.lib.jdbc.DelegatingDataSource.getConnection(DelegatingDataSource.java:110)
at org.apache.openjpa.lib.jdbc.DecoratingDataSource.getConnection(DecoratingDataSource.java:87)
at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:91)
… 46 more
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
at org.apache.commons.dbcp.BasicDataSource.createConnectionFactory(BasicDataSource.java:1420)
… 51 more

Root Cause:

The MySQL JDBC driver is not on the classpath that the Oozie server uses.

Solution: You need to copy the MySQL JDBC driver to the required location.

[root@m2 oozie]# cp /usr/share/java/mysql-connector-java.jar /usr/hdp/2.3.4.0-3485/oozie/oozie-server/webapps/oozie/WEB-INF/lib/
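
To confirm the driver is in place before restarting (same path as above):

[root@m2 oozie]# ls -l /usr/hdp/2.3.4.0-3485/oozie/oozie-server/webapps/oozie/WEB-INF/lib/mysql-connector-java.jar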

Now restart your oozie server and it will be fine.

I hope this helped you solve your issue. Please feel free to give your valuable feedback or suggestions.



script to kill yarn application if it is running more than x mins

Sometimes we need to list all long-running YARN applications and, based on a threshold, kill them. Sometimes we also need to do this only for a specific YARN queue. In such situations the following script will help you.

[root@m1.hdp22~]$ vi kill_application_after_some_time.sh

#!/bin/bash

if [ "$#" -lt 1 ]; then
  echo "Usage: $0 <max_life_in_mins>"
  exit 1
fi

# list running applications (replace <queue_name> with your queue) and save their IDs
yarn application -list 2>/dev/null | grep <queue_name> | grep RUNNING | awk '{print $1}' > job_list.txt

for jobId in `cat job_list.txt`
do
  finish_time=`yarn application -status $jobId 2>/dev/null | grep Finish-Time | awk '{print $NF}'`
  if [ $finish_time -ne 0 ]; then
    echo "App $jobId is not running"
    exit 1
  fi

  # elapsed seconds = now - (Start-Time in milliseconds / 1000); bc evaluates the expression
  time_diff=`date +%s`-`yarn application -status $jobId 2>/dev/null | grep Start-Time | awk '{print $NF}' | sed 's!$!/1000!'`
  time_diff_in_mins=`echo "("$time_diff")/60" | bc`

  echo "App $jobId is running for $time_diff_in_mins min(s)"

  if [ $time_diff_in_mins -gt $1 ]; then
    echo "Killing app $jobId"
    yarn application -kill $jobId
  else
    echo "App $jobId should continue to run"
  fi
done

[yarn@m1.hdp22 ~]$ ./kill_application_after_some_time.sh 30 (pass the threshold x in minutes)

App application_1487677946023_5995 is running for 0 min(s)

App application_1487677946023_5995 should continue to run
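
To enforce this continuously, the script can be scheduled from cron (a sketch; the path and the 10-minute interval are assumptions, adjust them to your environment):

[yarn@m1.hdp22 ~]$ crontab -e
*/10 * * * * /home/yarn/kill_application_after_some_time.sh 30 >> /tmp/kill_long_running_apps.log 2>&1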

I hope this helps you. Please feel free to give your valuable feedback or suggestions.



Hive2 action with Oozie in kerberos Env

One of my friends was trying to run a simple hive2 action in an Oozie workflow and was getting an error. I decided to replicate it on my cluster and, after a few retries, got it working.

If you have the same requirement of running Hive SQL via Oozie, this article will help you get the job done.

So there are 3 requirements for an Oozie Hive 2 action on a Kerberized HiveServer2:
1. Must have “oozie.credentials.credentialclasses” property defined in /etc/oozie/conf/oozie-site.xml. oozie.credentials.credentialclasses must include the value “hive2=org.apache.oozie.action.hadoop.Hive2Credentials”
2. workflow.xml must include a <credentials><credential>…</credential></credentials> section including the 2 properties “hive2.server.principal” and “hive2.jdbc.url”.
3. The Hive 2 action must reference the above defined credential name in the “cred=” field of the <action> definition.

 

Step 1: First create a directory in HDFS (under your home directory) to keep all the scripts in one place, and run the workflow from there:

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -mkdir -p /user/s0998dnz/hive2demo/app

Step 2: Now create your workflow.xml and job.properties:

[root@m1 hive_oozie_demo]# cat workflow.xml

<workflow-app name="hive2demo" xmlns="uri:oozie:workflow:0.4">
  <global>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
  </global>
  <credentials>
    <credential name="hs2-creds" type="hive2">
      <property>
        <name>hive2.server.principal</name>
        <value>${jdbcPrincipal}</value>
      </property>
      <property>
        <name>hive2.jdbc.url</name>
        <value>${jdbcURL}</value>
      </property>
    </credential>
  </credentials>
  <start to="hive2"/>
  <action name="hive2" cred="hs2-creds">
    <hive2 xmlns="uri:oozie:hive2-action:0.1">
      <jdbc-url>${jdbcURL}</jdbc-url>
      <script>${hivescript}</script>
    </hive2>
    <ok to="End"/>
    <error to="Kill"/>
  </action>
  <kill name="Kill">
    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="End"/>
</workflow-app>

[s0998dnz@m1 hive2_action_oozie]$ cat job.properties

# Job.properties file

nameNode=hdfs://HDPINF

jobTracker=m2.hdp22:8050

exampleDir=${nameNode}/user/${user.name}/hive2demo

oozie.wf.application.path=${exampleDir}/app

oozie.use.system.libpath=true

# Hive2 action

hivescript=${oozie.wf.application.path}/hivequery.hql

outputHiveDatabase=default

jdbcURL=jdbc:hive2://m2.hdp22:10000/default

jdbcPrincipal=hive/_HOST@HADOOPADMIN.COM
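
If you are unsure of the value for jdbcPrincipal, it can usually be read from the HiveServer2 configuration (a sketch; the config path is the HDP default):

[s0998dnz@m1 hive2_action_oozie]$ grep -A1 hive.server2.authentication.kerberos.principal /etc/hive/conf/hive-site.xml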

Step 3: Now create your Hive script:

[s0998dnz@m1 hive2_action_oozie]$ cat hivequery.hql

show databases;

Step 4: Now upload hivequery.hql and workflow.xml to HDFS:
For example:

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -put workflow.xml /user/s0998dnz/hive2demo/app/

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -put hivequery.hql /user/s0998dnz/hive2demo/app/

Step 5: Run the Oozie job with these properties (run kinit first to acquire a Kerberos ticket if required):

[s0998dnz@m1 hive2_action_oozie]$ oozie job -oozie http://m2.hdp22:11000/oozie -config job.properties -run

job: 0000008-170221004234250-oozie-oozi-W
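
You can then follow the job with the Oozie CLI (same server URL and the job ID returned above):

[s0998dnz@m1 hive2_action_oozie]$ oozie job -oozie http://m2.hdp22:11000/oozie -info 0000008-170221004234250-oozie-oozi-W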

I hope this will help you run your hive2 action in Oozie. Please feel free to give your valuable feedback or suggestions.



Enable GUI for Centos 6 on top of command line

If you have installed CentOS 6.5 and you just have a terminal with a black background, and you want to enable a GUI, this article shows you how to get it done.

A desktop environment is not necessary for server usage. But sometimes installing or using an application requires a desktop environment, in which case you can build one as follows.

1. Install the X Window System first.

[root@m1 ~]# yum -y groupinstall "X Window System"

Loaded plugins: fastestmirror

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirrors.centos.webair.com

* extras: mirror.pac-12.org

* updates: bay.uchicago.edu

base/group_gz                                                                                                         | 226 kB     00:06     

Resolving Dependencies

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

The program yum-complete-transaction is found in the yum-utils package.

--> Running transaction check

---> Package firstboot.x86_64 0:1.110.15-4.el6 will be installed

... (remaining dependency resolution output truncated) ...

Complete!

2. Install Desktop

[root@m1 ~]# yum -y groupinstall "Desktop"

Loaded plugins: fastestmirror

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirror.atlanticmetro.net

* extras: centos.chi.host-engine.com

* updates: centos.chi.host-engine.com

Package notification-daemon-0.5.0-1.el6.x86_64 already installed and latest version

Package metacity-2.28.0-23.el6.x86_64 already installed and latest version

Package yelp-2.28.1-17.el6_3.x86_64 already installed and latest version

Resolving Dependencies

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

The program yum-complete-transaction is found in the yum-utils package.

--> Running transaction check

---> Package NetworkManager.x86_64 1:0.8.1-107.el6 will be installed

Complete!

3. Install General Purpose Desktop

[root@m1 ~]# yum -y groupinstall "General Purpose Desktop"

Loaded plugins: fastestmirror, refresh-packagekit

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirror.cs.pitt.edu

* extras: mirrors.kernel.org

* updates: centos.mirrors.wvstateu.edu

HDP-2.5                                                                                                               | 2.9 kB     00:00     

HDP-2.5.3.0                                                                                                           | 2.9 kB     00:00     

HDP-UTILS-1.1.0.21                                                                                                    | 2.9 kB     00:00     

HDP-UTILS-2.5.3.0                                                                                                     | 2.9 kB     00:00     

Updates-ambari-2.4.1.0                                                                                                | 2.9 kB     00:00     

base                                                                                                                  | 3.7 kB     00:00     

extras                                                                                                                | 3.4 kB     00:00     

mysql-connectors-community                                                                                            | 2.5 kB     00:00     

mysql-tools-community                                                                                                 | 2.5 kB     00:00     

mysql56-community                                                                                                     | 2.5 kB     00:00     

updates                                                                                                               | 3.4 kB     00:00     

Package gnome-themes-2.28.1-7.el6.noarch already installed and latest version

4. After the new packages are installed, start the GUI from the command line:

$ startx

or

5. Run the following command to switch to the graphical runlevel:

[root@m1 ~]# init 5
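
To boot into the GUI by default on CentOS 6, you can also set the default runlevel to 5 in /etc/inittab (a quick check after editing it):

[root@m1 ~]# grep initdefault /etc/inittab
id:5:initdefault: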

(Screenshot: CentOS 6 GNOME desktop after switching to the graphical runlevel)

 



Encrypt Database and LDAP Passwords for Ambari-Server

By default the passwords to access the Ambari database and the LDAP server are stored in a plain text configuration file. To have those passwords encrypted, you need to run a special setup command.

[root@m1 ~]# cd /etc/ambari-server/conf/

[root@m1 conf]# ls -ltrh

total 52K

-rw-r--r-- 1 root root 2.8K Mar 31  2015 ambari.properties.rpmsave.20161004015858

-rwxrwxrwx 1 root root  286 Sep 15 19:53 krb5JAASLogin.conf

-rw-r--r-- 1 root root 3.9K Oct  4 01:58 ambari.properties.rpmsave.20161005033229

-rw-r--r-- 1 root root 4.7K Oct  5 03:32 ambari.properties.rpmsave.20161005065356

-rw-r--r-- 1 root root  286 Oct  5 06:45 krb5JAASLogin.conf.rpmsave

-rwxrwxrwx 1 root root 4.9K Oct  5 06:45 log4j.properties

-rw-r--r-- 1 root root    9 Jan 31 05:35 users.txt

-rw-r----- 1 root root    7 Feb  8 11:51 password.dat

-rw-rw---- 1 root root   15 Feb  8 11:53 ldap-password.dat

-rwxrwxrwx 1 root root 7.8K Feb  8 11:53 ambari.properties

To protect these files, you need to run the Ambari security setup. The Ambari Server should not be running when you do this: either make the changes before you start Ambari Server for the first time, or bring the server down first.

Step 1: On the Ambari Server, run the special setup command and answer the prompts:

[root@m1 ~]# ambari-server setup-security

Using python  /usr/bin/python

Security setup options…

===========================================================================

Choose one of the following options:

  [1] Enable HTTPS for Ambari server.

  [2] Encrypt passwords stored in ambari.properties file.

  [3] Setup Ambari kerberos JAAS configuration.

  [4] Setup truststore.

  [5] Import certificate to truststore.

===========================================================================

Enter choice, (1-5): 2 (select option 2 to encrypt the passwords stored in ambari.properties)

Please provide master key for locking the credential store: ******** (Provide a master key for encrypting the passwords. You are prompted to enter the key twice for accuracy.)

Re-enter master key:********

Do you want to persist master key. If you choose not to persist, you need to provide the Master Key while starting the ambari server as an env variable named AMBARI_SECURITY_MASTER_KEY or the start will prompt for the master key. Persist [y/n] (y)? n

Adjusting ambari-server permissions and ownership…

Ambari Server ‘setup-security’ completed successfully.

Note: Now you need to restart the Ambari server and provide the same master key when prompted during the restart. You can avoid the prompt by setting an environment variable in the ambari-env.sh file.

Option 1: Provide key during restart. 

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Enter current Master Key: ********

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server ‘start’ completed successfully.

Option 2: set AMBARI_SECURITY_MASTER_KEY in ambari-env.sh

[root@m1 ~]# vi /var/lib/ambari-server/ambari-env.sh

[root@m1 ~]# grep -C4 AMBARI_SECURITY_MASTER_KEY /var/lib/ambari-server/ambari-env.sh

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

export AMBARI_SECURITY_MASTER_KEY=hadoop

AMBARI_PASSHPHRASE="DEV"

export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'

export PATH=$PATH:/var/lib/ambari-server

export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.6/site-packages

Now restart Ambari-Server 

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server ‘start’ completed successfully.

[root@m1 conf]# ls -ltrh

total 52K

-rw-r--r-- 1 root root 2.8K Mar 31  2015 ambari.properties.rpmsave.20161004015858

-rwxrwxrwx 1 root root  286 Sep 15 19:53 krb5JAASLogin.conf

-rw-r--r-- 1 root root 3.9K Oct  4 01:58 ambari.properties.rpmsave.20161005033229

-rw-r--r-- 1 root root 4.7K Oct  5 03:32 ambari.properties.rpmsave.20161005065356

-rw-r--r-- 1 root root  286 Oct  5 06:45 krb5JAASLogin.conf.rpmsave

-rwxrwxrwx 1 root root 4.9K Oct  5 06:45 log4j.properties

-rw-r--r-- 1 root root    9 Jan 31 05:35 users.txt

-rwxrwxrwx 1 root root 7.8K Feb  8 11:53 ambari.properties

Remove Encryption Entirely

To reset Ambari database and LDAP passwords to a completely unencrypted state:

  1. On the Ambari host, open /etc/ambari-server/conf/ambari.properties with a text editor and set this property: security.passwords.encryption.enabled=false
  2. Delete /var/lib/ambari-server/keys/credentials.jceks
  3. Delete /var/lib/ambari-server/keys/master
  4. You must now reset the database password and, if necessary, the LDAP password. Run ambari-server setup and ambari-server setup-ldap again.

[root@m1 ~]# vi /etc/ambari-server/conf/ambari.properties

[root@m1 ~]# ls -ltrh /var/lib/ambari-server/keys/credentials.jceks

-rw-r—– 1 root root 992 Feb  8 11:35 /var/lib/ambari-server/keys/credentials.jceks

[root@m1 ~]# rm /var/lib/ambari-server/keys/credentials.jceks

rm: remove regular file `/var/lib/ambari-server/keys/credentials.jceks’? y

[root@m1 ~]# ambari-server setup

Using python  /usr/bin/python

Setup ambari-server

Checking SELinux…

SELinux status is ‘disabled’

Customize user account for ambari-server daemon [y/n] (n)? n

Adjusting ambari-server permissions and ownership…

Checking firewall status…

Checking JDK…

Do you want to change Oracle JDK [y/n] (n)? n

Completing setup…

Configuring database…

Enter advanced database configuration [y/n] (n)? y

Configuring database…

==============================================================================

Choose one of the following options:

[1] – PostgreSQL (Embedded)

[2] – Oracle

[3] – MySQL / MariaDB

[4] – PostgreSQL

[5] – Microsoft SQL Server (Tech Preview)

[6] – SQL Anywhere

[7] – BDB

==============================================================================

Enter choice (1): 1

Database name (ambari):

Postgres schema (ambari):

Username (ambari):

Enter Database Password (ambari.db.password):

Re-enter password:

Default properties detected. Using built-in database.

Configuring ambari database…

Checking PostgreSQL…

Configuring local database…

Connecting to local database…done.

Configuring PostgreSQL…

Backup for pg_hba found, reconfiguration not required

Extracting system views…

…………

Adjusting ambari-server permissions and ownership…

Ambari Server ‘setup’ completed successfully.

[root@m1 ~]# ambari-server setup-ldap

Using python  /usr/bin/python

Setting up LDAP properties…

Primary URL* {host:port} (ad.lowes.com:389):

Secondary URL {host:port} :

Use SSL* [true/false] (false):

User object class* (user):

User name attribute* (sAMAccountName):

Group object class* (group):

Group name attribute* (cn):

Group member attribute* (memberOf):

Distinguished name attribute* (dn):

Base DN* (dc=lowes,dc=com):

Referral method [follow/ignore] (ignore):

Bind anonymously* [true/false] (false):

Manager DN* (cn=ambariaddev,cn=users,dc=lowes,dc=com):

Enter Manager Password* :

Re-enter password:

====================

Review Settings

====================

authentication.ldap.managerDn: cn=ambariaddev,cn=users,dc=lowes,dc=com

authentication.ldap.managerPassword: *****

Save settings [y/n] (y)? y

Saving…done

Ambari Server ‘setup-ldap’ completed successfully.

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server ‘start’ completed successfully.

Change the Current Master Key

To change the master key:

  • If you know the current master key or if the current master key has been persisted:
    1. Re-run the encryption setup command and follow the prompts: ambari-server setup-security
      1. Select Option 2: Choose one of the following options:
        • [1] Enable HTTPS for Ambari server.
        • [2] Encrypt passwords stored in ambari.properties file.
        • [3] Setup Ambari kerberos JAAS configuration.
      2. Enter the current master key when prompted if necessary (if it is not persisted or set as an environment variable).
      3. At the Do you want to reset Master Key prompt, enter yes.
      4. At the prompt, enter the new master key and confirm.
  • If you do not know the current master key:
    • Remove encryption entirely, as described here.
    • Re-run ambari-server setup-security as described here.
    • Start or restart the Ambari Server: ambari-server restart

     

Please feel free to give your suggestion or feedback.



Cannot retrieve repository metadata (repomd.xml) for repository

When you upgrade your HDP cluster through a Satellite server or a local repository, and you then start the cluster via Ambari or add new services, you may see the following error.

resource_management.core.exceptions.Fail: Execution of ‘/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector’ returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.3.0.0-2557. Please verify its path and try again.

[root@m1 ~]# yum -d 0 -e 0 -y install slider_2_5_3_0_37
Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.3.0.0-2557. Please verify its path and try again

Root Cause: This happens because repository files for two different versions are present in your yum configuration directory.

[root@m1 yum.repos.d]# ll
total 48
-rw-r--r-- 1 root root 286 Sep 20 14:01 ambari.repo
-rw-r--r--. 1 root root 1991 Oct 23 2014 CentOS-Base.repo
-rw-r--r--. 1 root root 647 Oct 23 2014 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 289 Oct 23 2014 CentOS-fasttrack.repo
-rw-r--r--. 1 root root 630 Oct 23 2014 CentOS-Media.repo
-rw-r--r--. 1 root root 5394 Oct 23 2014 CentOS-Vault.repo
-rw-r--r-- 1 root root 274 Oct 4 10:16 HDP-2.3.0.0-2557.repo
-rw-r--r-- 1 root root 286 Oct 4 03:50 HDP-2.3.0.0.repo
-rw-r--r-- 1 root root 234 Feb 1 11:05 HDP-2.5.3.0.repo
-rw-r--r-- 1 root root 92 Feb 3 12:29 HDP.repo
-rw-r--r-- 1 root root 135 Feb 3 12:29 HDP-UTILS.repo

 

Resolution:

Step 1: You need to disable the repo files for the old version, or move/delete them from the /etc/yum.repos.d directory.

[root@w1 yum.repos.d]# mv HDP-2.3.0.0* /tmp/
[root@w1 yum.repos.d]# ls -ltr
total 40
-rw-r--r--. 1 root root 5394 Oct 23 2014 CentOS-Vault.repo
-rw-r--r--. 1 root root 630 Oct 23 2014 CentOS-Media.repo
-rw-r--r--. 1 root root 289 Oct 23 2014 CentOS-fasttrack.repo
-rw-r--r--. 1 root root 647 Oct 23 2014 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 1991 Oct 23 2014 CentOS-Base.repo
-rw-r--r-- 1 root root 286 Sep 20 14:01 ambari.repo
-rw-r--r-- 1 root root 234 Feb 1 11:05 HDP-2.5.3.0.repo
-rw-r--r-- 1 root root 92 Feb 3 12:29 HDP.repo
-rw-r--r-- 1 root root 135 Feb 3 12:29 HDP-UTILS.repo

Step 2: Now clean the old repository metadata and refresh it with the new repositories.
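
A typical way to clear the cached metadata before pulling it again (run it on each affected node):

[root@w3 yum.repos.d]# yum clean all

Then refresh the metadata; for example: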

[root@w3 yum.repos.d]# yum info all

Loaded plugins: fastestmirror

Determining fastest mirrors

* base: mirror.vcu.edu

* extras: mirror.cs.uwp.edu

* updates: mirror.nodesdirect.com

HDP-2.5                                                                                                                                                  | 2.9 kB     00:00     

HDP-2.5/primary_db                                                                                                                                       |  69 kB     00:00     

HDP-2.5.3.0                                                                                                                                              | 2.9 kB     00:00     

HDP-2.5.3.0/primary_db                                                                                                                                   |  69 kB     00:00     

HDP-UTILS-1.1.0.21                                                                                                                                       | 2.9 kB     00:00     

HDP-UTILS-1.1.0.21/primary_db                                                                                                                            |  33 kB     00:00     

HDP-UTILS-2.5.3.0                                                                                                                                        | 2.9 kB     00:00     

HDP-UTILS-2.5.3.0/primary_db                                                                                                                             |  33 kB     00:00     

Updates-ambari-2.4.1.0                                                                                                                                   | 2.9 kB     00:00     

Updates-ambari-2.4.1.0/primary_db                                                                                                                        | 8.3 kB     00:00     

base                                                                                                                                                     | 3.7 kB     00:00     

base/primary_db                                                                                                                                          | 4.7 MB     00:36     

extras                                                                                                                                                   | 3.4 kB     00:00     

extras/primary_db                                                                                                                                        |  37 kB     00:00     

updates                                                                                                                                                  | 3.4 kB     00:00     

http://mirror.nodesdirect.com/centos/6.8/updates/x86_64/repodata/b02ecfdd926546ba78f0f52d424e06c6a9b7da60cee4b9bf83a54a892b9efd06-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.nodesdirect.com/centos/6.8/updates/x86_64/repodata/b02ecfdd926546ba78f0f52d424e06c6a9b7da60cee4b9bf83a54a892b9efd06-primary.sqlite.bz2: (28, ‘Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds’)

Trying other mirror.

updates/primary_db                                                                                                                                       | 4.3 MB     00:01     

Installed Packages

Name        : MAKEDEV

Arch        : x86_64

Version     : 3.24

Release     : 6.el6

Size        : 222 k

Repo        : installed

From repo   : anaconda-CentOS-201410241409.x86_64

Summary     : A program used for creating device files in /dev

URL         : http://www.lanana.org/docs/device-list/

License     : GPLv2

Description : This package contains the MAKEDEV program, which makes it easier to create

            : and maintain the files in the /dev directory.  /dev directory files

            : correspond to a particular device supported by Linux (serial or printer

            : ports, scanners, sound cards, tape drives, CD-ROM drives, hard drives,

            : etc.) and interface with the drivers in the kernel.

            : You should install the MAKEDEV package because the MAKEDEV utility makes

            : it easy to manage the /dev directory device files.

 

Please feel free to give your valuable feedback.