Monthly Archives: February 2017


Script to kill a YARN application if it has been running for more than x minutes

Sometimes we need to list all long-running YARN applications and kill the ones that exceed a threshold, occasionally restricted to a specific YARN queue. In such situations the following script will do the job.

[root@m1.hdp22 ~]$ vi kill_application_after_some_time.sh

#!/bin/bash

if [ "$#" -lt 1 ]; then
  echo "Usage: $0 <max_life_in_mins>"
  exit 1
fi

# Replace <queue_name> with the YARN queue you want to monitor.
yarn application -list 2>/dev/null | grep "<queue_name>" | grep RUNNING | awk '{print $1}' > job_list.txt

for jobId in $(cat job_list.txt)
do
  finish_time=$(yarn application -status "$jobId" 2>/dev/null | grep Finish-Time | awk '{print $NF}')
  if [ "$finish_time" -ne 0 ]; then
    echo "App $jobId is not running"
    continue   # skip finished apps instead of aborting the whole loop
  fi

  # Start-Time is reported in milliseconds since the epoch; convert to seconds before subtracting.
  start_time=$(yarn application -status "$jobId" 2>/dev/null | grep Start-Time | awk '{print $NF}')
  time_diff=$(( $(date +%s) - start_time / 1000 ))
  time_diff_in_mins=$(( time_diff / 60 ))

  echo "App $jobId is running for $time_diff_in_mins min(s)"

  if [ "$time_diff_in_mins" -gt "$1" ]; then
    echo "Killing app $jobId"
    yarn application -kill "$jobId"
  else
    echo "App $jobId should continue to run"
  fi
done

[yarn@m1.hdp22 ~]$ ./kill_application_after_some_time.sh 30 (pass the threshold x in minutes)

App application_1487677946023_5995 is running for 0 min(s)

App application_1487677946023_5995 should continue to run
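The elapsed-time arithmetic the script relies on can be sanity-checked in isolation with fixed timestamps (the values below are illustrative, not from a real cluster):

```shell
# Start-Time reported by `yarn application -status` is in milliseconds since the epoch.
start_ms=1487677946023        # illustrative Start-Time value (ms)
now_s=1487679746              # illustrative current time (s), 30 minutes later
time_diff=$(( now_s - start_ms / 1000 ))
time_diff_in_mins=$(( time_diff / 60 ))
echo "$time_diff_in_mins"     # prints 30
```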

I hope this helps; please feel free to share your feedback or suggestions.



Hive2 action with Oozie in kerberos Env

A friend of mine was trying to run a simple Hive2 action in an Oozie workflow and kept getting errors. I decided to replicate it on my cluster and, after a few retries, got it working.

If you have the same requirement of running Hive SQL via Oozie, this article will help you do the job.

There are 3 requirements for an Oozie Hive 2 action against a kerberized HiveServer2:
1. The "oozie.credentials.credentialclasses" property must be defined in /etc/oozie/conf/oozie-site.xml, and its value must include "hive2=org.apache.oozie.action.hadoop.Hive2Credentials".
2. workflow.xml must include a <credentials><credential>…</credential></credentials> section containing the 2 properties "hive2.server.principal" and "hive2.jdbc.url".
3. The Hive 2 action must reference the above-defined credential name in the "cred=" attribute of the <action> definition.
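For reference, requirement 1 corresponds to an entry like the following in oozie-site.xml (this is just the value described above; your file may carry additional credential classes in the same comma-separated value):

```xml
<!-- /etc/oozie/conf/oozie-site.xml -->
<property>
  <name>oozie.credentials.credentialclasses</name>
  <value>hive2=org.apache.oozie.action.hadoop.Hive2Credentials</value>
</property>
```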

 

Step 1: First create a directory in HDFS (under your home directory) to keep all the scripts in one place:

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -mkdir -p /user/s0998dnz/hive2demo/app

Step 2: Now create your workflow.xml and job.properties:

[root@m1 hive_oozie_demo]# cat workflow.xml

<workflow-app name="hive2demo" xmlns="uri:oozie:workflow:0.4">
  <global>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
  </global>
  <credentials>
    <credential name="hs2-creds" type="hive2">
      <property>
        <name>hive2.server.principal</name>
        <value>${jdbcPrincipal}</value>
      </property>
      <property>
        <name>hive2.jdbc.url</name>
        <value>${jdbcURL}</value>
      </property>
    </credential>
  </credentials>
  <start to="hive2"/>
  <action name="hive2" cred="hs2-creds">
    <hive2 xmlns="uri:oozie:hive2-action:0.1">
      <jdbc-url>${jdbcURL}</jdbc-url>
      <script>${hivescript}</script>
    </hive2>
    <ok to="End"/>
    <error to="Kill"/>
  </action>
  <kill name="Kill">
    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="End"/>
</workflow-app>

[s0998dnz@m1 hive2_action_oozie]$ cat job.properties

# Job.properties file

nameNode=hdfs://HDPINF

jobTracker=m2.hdp22:8050

exampleDir=${nameNode}/user/${user.name}/hive2demo

oozie.wf.application.path=${exampleDir}/app

oozie.use.system.libpath=true

# Hive2 action

hivescript=${oozie.wf.application.path}/hivequery.hql

outputHiveDatabase=default

jdbcURL=jdbc:hive2://m2.hdp22:10000/default

jdbcPrincipal=hive/_HOST@HADOOPADMIN.COM

Step 3: Now create your Hive script:

[s0998dnz@m1 hive2_action_oozie]$ cat hivequery.hql

show databases;

Step 4: Now upload hivequery.hql and workflow.xml to HDFS. For example:

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -put workflow.xml /user/s0998dnz/hive2demo/app/

[s0998dnz@m1 hive2_action_oozie]$ hadoop fs -put hivequery.hql /user/s0998dnz/hive2demo/app/

Step 5: Run the Oozie job with the properties file (please run kinit first to acquire a Kerberos ticket, if required):

[s0998dnz@m1 hive2_action_oozie]$ oozie job -oozie http://m2.hdp22:11000/oozie -config job.properties -run

job: 0000008-170221004234250-oozie-oozi-W

I hope this helps you run your Hive2 action in Oozie; please feel free to share your valuable feedback or suggestions.



Enable GUI for CentOS 6 on top of the command line

If you have installed CentOS 6.5 and all you have is a terminal with a black background, and you want to enable the GUI, this article will get it done for you.

A desktop environment is not necessary for server usage. But sometimes installing or using an application requires one, in which case you can build a desktop environment as follows.

1. Install the X Window System (the base for any desktop environment):

[root@m1 ~]# yum -y groupinstall "X Window System"

Loaded plugins: fastestmirror

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirrors.centos.webair.com

* extras: mirror.pac-12.org

* updates: bay.uchicago.edu

base/group_gz                                                                                                         | 226 kB     00:06     

Resolving Dependencies

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

The program yum-complete-transaction is found in the yum-utils package.

--> Running transaction check

---> Package firstboot.x86_64 0:1.110.15-4.el6 will be installed

(output truncated)

Complete!

2. Install Desktop

[root@m1 ~]# yum -y groupinstall "Desktop"

Loaded plugins: fastestmirror

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirror.atlanticmetro.net

* extras: centos.chi.host-engine.com

* updates: centos.chi.host-engine.com

Package notification-daemon-0.5.0-1.el6.x86_64 already installed and latest version

Package metacity-2.28.0-23.el6.x86_64 already installed and latest version

Package yelp-2.28.1-17.el6_3.x86_64 already installed and latest version

Resolving Dependencies

There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

The program yum-complete-transaction is found in the yum-utils package.

--> Running transaction check

---> Package NetworkManager.x86_64 1:0.8.1-107.el6 will be installed

Complete!

3. Install General Purpose Desktop

[root@m1 ~]# yum -y groupinstall "General Purpose Desktop"

Loaded plugins: fastestmirror, refresh-packagekit

Setting up Group Process

Loading mirror speeds from cached hostfile

* base: mirror.cs.pitt.edu

* extras: mirrors.kernel.org

* updates: centos.mirrors.wvstateu.edu

HDP-2.5                                                                                                               | 2.9 kB     00:00     

HDP-2.5.3.0                                                                                                           | 2.9 kB     00:00     

HDP-UTILS-1.1.0.21                                                                                                    | 2.9 kB     00:00     

HDP-UTILS-2.5.3.0                                                                                                     | 2.9 kB     00:00     

Updates-ambari-2.4.1.0                                                                                                | 2.9 kB     00:00     

base                                                                                                                  | 3.7 kB     00:00     

extras                                                                                                                | 3.4 kB     00:00     

mysql-connectors-community                                                                                            | 2.5 kB     00:00     

mysql-tools-community                                                                                                 | 2.5 kB     00:00     

mysql56-community                                                                                                     | 2.5 kB     00:00     

updates                                                                                                               | 3.4 kB     00:00     

Package gnome-themes-2.28.1-7.el6.noarch already installed and latest version

4. After the installation of the new packages finishes, start the GUI from the console:

$ startx

or

5. Or run the following command to switch to the graphical runlevel:

[root@m1 ~]# init 5
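Note that startx and init 5 only bring up the GUI for the current session. To boot into the GUI by default on CentOS 6, the default runlevel in /etc/inittab can be set to 5; here is a sketch run against a local copy of the file (the real path is /etc/inittab):

```shell
INITTAB=./inittab                      # in practice: /etc/inittab
echo 'id:3:initdefault:' > "$INITTAB"  # simulate a console-only default runlevel
# Switch the default runlevel from 3 (text) to 5 (graphical).
sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' "$INITTAB"
cat "$INITTAB"                         # prints id:5:initdefault:
```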


 



Encrypt Database and LDAP Passwords for Ambari-Server

By default the passwords to access the Ambari database and the LDAP server are stored in a plain text configuration file. To have those passwords encrypted, you need to run a special setup command.

[root@m1 ~]# cd /etc/ambari-server/conf/

[root@m1 conf]# ls -ltrh

total 52K

-rw-r--r-- 1 root root 2.8K Mar 31  2015 ambari.properties.rpmsave.20161004015858
-rwxrwxrwx 1 root root  286 Sep 15 19:53 krb5JAASLogin.conf
-rw-r--r-- 1 root root 3.9K Oct  4 01:58 ambari.properties.rpmsave.20161005033229
-rw-r--r-- 1 root root 4.7K Oct  5 03:32 ambari.properties.rpmsave.20161005065356
-rw-r--r-- 1 root root  286 Oct  5 06:45 krb5JAASLogin.conf.rpmsave
-rwxrwxrwx 1 root root 4.9K Oct  5 06:45 log4j.properties
-rw-r--r-- 1 root root    9 Jan 31 05:35 users.txt
-rw-r----- 1 root root    7 Feb  8 11:51 password.dat
-rw-rw---- 1 root root   15 Feb  8 11:53 ldap-password.dat
-rwxrwxrwx 1 root root 7.8K Feb  8 11:53 ambari.properties

To protect these passwords you need to run the Ambari security setup. Ambari Server should not be running when you do this: either make the edits before you start Ambari Server for the first time, or bring the server down before making them.

Step 1: On the Ambari Server, run the special setup command and answer the prompts:

[root@m1 ~]# ambari-server setup-security

Using python  /usr/bin/python

Security setup options…

===========================================================================

Choose one of the following options:

  [1] Enable HTTPS for Ambari server.

  [2] Encrypt passwords stored in ambari.properties file.

  [3] Setup Ambari kerberos JAAS configuration.

  [4] Setup truststore.

  [5] Import certificate to truststore.

===========================================================================

Enter choice, (1-5): 2 (select option 2 to encrypt the passwords stored in ambari.properties)

Please provide master key for locking the credential store: ******** (Provide a master key for encrypting the passwords. You are prompted to enter the key twice for accuracy.)

Re-enter master key:********

Do you want to persist master key. If you choose not to persist, you need to provide the Master Key while starting the ambari server as an env variable named AMBARI_SECURITY_MASTER_KEY or the start will prompt for the master key. Persist [y/n] (y)? n

Adjusting ambari-server permissions and ownership…

Ambari Server 'setup-security' completed successfully.

Note: Now you need to restart the Ambari server and provide the same master key when it prompts during restart. You can avoid the prompt by setting an environment variable in the ambari-env.sh file.

Option 1: Provide key during restart. 

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Enter current Master Key: ********

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server 'start' completed successfully.

Option 2: set AMBARI_SECURITY_MASTER_KEY in ambari-env.sh

[root@m1 ~]# vi /var/lib/ambari-server/ambari-env.sh

[root@m1 ~]# grep -C4 AMBARI_SECURITY_MASTER_KEY /var/lib/ambari-server/ambari-env.sh

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

export AMBARI_SECURITY_MASTER_KEY=hadoop

AMBARI_PASSHPHRASE="DEV"

export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'

export PATH=$PATH:/var/lib/ambari-server

export PYTHONPATH=$PYTHONPATH:/usr/lib/python2.6/site-packages

Now restart Ambari-Server 

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server 'start' completed successfully.

[root@m1 conf]# ls -ltrh

total 52K

-rw-r--r-- 1 root root 2.8K Mar 31  2015 ambari.properties.rpmsave.20161004015858
-rwxrwxrwx 1 root root  286 Sep 15 19:53 krb5JAASLogin.conf
-rw-r--r-- 1 root root 3.9K Oct  4 01:58 ambari.properties.rpmsave.20161005033229
-rw-r--r-- 1 root root 4.7K Oct  5 03:32 ambari.properties.rpmsave.20161005065356
-rw-r--r-- 1 root root  286 Oct  5 06:45 krb5JAASLogin.conf.rpmsave
-rwxrwxrwx 1 root root 4.9K Oct  5 06:45 log4j.properties
-rw-r--r-- 1 root root    9 Jan 31 05:35 users.txt
-rwxrwxrwx 1 root root 7.8K Feb  8 11:53 ambari.properties

Remove Encryption Entirely

To reset Ambari database and LDAP passwords to a completely unencrypted state:

  1. On the Ambari host, open /etc/ambari-server/conf/ambari.properties with a text editor and set this property: security.passwords.encryption.enabled=false
  2. Delete /var/lib/ambari-server/keys/credentials.jceks
  3. Delete /var/lib/ambari-server/keys/master
  4. You must now reset the database password and, if necessary, the LDAP password. Run ambari-server setup and ambari-server setup-ldap again.
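The property edit in step 1 can be scripted with sed; here is a sketch run against a local copy of the file (the real path is /etc/ambari-server/conf/ambari.properties), with the remaining steps shown as comments:

```shell
PROPS=./ambari.properties   # in practice: /etc/ambari-server/conf/ambari.properties
echo 'security.passwords.encryption.enabled=true' > "$PROPS"   # simulate the current setting
sed -i 's/^security.passwords.encryption.enabled=.*/security.passwords.encryption.enabled=false/' "$PROPS"
cat "$PROPS"                # prints security.passwords.encryption.enabled=false
# Then, on the Ambari host:
#   rm /var/lib/ambari-server/keys/credentials.jceks
#   rm /var/lib/ambari-server/keys/master
#   ambari-server setup        # reset the database password
#   ambari-server setup-ldap   # reset the LDAP password, if used
```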

[root@m1 ~]# vi /etc/ambari-server/conf/ambari.properties

[root@m1 ~]# ls -ltrh /var/lib/ambari-server/keys/credentials.jceks

-rw-r—– 1 root root 992 Feb  8 11:35 /var/lib/ambari-server/keys/credentials.jceks

[root@m1 ~]# rm /var/lib/ambari-server/keys/credentials.jceks

rm: remove regular file `/var/lib/ambari-server/keys/credentials.jceks'? y

[root@m1 ~]# ambari-server setup

Using python  /usr/bin/python

Setup ambari-server

Checking SELinux…

SELinux status is 'disabled'

Customize user account for ambari-server daemon [y/n] (n)? n

Adjusting ambari-server permissions and ownership…

Checking firewall status…

Checking JDK…

Do you want to change Oracle JDK [y/n] (n)? n

Completing setup…

Configuring database…

Enter advanced database configuration [y/n] (n)? y

Configuring database…

==============================================================================

Choose one of the following options:

[1] – PostgreSQL (Embedded)

[2] – Oracle

[3] – MySQL / MariaDB

[4] – PostgreSQL

[5] – Microsoft SQL Server (Tech Preview)

[6] – SQL Anywhere

[7] – BDB

==============================================================================

Enter choice (1): 1

Database name (ambari):

Postgres schema (ambari):

Username (ambari):

Enter Database Password (ambari.db.password):

Re-enter password:

Default properties detected. Using built-in database.

Configuring ambari database…

Checking PostgreSQL…

Configuring local database…

Connecting to local database…done.

Configuring PostgreSQL…

Backup for pg_hba found, reconfiguration not required

Extracting system views…

…………

Adjusting ambari-server permissions and ownership…

Ambari Server 'setup' completed successfully.

[root@m1 ~]# ambari-server setup-ldap

Using python  /usr/bin/python

Setting up LDAP properties…

Primary URL* {host:port} (ad.lowes.com:389):

Secondary URL {host:port} :

Use SSL* [true/false] (false):

User object class* (user):

User name attribute* (sAMAccountName):

Group object class* (group):

Group name attribute* (cn):

Group member attribute* (memberOf):

Distinguished name attribute* (dn):

Base DN* (dc=lowes,dc=com):

Referral method [follow/ignore] (ignore):

Bind anonymously* [true/false] (false):

Manager DN* (cn=ambariaddev,cn=users,dc=lowes,dc=com):

Enter Manager Password* :

Re-enter password:

====================

Review Settings

====================

authentication.ldap.managerDn: cn=ambariaddev,cn=users,dc=lowes,dc=com

authentication.ldap.managerPassword: *****

Save settings [y/n] (y)? y

Saving…done

Ambari Server 'setup-ldap' completed successfully.

[root@m1 ~]# ambari-server restart

Using python  /usr/bin/python

Restarting ambari-server

Using python  /usr/bin/python

Stopping ambari-server

Ambari Server stopped

Using python  /usr/bin/python

Starting ambari-server

Ambari Server running with administrator privileges.

Organizing resource files at /var/lib/ambari-server/resources…

Ambari database consistency check started…

No errors were found.

Ambari database consistency check finished

Server PID at: /var/run/ambari-server/ambari-server.pid

Server out at: /var/log/ambari-server/ambari-server.out

Server log at: /var/log/ambari-server/ambari-server.log

Waiting for server start………………..

Ambari Server 'start' completed successfully.

Change the Current Master Key

To change the master key:

  • If you know the current master key, or if the current master key has been persisted:
    1. Re-run the encryption setup command (ambari-server setup-security) and follow the prompts:
      1. Select option 2 (Encrypt passwords stored in ambari.properties file).
      2. Enter the current master key when prompted, if necessary (i.e., if it is not persisted or set as an environment variable).
      3. At the "Do you want to reset Master Key" prompt, enter yes.
      4. At the prompt, enter the new master key and confirm.
  • If you do not know the current master key:
    • Remove encryption entirely, as described above.
    • Re-run ambari-server setup-security as described above.
    • Start or restart the Ambari Server: ambari-server restart

     

Please feel free to give your suggestion or feedback.



Cannot retrieve repository metadata (repomd.xml) for repository

When you upgrade your HDP cluster through a Satellite server or a local repository, and then start your cluster via Ambari or add new services to it, you may see the following error.

resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-collector' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.3.0.0-2557. Please verify its path and try again.

[root@m1 ~]# yum -d 0 -e 0 -y install slider_2_5_3_0_37
Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.3.0.0-2557. Please verify its path and try again

Root Cause: there are repository files for two different versions in your yum directory.

[root@m1 yum.repos.d]# ll
total 48
-rw-r--r--  1 root root  286 Sep 20 14:01 ambari.repo
-rw-r--r--. 1 root root 1991 Oct 23  2014 CentOS-Base.repo
-rw-r--r--. 1 root root  647 Oct 23  2014 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  289 Oct 23  2014 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Oct 23  2014 CentOS-Media.repo
-rw-r--r--. 1 root root 5394 Oct 23  2014 CentOS-Vault.repo
-rw-r--r--  1 root root  274 Oct  4 10:16 HDP-2.3.0.0-2557.repo
-rw-r--r--  1 root root  286 Oct  4 03:50 HDP-2.3.0.0.repo
-rw-r--r--  1 root root  234 Feb  1 11:05 HDP-2.5.3.0.repo
-rw-r--r--  1 root root   92 Feb  3 12:29 HDP.repo
-rw-r--r--  1 root root  135 Feb  3 12:29 HDP-UTILS.repo

 

Resolution:

Step 1: Disable the repo files for the old version, or move/delete them out of the /etc/yum.repos.d directory.
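Before moving anything, you can confirm which file defines the stale repo id by grepping the repo directory; here is a sketch against local copies (the real directory is /etc/yum.repos.d, and the file contents are illustrative):

```shell
REPODIR=./repos.d            # in practice: /etc/yum.repos.d
mkdir -p "$REPODIR"
# Simulate an old and a new HDP repo file.
printf '[HDP-2.3.0.0-2557]\nname=old HDP repo\n' > "$REPODIR/HDP-2.3.0.0-2557.repo"
printf '[HDP-2.5.3.0]\nname=new HDP repo\n'      > "$REPODIR/HDP-2.5.3.0.repo"
# List only the files that mention the stale repo id from the error message.
grep -l '2.3.0.0-2557' "$REPODIR"/*.repo
```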

[root@w1 yum.repos.d]# mv HDP-2.3.0.0* /tmp/
[root@w1 yum.repos.d]# ls -ltr
total 40
-rw-r--r--. 1 root root 5394 Oct 23  2014 CentOS-Vault.repo
-rw-r--r--. 1 root root  630 Oct 23  2014 CentOS-Media.repo
-rw-r--r--. 1 root root  289 Oct 23  2014 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  647 Oct 23  2014 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 1991 Oct 23  2014 CentOS-Base.repo
-rw-r--r--  1 root root  286 Sep 20 14:01 ambari.repo
-rw-r--r--  1 root root  234 Feb  1 11:05 HDP-2.5.3.0.repo
-rw-r--r--  1 root root   92 Feb  3 12:29 HDP.repo
-rw-r--r--  1 root root  135 Feb  3 12:29 HDP-UTILS.repo

Step 2: Now clean the old cached metadata (yum clean all) and verify that yum can fetch the new repo metadata:

[root@w3 yum.repos.d]# yum info all

Loaded plugins: fastestmirror

Determining fastest mirrors

* base: mirror.vcu.edu

* extras: mirror.cs.uwp.edu

* updates: mirror.nodesdirect.com

HDP-2.5                                                                                                                                                  | 2.9 kB     00:00     

HDP-2.5/primary_db                                                                                                                                       |  69 kB     00:00     

HDP-2.5.3.0                                                                                                                                              | 2.9 kB     00:00     

HDP-2.5.3.0/primary_db                                                                                                                                   |  69 kB     00:00     

HDP-UTILS-1.1.0.21                                                                                                                                       | 2.9 kB     00:00     

HDP-UTILS-1.1.0.21/primary_db                                                                                                                            |  33 kB     00:00     

HDP-UTILS-2.5.3.0                                                                                                                                        | 2.9 kB     00:00     

HDP-UTILS-2.5.3.0/primary_db                                                                                                                             |  33 kB     00:00     

Updates-ambari-2.4.1.0                                                                                                                                   | 2.9 kB     00:00     

Updates-ambari-2.4.1.0/primary_db                                                                                                                        | 8.3 kB     00:00     

base                                                                                                                                                     | 3.7 kB     00:00     

base/primary_db                                                                                                                                          | 4.7 MB     00:36     

extras                                                                                                                                                   | 3.4 kB     00:00     

extras/primary_db                                                                                                                                        |  37 kB     00:00     

updates                                                                                                                                                  | 3.4 kB     00:00     

http://mirror.nodesdirect.com/centos/6.8/updates/x86_64/repodata/b02ecfdd926546ba78f0f52d424e06c6a9b7da60cee4b9bf83a54a892b9efd06-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.nodesdirect.com/centos/6.8/updates/x86_64/repodata/b02ecfdd926546ba78f0f52d424e06c6a9b7da60cee4b9bf83a54a892b9efd06-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')

Trying other mirror.

updates/primary_db                                                                                                                                       | 4.3 MB     00:01     

Installed Packages

Name        : MAKEDEV

Arch        : x86_64

Version     : 3.24

Release     : 6.el6

Size        : 222 k

Repo        : installed

From repo   : anaconda-CentOS-201410241409.x86_64

Summary     : A program used for creating device files in /dev

URL         : http://www.lanana.org/docs/device-list/

License     : GPLv2

Description : This package contains the MAKEDEV program, which makes it easier to create

            : and maintain the files in the /dev directory.  /dev directory files

            : correspond to a particular device supported by Linux (serial or printer

            : ports, scanners, sound cards, tape drives, CD-ROM drives, hard drives,

            : etc.) and interface with the drivers in the kernel.

            : You should install the MAKEDEV package because the MAKEDEV utility makes

            : it easy to manage the /dev directory device files.

 

Please feel free to give your valuable feedback.



Unable to initialize Falcon Client object. Cause : Could not authenticate, Authentication failed

If you upgrade to or install HDP 2.5.0 or later without first installing the Berkeley DB file, you will get the error "Unable to initialize Falcon Client object. Cause : Could not authenticate, Authentication failed", or

HTTP ERROR: 503 Problem accessing /index.html. Reason: SERVICE_UNAVAILABLE, or

Falcon UI is unavailable. From Falcon logs: java.lang.RuntimeException: org.apache.falcon.FalconException: Unable to get instance for org.apache.falcon.atlas.service.AtlasService

 

Resolution:

Steps:

1. Download the required Berkeley DB implementation file.

wget -O je-5.0.73.jar "http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar"

2. Log in to the Ambari server host with administrator privileges: su - root

3. Copy the file to the Ambari server share folder.

cp je-5.0.73.jar /usr/share/

4. Set permissions on the file to owner=read/write, group=read, other=read.

chmod 644 /usr/share/je-5.0.73.jar

5. Configure the Ambari server to use the Berkeley DB driver.

ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar

6. Restart the Ambari server: ambari-server restart

7. Restart the Falcon service from the Ambari UI. You need administrator privileges in Ambari to restart a service.

8. In the Ambari web UI, click the Services tab and select the Falcon service in the left Services pane. From the Falcon Summary page, click Service Actions > Restart All, then click Confirm Restart All.

When the service is available, the Falcon status displays as Started on the Summary page.



Enable logging for client connections and running queries with Phoenix Query Server

At the default log level of INFO, Phoenix Query Server (PQS) does not log details about client connections or the queries being run. To obtain such logs, you must modify the log4j configuration for certain classes.

To enable logging such messages by PQS, perform the following:

  1. On the node that runs the PQS service, edit the file /usr/hdp/current/phoenix-server/bin/log4j.properties and add/replace the following properties:

log4j.threshold=DEBUG
log4j.logger.org.apache.phoenix.queryserver=DEBUG
log4j.logger.org.apache.phoenix.queryserver.server=DEBUG
log4j.logger.org.apache.phoenix=DEBUG
log4j.logger.org.eclipse.jetty.io.AbstractEndPoint=DEBUG

  2. Restart the Phoenix Query Server on the node.
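Appending the entries from step 1 can be scripted; here is a sketch run against a local copy of the file (the real path is /usr/hdp/current/phoenix-server/bin/log4j.properties):

```shell
LOG4J=./log4j.properties   # in practice: /usr/hdp/current/phoenix-server/bin/log4j.properties
: > "$LOG4J"               # start from an empty local copy for this sketch
cat >> "$LOG4J" <<'EOF'
log4j.threshold=DEBUG
log4j.logger.org.apache.phoenix.queryserver=DEBUG
log4j.logger.org.apache.phoenix.queryserver.server=DEBUG
log4j.logger.org.apache.phoenix=DEBUG
log4j.logger.org.eclipse.jetty.io.AbstractEndPoint=DEBUG
EOF
grep -c '=DEBUG' "$LOG4J"  # prints 5: all five entries were appended
```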

Note: with the logging level increased as above, the PQS log file grows noticeably faster than at the default level.