HCatalog - CLI


The HCatalog Command Line Interface (CLI) can be invoked with the command $HIVE_HOME/hcatalog/bin/hcat, where $HIVE_HOME is the home directory of Hive. hcat is the command used to run HCatalog DDL commands from the shell.

Use the following commands to start the HCatalog command line interface.

cd $HCAT_HOME/bin
./hcat

If the installation has been done correctly, then you will get the following output −

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
usage: hcat { -e "<query>" | -f "<filepath>" } 
   [ -g "<group>" ] [ -p "<perms>" ] 
   [ -D"<name> = <value>" ]
	
-D <property = value>    use hadoop value for given property
-e <exec>                hcat command given from command line
-f <file>                hcat commands in file
-g <group>               group for the db/table specified in CREATE statement
-h,--help                Print help information
-p <perms>               permissions for the db/table specified in CREATE statement
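As a quick check that the CLI is working, a simple statement can be run directly from the command line. This assumes the Hive metastore configured during installation is reachable; the statement below simply lists the existing tables.

./hcat -e "SHOW TABLES;"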

The HCatalog CLI supports these command line options −

Sr.No   Option   Example & Description

1   -g

hcat -g mygroup ...

The table that gets created must have the group "mygroup".

2   -p

hcat -p rwxr-xr-x ...

The table that gets created must have its read, write, and execute permissions set to rwxr-xr-x.

3   -f

hcat -f myscript.HCatalog ...

myscript.HCatalog is a script file containing DDL commands to execute.

4   -e

hcat -e 'create table mytable(a int);' ...

Treats the given string as a DDL command and executes it.

5   -D

hcat -Dkey=value ...

Passes the key-value pair to HCatalog as a Java system property.

6   (no option)

hcat

Prints a usage message.

Note −

  • The -g and -p options are not mandatory.

  • Only one of the -e and -f options can be provided at a time, not both.

  • The order of options is immaterial; you can specify the options in any order.
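For example, the options described above can be combined in a single invocation. The group name, permission string, table name, and configuration property below are placeholders used only for illustration; substitute the values that apply to your cluster.

./hcat -g hadoopusers -p rwxr-xr-x \
   -Dhive.metastore.uris=thrift://localhost:9083 \
   -e "CREATE TABLE mytable (id INT, name STRING);"

Here -g and -p set the owning group and permissions of the newly created table, -D passes a configuration property as a system property, and -e supplies the DDL statement to execute.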

The HCatalog CLI supports the following DDL commands −

Sr.No   DDL Command & Description

1   CREATE TABLE

Create a table using HCatalog. If you create a table with a CLUSTERED BY clause, you will not be able to write to it with Pig or MapReduce.

2   ALTER TABLE

Supported except for the REBUILD and CONCATENATE options. Its behavior remains the same as in Hive.

3   DROP TABLE

Supported. Behavior is the same as in Hive (drops the complete table and its structure).

4   CREATE/ALTER/DROP VIEW

Supported. Behavior is the same as in Hive.

Note − Pig and MapReduce cannot read from or write to views.

5   SHOW TABLES

Displays a list of tables.

6   SHOW PARTITIONS

Displays a list of partitions.

7   CREATE/DROP FUNCTION

CREATE and DROP FUNCTION operations are supported, but the created functions must still be registered in Pig and placed in the CLASSPATH for MapReduce.

8   DESCRIBE

Supported. Behavior is the same as in Hive. Describes the structure of the table.

Some of the commands from the above table are explained in subsequent chapters.
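As a sketch of how the -f option is typically used, the DDL below could be saved in a script file and executed in a single call. The file name myscript.HCatalog and the employee table definition are placeholders chosen for illustration.

The contents of myscript.HCatalog −

CREATE TABLE IF NOT EXISTS employee (
   eid INT,
   name STRING,
   designation STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

Execute the script −

./hcat -f myscript.HCatalog

Keeping DDL statements in a script file makes them repeatable and easy to version alongside the rest of the project.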
