HBase Mock Test



This section presents various sets of mock tests related to HBase. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test is supplied with a mock test key to let you verify the final score and grade yourself.

Questions and Answers

HBase Mock Test II

Q 1 - The data in a cell of an HBase table is identified using four coordinates. Three of them are rowkey, column family, and column qualifier. The fourth coordinate used to identify each value in a cell is

A - Sequence number

B - Version number

C - Serial number

D - table name

Answer : B

Explanation

In every cell, HBase stores a version number for each piece of data along with the value. So the version number is the fourth coordinate, which identifies the exact piece of data.
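For illustration, here is a minimal sketch using the HBase Java client (the table, column family, and qualifier names are made up); it fetches up to three versions of one cell and prints the timestamp, that is, the version coordinate stored alongside each value.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionsExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("employee"))) {   // hypothetical table
      Get get = new Get(Bytes.toBytes("row1"));
      get.addColumn(Bytes.toBytes("personal"), Bytes.toBytes("city"));   // hypothetical cf:qualifier
      get.setMaxVersions(3);                                             // ask for up to 3 versions of the cell
      Result result = table.get(get);
      for (Cell cell : result.rawCells()) {
        // The timestamp is the version number: the fourth coordinate of the value.
        System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
      }
    }
  }
}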

Q 2 - Retrieving a batch of rows in every RPC call made by an API to an HBase database is called a

A - Batch

B - Scan

C - Bulkrow

D - Grouprow

Answer : B

Explanation

When a group of rows is returned from an HBase database by an API making an RPC call, the process is called a scan. The number of rows returned in each call is controlled by configuring the caching property.
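As a rough sketch (reusing the Table handle and imports from the earlier example, plus Scan and ResultScanner from org.apache.hadoop.hbase.client), the scan below asks the server to ship rows in batches of 500 per RPC by setting the caching property on the Scan object; the family name is hypothetical.

static void scanWithCaching(Table table) throws IOException {
  Scan scan = new Scan();
  scan.addFamily(Bytes.toBytes("personal"));   // hypothetical column family
  scan.setCaching(500);                        // rows shipped back per RPC round trip
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()));
    }
  }
}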

Q 3 - A scan returns a bulk of rows, but only a selected few rows can be fetched from a scan using a

A - Group by clause

B - Minimize clause

C - Subset clause

D - Filter clause

Answer : D

Explanation

The filter clause is used to return only a specific set of records and not the entire result of the scan.

Q 4 - Filters in HBase can be applied to

A - Rowkeys

B - Column qualifiers

C - Data values

D - All of the above

Answer : D

Explanation

Filters can be applied to rowkeys, column qualifiers, and data values.
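A minimal sketch of that idea, again with the HBase Java client (1.x-style filter classes from org.apache.hadoop.hbase.filter; the row prefix, qualifier substring, and value are all invented): each of the three filters below targets a different coordinate, and the FilterList attaches them to one scan.

static void filteredScan(Table table) throws IOException {
  FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
  // Rowkey filter: keep rows whose key starts with "user_".
  filters.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
      new BinaryPrefixComparator(Bytes.toBytes("user_"))));
  // Column qualifier filter: keep columns whose qualifier contains "addr".
  filters.addFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL,
      new SubstringComparator("addr")));
  // Data value filter: keep cells whose value equals "London".
  filters.addFilter(new ValueFilter(CompareFilter.CompareOp.EQUAL,
      new BinaryComparator(Bytes.toBytes("London"))));

  Scan scan = new Scan();
  scan.setFilter(filters);                     // only matching rows and cells are returned
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()));
    }
  }
}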

Q 5 - The command which allows you to change an integer value stored in an HBase cell without reading it first is

A - Incrementcolumnvalue()

B - Incrementinteger()

C - Incrmentcellval()

D - Incrementnext()

Answer : A

Explanation

The incrementColumnValue() command increments the value stored in an HBase cell without reading it first.
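A one-line illustration with the HBase Java client (the rowkey, family, and qualifier names are hypothetical); the increment is applied atomically on the RegionServer, so the client never reads the old value first.

// Sketch only: assumes an open Table handle as in the earlier examples.
long newCount = table.incrementColumnValue(
    Bytes.toBytes("page#home"),   // rowkey
    Bytes.toBytes("stats"),       // column family
    Bytes.toBytes("hits"),        // qualifier holding an 8-byte counter
    1L);                          // amount to add
System.out.println("hit count is now " + newCount);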

Answer : C

Explanation

As the tables are split into chunks and distributed across machines, there is no limit to how many columns they can hold.

Q 7 - A small chunk of data residing on one machine, which is part of a cluster of machines holding one HBase table, is known as

A - Split

B - Region

C - Rowarea

D - Tablearea

Answer : B

Explanation

A region in an HBase table represents a small chunk of data that is part of a large HBase table distributed across many servers.

Q 8 - Servers that host regions of an HBase table are called

A - RegionServers

B - Regional servers

C - Hbase Servers

D - Splitservers

Answer : A

Explanation

The RegionServers are the servers which hold the regions of an HBase table.

Q 9 - Typically an HBase RegionServer is collocated with

A - HDFS Namenode

B - HDFS datanode

C - As a client to HDFS server

D - Tasktrackers

Answer : B

Explanation

The RegionServers are collocated with the datanodes of an HDFS system.

Q 10 - The size of an individual region is governed by the parameter

A - Hbase.region.size

B - Hbase.region.filesize

C - Hbase.region.max.filesize

D - Hbase.max.region.size

Answer : C

Explanation

The parameter Hbase.region.max.filesize is present in hbase-site.xml and is configured to decide the maximum size a region can grow to.
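As a hedged sketch, a client-side program could read that threshold from its loaded configuration as shown below; note that the key used here, hbase.hregion.max.filesize, is the spelling found in shipped hbase-site.xml files, and the 10 GB fallback is only an assumed default for illustration.

// Sketch only: HBaseConfiguration.create() loads hbase-site.xml from the classpath.
Configuration conf = HBaseConfiguration.create();
long maxRegionBytes = conf.getLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
System.out.println("regions split at roughly " + maxRegionBytes + " bytes");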

Answer : D

Explanation

The region gets split into smaller regions when it grows too big in size.

Q 12 - The two tables which are used to find where regions of various tables are hosted are

A - Regiontab and Metatab

B - Regionbase and Metabase

C - –ROOT- and .META.

D - –ROOT- and .REGION.

Answer : C

Explanation

The –ROOT- and .META. tables hold the data used to find the location of regions.

Q 13 - When a client application wants to access a row in an HBase table, it first queries the table

A - –ROOT-

B - .META.

C - .REGIONS.

D - .ALLREGIONS.

Answer : A

Explanation

The client first goes to the –ROOT- table, which gives further information on which .META. table to refer to.

Q 14 - In any MapReduce job, HBase can be used as a

A - Metadata store

B - Data source

C - Datanode

D - Metadata node

Answer : B

Explanation

HBase can act as a source, a sink, or a shared resource in a MapReduce job.
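A condensed driver sketch using the HBase MapReduce helper classes (the table names and the MyMapper/MyReducer classes are hypothetical); here one HBase table is the source and another is the sink of the same job.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MyDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-source-and-sink");
    job.setJarByClass(MyDriver.class);

    Scan scan = new Scan();
    scan.setCaching(500);                         // batch rows per RPC while reading

    // HBase as a source: "input_table" feeds [rowkey : scan result] pairs to MyMapper.
    TableMapReduceUtil.initTableMapperJob(
        "input_table", scan, MyMapper.class, Text.class, IntWritable.class, job);

    // HBase as a sink: the Puts emitted by MyReducer land in "output_table".
    TableMapReduceUtil.initTableReducerJob("output_table", MyReducer.class, job);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}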

Q 15 - All MapReduce jobs reading from an HBase table accept their [K1,V1] pair in the form of

A - [rowid:cell value]

B - [rowkey:scan result]

C - [column Family:cell value]

D - [column attribute:scan result]

Answer : B

Explanation

The key and value in a MapReduce job reading from an HBase table correspond to the [rowkey:scan result] pair.
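In the Java client this pairing shows up in the TableMapper signature: the input key is the rowkey as an ImmutableBytesWritable and the input value is the Result of scanning that row. The hypothetical mapper below (matching the driver sketch above) emits a (city, 1) pair per row.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// [K1, V1] = [rowkey : scan result]; the output types are this job's own choice.
public class MyMapper extends TableMapper<Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);

  @Override
  protected void map(ImmutableBytesWritable rowkey, Result row, Context context)
      throws IOException, InterruptedException {
    byte[] city = row.getValue(Bytes.toBytes("personal"), Bytes.toBytes("city"));  // hypothetical cf:qualifier
    if (city != null) {
      context.write(new Text(Bytes.toString(city)), ONE);
    }
  }
}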

Q 16 - When a map task in a MapReduce job reads from an HBase table, it reads from

A - One row

B - One column family

C - One column

D - One region

Answer : D

Explanation

Each map task reading an HBase table reads from one region.

Q 17 - The part of a MapReduce task which writes to an HBase table is

A - Map

B - Reduce

C - Keys

D - none

Answer : B

Explanation

When reading from HBase through MapReduce, the map tasks do the reading, but when writing to HBase, the reduce tasks do the writing.
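The writing side is usually expressed as a TableReducer. Continuing the hypothetical job above, each reduce call below sums its counts, packs them into a Put, and hands the Put to the framework, which writes it into the HBase output table.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Sums the counts per key and writes one row per key into the output table.
public class MyReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    long total = 0;
    for (IntWritable v : values) {
      total += v.get();
    }
    Put put = new Put(Bytes.toBytes(key.toString()));                    // rowkey
    put.addColumn(Bytes.toBytes("stats"), Bytes.toBytes("total"),        // hypothetical cf:qualifier
        Bytes.toBytes(total));
    context.write(new ImmutableBytesWritable(put.getRow()), put);
  }
}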

Q 18 - While writing to HBase using MapReduce tasks, each reduce task writes to

A - One region

B - Two regions

C - All the relevant regions

D - No regions

Answer : C

Explanation

The writes go to the region that is responsible for the rowkey that is being written by the reduce task.

Q 19 - In a reduce-side join, the MapReduce step which is used to collocate the relevant records from the two joining data sets is

A - Map step

B - Reduce step

C - Shuffle and sort step

D - Final output step

Answer : C

Explanation

A reduce-side join takes advantage of the intermediate shuffle and sort step to collocate relevant records from the two data sets.

Answer : A

Explanation

Reduce-side joins require shuffling and sorting data between map and reduce tasks, which incurs I/O costs.

Q 21 - In a map-side join, we take rows from one table and map them with rows from the other table. The size of one of the tables should be

A - Enough to fit into memory

B - Half the size of the other table

C - Double the size of the other table

D - Small enough to be located in one physical machine

Answer : A

Explanation

If you join two datasets where at least one of them can fit in the memory of the map task, load the smaller dataset into a hash table so the map tasks can access it while iterating over the other dataset. In these cases, you can skip the shuffle and reduce steps entirely and emit your final output from the map step.
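A bare-bones illustration of that pattern in a plain Hadoop mapper (the file name, delimiter, and field layout are all invented): setup() loads the small dataset into a HashMap once per task, map() probes it for every record of the large dataset, and the joined record is emitted straight from the map step, with the job configured to run zero reduce tasks.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
  private final Map<String, String> smallTable = new HashMap<>();

  @Override
  protected void setup(Context context) throws IOException {
    // Load the small dataset (assumed to be shipped to every map task,
    // e.g. via the distributed cache) into memory once per task.
    try (BufferedReader reader = new BufferedReader(new FileReader("small_table.csv"))) {
      String line;
      while ((line = reader.readLine()) != null) {
        String[] parts = line.split(",", 2);        // hypothetical "key,payload" layout
        smallTable.put(parts[0], parts[1]);
      }
    }
  }

  @Override
  protected void map(LongWritable offset, Text record, Context context)
      throws IOException, InterruptedException {
    String[] parts = record.toString().split(",", 2);   // large dataset record: "key,rest"
    String match = smallTable.get(parts[0]);
    if (match != null) {
      // Emit the joined record directly; no shuffle or reduce step is needed.
      context.write(new Text(parts[0] + "," + parts[1] + "," + match), NullWritable.get());
    }
  }
}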

Answer : A

Explanation

HBase stores its data on a single file system. It assumes all the RegionServers have access to that file system across the entire cluster.

Q 23 - The number of namespaces HDFS provides to the RegionServers of an HBase database is

A - Equal to the number of RegionServers

B - Half the number of RegionServers

C - Double the number of RegionServers

D - One

Answer : D

Explanation

HDFS provides a single namespace to all the RegionServers, and any of them can access the persisted files of any other RegionServer.

Answer : B

Explanation

HBase does not allow inter-table or inter-row transactions.

Answer : B

Explanation

HBase creates an index only on the column that acts as the key (the rowkey).

Answer Sheet

Question Number Answer Key
1 B
2 B
3 D
4 D
5 A
6 C
7 B
8 A
9 B
10 C
11 D
12 C
13 A
14 B
15 B
16 D
17 B
18 C
19 C
20 A
21 A
22 A
23 D
24 B
25 B