HBase Mock Test
This section presents various sets of mock tests related to HBase. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test is supplied with an answer key so you can verify your final score and grade yourself.
HBase Mock Test IV
Q 1 - The two classes which are provided by coprocessors are
Answer : D
Explanation
The two classes provided by coprocessors to extend and customize their functionality are −
Observer and Endpoint.
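As an illustration, an observer coprocessor can be attached to an existing table from the HBase shell via a table attribute. The table name, JAR path, and class name below are placeholders, not part of the original question:

```shell
# Attach a hypothetical RegionObserver implementation to table 't1'.
# Format of the value: 'jar-path|class-name|priority|args'
alter 't1', METHOD => 'table_att', 'coprocessor' => 'hdfs:///user/hbase/coproc.jar|com.example.MyObserver|1001|'
```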
Q 2 - A coprocessor is executed when an event occurs. This type of coprocessor is known as
Answer : A
Explanation
The observer type of coprocessor is executed when an event occurs.
Q 3 - The type of coprocessor which is similar to stored procedures in a relational database is
Answer : D
Explanation
The Endpoint type of coprocessor is similar to database stored procedures in relational systems.
Q 4 - The table descriptor can be used only for which type of coprocessors?
Answer : A
Explanation
Table descriptors are used only by region servers, hence only for region-related coprocessors.
Q 5 - The class which is used to pool client API instances to the HBase cluster is
Answer : D
Explanation
The HTablePool class is used to pool client API instances to the HBase cluster.
Q 6 - If a single column family exceeds the maximum file size specified by the HBase configuration, then
A - Data load error is encountered
B - The column family is dropped
Answer : D
Explanation
Once the max file size is reached, the region is split into two.
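The split threshold can be set per table from the HBase shell using the MAX_FILESIZE table attribute (value in bytes); the table name 't1' below is a placeholder:

```shell
# Split regions once any store in the table exceeds ~128 MB
alter 't1', MAX_FILESIZE => '134217728'
```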
Q 7 - The HBase tables are
Answer : A
Explanation
HBase tables are writable by default. They become read-only by setting the read-only option to true.
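In the HBase shell, this corresponds to the READONLY table attribute; the table name 't1' is a placeholder:

```shell
# Make table 't1' read-only; set back to 'false' to allow writes again
alter 't1', READONLY => 'true'
```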
Q 8 - What is part of the directory name where HBase data is stored?
Answer : C
Explanation
The column family is part of the directory name where the HBase data is stored. It must be made up of printable characters.
Q 9 - The HBase column qualifier in a column can be
B - Written to and read from when omitted
Answer : D
Explanation
The column qualifier can be left empty and still be written to and read from. Also, it cannot be renamed after it is created.
Q 10 - An HBase column family
Answer : B
Explanation
An HBase column family cannot be renamed. The only option is to create a new column family and copy the data.
Q 11 - Which of the following is not a valid file in HBase?
Answer : C
Explanation
In HBase, under the HBase root directory, each table is stored in its own directory, and under each table directory there is a region directory for every region comprising that table.
Q 12 - The metadata of region is accessed using the file named
Answer : C
Explanation
The file .regioninfo stores the metadata information.
Q 13 - If a region directory does not have a .tmp directory, then
Answer : A
Explanation
The absence of a .tmp directory indicates that no compaction has happened.
Q 14 - When a region does not have a recovered.edits file, it indicates
A - No compaction has happened in the region
B - Only major compaction has happened.
Answer : D
Explanation
Only the write-ahead log replay creates the recovered.edits file.
Q 15 - The HFile contains a variable number of blocks. One of its fixed blocks is the file info block, and the other one is
Answer : A
Explanation
In an HFile, only the file info and trailer blocks are fixed. All others are optional.
Q 16 - The HBase block size and the HDFS block size
B - The HBase block size is twice the HDFS block size
Answer : A
Explanation
The HBase and HDFS block sizes are not related. An HFile can spread over multiple HDFS blocks.
Q 17 - The method which can be used to access an HFile directly without using HBase is
Answer : C
Explanation
The HFile.main() method has various options to read an HFile directly without going through HBase; for example, the -m option prints the metadata of the file.
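From the command line, the same tool can be invoked through the hbase launcher; the HFile path below is a placeholder:

```shell
# Print the metadata of an HFile directly, without going through HBase
hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f /hbase/data/default/t1/region/cf/hfile
```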
Q 18 - In HBase there are two situations when the WAL logfiles need to be replayed. One is when a server fails. The other is when
Answer : C
Explanation
The only two instances when the logs are replayed are when the cluster starts or a server fails.
Q 19 - Before the edits in an HBase logfile can be replayed, they are separated into one logfile per region.
This is called −
Answer : A
Explanation
The separation of the logfile into one log per region is called log splitting.
Q 20 - The HBase master node orchestrates
Answer : D
Explanation
The region server slaves are managed by the HBase master node.
Q 21 - To see all the tables present in a user space in HBase, the command used is
Answer : B
Explanation
The list command displays all the tables present in the user space in HBase.
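In the HBase shell, the command takes an optional regular expression to filter the output; the pattern below is illustrative:

```shell
list            # all tables in the user space
list 't.*'      # only tables whose names match the regex
```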
Q 22 - In HBase, a column can be added to a table
B - Before changing the schema
Answer : C
Explanation
Without changing the schema, we can add columns to a column family in HBase.
Q 23 - In Hbase a table can be
Answer : B
Explanation
The table must be disabled before it is dropped.
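In the HBase shell, the required sequence looks like this; the table name 't1' is a placeholder:

```shell
disable 't1'   # the table must be disabled first
drop 't1'      # now the table can be dropped
```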
Q 24 - The data exported from an HBase table using the inbuilt export utility is in which file format?
Answer : C
Explanation
The data exported from an HBase table using the inbuilt export utility is in SequenceFile format.
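The export utility is a MapReduce job launched through the hbase command; the table name and output directory below are placeholders:

```shell
# Export table 't1' to an HDFS directory as SequenceFiles
hbase org.apache.hadoop.hbase.mapreduce.Export t1 /export/t1
```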
Q 25 - In which scenario is nothing written to the WAL in HBase?
Answer : D
Explanation
During the bulk load process, nothing gets written to the WAL.
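A bulk load moves pre-built HFiles directly into the table's regions, bypassing the write path (and thus the WAL). One way to do this is the bulk-load tool shipped with HBase (named LoadIncrementalHFiles in pre-2.x releases); the staging directory and table name below are placeholders:

```shell
# Bulk load pre-built HFiles into table 't1'; nothing is written to the WAL
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /staging/hfiles t1
```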
Answer Sheet
| Question Number | Answer Key |
| --- | --- |
| 1 | D |
| 2 | A |
| 3 | D |
| 4 | A |
| 5 | D |
| 6 | D |
| 7 | A |
| 8 | C |
| 9 | D |
| 10 | B |
| 11 | C |
| 12 | C |
| 13 | A |
| 14 | D |
| 15 | A |
| 16 | A |
| 17 | C |
| 18 | C |
| 19 | A |
| 20 | D |
| 21 | B |
| 22 | C |
| 23 | B |
| 24 | C |
| 25 | D |