Hadoop Mock Test


This section presents various sets of mock tests related to the Hadoop framework. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test comes with an answer key so that you can verify your final score and grade yourself.

Questions and Answers

Hadoop Mock Test IV

Q 4 - Which of the following is not a scheduling option available in YARN?

A - Balanced scheduler

B - Fair scheduler

C - Capacity scheduler

D - FIFO scheduler

Answer : A

Q 6 - Which one of the following is not a feature of Big Data?

A - Velocity

B - Veracity

C - Volume

D - Variety

Answer : B

Q 7 - Which technology is used to store data in Hadoop?

A - HBase

B - Avro

C - Sqoop

D - Zookeeper

Answer : A

Q 8 - Which technology is used to serialize the data in Hadoop?

A - HBase

B - Avro

C - Sqoop

D - Zookeeper

Answer : B

Q 9 - Which technology is used to import and export data in Hadoop?

A - HBase

B - Avro

C - Sqoop

D - Zookeeper

Answer : C

Q 10 - Which of the following technologies is a document store database?

A - HBase

B - Hive

C - Cassandra

D - CouchDB

Answer : D

Q 12 - Which one of the following stores data?

A - Name node

B - Data node

C - Master node

D - None of these

Answer : B

Q 13 - Which one of the following nodes manages other nodes?

A - Name node

B - Data node

C - Slave node

D - None of these

Answer : A

Q 21 - When archiving Hadoop files, which of the following statements are true? (Choose two answers)

  1. Archived files will display with the extension .arc.

  2. Many small files will become fewer large files.

  3. MapReduce processes the original file names even after the files are archived.

  4. Archived files must be unarchived for HDFS and MapReduce to access the original small files.

  5. Archive is intended for files that need to be saved but no longer accessed by HDFS.

A - 1 & 3

B - 2 & 3

C - 2 & 4

D - 3 & 4

Answer : B
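
Option 2 is the point of Hadoop archives: many small files are packed into fewer large files while remaining addressable by their original names (option 3), and the archive extension is .har, not .arc. A minimal sketch of the `hadoop archive` command; the directory paths and archive name here are hypothetical:

```shell
# Pack the small files under dir1 and dir2 (relative to /user/hadoop)
# into a single archive; note the .har extension.
hadoop archive -archiveName foo.har -p /user/hadoop dir1 dir2 /user/zoo

# The original small files stay accessible through the har:// scheme,
# so HDFS and MapReduce can still read them without unarchiving:
hdfs dfs -ls har:///user/zoo/foo.har/dir1
```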

Q 22 - When writing data to HDFS, which of the following statements are true if the replication factor is three? (Choose two answers)

  1. Data is written to DataNodes on three separate racks (if Rack Aware).

  2. The data is stored on each DataNode along with a separate file that contains a checksum value.

  3. Data is written to blocks on three different DataNodes.

  4. The client receives a success response after the successful write of the first block and its checksum check.

A - 1 & 3

B - 2 & 3

C - 3 & 4

D - 1 & 4

Answer : C
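
The replication factor in this question is controlled by the `dfs.replication` property in hdfs-site.xml, whose default is 3; each replica of a block is written to a different DataNode. A minimal configuration fragment:

```xml
<!-- hdfs-site.xml: number of block replicas; each replica
     is written to a different DataNode -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```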

Q 24 - Which of the following components retrieves the input splits directly from HDFS to determine the number of map tasks?

A - The NameNode.

B - The TaskTrackers.

C - The JobClient.

D - The JobTracker.

E - None of the options is correct.

Answer : D

Q 25 - The org.apache.hadoop.io.Writable interface declares which two methods? (Choose 2 answers.)

  1. public void readFields(DataInput).

  2. public void read(DataInput).

  3. public void writeFields(DataOutput).

  4. public void write(DataOutput).

A - 1 & 4

B - 2 & 3

C - 3 & 4

D - 2 & 4

Answer : A
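
The two declared methods take `java.io.DataOutput` and `java.io.DataInput`, so the pattern can be sketched without Hadoop on the classpath. Below is a minimal, JDK-only sketch; `IntPair` is a hypothetical type whose method signatures mirror those declared by `org.apache.hadoop.io.Writable`:

```java
import java.io.*;

// JDK-only sketch of the two methods declared by org.apache.hadoop.io.Writable:
//   void write(DataOutput out) and void readFields(DataInput in).
// "IntPair" is a hypothetical type for illustration, not part of Hadoop.
public class IntPair {
    private int first;
    private int second;

    public IntPair() {}  // Writables need a no-arg constructor for deserialization
    public IntPair(int first, int second) { this.first = first; this.second = second; }

    // Serialize the fields to a binary stream (mirrors Writable.write)
    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    // Deserialize the fields from a binary stream (mirrors Writable.readFields)
    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    public int getFirst() { return first; }
    public int getSecond() { return second; }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new IntPair(3, 7).write(new DataOutputStream(buf));

        IntPair copy = new IntPair();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(copy.getFirst() + "," + copy.getSecond()); // prints 3,7
    }
}
```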

Q 28 - Which one of the following is not a main component of HBase?

A - Region Server.

B - Nagios.

C - ZooKeeper.

D - Master Server.

Answer : B

Q 30 - Which daemon is responsible for replication of data in Hadoop?

A - HDFS.

B - Task Tracker.

C - Job Tracker.

D - Name Node.

E - Data Node.

Answer : D

Q 31 - Keys from the output of the shuffle and sort phase implement which of the following interfaces?

A - Writable.

B - WritableComparable.

C - Configurable.

D - ComparableWritable.

E - Comparable.

Answer : B
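
WritableComparable combines the Writable serialization pair with Comparable's `compareTo`, which is what lets the framework sort keys during the shuffle. A JDK-only sketch; `WordKey` is a hypothetical key type for illustration, not part of Hadoop:

```java
import java.io.*;

// JDK-only sketch: a WritableComparable-style key adds compareTo on top of
// the write/readFields pair, so the framework can sort keys in the shuffle.
// "WordKey" is a hypothetical type for illustration.
public class WordKey implements Comparable<WordKey> {
    private String word = "";

    public WordKey() {}
    public WordKey(String word) { this.word = word; }

    public void write(DataOutput out) throws IOException { out.writeUTF(word); }
    public void readFields(DataInput in) throws IOException { word = in.readUTF(); }

    // Sort order applied to keys during shuffle and sort
    @Override
    public int compareTo(WordKey other) { return word.compareTo(other.word); }

    public String getWord() { return word; }

    public static void main(String[] args) {
        java.util.List<WordKey> keys = java.util.Arrays.asList(
                new WordKey("banana"), new WordKey("apple"));
        java.util.Collections.sort(keys);          // uses compareTo
        System.out.println(keys.get(0).getWord()); // prints apple
    }
}
```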

Answer Sheet

Question Number Answer Key
1 A
2 B
3 A
4 A
5 D
6 B
7 A
8 B
9 C
10 D
11 D
12 B
13 A
14 A
15 A
16 B
17 C
18 B
19 C
20 C
21 B
22 C
23 D
24 D
25 A
26 B
27 C
28 B
29 C
30 D
31 B
32 C