
Hadoop Mock Test
This section presents various sets of mock tests related to the Hadoop framework. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test is supplied with an answer key, letting you verify your final score and grade yourself.

Hadoop Mock Test IV
Q 1 - When the JobTracker schedules a task, it first looks for
A - A node with an empty slot in the same rack as the DataNode
B - Any node on the same rack as the DataNode
Answer : A
Q 4 - Which of the following is not a scheduling option available in YARN?
Answer : A
Q 5 - What is the default input format?
B - There is no default input format; the input format must always be specified.
Answer : D
Q 6 - Which one of the following is not a feature of big data?
Answer : B
Q 8 - Which technology is used to serialize the data in Hadoop?
Answer : B
Q 9 - Which technology is used to import and export data in Hadoop?
Answer : C
Q 10 - Which of the following technologies is a document store database?
Answer : D
Q 11 - Which one of the following is not true regarding Hadoop?
A - It is a distributed framework.
B - The main algorithm used in it is MapReduce.
Answer : D
Q 12 - Which one of the following stores data?
Answer : B
Q 13 - Which one of the following nodes manages other nodes?
Answer : A
Q 14 - What is Avro?
A - Avro is a Java serialization library.
B - Avro is a Java compression library.
Answer : A
Q 15 - Can you run MapReduce jobs directly on Avro data?
A - Yes, Avro was specifically designed for data processing via MapReduce.
B - Yes, but extensive additional coding is required.
C - No, Avro was specifically designed for data storage only.
Answer : A
Q 16 - What is distributed cache?
C - The distributed cache is a component that caches Java objects.
Answer : B
Q 17 - What is writable?
A - Writable is a Java interface that needs to be implemented for streaming data to remote servers.
B - Writable is a Java interface that needs to be implemented for HDFS writes.
C - Writable is a Java interface that needs to be implemented for MapReduce processing.
Answer : C
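As a concrete illustration of the Writable contract, here is a minimal sketch of a custom key/value class. The IntPair name and the plain-JDK round-trip helper are hypothetical, and the real interface is org.apache.hadoop.io.Writable; the point is that Hadoop calls write(DataOutput) to serialize each record and readFields(DataInput) to rebuild it during MapReduce processing.

```java
import java.io.*;

// Hypothetical class following the Writable contract: serialize with
// write(DataOutput), deserialize with readFields(DataInput), in the same
// field order. Real Writables also need a no-argument constructor.
public class IntPair {
    private int first;
    private int second;

    public IntPair() {}                      // required no-arg constructor
    public IntPair(int first, int second) { this.first = first; this.second = second; }

    // Serialize the fields in a fixed order.
    public void write(DataOutput out) throws IOException {
        out.writeInt(first);
        out.writeInt(second);
    }

    // Deserialize the fields in the same order they were written.
    public void readFields(DataInput in) throws IOException {
        first = in.readInt();
        second = in.readInt();
    }

    public int getFirst() { return first; }
    public int getSecond() { return second; }

    // Round-trip demo using plain JDK streams in place of Hadoop's I/O.
    public static IntPair roundTrip(IntPair original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            original.write(new DataOutputStream(bytes));
            IntPair copy = new IntPair();
            copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
            return copy;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        IntPair copy = roundTrip(new IntPair(3, 7));
        System.out.println(copy.getFirst() + "," + copy.getSecond());
    }
}
```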
Q 18 - What is HBASE?
A - HBase is a separate set of Java APIs for the Hadoop cluster.
C - HBase is a "database"-like interface to Hadoop cluster data.
Answer : B
Q 19 - How does Hadoop process large volumes of data?
A - Hadoop uses a lot of machines in parallel. This optimizes data processing.
C - Hadoop ships the code to the data instead of sending the data to the code.
D - Hadoop uses sophisticated caching techniques on the NameNode to speed up data processing.
Answer : C
Q 20 - When using HDFS, what occurs when a file is deleted from the command line?
A - It is permanently deleted if trash is enabled.
B - It is placed into a trash directory common to all users for that cluster.
C - It is permanently deleted and the file attributes are recorded in a log file.
D - It is moved into the trash directory of the user who deleted it if trash is enabled.
Answer : C
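The trash behavior referenced above is controlled by the fs.trash.interval property in core-site.xml: when the value is greater than 0, command-line deletes move the file into the deleting user's trash directory. A minimal sketch, assuming a 24-hour retention is wanted:

```xml
<!-- core-site.xml: enable the HDFS trash feature.
     The value is the retention interval in minutes; 0 disables trash. -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
```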
Q 21 - When archiving Hadoop files, which of the following statements are true? (Choose two answers)
A - Archived files will display with the extension .arc.
B - Many small files will become fewer large files.
C - MapReduce processes the original file names even after files are archived.
D - Archived files must be unarchived for HDFS and MapReduce to access the original, small files.
E - Archive is intended for files that need to be saved but are no longer accessed by HDFS.
Answer : B
Q 22 - When writing data to HDFS what is true if the replication factor is three? (Choose 2 answers)
A - Data is written to DataNodes on three separate racks (if rack aware).
B - The data is stored on each DataNode with a separate file that contains a checksum value.
C - Data is written to blocks on three different DataNodes.
D - The client is returned a success upon the successful write of the first block and checksum check.
Answer : C
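The replication factor discussed above is set cluster-wide by the dfs.replication property in hdfs-site.xml (3 is the usual default); a minimal sketch:

```xml
<!-- hdfs-site.xml: number of block replicas written for each file -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```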
Q 23 - Which of the following are among the duties of the DataNodes in HDFS?
A - Maintain the file system tree and metadata for all files and directories.
B - None of the options is correct.
C - Control the execution of an individual map task or a reduce task.
D - Store and retrieve blocks when told to by clients or the NameNode.
Answer : D
Q 24 - Which of the following components retrieves the input splits directly from HDFS to determine the number of map tasks?
Answer : D
Q 25 - The org.apache.hadoop.io.Writable interface declares which two methods? (Choose 2 answers.)
A - public void readFields(DataInput)
B - public void read(DataInput)
C - public void writeFields(DataOutput)
D - public void write(DataOutput)
Answer : A
Q 26 - Which one of the following statements is true regarding <key,value> pairs of a MapReduce job?
A - A key class must implement Writable.
B - A key class must implement WritableComparable.
Answer : B
Q 27 - Which one of the following statements is false regarding the Distributed Cache?
B - The files in the cache can be text files, or they can be archive files like zip and JAR files.
C - Disk I/O is avoided because data in the cache is stored in memory.
Answer : C
Q 28 - Which one of the following is not a main component of HBase?
Answer : B
Q 29 - Which of the following is false about RawComparator?
B - Performance can be improved in the sort and shuffle phase by using RawComparator.
C - Intermediary keys are deserialized to perform a comparison.
Answer : C
Q 30 - Which daemon is responsible for replication of data in Hadoop?
Answer : D
Q 31 - Keys from the output of shuffle and sort implement which of the following interfaces?
Answer : B
Q 32 - In order to apply a combiner, what is one property that has to be satisfied by the values emitted from the mapper?
Answer : C
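The property in question is that the combine operation must be commutative and associative, so that pre-aggregating each mapper's local output does not change the final result. A small pure-Java sketch using summation (the class and method names are illustrative, not Hadoop API):

```java
import java.util.*;

// Why a combiner must be commutative and associative: summing mapper
// outputs in arbitrary partial groups must give the same result as
// summing them all at once on the reducer.
public class CombinerProperty {
    // The reduce/combine operation: here, integer sum.
    static int reduce(List<Integer> values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    // Simulate a combiner: pre-aggregate each mapper's local values,
    // then reduce the partial results on the "reducer".
    static int reduceWithCombiner(List<List<Integer>> perMapperValues) {
        List<Integer> partials = new ArrayList<>();
        for (List<Integer> local : perMapperValues) partials.add(reduce(local));
        return reduce(partials);
    }

    public static void main(String[] args) {
        List<List<Integer>> byMapper = Arrays.asList(
                Arrays.asList(1, 2, 3), Arrays.asList(4, 5));
        List<Integer> all = Arrays.asList(1, 2, 3, 4, 5);
        // Same answer either way, because sum is commutative and associative.
        System.out.println(reduce(all) + " == " + reduceWithCombiner(byMapper));
    }
}
```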
Answer Sheet
Question Number | Answer Key |
---|---|
1 | A |
2 | B |
3 | A |
4 | A |
5 | D |
6 | B |
7 | A |
8 | B |
9 | C |
10 | D |
11 | D |
12 | B |
13 | A |
14 | A |
15 | A |
16 | B |
17 | C |
18 | B |
19 | C |
20 | C |
21 | B |
22 | C |
23 | D |
24 | D |
25 | A |
26 | B |
27 | C |
28 | B |
29 | C |
30 | D |
31 | B |
32 | C |