Sqoop Online Quiz
The following quiz provides Multiple Choice Questions (MCQs) related to Sqoop. Read all the given answers and click on the correct one. If you are not sure about an answer, you can reveal it using the Show Answer button. You can use the Next Quiz button to check a new set of questions.
Q 1 - The parameter in sqoop which specifies the output directories when importing data is
Answer : D
Explanation
The --target-dir and --warehouse-dir parameters both specify where imported data is written: --target-dir names the import directory itself, while --warehouse-dir names a parent directory under which a subdirectory is created for each table.
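To illustrate the difference, here is a minimal sketch; the connection string, table name, and paths are hypothetical:

```shell
# Import directly into an explicit directory:
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --target-dir /user/hadoop/employees

# Or import under a parent directory; Sqoop creates /warehouse/EMPLOYEES:
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --warehouse-dir /warehouse
```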
Q 2 - Which option can be used to import only some of the tables from a database while using the --import-all-tables parameter?
Answer : D
Explanation
You can list table names with the --exclude-tables clause to skip the given tables while importing an entire database.
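A minimal sketch of this usage, with a hypothetical connection string and table names:

```shell
# Import every table in the database except the two listed
# (comma-separated, no spaces).
sqoop import-all-tables \
  --connect jdbc:mysql://db.example.com/corp \
  --exclude-tables AUDIT_LOG,TEMP_DATA \
  --warehouse-dir /warehouse
```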
Q 3 - What is achieved by using the --meta-connect parameter in a sqoop command?
A - run metastore as a service accessible remotely
B - run metastore as a service accessible locally
C - connect to the metastore tables
D - connect to the metadata of the external relational tables from which data has to be imported
Answer : A
Explanation
The --meta-connect parameter points a Sqoop command at the shared metastore service, which runs on the default port 16000. This metastore service is accessible throughout the cluster.
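A sketch of how this fits together, assuming a hypothetical metastore host and job; the metastore is started as a service on one node, and other nodes reach it via --meta-connect:

```shell
# On the metastore host: start the shared metastore service
# (listens on port 16000 by default):
sqoop metastore

# From any node in the cluster: create a saved job in the shared metastore:
sqoop job \
  --meta-connect jdbc:hsqldb:hsql://metastore.example.com:16000/sqoop \
  --create import_emp \
  -- import --connect jdbc:mysql://db.example.com/corp --table EMPLOYEES
```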
Q 4 - Which parameter in sqoop is used for bulk data export to relational tables?
Answer : B
Explanation
The --batch parameter uses the JDBC batch capability to perform a bulk load.
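A minimal sketch of a batched export; the connection string, table, and directory are hypothetical:

```shell
# Export HDFS data to a relational table using JDBC batching:
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --export-dir /user/hadoop/employees \
  --batch
```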
Q 5 - The --staging-table parameter is used for
A - Storing some sample data from Hadoop before loading the real table
B - Storing all the required data from Hadoop before loading it into the real table
D - Storing the metadata structure of tables to which data is being exported
Answer : B
Explanation
When you want to verify that all the required data has been successfully exported before loading it into the final table, use the --staging-table parameter.
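As an illustration (the table names and paths are hypothetical), the staging table is filled first and the data is moved to the real table only if the whole export succeeds:

```shell
# Export into EMPLOYEES_STAGE first; on success the rows are moved
# to EMPLOYEES. --clear-staging-table empties the stage beforehand.
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --staging-table EMPLOYEES_STAGE \
  --clear-staging-table \
  --export-dir /user/hadoop/employees
```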
Q 6 - The --update-key parameter can
A - Not insert new rows into the already exported table
B - Insert new rows into an already exported table
C - Insert new rows into the exported table only if it has a primary key
Answer : A
Explanation
The --update-key parameter cannot export new rows that have no matching key in the already exported table; it only updates existing rows.
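A minimal sketch of an update-only export, with hypothetical names:

```shell
# Rows in HDFS whose "id" matches an existing row are updated;
# non-matching rows are skipped (the default --update-mode is updateonly).
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --update-key id \
  --export-dir /user/hadoop/employees
```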
Q 7 - If the table to which data is being exported has more columns than the data present in the HDFS file, then
B - The load can be done only for the relevant columns present in HDFS file
Answer : B
Explanation
The load can still be done by specifying the --columns parameter to populate a subset of the columns in the relational table.
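A sketch of such a partial-column export; the column list and other names are hypothetical:

```shell
# Populate only the columns actually present in the HDFS data;
# the remaining table columns must be nullable or have defaults.
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --columns "id,name,salary" \
  --export-dir /user/hadoop/employees
```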
Q 8 - To overwrite data present in hive table while importing data using sqoop, the sqoop parameter is
Answer : B
Explanation
The --hive-overwrite parameter truncates the Hive table before loading the data.
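A minimal sketch of an import that replaces the existing Hive table contents; the connection string and table names are hypothetical:

```shell
# Truncate the Hive table "employees" and reload it from the source table:
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --table EMPLOYEES \
  --hive-import \
  --hive-overwrite \
  --hive-table employees
```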
Q 9 - The parameter --hive-import can be used with
B - importing to Hive as well as to text files
Answer : B
Explanation
This parameter can be used with both Hive tables and text files.
Q 10 - The tool in sqoop which combines two data sets and preserves only the latest values using a primary key is
Answer : A
Explanation
The sqoop merge tool combines two datasets and preserves the latest records. The key column is indicated by the --merge-key parameter.
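A sketch of a merge run, assuming hypothetical paths and a record class previously generated by sqoop codegen:

```shell
# Merge the newer import onto the older one, keeping the latest
# record for each value of the key column "id". The jar and class
# come from an earlier "sqoop codegen" run for the EMPLOYEES table.
sqoop merge \
  --new-data /user/hadoop/employees_new \
  --onto /user/hadoop/employees_old \
  --target-dir /user/hadoop/employees_merged \
  --jar-file EMPLOYEES.jar \
  --class-name EMPLOYEES \
  --merge-key id
```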