Sqoop Mock Test



This section presents various sets of mock tests related to Sqoop. You can download these sample mock tests to your local machine and solve them offline at your convenience. Every mock test is supplied with an answer key to let you verify the final score and grade yourself.

Questions and Answers

Sqoop Mock Test III

Q 1 - Sqoop can automatically clear the staging table before loading by using the parameter

A - --clear-table

B - --clear-staging-table

C - --truncate-staging-table

D - --delete-from-staging-table

Answer : B

Explanation

The --clear-staging-table parameter automatically clears any data from the staging table before the load.
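
A minimal sketch of such an export, assuming a hypothetical MySQL database corp, table cities, and HDFS path (none of these names come from the test itself):

# stage into cities_stage, clearing any leftovers from a failed run first
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --table cities \
  --staging-table cities_stage \
  --clear-staging-table \
  --export-dir /user/hadoop/cities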

Q 2 - Can Sqoop use the TRUNCATE option of the database while clearing data from a table?

A - Yes

B - No

C - Depends on the database

D - Depends on the Hadoop configuration

Answer : C

Explanation

If available through the database driver, Sqoop can clear the data quickly using the TRUNCATE option.

Answer : C

Explanation

A comma-separated list of column names which together identify a unique record can be used in the --update-key parameter.

Answer : A

Explanation

Only the columns other than those in the --update-key parameter will appear in the SET clause of the generated UPDATE statement.
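
For illustration, a hypothetical export keyed on a composite of id and country; the generated UPDATE statements use these two columns in the WHERE clause, and every other exported column lands in the SET clause:

# update rows matched on the composite key (id, country)
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --update-key "id,country" \
  --export-dir /user/hadoop/cities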

Answer : A

Explanation

The --update-key parameter cannot export new rows which do not have a matching key in the already exported table.

Q 7 - Sqoop can insert new rows and update existing changed rows into an already exported table by using the parameter

A - --update-insert

B - --update-else-insert

C - --update-mode insert

D - --update-mode allowinsert

Answer : D

Explanation

The --update-mode allowinsert parameter can be used to update existing rows as well as insert new rows into the exported table.
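
A sketch of such an upsert-style export, with hypothetical names; rows matching the update key are updated and the rest are inserted:

# update existing rows by id, insert rows with no matching id
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --update-key id \
  --update-mode allowinsert \
  --export-dir /user/hadoop/cities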

Q 8 - When using the --update-mode allowinsert parameter with an Oracle database, the feature of Oracle used by Sqoop is

A - UPSERT statement

B - MERGE statement

C - MULTITABLE INSERT statement

D - BULK LOAD statement

Answer : B

Explanation

The MERGE statement of Oracle is used to achieve the update-else-insert behavior.

Q 9 - With MySQL, the feature used by Sqoop to update or insert data into an exported table is

A - ON DUPLICATE KEY UPDATE

B - ON KEY UPDATE

C - ON NEW KEY UPDATE

D - ON NEW UPDATE

Answer : A

Explanation

The ON DUPLICATE KEY UPDATE feature of MySQL is used for update-else-insert with Sqoop.

Q 10 - Can the upsert feature of Sqoop delete some data from the exported table?

A - Yes

B - No

C - Depends on the database

D - Yes, with some additional Sqoop parameters

Answer : B

Explanation

Sqoop will never delete data as part of an upsert statement.

Q 11 - To sync an HDFS file containing some deleted rows with a previously exported table of the same data, the option is to

A - Use a staging table

B - Export the data again to a new database table and rename it

C - Use an ETL tool

D - It cannot be done using Sqoop

Answer : B

Explanation

You can export the data from Hadoop into a new database table, drop the existing table, and then rename the new table to the dropped table's name.
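
One possible sequence, with hypothetical names (the final drop-and-rename happens on the database side, not in Sqoop):

# export the current HDFS data into a fresh, empty table
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities_new \
  --export-dir /user/hadoop/cities
# then, in the database:
#   DROP TABLE cities; RENAME TABLE cities_new TO cities;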

Q 12 - The parameter which can be used in place of the --table parameter to insert data into a table is

A - --call

B - --insert-into

C - --populate

D - --load-into

Answer : A

Explanation

The --call parameter will call a database stored procedure, which in turn can insert data into the table.
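
A sketch, assuming a stored procedure named populate_cities already exists in the target database (the name is made up for illustration):

# each exported record becomes one call to the stored procedure
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --call populate_cities \
  --export-dir /user/hadoop/cities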

Answer : D

Explanation

As Sqoop will call the stored procedure from many parallel tasks, a heavy load is induced on the database.

Answer : B

Explanation

The load can still be done by specifying the --columns parameter to populate a subset of columns in the relational table.

Q 15 - The parameter to specify only a selected number of columns to be exported to a table is

A - --columns

B - --column-subset

C - --columns-not-all

D - --columns-part

Answer : A

Explanation

The --columns parameter takes a comma-separated list of column names which will be part of the export.
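
A sketch exporting only two columns of a hypothetical cities table; the HDFS records must then contain exactly these columns, in this order:

# export only the country and city columns
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --columns "country,city" \
  --export-dir /user/hadoop/cities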

Q 16 - Load-all-or-load-nothing semantics is implemented by using the parameter

A - --load-all-nothing

B - --stage-load

C - --all-load

D - --staging-table

Answer : D

Explanation

The --staging-table parameter is used to load all the required data into an intermediate table before finally loading it into the real table.

Answer : D

Explanation

We can use the --columns parameter and specify the required columns in the required order.

Answer : A

Explanation

If there are columns whose values are mandatory and the HDFS file does not include them in the exported subset, the load will fail.

Q 19 - The parameter used to override NULL values to be inserted into relational targets is

A - --override-null

B - --input-null-string

C - --substitute-null

D - --replace-null

Answer : B

Explanation

The parameter --input-null-string is used to override the NULL values when exporting to relational tables.

Q 20 - For text-based columns, the parameter used for substituting null values is

A - --input-null-string

B - --input-null-non-string

C - --input-null-text

D - --input-null-varchar

Answer : A

Explanation

The --input-null-string parameter is used to substitute null values for text-based columns.

Q 21 - For a column of data type numeric, the parameter used for substituting null values is

A - --input-null-string

B - --input-null-non-string

C - --input-null-text

D - --input-null-varchar

Answer : B

Explanation

The --input-null-non-string parameter is used to substitute null values for non-string (for example, numeric) columns.
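
A combined sketch using both substitution parameters (the table and paths are hypothetical; \N is the usual null placeholder written by Hive):

# treat \N in the HDFS files as SQL NULL for both string and non-string columns
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --input-null-string '\\N' \
  --input-null-non-string '\\N' \
  --export-dir /user/hadoop/cities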

Q 22 - When a column value has a different data type in the HDFS system than expected in the relational table to which data will be exported −

A - Sqoop skips the rows

B - Sqoop fails the job

C - Sqoop loads the remaining rows by halting and asking whether to continue the load

D - Sqoop automatically changes the data type to a compatible data type and loads the data.

Answer : B

Explanation

The job fails and Sqoop writes a log showing the reason for the failure.

Q 23 - The parameter used in Sqoop to import data directly into Hive is

A - --import-direct

B - --import-hive

C - --hive-import

D - --hive-sqoop

Answer : C

Explanation

The parameter used is --hive-import, which will directly place the data in Hive without needing any connectors as in the case of relational systems.
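
A minimal sketch of such an import, with a hypothetical source database and table:

# pull the table from MySQL and create/load a matching Hive table
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --hive-import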

Answer : B

Explanation

As both Sqoop and Hive are part of the Hadoop ecosystem, Sqoop is able to create the metadata in Hive.

Q 25 - To ensure that the columns created in Hive by Sqoop have the correct data types, the parameter used by Sqoop is

A - --map-column-hive

B - --map-column

C - --column-hive

D - --map-table-hive

Answer : A

Explanation

The correct column mapping is handled by the parameter --map-column-hive.
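
A sketch with hypothetical column names, forcing two columns to specific Hive types:

# override Sqoop's default type mapping for the id and population columns
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --table cities \
  --hive-import \
  --map-column-hive id=STRING,population=BIGINT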

Answer Sheet

Question Number Answer Key
1 B
2 C
3 D
4 C
5 A
6 A
7 D
8 B
9 A
10 B
11 B
12 A
13 D
14 B
15 A
16 D
17 D
18 A
19 B
20 A
21 B
22 B
23 C
24 B
25 A