Hadoop - Standalone Operation Mode
In standalone mode, no daemons run and everything executes in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.
Before proceeding further, you need to make sure that Hadoop is working fine. Just issue the following command:
$ hadoop version
If everything is fine with your setup, you should see output similar to the following:
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This means your Hadoop standalone mode setup is working fine. By default, Hadoop is configured to run in a non-distributed mode on a single machine.
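Standalone mode works because Hadoop's defaults keep the filesystem local and run MapReduce jobs in-process. In a fresh installation the configuration files are typically empty, so these defaults apply implicitly; spelled out, they would look roughly like this (a sketch of the default values, not settings you need to add):

```xml
<!-- core-site.xml: the default filesystem is the local filesystem -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
</configuration>

<!-- mapred-site.xml: jobs run in a single local JVM, no daemons -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
```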
Example
Let's walk through a simple example to get a feel for how Hadoop works. The Hadoop installation ships with the following example MapReduce jar file, which provides several basic MapReduce programs, such as estimating the value of Pi and counting the words in a given list of files.
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
Let's create an input directory, push a few files into it, and count the total number of words in those files. We do not need to write our own MapReduce program, because the provided jar file already contains a word-count implementation. You can try other examples using the same jar file; issue the following command to list the MapReduce programs supported by hadoop-mapreduce-examples-2.2.0.jar:
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
Step - 1
Create temporary content files in an input directory. You can create this input directory anywhere you like.
$ mkdir input
$ cp $HADOOP_HOME/*.txt input
$ ls -l input
This gives the following files in your input directory:
total 24
-rw-r--r-- 1 root root 15164 Feb 21 10:14 LICENSE.txt
-rw-r--r-- 1 root root   101 Feb 21 10:14 NOTICE.txt
-rw-r--r-- 1 root root  1366 Feb 21 10:14 README.txt
These files were copied from the Hadoop installation home directory. For your experiment, you can use a different and larger set of files.
Step - 2
Let's start the Hadoop job to count the total number of words in all the files in the input directory:
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount input output
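Conceptually, the wordcount program runs in two phases: a map phase that emits a (word, 1) pair for every word in every input file, and a reduce phase that sums the counts per word. A minimal plain-Python sketch of the same computation (illustrative only, not Hadoop code):

```python
from collections import Counter

def map_phase(lines):
    """Emit a (word, 1) pair for every whitespace-separated token."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Sum the counts for each word, as the reducer does per key."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
result = reduce_phase(map_phase(lines))
print(result["the"])  # 2
```

In real Hadoop, the map and reduce phases run as separate tasks and a shuffle stage groups the pairs by key between them; here the grouping happens inside the Counter.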
Step - 3
Step 2 performs the required processing and saves the output in the output/part-r-00000 file, which you can inspect as follows:
$ cat output/*
This lists all the words along with their total counts across all the files in the input directory.
"AS 4 "Contribution" 1 "Contributor" 1 "Derivative 1 "Legal 1 "License" 1 "License"); 1 "Licensor" 1 "NOTICE" 1 "Not 1 "Object" 1 "Source" 1 "Work" 1 "You" 1 "Your") 1 "[]" 1 "control" 1 "printed 1 "submitted" 1 (50%) 1 (BIS), 1 (C) 1 (Don't 1 (ECCN) 1 (INCLUDING 2 (INCLUDING, 2 .............