Apache Storm - Workflow
A working Storm cluster has one nimbus and one or more supervisors. Another important node is Apache ZooKeeper, which coordinates the nimbus and the supervisors.
Let us now take a close look at the workflow of Apache Storm −
Initially, the nimbus waits for a “Storm Topology” to be submitted to it.
Once a topology is submitted, the nimbus processes it and gathers all the tasks to be carried out, along with the order in which they are to be executed.
Then, the nimbus distributes the tasks evenly among all the available supervisors.
At regular intervals, every supervisor sends a heartbeat to the nimbus to signal that it is still alive.
If a supervisor dies and stops sending heartbeats, the nimbus reassigns its tasks to another supervisor.
If the nimbus itself dies, the supervisors continue working on their already assigned tasks without any issue.
Once all the tasks are completed, the supervisor waits for new tasks to come in.
In the meantime, the dead nimbus is restarted automatically by service monitoring tools.
The restarted nimbus continues from where it stopped. Similarly, a dead supervisor can be restarted automatically. Since both the nimbus and the supervisors can be restarted automatically and both continue as before, Storm is guaranteed to process every task at least once.
Once all the topologies are processed, the nimbus waits for a new topology to arrive, and similarly, the supervisors wait for new tasks.
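The failure-detection and reassignment behavior described above can be sketched as a toy simulation. Everything here is illustrative, not Storm's actual implementation — the class name, the timeout value, and the in-memory maps are assumptions (in a real cluster, heartbeats and assignments are recorded in ZooKeeper):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: a nimbus-like master detects a dead supervisor
// via a missed heartbeat and reassigns its tasks to a survivor.
public class HeartbeatSketch {
    static final long TIMEOUT_MS = 100; // hypothetical heartbeat timeout

    public static void main(String[] args) throws InterruptedException {
        // Current task assignments: supervisor id -> list of tasks
        Map<String, List<String>> assignments = new HashMap<>();
        assignments.put("supervisor-1", new ArrayList<>(List.of("task-A", "task-B")));
        assignments.put("supervisor-2", new ArrayList<>(List.of("task-C")));

        // Timestamp of the last heartbeat received from each supervisor
        Map<String, Long> lastHeartbeat = new HashMap<>();
        long now = System.currentTimeMillis();
        lastHeartbeat.put("supervisor-1", now);
        lastHeartbeat.put("supervisor-2", now);

        // supervisor-1 keeps heartbeating; supervisor-2 goes silent
        Thread.sleep(150);
        lastHeartbeat.put("supervisor-1", System.currentTimeMillis());

        // Nimbus-side check: any supervisor whose last heartbeat is older
        // than the timeout is presumed dead, and its tasks are reassigned.
        long checkTime = System.currentTimeMillis();
        for (Map.Entry<String, Long> e : new ArrayList<>(lastHeartbeat.entrySet())) {
            if (checkTime - e.getValue() > TIMEOUT_MS) {
                String dead = e.getKey();
                List<String> orphaned = assignments.remove(dead);
                lastHeartbeat.remove(dead);
                // Pick any surviving supervisor to take over the orphaned tasks
                String survivor = assignments.keySet().iterator().next();
                assignments.get(survivor).addAll(orphaned);
                System.out.println(dead + " timed out; tasks " + orphaned
                        + " reassigned to " + survivor);
            }
        }
        System.out.println("final assignments: " + assignments);
    }
}
```

Because supervisor-1 refreshed its heartbeat and supervisor-2 did not, only supervisor-2 exceeds the timeout, so its task ends up on supervisor-1.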
By default, there are two modes in a Storm cluster −
Local mode − This mode is used for development, testing, and debugging because it is the easiest way to see all the topology components working together. In this mode, we can adjust parameters that let us see how our topology runs in different Storm configuration environments. In local mode, Storm topologies run on the local machine in a single JVM.
Production mode − In this mode, we submit our topology to a working Storm cluster, which is composed of many processes, usually running on different machines. As discussed in the workflow of Storm, a working cluster will run indefinitely until it is shut down.
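The two modes correspond to two different submission paths in a topology's main method. The sketch below shows the common pattern; it assumes Storm 1.x or later (`org.apache.storm` packages), needs the Storm library on the classpath to compile, and the topology name, spouts, and bolts are left as placeholders:

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // builder.setSpout(...) and builder.setBolt(...) would be configured here

        Config conf = new Config();

        if (args.length == 0) {
            // Local mode: the whole topology runs inside this single JVM
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("demo-topology", conf, builder.createTopology());
            Thread.sleep(10000);   // let it run briefly for testing
            cluster.shutdown();
        } else {
            // Production mode: hand the topology to the nimbus of a real cluster
            StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
        }
    }
}
```

In local mode the program itself must eventually shut the in-process cluster down; in production mode the call returns after submission and the cluster keeps running the topology indefinitely, as described above.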