RDD Shared Variables In Spark

RDD stands for Resilient Distributed Dataset. Spark's performance is built on this abstraction, which lets it handle a wide range of large-scale data processing workloads, including MapReduce-style batch jobs, streaming, SQL, machine learning, and graph processing.

Spark supports several programming languages, including Scala, Python, and R, and the RDD API is available in each of them.

How to Create an RDD

Spark can build RDDs from many sources, including the local file system, HDFS, in-memory collections, and HBase.

For the local file system, we can create an RDD in the following way −

val distFile = sc.textFile("file:///user/root/rddData.txt")

When HDFS is the default file system (the usual cluster configuration), Spark reads plain paths from HDFS. So here is the way to create an RDD from a file in HDFS −

val distFile = sc.textFile("/user/root/rddData.txt")

Users can also specify the full HDFS URL explicitly, as shown below −

val distFile = sc.textFile("hdfs://localhost:4440/user/rddData.txt")
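Besides files, an RDD can also be created from an in-memory collection with SparkContext.parallelize. A minimal sketch, assuming a running spark-shell where sc is the preconfigured SparkContext −

scala> val data = Array(1, 2, 3, 4, 5)
scala> val distData = sc.parallelize(data)
scala> distData.reduce((a, b) => a + b)
res0: Int = 15

Here reduce sums the elements across the cluster and returns 15 to the driver.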

RDD Shared Variables

When a function is passed to a Spark operation such as map or reduce, it executes on the cluster's worker nodes. Spark ships a separate copy of every variable used inside the function to each machine, and updates made to those copies on remote machines are not propagated back to the driver program.

In other words, when a remote node runs a function passed to a transformation or action, it works on its own copies of all the variables the function references, and updates to these copies are never sent back to the driver. Supporting general, read-write shared variables across all tasks would be inefficient, so Spark instead provides two limited types of shared variables: accumulators and broadcast variables −
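This copy-per-node behavior explains a common pitfall: mutating an ordinary driver-side variable inside a task has no reliable effect. A hedged sketch, assuming a spark-shell session with sc available −

scala> var counter = 0
scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => counter += x)

On a cluster, each executor increments its own serialized copy of counter, so the driver's counter typically remains 0 after the action. Accumulators exist precisely to make this kind of update well-defined.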


Accumulators

Like counters in MapReduce, accumulators are variables that tasks can only add to, using an associative and commutative operation. Spark natively supports numeric accumulators, and programmers can add support for their own types. This design lets many tasks update the same variable in parallel with a well-defined result.

To create a numeric accumulator, use SparkContext.longAccumulator() or SparkContext.doubleAccumulator() for a Long or Double accumulator, respectively. Tasks running on the cluster can then add to it with the add method, but they cannot read its value; only the driver program can read the accumulated result, through the value method.

scala> val accum = sc.longAccumulator("Accumulator Data")
accum: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 0, name: Some(Accumulator Data), value: 0)
scala> sc.parallelize(Array(6, 7, 8, 9)).foreach(x => accum.add(x))
22/02/09 01:37:51 INFO SparkContext: Tasks finished in 0.274529s
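After the action completes, the driver can read the accumulated result through the value method. Continuing the session above, the tasks added 6 + 7 + 8 + 9 −

scala> accum.value
res1: Long = 30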


Broadcast Variables

Broadcast variables let the developer keep a read-only variable cached on each machine, rather than shipping a copy of it with every task. They can be used, for example, to give every node a copy of a large input dataset efficiently. Spark also distributes broadcast values using efficient broadcast algorithms to reduce communication cost.

Explicitly creating broadcast variables is only useful when tasks across multiple stages need the same data, or when it is important to cache the data in deserialized form. To make a broadcast variable, use the following commands −

scala> val broadcastVar = sc.broadcast(Array(6, 7, 8))
broadcastVar: org.apache.spark.broadcast.Broadcast[Array[Int]] = Broadcast(0)
scala> broadcastVar.value
res0: Array[Int] = Array(6, 7, 8)

The broadcast value is read through the value field. As the name suggests, broadcast variables travel one way, from the driver to the tasks. Tasks cannot update a broadcast variable and push changes back to the driver, which guarantees that all nodes see the same data.
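A typical use is shipping a small lookup table once per executor instead of once per task. The map and RDD contents below are made up purely for illustration −

scala> val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))
scala> val codes = sc.parallelize(Array("IN", "US", "IN"))
scala> codes.map(code => countryNames.value.getOrElse(code, "Unknown")).collect()
res1: Array[String] = Array(India, United States, India)

Each executor fetches the broadcast map once and reuses it for all of its tasks.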

Call the unpersist() method to release the resources that a broadcast variable occupies on the executors. If the variable is used again afterwards, it is re-broadcast automatically. To release all of its resources permanently, call destroy(); the broadcast variable cannot be used after that.
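Continuing with broadcastVar from the example above, the cleanup calls look like this −

scala> broadcastVar.unpersist()  // frees executor-side copies; re-broadcast on next use
scala> broadcastVar.destroy()    // permanently releases all resources

After destroy(), any further use of broadcastVar raises an error.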


So, in this article, we have explained RDD shared variables in Spark. Broadcast variables hold read-only data that is copied to each node once, cached there, and reused by later computations.

We also saw how accumulators let many tasks safely update a shared counter or sum. Hopefully, this article has helped you understand shared variables in Spark.