Big Data Analytics - Association Rules



Let I = {i1, i2, ..., in} be a set of n binary attributes called items. Let D = {t1, t2, ..., tm} be a set of transactions called the database. Each transaction in D has a unique transaction ID and contains a subset of the items in I. A rule is defined as an implication of the form X ⇒ Y, where X, Y ⊆ I and X ∩ Y = ∅.

The sets of items (item-sets, for short) X and Y are called the antecedent (left-hand side, LHS) and the consequent (right-hand side, RHS) of the rule.

To illustrate the concepts, we use a small example from the supermarket domain. The set of items is I = {milk, bread, butter, beer} and a small database containing the items is shown in the following table.

Table 1: Example supermarket database

Transaction ID    Items
1                 milk, bread
2                 bread, butter
3                 beer
4                 milk, bread, butter
5                 bread, butter

An example rule for the supermarket could be {milk, bread} ⇒ {butter}, meaning that if milk and bread are bought, customers also buy butter. To select interesting rules from the set of all possible rules, constraints on various measures of significance and interest can be used. The best-known constraints are minimum thresholds on support and confidence.

The support supp(X) of an item-set X is defined as the proportion of transactions in the data set which contain the item-set. In the example database in Table 1, the item-set {milk, bread} has a support of 2/5 = 0.4 since it occurs in 40% of all transactions (2 out of 5 transactions). Finding frequent item-sets can be seen as a simplified unsupervised learning problem: there is no target variable, the goal is simply to discover combinations of items that occur together frequently.

The confidence of a rule is defined as conf(X ⇒ Y) = supp(X ∪ Y) / supp(X). For example, the rule {milk, bread} ⇒ {butter} has a confidence of 0.2/0.4 = 0.5 in the database in Table 1, which means that for 50% of the transactions containing milk and bread the rule is correct. Confidence can be interpreted as an estimate of the probability P(Y|X), the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS.
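These numbers can be reproduced in R with the arules package, which is also used for the larger example later in this section. The following is a minimal sketch; the object names market_basket, trans_toy and rules_toy are illustrative.

# install.packages("arules")   # if the package is not yet installed
library(arules)

# The five transactions of Table 1 as a list of item vectors
market_basket <- list(
   c("milk", "bread"),
   c("bread", "butter"),
   c("beer"),
   c("milk", "bread", "butter"),
   c("bread", "butter"))
trans_toy <- as(market_basket, "transactions")

# Mine all rules with support >= 0.2 and confidence >= 0.5
rules_toy <- apriori(trans_toy,
   parameter = list(support = 0.2, confidence = 0.5, minlen = 2))

# The rule {milk, bread} => {butter} is reported with support 0.2 and confidence 0.5
inspect(rules_toy)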

The code to apply the apriori algorithm to a real data set can be found in the script bda/part3/apriori.R.

# Load the arules library for mining association rules
# install.packages("arules")
library(arules)

# Data preprocessing
data("AdultUCI")
AdultUCI[1:2, ]

# Remove attributes that are not useful for rule mining
AdultUCI[["fnlwgt"]] <- NULL
AdultUCI[["education-num"]] <- NULL

AdultUCI[[ "age"]] <- ordered(cut(AdultUCI[[ "age"]], c(15,25,45,65,100)), 
   labels = c("Young", "Middle-aged", "Senior", "Old")) 
AdultUCI[[ "hours-per-week"]] <- ordered(cut(AdultUCI[[ "hours-per-week"]], 
   c(0,25,40,60,168)), labels = c("Part-time", "Full-time", "Over-time", "Workaholic")) 
AdultUCI[[ "capital-gain"]] <- ordered(cut(AdultUCI[[ "capital-gain"]], 
   c(-Inf,0,median(AdultUCI[[ "capital-gain"]][AdultUCI[[ "capitalgain"]]>0]),Inf)), 
   labels = c("None", "Low", "High")) 
AdultUCI[[ "capital-loss"]] <- ordered(cut(AdultUCI[[ "capital-loss"]], 
   c(-Inf,0, median(AdultUCI[[ "capital-loss"]][AdultUCI[[ "capitalloss"]]>0]),Inf)), 
   labels = c("none", "low", "high"))

In order to generate rules using the apriori algorithm, the data frame must first be converted into a transactions object (a sparse binary matrix of transactions by items). The following code shows how to do this in R.

# Convert the data into a transactions format
Adult <- as(AdultUCI, "transactions") 
Adult 
# transactions in sparse format with 
# 48842 transactions (rows) and 
# 115 items (columns)  

summary(Adult)  
# Plot the relative frequency (support) of items appearing in at least 10% of transactions
itemFrequencyPlot(Adult, support = 0.1, cex.names = 0.8)

# Generate rules with minimum support and minimum confidence thresholds
min_support <- 0.01
min_confidence <- 0.6
rules <- apriori(Adult, parameter = list(support = min_support, confidence = min_confidence))

rules 
inspect(rules[100:110])
# lhs                               rhs                                 support     confidence  lift
# {occupation = Farming-fishing} => {sex = Male}                        0.02856148  0.9362416   1.4005486
# {occupation = Farming-fishing} => {race = White}                      0.02831579  0.9281879   1.0855456
# {occupation = Farming-fishing} => {native-country = United-States}    0.02671881  0.8758389   0.9759474
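Since apriori typically returns a large number of rules at these thresholds, it is usually helpful to rank and filter them. The following sketch is not part of the original script; the object names rules_by_lift and income_rules, as well as the choice of the income attribute, are illustrative.

# Rank the rules by lift and inspect the strongest ones
rules_by_lift <- sort(rules, by = "lift")
inspect(head(rules_by_lift, 5))

# Keep only the rules whose right-hand side involves the income attribute
income_rules <- subset(rules, subset = rhs %pin% "income")
inspect(head(income_rules, 3))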