Explain how logistic regression works with TensorFlow


TensorFlow is a machine learning framework provided by Google. It is an open-source framework used in conjunction with Python to implement algorithms, deep learning applications and much more. It is used both in research and in production. It includes optimization techniques that help perform complicated mathematical operations quickly.

The ‘tensorflow’ package can be installed on Windows using the following command −

pip install tensorflow
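
If needed, the installation can be verified by importing the package and printing its version −

import tensorflow as tf

# Prints the installed TensorFlow version
print(tf.__version__)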

A tensor is the basic data structure used in TensorFlow. Tensors flow along the edges of a flow diagram known as the ‘data flow graph’. A tensor is essentially a multidimensional array or a list. It can be described using three main attributes (illustrated in the example after this list):

Rank − It describes the dimensionality of the tensor, i.e. the order of the tensor or the number of dimensions that have been defined.

Type − It describes the data type associated with the elements of the tensor, such as ‘float32’ or ‘int64’.

Shape − It describes the number of elements along each dimension, for example the number of rows and columns of a two-dimensional tensor.
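
As a brief illustration of these attributes (the tensor values below are arbitrary), a tensor can be created with ‘tf.constant’ and inspected as follows −

import tensorflow as tf

# A two-dimensional tensor with two rows and three columns
t = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(tf.rank(t).numpy())   # rank (number of dimensions): 2
print(t.dtype)              # type: <dtype: 'float32'>
print(t.shape)              # shape: (2, 3)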

We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, helps run Python code in the browser, requires zero configuration and provides free access to GPUs (Graphical Processing Units). Colaboratory is built on top of Jupyter Notebook.

Following is an example −

Example

import tensorflow as tf

# Model parameters (values assumed here for MNIST-style data: 784 features, 10 classes)
num_classes = 10
num_features = 784
learning_rate = 0.01
# Weight matrix and bias vector, updated during training
A = tf.Variable(tf.ones([num_features, num_classes]), name="weight")
b = tf.Variable(tf.zeros([num_classes]), name="bias")

# Logistic regression: softmax of the linear model x*A + b
def logistic_reg(x):
   return tf.nn.softmax(tf.matmul(x, A) + b)
# Cross-entropy loss between predictions and one-hot encoded labels
def cross_entropy(y_pred, y_true):
   y_true = tf.one_hot(y_true, depth=num_classes)
   y_pred = tf.clip_by_value(y_pred, 1e-9, 1.)  # avoid log(0)
   return tf.reduce_mean(-tf.reduce_sum(y_true * tf.math.log(y_pred), 1))
# Accuracy: fraction of predictions matching the true labels
def accuracy_val(y_pred, y_true):
   correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
   return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Stochastic gradient descent optimizer
optimizer = tf.optimizers.SGD(learning_rate)
# One optimization step: compute the loss, its gradients, and update A and b
def run_optimization(x, y):
   with tf.GradientTape() as g:
      pred = logistic_reg(x)
      loss = cross_entropy(pred, y)
   gradients = g.gradient(loss, [A, b])
   optimizer.apply_gradients(zip(gradients, [A, b]))

Code credit −  https://github.com/aymericdamien/TensorFlow-Examples/blob/master/tensorflow_v2/notebooks/2_BasicModels/logistic_regression.ipynb

Output

Once the functions are called, the optimization process begins.
Here, we are using the stochastic gradient descent optimizer, which tries to compute the optimal values for the ‘weight’ and ‘bias’.
For every batch of data, the gradients of the loss with respect to ‘weight’ and ‘bias’ are computed, and these values are then updated.
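
The following is a minimal sketch of a training loop that drives these functions, assuming MNIST data similar to the linked notebook and the definitions from the example above; the batch size, the number of steps and the print interval are illustrative choices −

import numpy as np
import tensorflow as tf

# Load and flatten the MNIST images, scale pixel values to [0, 1]
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, [-1, num_features]).astype(np.float32) / 255.

# Shuffle and batch the training data
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(256).prefetch(1)

# Run the optimization step for a fixed number of batches
for step, (batch_x, batch_y) in enumerate(train_data.take(1000), 1):
   run_optimization(batch_x, batch_y)
   if step % 100 == 0:
      pred = logistic_reg(batch_x)
      print("step:", step,
            "loss:", cross_entropy(pred, batch_y).numpy(),
            "accuracy:", accuracy_val(pred, batch_y).numpy())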

Explanation

  • A function named ‘logistic_reg’ is defined that applies the softmax function to the linear combination of the input data, the weights and the bias.

  • The softmax normalizes the logits into a probability distribution over the classes.

  • The cross-entropy loss function is defined; it first encodes the label as a one-hot vector.

  • The prediction values are clipped to avoid computing log(0).

  • A function is defined to compute the accuracy metric, i.e. the fraction of correct predictions.

  • The stochastic gradient descent optimizer is defined.

  • A function for optimization is defined that computes the gradients and updates the values of the weight and bias.
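
As a small follow-up sketch (continuing from the snippets above and preparing the MNIST test split in the same way as the training data), the ‘accuracy_val’ function can be used to evaluate the trained model on unseen data −

import numpy as np
import tensorflow as tf

# Load and flatten the MNIST test images, scale pixel values to [0, 1]
(_, _), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_test = np.reshape(x_test, [-1, num_features]).astype(np.float32) / 255.

# Accuracy of the trained model on the test set
pred = logistic_reg(x_test)
print("Test accuracy:", accuracy_val(pred, y_test).numpy())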

