The abalone dataset can be downloaded from the Google Cloud Storage bucket that hosts it. The 'read_csv' method from the pandas library reads the CSV data at that URL into a DataFrame. Because the file has no header row, the names of the features are specified explicitly via the 'names' argument.
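To illustrate how the 'names' argument supplies a header that the file itself lacks, here is a minimal sketch using a small in-memory CSV snippet. The two rows of numbers are illustrative placeholders, not real abalone measurements.

```python
import io
import pandas as pd

# Hypothetical headerless CSV snippet in the same shape as the abalone file
csv_text = (
    "0.435,0.335,0.11,0.334,0.1355,0.0775,0.0965,7\n"
    "0.585,0.45,0.125,0.874,0.3545,0.2075,0.225,6\n"
)

columns = ["Length", "Diameter", "Height", "Whole weight",
           "Shucked weight", "Viscera weight", "Shell weight", "Age"]

# names= attaches column labels because the data has no header row
df = pd.read_csv(io.StringIO(csv_text), names=columns)

print(df.shape)          # (2, 8)
print(list(df.columns))  # ['Length', 'Diameter', ..., 'Age']
```

The same call works unchanged when the first argument is a URL instead of a file-like object, which is exactly what the full listing below does.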
We will be using the abalone dataset, which contains a set of physical measurements of abalone, a type of sea snail. The goal is to predict an abalone's age from the other measurements.
We are using Google Colaboratory to run the code below. Google Colab, or Colaboratory, runs Python code in the browser, requires zero configuration, and offers free access to GPUs (Graphics Processing Units). Colaboratory is built on top of Jupyter Notebook.
import pandas as pd
import numpy as np

print("The below line makes it easier to read NumPy values")
np.set_printoptions(precision=3, suppress=True)

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

print("Reading the csv data")
abalone_train = pd.read_csv(
    "https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv",
    names=["Length", "Diameter", "Height", "Whole weight", "Shucked weight",
           "Viscera weight", "Shell weight", "Age"])
Code credit: https://www.tensorflow.org/tutorials/load_data/csv
Output:

The below line makes it easier to read NumPy values
Reading the csv data
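Since the stated goal is to predict age from the other measurements, the loaded DataFrame is typically split into features and a label column. The following sketch shows one common pandas idiom for that split, using a tiny hand-made stand-in DataFrame (the values are illustrative, not real abalone data) so it runs without downloading anything.

```python
import pandas as pd

# Tiny stand-in for the loaded abalone DataFrame (illustrative values only)
df = pd.DataFrame({
    "Length": [0.435, 0.585],
    "Diameter": [0.335, 0.45],
    "Age": [7, 6],
})

# pop() removes the target column and returns it, leaving only features behind
features = df.copy()
labels = features.pop("Age")

print(list(features.columns))  # ['Length', 'Diameter']
print(labels.tolist())         # [7, 6]
```

Copying the DataFrame before calling pop() keeps the original intact, which is convenient if the raw data is needed again later.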