A sequential model can be built with the Keras Sequential API, which is used to work with a plain stack of layers where every layer has exactly one input tensor and one output tensor.
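A minimal sketch of such a stack (layer sizes and the input shape here are illustrative assumptions, not taken from the text):

```python
import tensorflow as tf

# A plain stack of layers: each layer has exactly one input tensor
# and one output tensor, so they can be listed in order.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```

Because each layer feeds directly into the next, the Sequential API cannot express models with multiple inputs, multiple outputs, or branching; those require the functional API.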
A pre-trained model is a saved network that was previously trained on a large dataset, typically for a large-scale image-classification task. Using such a model as the base model for a specific dataset saves the time and resources of training a model from scratch. A pre-trained model can be used as is, or it can be customized with transfer learning, depending on the requirement and the model.
A pre-trained model can be customized in two ways:
Feature extraction: the representations learned by the previous network are used to extract meaningful features from new samples. A new classifier, trained from scratch, is added on top of the pre-trained model, repurposing the feature maps that were previously learned for the new dataset.
The entire model need not be retrained: the base convolutional network already contains features that are generically useful for classifying pictures. The final classification part of the pre-trained model, however, is specific to the original classification task, that is, to the set of classes on which the model was trained.
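Feature extraction can be sketched as follows. To keep the example self-contained, a small hypothetical network stands in for the pre-trained base; in practice this would be something like `tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet")`, and the class count of the new dataset is assumed:

```python
import tensorflow as tf

# Hypothetical stand-in for a pre-trained convolutional base; layer
# sizes and input shape are illustrative only.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# Freeze the base so its generically useful features stay unchanged.
base_model.trainable = False

# Add a new classifier on top, trained from scratch for the new task.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 = assumed class count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

With the base frozen, only the new classifier's weights are updated during training, which is what makes this approach cheap compared with retraining the whole network.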
Fine-tuning: unfreeze some of the top layers of the frozen base model and train the newly added classifier layers together with those last layers of the base. This lets the user "fine-tune" the higher-order feature representations in the base model, making them more relevant to the specific task.
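The unfreezing step above can be sketched like this; the base network, the cut-off index, and the class count are all illustrative assumptions:

```python
import tensorflow as tf

# Hypothetical stand-in for a pre-trained convolutional base.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# Unfreeze the base, then re-freeze everything below a chosen cut-off,
# so only the top layers are trained alongside the new classifier.
base_model.trainable = True
fine_tune_at = 1  # assumed cut-off index; tune per task
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# New classifier on top; a low learning rate helps avoid destroying
# the pre-trained representations while fine-tuning.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 = assumed class count
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy")
```

Only the layers above the cut-off, plus the new classifier, receive gradient updates; the lower layers keep their generic features intact.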