Original article was published on Artificial Intelligence on Medium
Introduction to fastai (Part 1)
What is fastai used for?
You might have heard about machine learning, deep learning, neural networks, and related fields in the AI industry. All of these fields, concepts, and tools are improving in the blink of an eye, with new technology revealed every day. fastai is an easy-to-use, brilliant library built on top of PyTorch, developed at fast.ai (founded by Jeremy Howard and Rachel Thomas), providing tools in four main application areas: vision, text, tabular data, and collaborative filtering.
As mentioned, fastai is built on top of PyTorch and therefore includes pretrained models such as resnet18, resnet34, resnet50, resnet101, and resnet152 (the number indicating how many layers each has), as well as densenet121, densenet169, and others. It complements PyTorch, a Python-based deep-learning library widely used in computer vision and for building neural network models.
What are the advantages of fastai library over other libraries?
As Jeremy Howard mentions, everything is much easier with fastai because developers write far less code. As the documentation says, fastai provides flexibility, speed, and ease of use at the same time. It offers a rich set of features and functionality that lets developers customize the high-level API without getting involved with the low-level parts. One instance of this customization is the DataBlock API, which lets you specify in detail how your data is loaded.
As fast.ai explains, both the training and validation sets are loaded through the DataLoaders class, which also simplifies evaluating on a validation set while training. Beginners working with the library can therefore start from the built-in functions and move on to customizing their own models. As noted earlier, there are four application areas, shown in the figure: vision, text, tabular, and collaborative filtering, each used for different purposes.
Furthermore, the fastai library implements a learning rate finder, which suggests a good value for the learning rate parameter after a short trial training run.
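The idea behind the learning rate finder can be sketched without fastai at all: sweep the learning rate exponentially upward, take one optimization step per candidate, record the loss, and suggest a rate near where the loss falls fastest. The toy quadratic loss and the steepest-slope heuristic below are illustrative assumptions; fastai's `learn.lr_find()` does this with the real model and data.

```python
import numpy as np

def lr_range_test(grad, w0, lr_min=1e-5, lr_max=10.0, steps=100):
    """Minimal sketch of an LR range test: one SGD step per candidate LR,
    stopping once the loss diverges, then picking the LR where the loss
    was dropping most steeply."""
    lrs = np.geomspace(lr_min, lr_max, steps)  # exponential LR sweep
    w, losses = w0, []
    for lr in lrs:
        loss = (w - 3.0) ** 2           # toy quadratic loss with minimum at w = 3
        losses.append(loss)
        if loss > 4 * losses[0]:        # stop when the loss blows up
            break
        w = w - lr * grad(w)            # one SGD step at this candidate LR
    losses = np.array(losses)
    best = int(np.argmin(np.gradient(losses)))  # steepest-descent heuristic
    return lrs[best]

suggested = lr_range_test(lambda w: 2 * (w - 3.0), w0=0.0)
print(f"suggested learning rate: {suggested:.4g}")
```

fastai additionally plots loss against learning rate, and common practice is to pick a value slightly below the steepest point rather than the absolute minimum of the curve.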
What makes fast.ai training faster?
As the fast.ai documentation explains, the library provides an object-oriented class that encapsulates all the important data choices, such as pre-processing, augmentation, test, training, and validation sets, and multi-class versus single-class classification versus regression, along with the choice of model architecture. As a result, fastai can largely figure out the best architecture, pre-processing, and training parameters for a given model and dataset automatically. This makes development more productive and far less error-prone, because everything that can be automated is automated.
Keras, by contrast, tends to make models harder to customize, especially during training. More importantly, its static computation graph on the back-end, along with the extra compile() phase Keras requires, makes it hard to change a model's behaviour once it is built; fastai, resting on PyTorch's dynamic graphs, avoids this limitation and is much faster to iterate with.