This CNN is built around two consecutive conv-pool blocks followed by a classifier. The convolutional section automatically extracts features from the input images, and the classification section uses those features to map each input image to an ASCII character (classification). The final layer converts the output of the classification network into a probability vector over the classes (the ASCII characters).
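The architecture described above can be sketched in Keras. The filter counts, kernel sizes, input shape, and number of classes below are illustrative assumptions, not Infoblox's actual values:

```python
# Sketch of a two-block conv-pool classifier, assuming small grayscale
# glyph images and one output class per printable ASCII character.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 95  # assumption: printable ASCII characters

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                  # one grayscale glyph image
    layers.Conv2D(32, 3, activation="relu"),          # first conv-pool block
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),          # second conv-pool block
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),             # classifier section
    layers.Dense(NUM_CLASSES, activation="softmax"),  # probability vector over classes
])
```

The softmax output of the final layer is the probability vector mentioned in the text: one entry per candidate ASCII character, summing to 1.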
Infoblox had already started building a TensorFlow model and brought it to Amazon SageMaker. From there, the team used several Amazon SageMaker features to accelerate and simplify model development:
Up-to-date documentation and many examples built by AWS are available to everyone as notebooks, either from the GitHub repo or directly from an Amazon SageMaker notebook instance.
With Amazon SageMaker local mode, an existing TensorFlow training script could be tested after changing only a few lines of code. Training in local mode had the following benefits:
This flexibility of local mode in Amazon SageMaker was very important for moving the existing work to the cloud easily. The Amazon SageMaker TensorFlow serving container also lets you prototype your inference code by deploying it on a local instance. When you are satisfied with the model and its training behavior, you change only a few lines of code to launch a distributed training job, create a new estimator, tune the model, or even deploy the trained artifacts to a persistent endpoint.
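The local-mode workflow described above can be sketched with the SageMaker Python SDK. The script name, role ARN, framework version, and data path are placeholder assumptions:

```python
# Sketch: training a TensorFlow estimator in SageMaker local mode.
# entry_point, role, and framework_version below are placeholders.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",      # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="local",       # local mode: train on the notebook instance itself
    framework_version="2.4",
    py_version="py37",
)
estimator.fit({"train": "file://./data/train"})

# Moving to the cloud is then a one-line change, e.g.:
# instance_type="ml.p3.2xlarge"
```

This "only a few lines change" property is what made it easy to prototype locally and then scale out the same code.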
After completing data preparation and training in local mode, the Infoblox team moved model training to the cloud. This step began with a rough set of hyperparameters that was gradually refined through several tuning jobs. At this stage, Infoblox used Amazon SageMaker hyperparameter tuning to help choose the best hyperparameter values. The following hyperparameters appeared to have the highest impact on model performance:
Once the model was tuned and reached the required accuracy and F1 score, the Infoblox team deployed the artifacts to an Amazon SageMaker endpoint. For added security, Amazon SageMaker endpoints are deployed on isolated, dedicated instances; they therefore need to be provisioned, and become available to serve new predictions after a few minutes.
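A tuning job followed by an endpoint deployment like the ones described can be sketched as follows. The hyperparameter names, ranges, metric regex, and instance types are illustrative assumptions, not Infoblox's actual configuration:

```python
# Sketch: SageMaker hyperparameter tuning, then deployment of the best model.
# All names, ranges, and ARNs below are placeholders.
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import (
    HyperparameterTuner, ContinuousParameter, IntegerParameter,
)

estimator = TensorFlow(
    entry_point="train.py",      # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.4",
    py_version="py37",
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:f1",
    metric_definitions=[
        {"Name": "validation:f1", "Regex": "val_f1: ([0-9\\.]+)"},
    ],
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-5, 1e-2),  # assumed range
        "batch_size": IntegerParameter(32, 256),           # assumed range
    },
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://bucket/train", "validation": "s3://bucket/val"})

# Deploy the best training job to a persistent endpoint;
# provisioning the dedicated instance takes a few minutes.
predictor = tuner.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```

Each tuning job launches several training runs over the declared ranges and keeps the one that maximizes the objective metric.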
Having correct, clean training, validation, and test sets was very important for reaching good accuracy. For example, to select the 65 fonts used in the training set, the Infoblox team printed the fonts available on their workstations and reviewed them manually to pick the most relevant ones.
Accuracy is the proportion of homographs that the model classified correctly. It is defined as the number of correct predictions over the total number of predictions made by the model. Infoblox achieved an accuracy higher than 96.9% (in other words, out of 1,000 predictions made by the model, more than 969 were correctly classified as homographs or not).
Recall is defined as the ratio between the number of true positives and the total number of actual positives (true positives plus false negatives): recall = TP / (TP + FN).
Infoblox obtained a single unified metric, the F1 score, as the harmonic mean of precision and recall.
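For concreteness, all three metrics can be computed from confusion-matrix counts. The counts below are made up for illustration; they are chosen only so that accuracy matches the 96.9% figure mentioned above:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # correct over all predictions
    precision = tp / (tp + fp)                   # true positives over predicted positives
    recall = tp / (tp + fn)                      # true positives over actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts (not Infoblox's real confusion matrix):
acc, prec, rec, f1 = classification_metrics(tp=480, fp=11, tn=489, fn=20)
```

With these counts, 969 of 1,000 predictions are correct, giving an accuracy of 0.969.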