Today, we’re going to build a RESTful API that takes in an image and makes predictions using a pre-trained TensorFlow MobileNet model.
TensorFlow.js has many pre-trained models that can be used in projects out of the box. This saves developers the task of training a model from scratch. Here we’re going to explore the MobileNet pre-trained architecture.
If you haven’t already set up Node.js on your computer, follow this link to download and install it.
Open your terminal and follow the steps below.
1. Paste and run the command below to:
- Create a folder.
- Navigate into the folder.
- Initialize a new project, which creates a package.json file.
mkdir image-classifier-api && cd image-classifier-api && npm init --yes
2. Still in the terminal, run the following command to install dependencies:
npm i express @tensorflow-models/mobilenet @tensorflow/tfjs get-image-data multer morgan nodemon jimp cors --save
3. Open the newly created folder in any code editor of your choice.
4. Create a file called app.js in the root of the folder and paste the following code:
const express = require('express');
const logger = require('morgan');
const cors = require('cors');

const app = express();

const corsOptions = {
  origin: '*',
};

app.use(logger('dev'));
app.use(express.json());
app.use(cors(corsOptions));
app.use(express.urlencoded({ extended: false }));

app.get('/', (req, res) => {
  res.send('Image Classifier API');
});

const PORT = process.env.PORT || 3000;

app.listen(PORT);
console.log(`Running server at http://localhost:${PORT}`);
The above code creates a new Node server to run our application.
5. Open the package.json file and add this task to the scripts property:
"start": "nodemon app.js"
Also change the main entry to app.js.
The package.json file should look like this:
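If you’ve followed along, the relevant parts should look roughly like this (a sketch; your version numbers and dependency list may differ):

```json
{
  "name": "image-classifier-api",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "nodemon app.js"
  }
}
```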
6. In your terminal, run npm start.
This will start the server; you should see the console message “Running server at http://localhost:3000”.
7. Let’s create some directories. Back in the terminal in the project root, run the following commands:
mkdir routes && mkdir controllers && mkdir images
8. Navigate into the controllers folder and create a new file called predict.controller.js. Then navigate into the routes folder and create a new file called predict.route.js.
9. In the predict.route.js file, paste in the following code:
const express = require('express');
const multer = require('multer');

const controller = require('../controllers/predict.controller');

// configure multer
const storage = multer.diskStorage({
  destination: (req, file, callback) => {
    callback(null, 'images');
  },
  filename: (req, file, callback) => {
    callback(null, 'test-image.jpg');
  },
});

const imageFileFilter = (req, file, callback) => {
  if (!file.originalname.match(/\.(jpg|jpeg|png)$/)) {
    return callback(new Error('You can upload only image files'), false);
  }
  callback(null, true);
};

const upload = multer({ storage, fileFilter: imageFileFilter });

const router = express.Router();
router.route('/').post(upload.single('file'), controller.makePredictions);

module.exports = router;
In the above code, we configure multer to handle the uploaded image and store it in the /images folder. The image is renamed “test-image.jpg” so it can be easily read in our controller file. We then create an Express router that applies the multer middleware and calls the makePredictions function we’ll be creating next.
10. Now, we’re going to do three things:
- Load the MobileNet model.
- Read the raw image file and convert its pixel data into Tensor.
- Delete the file to clear up space.
Open the predict.controller.js file and paste in the following code:
const tf = require('@tensorflow/tfjs');
const mobilenet = require('@tensorflow-models/mobilenet');
const image = require('get-image-data');
const fs = require('fs');

exports.makePredictions = async (req, res, next) => {
  const imagePath = './images/test-image.jpg';
  try {
    const loadModel = async (img) => {
      const output = {};
      // load model
      console.log('Loading.......');
      const model = await mobilenet.load();
      // classify
      output.predictions = await model.classify(img);
      console.log(output);
      res.statusCode = 200;
      res.json(output);
    };
    await image(imagePath, async (err, imageData) => {
      // pre-process image
      const numChannels = 3;
      const numPixels = imageData.width * imageData.height;
      const values = new Int32Array(numPixels * numChannels);
      const pixels = imageData.data;
      for (let i = 0; i < numPixels; i++) {
        for (let channel = 0; channel < numChannels; ++channel) {
          values[i * numChannels + channel] = pixels[i * 4 + channel];
        }
      }
      const outShape = [imageData.height, imageData.width, numChannels];
      const input = tf.tensor3d(values, outShape, 'int32');
      await loadModel(input);
      // delete image file
      fs.unlink(imagePath, (error) => {
        if (error) {
          console.error(error);
        }
      });
    });
  } catch (error) {
    console.log(error);
  }
};
The above code first creates a makePredictions function, which includes an inner function that loads the MobileNet model and outputs predictions for the image it receives. Next, we read and decode the image data with the get-image-data package, which gives us the width, height, and binary pixel data of the image. Following that, we extract the pixel values of the image and convert them into a tensor. We then call the loadModel function and pass the image tensor to make predictions. After this, we delete the image file from disk.