Deep learning neural network models used for predictive modeling may need to be updated.
This may be because the data has changed since the model was developed and deployed, or because additional labeled data has become available since the model was developed and is expected to improve its performance.
It is important to experiment and evaluate with a range of different approaches when updating neural network models for new data, especially if model updating will be automated, such as on a periodic schedule.
There are many ways to update neural network models, although the two main approaches involve either using the existing model as a starting point and retraining it, or leaving the existing model unchanged and combining the predictions from the existing model with a new model.
In this tutorial, you will discover how to update deep learning neural network models in response to new data.
After completing this tutorial, you will know:
- Neural network models may need to be updated when the underlying data changes or when new labeled data is made available.
- How to update trained neural network models with just new data or combinations of old and new data.
- How to create an ensemble of existing and new models trained on just new data or combinations of old and new data.
Let’s get started.
Tutorial Overview
This tutorial is divided into three parts; they are:
- Updating Neural Network Models
- Retraining Update Strategies
- Update Model on New Data Only
- Update Model on Old and New Data
- Ensemble Update Strategies
- Ensemble Model With Model on New Data Only
- Ensemble Model With Model on Old and New Data
Updating Neural Network Models
Selecting and finalizing a deep learning neural network model for a predictive modeling project is just the beginning.
You can then start using the model to make predictions on new data.
One possible problem that you may encounter is that the nature of the prediction problem may change over time.
You may notice this as a decline in the effectiveness of predictions over time. This may be because the assumptions made and captured in the model are changing or no longer hold.
Generally, this is referred to as the problem of “concept drift” where the underlying probability distributions of variables and relationships between variables change over time, which can negatively impact the model built from the data.
For more on concept drift, see the tutorial:
When and how concept drift affects your model depends on the specific prediction problem you are solving and the model chosen to address it.
It can be helpful to monitor the performance of a model over time and use a clear drop in model performance as a trigger to make a change to your model, such as re-training it on new data.
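As a rough sketch of such a trigger, you might periodically score the model on the most recently collected labeled examples and compare a simple accuracy estimate against a threshold. The snippet below is illustrative only; it assumes a fitted Keras model with a sigmoid output (like the models used later in this tutorial), hypothetical arrays X_recent and y_recent holding the recent labeled data, and an arbitrary threshold.

```python
...
# illustrative sketch: trigger an update when accuracy on recent labeled data drops
from sklearn.metrics import accuracy_score
# X_recent, y_recent are assumed to hold the most recently collected labeled examples
yhat = (model.predict(X_recent)[:, 0] > 0.5).astype(int)
recent_accuracy = accuracy_score(y_recent, yhat)
# 0.85 is an arbitrary threshold chosen only for illustration
if recent_accuracy < 0.85:
    print('Performance has dropped; consider updating the model')
```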
Alternately, you may know that data in your domain changes frequently enough that a change to the model is required periodically, such as weekly, monthly, or annually.
Finally, you may operate your model for a while and accumulate additional data with known outcomes that you wish to use to update your model, with the hopes of improving predictive performance.
Importantly, you have a lot of flexibility when it comes to responding to a change to the problem or the availability of new data.
For example, you can take the trained neural network model and update the model weights using the new data. Alternatively, you can leave the existing model untouched and combine its predictions with a new model fit on the newly available data.
These approaches represent two general themes in updating neural network models in response to new data; they are:
- Retraining Update Strategies.
- Ensemble Update Strategies.
Let’s take a closer look at each in turn.
Retraining Update Strategies
A benefit of neural network models is that their weights can be updated at any time with continued training.
When responding to changes in the underlying data or the availability of new data, there are a few different strategies to choose from when updating a neural network model, such as:
- Continue training the model on the new data only.
- Continue training the model on the old and new data.
We might also imagine variations on the above strategies, such as using a sample of the new data or a sample of new and old data instead of all available data, as well as possible instance-based weightings on sampled data.
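As a rough sketch of the instance-weighting idea, per-example weights can be passed to Keras via the sample_weight argument of fit(). The particular weights below (down-weighting old examples relative to new ones) are arbitrary and only for illustration, and the snippet assumes the model and the X_old/X_new split used in the worked examples later in this tutorial.

```python
...
# illustrative sketch: weight new examples more heavily than old ones when retraining
from numpy import full, hstack, vstack
# combine the old and new data
X_both, y_both = vstack((X_old, X_new)), hstack((y_old, y_new))
# arbitrary weights: 0.5 for each old example, 1.0 for each new example
weights = hstack((full(len(X_old), 0.5), full(len(X_new), 1.0)))
# continue training the existing model using the per-example weights
model.fit(X_both, y_both, sample_weight=weights, epochs=100, batch_size=32, verbose=0)
```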
We might also consider extensions of the model that freeze the layers of the existing model (e.g. so that its weights cannot change during training) and then add new layers with trainable weights, grafting extensions onto the model to handle any change in the data. Perhaps this is a variation of both retraining and the ensemble approach described in the next section, so we will leave it for now.
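A minimal sketch of the freezing idea is shown below, assuming a trained Keras Sequential model (model) and new data (X_new, y_new) like those used in the worked examples that follow; the size of the added layers is arbitrary.

```python
...
# illustrative sketch: freeze the existing layers and graft new trainable layers on top
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# mark every layer of the existing model as non-trainable so its weights cannot change
for layer in model.layers:
    layer.trainable = False
# reuse the frozen hidden layers (dropping the old output layer) and add new trainable layers
extended = Sequential(model.layers[:-1])
extended.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
extended.add(Dense(1, activation='sigmoid'))
extended.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9), loss='binary_crossentropy')
# only the newly added layers are updated during this training
extended.fit(X_new, y_new, epochs=100, batch_size=32, verbose=0)
```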
Nevertheless, these are the two main strategies to consider.
Let’s make these approaches concrete with a worked example.
Update Model on New Data Only
We can update the model on the new data only.
One extreme version of this approach is to not use any new data and simply re-train the model on the old data; this is effectively the same as doing nothing in response to the new data. At the other extreme, a model could be fit on the new data only, discarding the old data and old model. This gives three broad options:
- Ignore new data, do nothing.
- Update existing model on new data.
- Fit new model on new data, discard old model and data.
We will focus on the middle ground in this example, but it might be interesting to test all three approaches on your problem and see what works best.
First, we can define a synthetic binary classification dataset and split it in half, then use one portion as the “old data” and the other portion as the “new data.”
```python
...
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
```
We can then define a Multilayer Perceptron model (MLP) and fit it on the old data only.
```python
...
# define the model
model = Sequential()
model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
```
We can then imagine saving the model and using it for some time.
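For example, a minimal sketch of saving the model to file and later loading it again with the Keras API might look like the following; the filename is purely illustrative.

```python
...
# save the finalized model to file (the filename is illustrative only)
model.save('model.h5')
# ...some time later, load the model again before updating it on new data
from tensorflow.keras.models import load_model
model = load_model('model.h5')
```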
Time passes, and we wish to update it on new data that has become available.
This would involve using a much smaller learning rate than normal so that we do not wash away the weights learned on the old data.
Note: you will need to discover a learning rate that is appropriate for your model and dataset that achieves better performance than simply fitting a new model from scratch.
```python
...
# update model on new data only with a smaller learning rate
opt = SGD(learning_rate=0.001, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
```
We can then fit the model on the new data only with this smaller learning rate.
```python
...
# fit the model on new data
model.fit(X_new, y_new, epochs=100, batch_size=32, verbose=0)
```
Tying this together, the complete example of updating a neural network model on new data only is listed below.
```python
# update neural network with new data only
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = Sequential()
model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
# save model…
# load model…
# update model on new data only with a smaller learning rate
opt = SGD(learning_rate=0.001, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on new data
model.fit(X_new, y_new, epochs=100, batch_size=32, verbose=0)
```
Next, let’s look at updating the model on new and old data.
Update Model on Old and New Data
We can update the model on a combination of both old and new data.
An extreme version of this approach is to discard the model and simply fit a new model on all available data, new and old. A less extreme version would be to use the existing model as a starting point and update it based on the combined dataset.
Again, it is a good idea to test both strategies and see what works well for your dataset.
We will focus on the less extreme update strategy in this case.
The synthetic dataset and model can be fit on the old dataset as before.
```python
...
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = Sequential()
model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
```
New data becomes available, and we wish to update the model on a combination of both the old and new data.
First, we must use a much smaller learning rate in an attempt to use the current weights as a starting point for the search.
Note: you will need to discover a learning rate that is appropriate for your model and dataset that achieves better performance than simply fitting a new model from scratch.
```python
...
# update model with a smaller learning rate
opt = SGD(learning_rate=0.001, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
```
We can then create a composite dataset composed of old and new data.
```python
...
# create a composite dataset of old and new data
X_both, y_both = vstack((X_old, X_new)), hstack((y_old, y_new))
```
Finally, we can update the model on this composite dataset.
```python
...
# fit the model on the composite dataset of old and new data
model.fit(X_both, y_both, epochs=100, batch_size=32, verbose=0)
```
Tying this together, the complete example of updating a neural network model on both old and new data is listed below.
```python
# update neural network with both old and new data
from numpy import vstack
from numpy import hstack
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = Sequential()
model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
# save model…
# load model…
# update model with a smaller learning rate
opt = SGD(learning_rate=0.001, momentum=0.9)
# compile the model
model.compile(optimizer=opt, loss='binary_crossentropy')
# create a composite dataset of old and new data
X_both, y_both = vstack((X_old, X_new)), hstack((y_old, y_new))
# fit the model on the composite dataset of old and new data
model.fit(X_both, y_both, epochs=100, batch_size=32, verbose=0)
```
Next, let’s look at how to use ensemble models to respond to new data.
Ensemble Update Strategies
An ensemble is a predictive model that is composed of multiple other models.
There are many different types of ensemble models, although perhaps the simplest approach is to average the predictions from multiple different models.
For more on ensemble algorithms for deep learning neural networks, see the tutorial:
We can use an ensemble model as a strategy when responding to changes in the underlying data or availability of new data.
Mirroring the approaches in the previous section, we might consider two approaches to ensemble learning algorithms as strategies for responding to new data; they are:
- Ensemble of existing model and new model fit on new data only.
- Ensemble of existing model and new model fit on old and new data.
Again, we might consider variations on these approaches, such as samples of old and new data, and more than one existing or additional models included in the ensemble.
Nevertheless, these are the two main strategies to consider.
Let’s make these approaches concrete with a worked example.
Ensemble Model With Model on New Data Only
We can create an ensemble of the existing model and a new model fit on only the new data.
The expectation is that the ensemble predictions perform better or are more stable (lower variance) than using either the old model or the new model alone. This should be checked on your dataset before adopting the ensemble.
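One rough way to perform this check is to hold back some labeled data and compare the accuracy of the old model, the new model, and the average of their predictions. The snippet below is only a sketch along these lines; it uses the old_model and new_model defined in this example and hypothetical held-out arrays X_holdout and y_holdout that are not part of the worked example.

```python
...
# illustrative sketch: compare old model, new model, and their average on held-out data
from sklearn.metrics import accuracy_score
yhat_old = old_model.predict(X_holdout)[:, 0]
yhat_new = new_model.predict(X_holdout)[:, 0]
# simple ensemble: the mean of the two predicted probabilities
yhat_ens = (yhat_old + yhat_new) / 2.0
print('old model:', accuracy_score(y_holdout, (yhat_old > 0.5).astype(int)))
print('new model:', accuracy_score(y_holdout, (yhat_new > 0.5).astype(int)))
print('ensemble:', accuracy_score(y_holdout, (yhat_ens > 0.5).astype(int)))
```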
First, we can prepare the dataset and fit the old model, as we did in the previous sections.
```python
...
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the old model
old_model = Sequential()
old_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
old_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
old_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
old_model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
old_model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
```
Some time passes and new data becomes available.
We can then fit a new model on the new data, naturally discovering a model and configuration that works well or best on the new dataset only.
In this case, we’ll simply use the same model architecture and configuration as the old model.
```python
...
# define the new model
new_model = Sequential()
new_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
new_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
new_model.compile(optimizer=opt, loss='binary_crossentropy')
```
We can then fit this new model on the new data only.
```python
...
# fit the new model on the new data
new_model.fit(X_new, y_new, epochs=150, batch_size=32, verbose=0)
```
Now that we have the two models, we can make predictions with each model, and calculate the average of the predictions as the “ensemble prediction.”
```python
...
# make predictions with both models
yhat1 = old_model.predict(X_new)
yhat2 = new_model.predict(X_new)
# combine predictions into single array
combined = hstack((yhat1, yhat2))
# calculate outcome as mean of predictions
yhat = mean(combined, axis=-1)
```
Tying this together, the complete example of updating using an ensemble of the existing model and a new model fit on new data only is listed below.
```python
# ensemble old neural network with new model fit on new data only
from numpy import hstack
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the old model
old_model = Sequential()
old_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
old_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
old_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
old_model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
old_model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
# save model…
# load model…
# define the new model
new_model = Sequential()
new_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
new_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
new_model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the new model on the new data
new_model.fit(X_new, y_new, epochs=150, batch_size=32, verbose=0)
# make predictions with both models
yhat1 = old_model.predict(X_new)
yhat2 = new_model.predict(X_new)
# combine predictions into single array
combined = hstack((yhat1, yhat2))
# calculate outcome as mean of predictions
yhat = mean(combined, axis=-1)
```
Ensemble Model With Model on Old and New Data
We can create an ensemble of the existing model and a new model fit on both the old and the new data.
The expectation is that the ensemble predictions perform better or are more stable (lower variance) than using either the old model or the new model alone. This should be checked on your dataset before adopting the ensemble.
First, we can prepare the dataset and fit the old model, as we did in the previous sections.
```python
...
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the old model
old_model = Sequential()
old_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
old_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
old_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
old_model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
old_model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
```
Some time passes and new data becomes available.
We can then fit a new model on a composite of the old and new data, naturally discovering a model and configuration that works well or best on this combined dataset.
In this case, we’ll simply use the same model architecture and configuration as the old model.
```python
...
# define the new model
new_model = Sequential()
new_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
new_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
new_model.compile(optimizer=opt, loss='binary_crossentropy')
```
We can create a composite dataset from the old and new data, then fit the new model on this dataset.
```python
...
# create a composite dataset of old and new data
X_both, y_both = vstack((X_old, X_new)), hstack((y_old, y_new))
# fit the new model on the composite dataset
new_model.fit(X_both, y_both, epochs=150, batch_size=32, verbose=0)
```
Finally, we can use both models together to make ensemble predictions.
```python
...
# make predictions with both models
yhat1 = old_model.predict(X_new)
yhat2 = new_model.predict(X_new)
# combine predictions into single array
combined = hstack((yhat1, yhat2))
# calculate outcome as mean of predictions
yhat = mean(combined, axis=-1)
```
Tying this together, the complete example of updating using an ensemble of the existing model and a new model fit on the old and new data is listed below.
```python
# ensemble old neural network with new model fit on old and new data
from numpy import hstack
from numpy import vstack
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# record the number of input features in the data
n_features = X.shape[1]
# split into old and new data
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.50, random_state=1)
# define the old model
old_model = Sequential()
old_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
old_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
old_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
old_model.compile(optimizer=opt, loss='binary_crossentropy')
# fit the model on old data
old_model.fit(X_old, y_old, epochs=150, batch_size=32, verbose=0)
# save model…
# load model…
# define the new model
new_model = Sequential()
new_model.add(Dense(20, kernel_initializer='he_normal', activation='relu', input_dim=n_features))
new_model.add(Dense(10, kernel_initializer='he_normal', activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
# define the optimization algorithm
opt = SGD(learning_rate=0.01, momentum=0.9)
# compile the model
new_model.compile(optimizer=opt, loss='binary_crossentropy')
# create a composite dataset of old and new data
X_both, y_both = vstack((X_old, X_new)), hstack((y_old, y_new))
# fit the new model on the composite dataset
new_model.fit(X_both, y_both, epochs=150, batch_size=32, verbose=0)
# make predictions with both models
yhat1 = old_model.predict(X_new)
yhat2 = new_model.predict(X_new)
# combine predictions into single array
combined = hstack((yhat1, yhat2))
# calculate outcome as mean of predictions
yhat = mean(combined, axis=-1)
```
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Tutorials
Summary
In this tutorial, you discovered how to update deep learning neural network models in response to new data.
Specifically, you learned:
- Neural network models may need to be updated when the underlying data changes or when new labeled data is made available.
- How to update trained neural network models with just new data or combinations of old and new data.
- How to create an ensemble of existing and new models trained on just new data or combinations of old and new data.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.