Semi-Supervised Learning With Label Propagation

December 29, 2020 by systems

Semi-supervised learning refers to algorithms that attempt to make use of both labeled and unlabeled training data.

This is unlike supervised learning algorithms, which are only able to learn from labeled training data.

A popular approach to semi-supervised learning is to create a graph that connects examples in the training dataset and propagate known labels through the edges of the graph to label unlabeled examples. An example of this approach to semi-supervised learning is the label propagation algorithm for classification predictive modeling.

In this tutorial, you will discover how to apply the label propagation algorithm to a semi-supervised learning classification dataset.

After completing this tutorial, you will know:

  • An intuition for how the label propagation semi-supervised learning algorithm works.
  • How to develop a semi-supervised classification dataset and establish a baseline in performance with a supervised learning algorithm.
  • How to develop and evaluate a label propagation algorithm and use the model output to train a supervised learning algorithm.

Let’s get started.

Semi-Supervised Learning With Label Propagation
Photo by TheBluesDude, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Label Propagation Algorithm
  2. Semi-Supervised Classification Dataset
  3. Label Propagation for Semi-Supervised Learning

Label Propagation Algorithm

Label Propagation is a semi-supervised learning algorithm.

The algorithm was proposed in the 2002 technical report by Xiaojin Zhu and Zoubin Ghahramani titled “Learning From Labeled And Unlabeled Data With Label Propagation.”

The intuition for the algorithm is that a graph is created that connects all examples (rows) in the dataset based on their distance, such as Euclidean distance. Nodes in the graph then have soft labels or label distributions based on the labels or label distributions of examples connected nearby in the graph.

Many semi-supervised learning algorithms rely on the geometry of the data induced by both labeled and unlabeled examples to improve on supervised methods that use only the labeled data. This geometry can be naturally represented by an empirical graph g = (V,E) where nodes V = {1,…,n} represent the training data and edges E represent similarities between them

— Page 193, Semi-Supervised Learning, 2006.

Propagation refers to the iterative way in which labels are assigned to nodes in the graph and propagated along the edges of the graph to connected nodes.

This procedure is sometimes called label propagation, as it “propagates” labels from the labeled vertices (which are fixed) gradually through the edges to all the unlabeled vertices.

— Page 48, Introduction to Semi-Supervised Learning, 2009.

The process is repeated for a fixed number of iterations to strengthen the labels assigned to unlabeled examples.

Starting with nodes 1, 2,…,l labeled with their known label (1 or −1) and nodes l + 1,…,n labeled with 0, each node starts to propagate its label to its neighbors, and the process is repeated until convergence.

— Page 194, Semi-Supervised Learning, 2006.
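To make the mechanics concrete, below is a minimal NumPy sketch of the iterative procedure described above; it is an illustration only, not the scikit-learn implementation. An RBF affinity matrix is built over all examples, row-normalized into a transition matrix, and label distributions are propagated repeatedly while the known labels are clamped. The gamma value and iteration count are arbitrary choices for this sketch.

# minimal label propagation sketch (illustrative, not the scikit-learn implementation)
from numpy import exp, zeros, argmax
from scipy.spatial.distance import cdist

def propagate_labels(X, y, gamma=20.0, n_iter=1000):
    # y holds integer class indices for labeled rows and -1 for unlabeled rows
    n_classes = y.max() + 1
    # RBF affinity between all pairs of examples (the graph edge weights)
    W = exp(-gamma * cdist(X, X, 'sqeuclidean'))
    # row-normalize into a transition matrix
    T = W / W.sum(axis=1, keepdims=True)
    # initialize label distributions: one-hot for labeled rows, zeros otherwise
    F = zeros((len(X), n_classes))
    labeled = y != -1
    F[labeled, y[labeled]] = 1.0
    for _ in range(n_iter):
        # propagate label mass along the graph edges
        F = T.dot(F)
        # clamp the known labels so they stay fixed
        F[labeled] = 0.0
        F[labeled, y[labeled]] = 1.0
    return argmax(F, axis=1)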

Now that we are familiar with the Label Propagation algorithm, let’s look at how we might use it on a project. First, we must define a semi-supervised classification dataset.

Semi-Supervised Classification Dataset

In this section, we will define a dataset for semi-supervised learning and establish a baseline in performance on the dataset.

First, we can define a synthetic classification dataset using the make_classification() function.

We will define the dataset with two classes (binary classification), two input variables, and 1,000 examples.

...
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, random_state=1)


Next, we will split the dataset into train and test datasets with an equal 50-50 split (e.g. 500 rows in each).

...
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)


Finally, we will split the training dataset in half again into a portion that will have labels and a portion that we will pretend is unlabeled.

...
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.50, random_state=1, stratify=y_train)


Tying this together, the complete example of preparing the semi-supervised learning dataset is listed below.

# prepare semi-supervised learning dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, random_state=1)
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.50, random_state=1, stratify=y_train)
# summarize training set size
print('Labeled Train Set:', X_train_lab.shape, y_train_lab.shape)
print('Unlabeled Train Set:', X_test_unlab.shape, y_test_unlab.shape)
# summarize test set size
print('Test Set:', X_test.shape, y_test.shape)


Running the example prepares the dataset and then summarizes the shape of each of the three portions.

The results confirm that we have a test dataset of 500 rows, a labeled training dataset of 250 rows, and 250 rows of unlabeled data.

Labeled Train Set: (250, 2) (250,)
Unlabeled Train Set: (250, 2) (250,)
Test Set: (500, 2) (500,)


A supervised learning algorithm will only have 250 rows from which to train a model.

A semi-supervised learning algorithm will have the 250 labeled rows as well as the 250 unlabeled rows that could be used in numerous ways to improve the labeled training dataset.

Next, we can establish a baseline in performance on the semi-supervised learning dataset using a supervised learning algorithm fit only on the labeled training data.

This is important because we would expect a semi-supervised learning algorithm to outperform a supervised learning algorithm fit on the labeled data alone. If this is not the case, then the semi-supervised learning algorithm does not have skill.

In this case, we will use a logistic regression algorithm fit on the labeled portion of the training dataset.

...
# define model
model = LogisticRegression()
# fit model on labeled dataset
model.fit(X_train_lab, y_train_lab)


The model can then be used to make predictions on the entire holdout test dataset and evaluated using classification accuracy.

...
# make predictions on hold out test set
yhat = model.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Tying this together, the complete example of evaluating a supervised learning algorithm on the semi-supervised learning dataset is listed below.

# baseline performance on the semi-supervised learning dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, random_state=1)
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.50, random_state=1, stratify=y_train)
# define model
model = LogisticRegression()
# fit model on labeled dataset
model.fit(X_train_lab, y_train_lab)
# make predictions on hold out test set
yhat = model.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Running the example fits the model on the labeled training dataset, evaluates it on the holdout dataset, and prints the classification accuracy.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the algorithm achieved a classification accuracy of about 84.8 percent.

Accuracy: 84.800

We would expect an effective semi-supervised learning algorithm to achieve better accuracy than this.


Next, let’s explore how to apply the label propagation algorithm to the dataset.

Label Propagation for Semi-Supervised Learning

The Label Propagation algorithm is available in the scikit-learn Python machine learning library via the LabelPropagation class.

The model can be fit just like any other classification model by calling the fit() function and used to make predictions for new data via the predict() function.

...
# define model
model = LabelPropagation()
# fit model on training dataset
model.fit(..., ...)
# make predictions on hold out test set
yhat = model.predict(...)


Importantly, the training dataset provided to the fit() function must include labeled examples that are integer encoded (as per normal) and unlabeled examples marked with a label of -1.

The model will then determine a label for the unlabeled examples as part of fitting the model.
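For instance, a mixed label vector for three labeled rows followed by two unlabeled rows might look like the toy illustration below.

# toy illustration of the label convention expected by fit()
from numpy import array
y_mixed = array([0, 1, 1, -1, -1]) # classes 0 and 1 are known; -1 marks unlabeled rows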

After the model is fit, the estimated labels for both the labeled and unlabeled data in the training dataset are available via the “transduction_” attribute on the LabelPropagation class.

...
# get labels for entire training dataset
tran_labels = model.transduction_


Now that we are familiar with how to use the Label Propagation algorithm in scikit-learn, let’s look at how we might apply it to our semi-supervised learning dataset.

First, we must prepare the training dataset.

We can concatenate the input data of the training dataset into a single array.

...
# create the training dataset input
X_train_mixed = concatenate((X_train_lab, X_test_unlab))


We can then create a list of -1 values (meaning “no label”), one for each row in the unlabeled portion of the training dataset.

...
# create "no label" for unlabeled data
nolabel = [-1 for _ in range(len(y_test_unlab))]


This list can then be concatenated with the labels from the labeled portion of the training dataset to correspond with the input array for the training dataset.

...
# recombine training dataset labels
y_train_mixed = concatenate((y_train_lab, nolabel))


We can now train the LabelPropagation model on the entire training dataset.

...
# define model
model = LabelPropagation()
# fit model on training dataset
model.fit(X_train_mixed, y_train_mixed)


Next, we can use the model to make predictions on the holdout dataset and evaluate the model using classification accuracy.

...
# make predictions on hold out test set
yhat = model.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Tying this together, the complete example of evaluating label propagation on the semi-supervised learning dataset is listed below.

# evaluate label propagation on the semi-supervised learning dataset
from numpy import concatenate
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import LabelPropagation
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, random_state=1)
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.50, random_state=1, stratify=y_train)
# create the training dataset input
X_train_mixed = concatenate((X_train_lab, X_test_unlab))
# create "no label" for unlabeled data
nolabel = [-1 for _ in range(len(y_test_unlab))]
# recombine training dataset labels
y_train_mixed = concatenate((y_train_lab, nolabel))
# define model
model = LabelPropagation()
# fit model on training dataset
model.fit(X_train_mixed, y_train_mixed)
# make predictions on hold out test set
yhat = model.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Running the example fits the model on the entire training dataset, evaluates it on the holdout dataset, and prints the classification accuracy.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that the label propagation model achieves a classification accuracy of about 85.6 percent, which is slightly higher than the logistic regression fit only on the labeled training dataset, which achieved an accuracy of about 84.8 percent.

Accuracy: 85.600


So far, so good.
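Because we held back the true labels for the rows we are pretending are unlabeled (y_test_unlab), we can also run a quick sanity check on the quality of the inferred labels themselves; the slice below relies on the fact that the labeled rows come first in X_train_mixed, and reuses the accuracy_score function imported above.

# check accuracy of the inferred labels on the pretend-unlabeled rows
inferred = model.transduction_[len(y_train_lab):]
print('Transduction Accuracy: %.3f' % (accuracy_score(y_test_unlab, inferred) * 100))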

Another approach we can use with the semi-supervised model is to take the estimated labels for the training dataset and fit a supervised learning model.

Recall that we can retrieve the labels for the entire training dataset from the label propagation model as follows:

...
# get labels for entire training dataset
tran_labels = model.transduction_


We can then use these labels along with all of the input data to train and evaluate a supervised learning algorithm, such as a logistic regression model.

The hope is that the supervised learning model fit on the entire training dataset would achieve even better performance than the semi-supervised learning model alone.

...
# define supervised learning model
model2 = LogisticRegression()
# fit supervised learning model on entire training dataset
model2.fit(X_train_mixed, tran_labels)
# make predictions on hold out test set
yhat = model2.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Tying this together, the complete example of using the estimated training set labels to train and evaluate a supervised learning model is listed below.

# evaluate logistic regression fit on label propagation for semi-supervised learning
from numpy import concatenate
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import LabelPropagation
from sklearn.linear_model import LogisticRegression
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, random_state=1)
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)
# split train into labeled and unlabeled
X_train_lab, X_test_unlab, y_train_lab, y_test_unlab = train_test_split(X_train, y_train, test_size=0.50, random_state=1, stratify=y_train)
# create the training dataset input
X_train_mixed = concatenate((X_train_lab, X_test_unlab))
# create "no label" for unlabeled data
nolabel = [-1 for _ in range(len(y_test_unlab))]
# recombine training dataset labels
y_train_mixed = concatenate((y_train_lab, nolabel))
# define model
model = LabelPropagation()
# fit model on training dataset
model.fit(X_train_mixed, y_train_mixed)
# get labels for entire training dataset
tran_labels = model.transduction_
# define supervised learning model
model2 = LogisticRegression()
# fit supervised learning model on entire training dataset
model2.fit(X_train_mixed, tran_labels)
# make predictions on hold out test set
yhat = model2.predict(X_test)
# calculate score for test set
score = accuracy_score(y_test, yhat)
# summarize score
print('Accuracy: %.3f' % (score*100))


Running the example fits the semi-supervised model on the entire training dataset, then fits a supervised learning model on the entire training dataset with the inferred labels, evaluates it on the holdout dataset, and prints the classification accuracy.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

In this case, we can see that this hierarchical approach of a semi-supervised model followed by a supervised model achieves a classification accuracy of about 86.2 percent on the holdout dataset, even better than the semi-supervised model used alone, which achieved an accuracy of about 85.6 percent.

Accuracy: 86.200


Can you achieve better results by tuning the hyperparameters of the LabelPropagation model?
Let me know what you discover in the comments below.
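As a starting point for that experiment, a simple manual sweep over the kernel hyperparameters might look like the sketch below; the candidate values are arbitrary choices, and it reuses the X_train_mixed, y_train_mixed, X_test, and y_test arrays prepared above.

# sketch: manually sweep LabelPropagation kernel settings (candidate values are illustrative)
from sklearn.semi_supervised import LabelPropagation
from sklearn.metrics import accuracy_score
configs = [{'kernel': 'rbf', 'gamma': g} for g in (1, 10, 20, 50)]
configs += [{'kernel': 'knn', 'n_neighbors': k} for k in (3, 7, 15)]
for params in configs:
    model = LabelPropagation(max_iter=2000, **params)
    # fit on the mixed labeled/unlabeled training data prepared earlier
    model.fit(X_train_mixed, y_train_mixed)
    score = accuracy_score(y_test, model.predict(X_test))
    print('%s: %.3f' % (params, score * 100))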

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Semi-Supervised Learning, 2006.
  • Introduction to Semi-Supervised Learning, 2009.

Papers

  • Learning From Labeled And Unlabeled Data With Label Propagation, Technical Report, 2002.

APIs

  • sklearn.semi_supervised.LabelPropagation API.

Summary

In this tutorial, you discovered how to apply the label propagation algorithm to a semi-supervised learning classification dataset.

Specifically, you learned:

  • An intuition for how the label propagation semi-supervised learning algorithm works.
  • How to develop a semi-supervised classification dataset and establish a baseline in performance with a supervised learning algorithm.
  • How to develop and evaluate a label propagation algorithm and use the model output to train a supervised learning algorithm.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
