

Ok, not exactly. But COVID did significantly impact some AI-based applications. For example, models that detect fraud in air travel treated the purchase of a one-way ticket as a strong signal of fraud. Clearly no longer the case. The types of items bought online have shifted rapidly, making previous recommender models less relevant.
The cause is something called concept drift: the data the model was trained on no longer accurately reflects the data it sees at prediction time. Read more about it here.
“These shifts result in constantly changing patterns in data — which ultimately degrade the predictive ability of models built, trained and tested on patterns of data that are suddenly no longer relevant.”
“While concept drift has always been an issue for data science, its impact has accelerated aggressively and has reached unprecedented levels due to the COVID-19 pandemic.”
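To make the idea concrete, here's a minimal sketch (my own illustration, not from the article) of one common way to catch drift: compare the distribution of a feature at training time against what the model sees in production. The feature and numbers are invented; the test is a standard two-sample Kolmogorov-Smirnov test from scipy.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature the model was trained on, e.g. "days between
# booking and departure" before the pandemic.
train_feature = rng.normal(loc=30, scale=10, size=5000)

# The same feature observed in production after a sudden shift.
live_feature = rng.normal(loc=5, scale=10, size=5000)

# Two-sample Kolmogorov-Smirnov test: are the two samples plausibly
# drawn from the same distribution?
stat, p_value = ks_2samp(train_feature, live_feature)

if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.2e}); "
          "time to retrain or recalibrate the model.")
else:
    print("No significant drift between training and live data.")
```

In practice you'd run a check like this per feature on a schedule, and treat a sustained divergence as a signal to retrain.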
As a security researcher, I naturally wondered about adversarial uses of concept drift, so I poked around on Google Scholar. Apparently it is difficult for existing systems to differentiate between benign and adversarial concept drift, and some hypothesize that an adversary could deliberately force concept drift so that the underlying model adapts to false data. Here is a paper, still on arXiv, discussing a proposed solution.
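To illustrate the worry, here's a toy simulation, entirely my own construction and not from the paper: a fraud model that naively retrains on recent traffic, while an attacker gradually injects fraudulent transactions that still carry "legitimate" labels (say, because chargebacks haven't landed yet). The feature, numbers, and attack schedule are all made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy 1-D feature: legitimate purchases cluster near x=0, fraud near x=3.
X_hist = np.vstack([rng.normal(0, 1, (500, 1)), rng.normal(3, 1, (500, 1))])
y_hist = np.concatenate([np.zeros(500), np.ones(500)])

model = LogisticRegression().fit(X_hist, y_hist)
X_fraud_test = rng.normal(3, 1, (200, 1))  # held-out real fraud
print(f"baseline fraud detection rate: "
      f"{model.predict(X_fraud_test).mean():.2f}")

def poisoned_batch(n, attack_frac):
    """A new window of traffic. attack_frac of it is attacker fraud that
    still carries a 'legitimate' label (chargebacks haven't landed yet)."""
    n_attack = int(n * attack_frac)
    X = np.vstack([rng.normal(0, 1, (n - n_attack, 1)),
                   rng.normal(3, 1, (n_attack, 1))])
    return X, np.zeros(n)  # the whole batch is labeled legitimate

# Naive drift adaptation: keep retraining on history plus each new window.
for step, frac in enumerate([0.1, 0.2, 0.4, 0.6]):
    X_new, y_new = poisoned_batch(500, frac)
    X_hist = np.vstack([X_hist, X_new])
    y_hist = np.concatenate([y_hist, y_new])
    model = LogisticRegression().fit(X_hist, y_hist)
    rate = model.predict(X_fraud_test).mean()
    print(f"after window {step} (attack fraction {frac:.1f}): "
          f"fraud detection rate = {rate:.2f}")
```

The detection rate on real fraud erodes with each retraining step, yet every step looks like the system dutifully adapting to drift, which is exactly why telling benign drift from adversarial drift is hard.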