
An informative article on the Riot API ecosystem, and a basic investigation of the champion-specific player win-rate factor in match outcome prediction
In my previous article on a similar topic, I implemented a simple feed-forward neural network that “successfully” reached nearly 70% accuracy in predicting the outcome of a League of Legends (LoL) match using only pick-ban phase data. The only feature I used to achieve such “outstanding” accuracy was the champion-specific player win-rate factor.
As I soon found out, however, there is a high probability that the data I used for that article was riddled with look-ahead bias. Note that the key feature I used was each individual player’s win-rate with his/her picked champion, which I gathered by scraping the most recent data from LoL analytics websites. This means there is a strong possibility that the win-rate feature already reflected the outcomes of the games in the training set.
Put simply, look-ahead bias could exist because I included features that could not reasonably have been gathered at the time each game in the training set took place.
I might still have been on the right track; Huang, Kim, and Leung (2015) show that it is possible to predict match outcomes more than 90% of the time with the champion-specific player win-rate factor. However, the prior literature omits certain details that need further explanation, and the conclusion of my project suggests that more up-to-date research may be necessary.
Before moving on to the details, let me give a bit of background on the project and explain why it is worth reading about.
As I mentioned in my previous article, the e-sports industry has shown phenomenal growth in the past few years. Not only is it relatively cheaper for corporate sponsors to fund than traditional sports, the e-sports industry is also growing rapidly in both developed and emerging markets. In emerging markets in particular, Vietnam for example, e-sports sponsorship is proving to be a successful marketing strategy for foreign firms looking to spread brand awareness in their target region. This is the main reason behind the increasing direct management of e-sports teams by various corporate entities in the LPL and LCK (the League of Legends e-sports leagues in China and Korea, respectively).
The e-sports data sector is also showing gradual growth. Comparable to soccer’s Opta is Abios, which provides match data on a variety of professional e-sports titles. And a simple search on LinkedIn, Wanted, etc. shows that numerous gaming companies are looking for data scientists.
However, it is still difficult for newcomers to find out what research is being done in this field. Most scholarly works are either pet projects or student projects submitted as graduate coursework; nearly all boast outstanding match prediction rates, yet lack critical information such as the data collection period, the target tier, and the method of feature engineering. Interestingly, the ones that provide more informative conclusions are often based on DotA 2.
So I intend to test the waters of this field, and maybe take a peek at what could be happening within the data science departments of those secretive companies.
Riot (the developer of League of Legends) conveniently provides an API to access all historical player games minus custom games.
The game data is returned as a JSON file. Inside is a detailed breakdown of the game, covering stats such as kills, deaths, assists, gold per minute, first blood, and so on.
Although Riot’s API does not provide a way to fetch a random sample of games from a specific tier, there are a number of ways to circumvent this problem. The most helpful solution I found is described on this page.
One downside of Riot’s API is its rate limit, which can be handled conveniently using the RiotWatcher library. Another annoying problem is the API’s failure to categorize each player’s lane and role correctly, so this has to be done manually by the developer using the timeline data. Surprisingly, there exist machine learning solutions to this, shown here and here.
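For readers unfamiliar with RiotWatcher, here is a minimal sketch of fetching a single match. The api_key, region, and match_id values are placeholders, and the interface shown is the match-v4-era LolWatcher client that was current when this project was done.

from riotwatcher import LolWatcher, ApiError

api_key = "RGAPI-..."     # personal app key issued by Riot (placeholder)
region = "kr"             # platform routing value
match_id = 1234567890     # placeholder match id

watcher = LolWatcher(api_key)   # RiotWatcher respects the rate limit for you

try:
    game_json = watcher.match.by_id(region, match_id)              # full match JSON
    timeline = watcher.match.timeline_by_match(region, match_id)   # per-minute events
except ApiError as err:
    print("Riot API returned status", err.response.status_code)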
Now, the champion-specific player win-rate factor, the main topic of this article, is the win-rate a player shows when playing a specific champion. According to Huang, Kim, and Leung (2015), in 2015 it was possible to predict the match outcome 92.8% of the time using the sum of champion-specific player win-rates for each team. So with just two features, the sum of win-rates for each team, a Naive Bayes classifier could basically tell you whether you were going to win even before playing.
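To make that setup concrete, here is a rough sketch of the two-feature formulation, assuming each player's champion-specific win-rate is already known; the numbers below are fabricated purely for illustration.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row is one game: [sum of blue team's five win-rates, sum of red team's five win-rates]
# Label is 1 if the blue side won, 0 otherwise. All values are made up.
X = np.array([[2.9, 2.4],
              [2.1, 2.6],
              [2.7, 2.8],
              [3.1, 2.2]])
y = np.array([1, 0, 0, 1])

clf = GaussianNB().fit(X, y)
print(clf.predict([[2.8, 2.3]]))   # predicted winner for a new draft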
There are other factors touted to capture win probability. For example, Jiang (2020) gathered five other factors from the first ten minutes of a game and predicted the match outcome with 70% accuracy using various ML algorithms.
The most impressive is Hall (2017), where a win prediction accuracy of over 80% is achieved through masterful feature engineering of the basic statistics of players’ past ranked games. Notably, Hall (2017) does not use the champion-specific player win-rate factor.
Now, it so happens that I recently remembered I had applied to Riot for a Personal App API key a long time ago. When I checked, it had been approved! Since I now had the API key, I needed a project to use it on.
So first, I set up a data pipeline that gathers the necessary game JSON files into an AWS S3 bucket, then feature-engineers them and stores the results in MongoDB.
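A minimal sketch of what that storage step might look like, assuming boto3 and pymongo, with bucket, cluster, and collection names that are placeholders of my own choosing:

import json
import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
mongo = MongoClient("mongodb+srv://<user>:<password>@<cluster-url>")   # placeholder URI
features = mongo["lol"]["match_features"]   # hypothetical database / collection names

def store_raw_match(match_id, game_json, bucket="lol-raw-matches"):
    # keep the untouched JSON in S3 so features can be re-derived later
    s3.put_object(Bucket=bucket,
                  Key=f"matches/{match_id}.json",
                  Body=json.dumps(game_json))

def store_features(match_id, feature_doc):
    # one document per root game, keyed by match id
    features.insert_one({"match_id": match_id, **feature_doc})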
Then naturally my first thought was: why don’t I try to redeem myself for my terrible project with look-ahead bias by doing the exact same thing, with proper data?
Assumptions
This is a list of basic assumptions I made while working on the project.
- Challenger-tier games will be the most similar to the professional games.
- A reasonable time span for a player to retain consistent performance with a champion will be 5 weeks.
- If a player has never played a champion in the most recent 5 weeks, then his/her performance will be similar to the picked champion’s overall performance within those 5 weeks (a sketch of this fallback is shown below).
You will notice that the “5 weeks” here seems arbitrary. It is.
It is, in a sense, a hyper-parameter that I was too lazy to optimize. However, it is a reasonable guess, as a player will probably feel more awkward controlling a champion he/she has not played for more than 5 weeks.
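The third assumption translates into a simple fallback when computing the win-rate feature. A sketch, where window_games and champion_baselines are hypothetical lookups standing in for the actual data stores:

def champion_specific_winrate(account_id, champion_id, window_games, champion_baselines):
    # window_games: (account_id, champion_id) -> that player's games on that champion
    #               within the 5-week window
    # champion_baselines: champion_id -> the champion's overall 5-week win-rate
    games = window_games.get((account_id, champion_id), [])
    if not games:
        # assumption 3: no recent games on this champion, so fall back to the
        # champion's overall win-rate over the same 5 weeks
        return champion_baselines[champion_id]
    wins = sum(1 for g in games if g["win"])
    return wins / len(games)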
Data Collection
To avoid repeating my earlier mistake, it is crucial to make sure that the input features of the machine learning model are collected from the time period before the target game takes place. The collection procedure was as follows (a rough code sketch of the first few steps appears after the list).
- Assemble a list of Challenger-tier players as of December 5th, 2020.
- Fetch the 5 most recent solo-ranked games of each player — these will be called “root games”.
- For each root game, assemble tuples of player account ID and the picked champion ID.
- For each tuple of player account ID and champion ID, fetch all games within the recent 5 weeks with matching tuples — these will be called “tail games”.
- Upload all root games and tail games on an AWS S3 bucket.
- Extract features from the root games and store them in a MongoDB free cluster.
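Below is a rough sketch of the first few steps, again using the match-v4-era RiotWatcher interface. The queue id 420 (ranked solo) and the decision to follow only the seed player are simplifications of my own; the actual pipeline walked all ten participants of each root game, and pagination and rate-limit handling are omitted here.

from riotwatcher import LolWatcher

watcher = LolWatcher("RGAPI-...")   # placeholder key
region = "kr"
FIVE_WEEKS_MS = 5 * 7 * 24 * 3600 * 1000

# Step 1: Challenger-tier players as of the snapshot date
challenger = watcher.league.challenger_by_queue(region, "RANKED_SOLO_5x5")

root_games, tail_games = [], []
for entry in challenger["entries"]:
    account_id = watcher.summoner.by_id(region, entry["summonerId"])["accountId"]

    # Step 2: the player's 5 most recent solo-ranked games ("root games")
    matchlist = watcher.match.matchlist_by_account(
        region, account_id, queue=[420], begin_index=0, end_index=5)
    for ref in matchlist["matches"]:
        root_games.append(watcher.match.by_id(region, ref["gameId"]))

        # Steps 3-4: the player's recent games on the same champion, kept only
        # if they fall in the 5 weeks before the root game ("tail games")
        on_champ = watcher.match.matchlist_by_account(
            region, account_id, champion=[ref["champion"]])
        for t in on_champ["matches"]:
            if ref["timestamp"] - FIVE_WEEKS_MS <= t["timestamp"] < ref["timestamp"]:
                tail_games.append(t)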
The steps above become more entangled once exception handling is added. The Riot API throws rate-limit errors far too frequently, and even with the help of RiotWatcher we sometimes receive unfriendly responses. There are two ways to tackle this problem.
(1) Have a loop that keeps retrying until the request succeeds (pseudo-code below).
import time

while True:
    try:
        # watcher, region and match_id as set up earlier
        game_json = watcher.match.by_id(region, match_id)
        break                      # success: leave the retry loop
    except Exception:
        time.sleep(120)            # back off, then retry the same request
        continue
(2) Log it, move on, then revisit later (pseudo-code below).
import logging

logger = logging.getLogger(__name__)

for account_id, match_id in some_outer_loop:   # yields (account_id, match_id) pairs
    try:
        game_json = watcher.match.by_id(region, match_id)
    except Exception:
        # record enough context to revisit the failed request later
        logger.warning("failed match %s for account %s", match_id, account_id)
        continue
I adopted the second solution and, in the process, lost some data points. If you are short on time, however, the second solution is the better choice, as there is zero risk of your program getting stuck on a faulty request. I also suspect that the Riot API does not return a valid response for every valid request, which is another point in favor of the second approach…though I could be wrong.
In the end, I gathered 1087 root games, and a corresponding number of tail games (50,000~60,000 matches).
Feature Engineering
For the task here, feature engineering is quite simple: gather the past matches in which player X played champion Y in the 5 weeks before the root game in question, then compute the win-rate over those matches. A sketch of how a root game can be reduced to a feature vector is shown below.
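Assuming the raw match JSON has already been pulled from S3, one possible layout is the ten champion-specific win-rates of a root game, blue side first, then red side. This is only a sketch; winrate_fn stands in for the fallback function described under Assumptions, and the field names follow the match-v4 JSON format.

def root_game_features(root_game, winrate_fn):
    # map participantId -> accountId using the participantIdentities block
    id_to_account = {p["participantId"]: p["player"]["accountId"]
                     for p in root_game["participantIdentities"]}
    # participants 1-5 are the blue side, 6-10 the red side
    rows = sorted(root_game["participants"], key=lambda p: p["participantId"])
    feats = [winrate_fn(id_to_account[p["participantId"]], p["championId"])
             for p in rows]
    # label: 1 if the blue side (teamId 100) won
    blue = next(t for t in root_game["teams"] if t["teamId"] == 100)
    label = 1 if blue["win"] == "Win" else 0
    return feats, label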
Training Process
To start off, I tried four machine learning algorithms for the analysis.
- Stochastic Gradient Descent (SGD)
- Support Vector Classification (SVC)
- Random Forest Classifier (RandomForest)
- Feed-forward Neural Network (FNN)
These four are the most convenient to implement using scikit-learn and PyTorch.
80% of the root games were used as the training set, 10% as the dev set, and the remaining 10% as the test set. I used grid search to find the best hyper-parameters; a sketch of the scikit-learn side is shown below.
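The parameter grids here are illustrative rather than the exact grids I searched, and the FNN (trained separately in PyTorch) is omitted for brevity.

import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, PredefinedSplit
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# X: one feature vector per root game, y: blue-side win labels (numpy arrays)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=42)
X_dev, X_test, y_dev, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

# use the dev set (rather than cross-validation) to pick hyper-parameters
X_search = np.vstack([X_train, X_dev])
y_search = np.concatenate([y_train, y_dev])
dev_split = PredefinedSplit([-1] * len(X_train) + [0] * len(X_dev))

models = {
    "sgd": (SGDClassifier(), {"alpha": [1e-4, 1e-3, 1e-2]}),
    "svc": (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    "rf":  (RandomForestClassifier(), {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
}

for name, (model, grid) in models.items():
    search = GridSearchCV(model, grid, cv=dev_split).fit(X_search, y_search)
    print(name, search.best_params_, "test accuracy:", search.score(X_test, y_test))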
Result
The results are very different from those of Huang, Kim, and Leung (2015) and from my former article on the same topic.
First, unlike Huang, Kim, and Leung (2015), where even relatively simple algorithms performed superbly at above 80% accuracy, my results are dismal. Test accuracy below 50% is a clear indicator that the SGD and SVC models have somehow overfitted to the training set.
Second, unlike the model in my previous article, which showed a whopping 70% accuracy, the current feed-forward neural net only reaches about 56% accuracy in match prediction.
So how do we make sense out of this?
First, changes in the game since 2015 could explain this result. Over the past five years, Riot has worked ceaselessly to balance the game while also introducing new features, and that could have reduced the impact of the champion-specific player win-rate factor.
Another possible reason is the target tier. While all of my games were collected at roughly Challenger level, that information is not specified in Huang, Kim, and Leung (2015). All we know about the latter is that 600 games were collected, of which 300 were used for training and 300 for testing.
The results above show that while the champion-specific player win-rate is not as dominant as it perhaps once was in 2015, it still has an impact on match outcomes. The near-56% prediction accuracy of the feed-forward neural network is certainly higher than random guessing. ML algorithms also matter here: merely adding up the champion-specific player win-rates of each team and picking the team with the higher sum (a sanity check: 49% accuracy) performs worse than the test accuracy of RandomForest and the FNN.
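For reference, that sanity-check baseline is nothing more than the following, assuming the same ten-column feature layout sketched earlier:

import numpy as np

def naive_baseline(X, y):
    # X: rows of ten win-rates, columns 0-4 blue side, 5-9 red side
    # predict a blue-side win whenever blue's summed win-rate is higher
    preds = (X[:, :5].sum(axis=1) > X[:, 5:].sum(axis=1)).astype(int)
    return (preds == y).mean()   # accuracy of the rule-based guess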
So, in conclusion, while the champion-specific player win-rate factor still seems relevant, its effect is slight enough that further replication may be necessary for a conclusive analysis.