If you want to stay on top of the latest in distributed training with PyTorch and Ray, this is a healthy intro:
“Transformers interpret allows any transformers model to be explained in just two lines. It even supports visualizations in both notebooks and as savable html files.”
So, for example, if you were doing sentiment analysis on the sentence below:
“I love you, I like you”
This output 👇 tells you which words had the biggest impact on inference.
[('BOS_TOKEN', 0.0),
('I', 0.46820529249283205),
('love', 0.46061853275727177),
('you', 0.566412765400519),
(',', -0.017154456486408547),
('I', -0.053763869433472),
('like', 0.10987746237531228),
('you', 0.48221682341218103),
('EOS_TOKEN', 0.0)]
Then you visualize it with 1 line of code:
cls_explainer.visualize("distilbert_viz.html")
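If you'd rather work with the raw scores than the HTML view, the attribution list above is just plain `(token, score)` tuples, so you can rank tokens yourself. A minimal sketch in pure Python (the `attributions` list is copied from the output above; variable names are illustrative, not part of the library's API):

```python
# Word attributions as returned by the explainer (copied from the output above).
attributions = [
    ("BOS_TOKEN", 0.0),
    ("I", 0.46820529249283205),
    ("love", 0.46061853275727177),
    ("you", 0.566412765400519),
    (",", -0.017154456486408547),
    ("I", -0.053763869433472),
    ("like", 0.10987746237531228),
    ("you", 0.48221682341218103),
    ("EOS_TOKEN", 0.0),
]

# Drop special tokens and sort by absolute attribution: the bigger the
# magnitude, the more that token pushed the model toward (or away from)
# the predicted class.
ranked = sorted(
    (t for t in attributions if t[0] not in ("BOS_TOKEN", "EOS_TOKEN")),
    key=lambda t: abs(t[1]),
    reverse=True,
)

for token, score in ranked:
    print(f"{token:>6}  {score:+.3f}")
```

Here the first "you" comes out on top, with "love" and the second "you" close behind, matching the intuition for a positive-sentiment sentence.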
“ConvLab-2 is an open-source toolkit that enables researchers to build task-oriented dialog systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems.”
The creator of QuestGen library, Ramsri Golla, has a new course on Udemy!
And I got a discount coupon you can use for his program. Here’s a description of what you’ll learn in case you are interested:
- Generate assessments like MCQs, True/False questions, etc. from any content using state-of-the-art natural language processing techniques.
- Apply recent advancements like BERT, OpenAI GPT-2, and T5 transformers to solve real-world problems in edtech.
- Use NLP libraries like spaCy, NLTK, AllenNLP, Hugging Face Transformers, etc.
- Use Google Colab environment to run all these algorithms.
- 4 hours on-demand video 🤖
25% Off Coupon:
Electrical Engineering and Computer Science courses at MIT.
An article describing the genesis of Wikipedia’s API: the Wikimedia Foundation (WMF) originally lacked a holistic API strategy, and the article walks through how they solved that problem. The API was completed in December 2020.
Source Code:
Includes code…Hope you like YML files. 😁
Where unreproducible papers come to live…