Ok, if you own a KIA automobile please read this…👇
Kia was apparently hit with ransomware earlier this month, and the attackers want to be paid in full — they're asking for a cool $21 million in BTC. Kia has denied it was ever hacked, although it did recently suffer network outages.
Read more here.
Consequences of the Hack 👀
“Kia’s key connected services remain offline, meaning customers are unable to pay their car loans, remotely start their vehicles, or other functions using Kia’s infrastructure.” — The Drive
OpenAI opened the week with a pre-emptive strike w/r/t its DALL-E project by releasing *part* of the model: the d-VAE, the image reconstruction component. The language model that actually generates image tokens remains unreleased, and without it, we can’t reproduce what they demoed in their paper.
If you’re still interested in OpenAI’s CLIP, we found a YUUUGE list on Reddit highlighting a ton of Colab/Jupyter notebooks running the model!! 🔥🔥
⚠ WARNING length of list may impair your judgement, we recommend you avoid driving or operating heavy machinery while reading. ⚠
- The Big Sleep: BigGANxCLIP.ipynb — Colaboratory by advadnoun. Uses BigGAN to generate images. To my knowledge, this is the first CLIP-steered BigGAN app that was released. Instructions and examples. Notebook copy by levindabhi.
- (Added Feb. 15, 2021) Drive-Integrated The Big Sleep: BigGANxCLIP.ipynb — Colaboratory by advadnoun. Uses BigGAN to generate images.
- Big Sleep — Colaboratory by lucidrains. Uses BigGAN to generate images. The GitHub repo has a local machine version. GitHub.
- The Big Sleep Customized NMKD Public.ipynb — Colaboratory by nmkd. Uses BigGAN to generate images. Allows multiple samples to be generated in a run.
- Text2Image — Colaboratory by tg_bomze. Uses BigGAN to generate images. GitHub.
- Text2Image_v2 — Colaboratory by tg_bomze. Uses BigGAN to generate images. GitHub.
- Text2Image_v3 — Colaboratory by tg_bomze. Uses BigGAN (default) or Sigmoid to generate images. GitHub.
- (Added Feb. 26, 2021) Image Guided Big Sleep Public.ipynb — Colaboratory by jdude_. Uses BigGAN to generate images. Reddit post.
- ClipBigGAN.ipynb — Colaboratory by eyaler. Uses BigGAN to generate images/videos. GitHub. Notebook copy by levindabhi.
- WanderCLIP.ipynb — Colaboratory by eyaler. Uses BigGAN (default) or Sigmoid to generate images/videos. GitHub.
- Story2Hallucination.ipynb — Colaboratory by bonkerfield. Uses BigGAN to generate images/videos. GitHub.
- (Added around Feb. 7, 2021) Story2Hallucination_GIF.ipynb — Colaboratory by bonkerfield. Uses BigGAN to generate images. GitHub.
- (Added Feb. 24, 2021) Colab-BigGANxCLIP.ipynb — Colaboratory by styler00dollar. Uses BigGAN to generate images. “Just a more compressed/smaller version of that [advadnoun’s] notebook”. GitHub.
- CLIP-GLaSS.ipynb — Colaboratory by Galatolo. Uses BigGAN (default) or StyleGAN to generate images. The GPT2 config is for image-to-text, not text-to-image. GitHub.
- (Added Feb. 15, 2021) dank.xyz. Uses BigGAN or StyleGAN to generate images. An easy-to-use website for accessing The Big Sleep and CLIP-GLaSS. To my knowledge this site is not affiliated with the developers of The Big Sleep or CLIP-GLaSS. Reddit reference.
- (Added Feb. 25, 2021) Aleph-Image: CLIPxDAll-E.ipynb — Colaboratory by advadnoun. Uses DALL-E’s discrete VAE (variational autoencoder) component to generate images. Twitter reference. Reddit post.
- (Added Feb. 26, 2021) Aleph2Image (delta): CLIP+DALL-E decoder.ipynb — Colaboratory by advadnoun. Uses DALL-E’s discrete VAE (variational autoencoder) component to generate images. Twitter reference. Reddit post.
- (Added Feb. 27, 2021) Copy of working wow good of gamma aleph2img.ipynb — Colaboratory by advadnoun. Uses DALL-E’s discrete VAE (variational autoencoder) component to generate images. Twitter reference.
- (Added Feb. 27, 2021) Aleph-Image: CLIPxDAll-E (with white blotch fix #2) — Colaboratory by thomash. Uses DALL-E’s discrete VAE (variational autoencoder) component to generate images. Applies the white blotch fix mentioned here to advadnoun’s “Aleph-Image: CLIPxDAll-E” notebook.
- (Added Feb. 14, 2021) GA StyleGAN2 WikiArt CLIP Experiments — Pytorch — clean — Colaboratory by pbaylies. Uses StyleGAN to generate images. More info.
- (Added Feb. 15, 2021) StyleCLIP — Colaboratory by orpatashnik. Uses StyleGAN to generate images. GitHub. Twitter reference. Reddit post.
- (Added Feb. 15, 2021) StyleCLIP by vipermu. Uses StyleGAN to generate images.
- (Added Feb. 24, 2021) CLIP_StyleGAN.ipynb — Colaboratory by levindabhi. Uses StyleGAN to generate images.
- (Added Feb. 23, 2021) TediGAN — Colaboratory by weihaox. Uses StyleGAN to generate images. GitHub. I got error “No pre-trained weights found for perceptual model!” when I used the Colab notebook, which was fixed when I made the change mentioned here. After this change, I still got an error in the cell that displays the images, but the results were in the remote file system. Use the “Files” icon on the left to browse the remote file system.
- TADNE and CLIP — Colaboratory by nagolinc. Uses TADNE (“This Anime Does Not Exist”) to generate images. GitHub.
- CLIP + TADNE (pytorch) v2 — Colaboratory by nagolinc. Uses TADNE (“This Anime Does Not Exist”) to generate images. Instructions and examples. GitHub. Notebook copy by levindabhi.
- (Added Feb. 24, 2021) clipping-CLIP-to-GAN by cloneofsimo. Uses FastGAN to generate images.
- CLIP & gradient ascent for text-to-image (Deep Daze?).ipynb — Colaboratory by advadnoun. Uses SIREN to generate images. To my knowledge, this is the first app released that uses CLIP for steering image creation. Instructions and examples. Notebook copy by levindabhi.
- Deep Daze — Colaboratory by lucidrains. Uses SIREN to generate images. The GitHub repo has a local machine version. GitHub. Notebook copy by levindabhi.
- CLIP-SIREN-WithSampleDL.ipynb — Colaboratory by norod78. Uses SIREN to generate images.
- (Added Feb. 17, 2021) Text2Image Siren+.ipynb — Colaboratory by eps696. Uses SIREN to generate images. Twitter reference. Example #1. Example #2. Example #3.
- (Added Feb. 24, 2021) Colab-deep-daze — Colaboratory by styler00dollar. Uses SIREN to generate images. I did not get this notebook to work, but your results may vary. GitHub.
- (Added Feb. 18, 2021) Text2Image FFT.ipynb — Colaboratory by eps696. Uses FFT (Fast Fourier Transform) from Lucent/Lucid to generate images. Twitter reference. Example #1. Example #2.
Sebastian Ruder, one of the pioneers of transfer learning in NLP, has an awesome new blog post on recent advances in fine-tuning, which he breaks down into five categories. This is a must-read if you are anywhere near the realm of NLP.
Communities is a Python library for detecting community structure in graphs. It implements the following algorithms:
- Louvain method
- Girvan-Newman algorithm
- Hierarchical clustering
- Spectral clustering
- Bron-Kerbosch algorithm
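The library's own implementations aren't shown here, but as a toy illustration of the Girvan-Newman idea — repeatedly remove the edge with the highest betweenness until the graph splits into communities — here's a minimal pure-Python sketch. The graph and all function names are invented for this example, and the betweenness estimate is a naive one-shortest-path-per-pair approximation, not the library's code:

```python
from collections import deque
from itertools import combinations

def bfs_path(adj, s, t):
    # Return one shortest path from s to t via BFS, or None if disconnected.
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None

def edge_betweenness(adj):
    # Naive estimate: count how many pairwise shortest paths cross each edge.
    counts = {}
    for s, t in combinations(adj, 2):
        path = bfs_path(adj, s, t)
        if path:
            for u, v in zip(path, path[1:]):
                e = tuple(sorted((u, v)))
                counts[e] = counts.get(e, 0) + 1
    return counts

def components(adj):
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = set(), deque([s])
        while q:
            u = q.popleft()
            if u in comp:
                continue
            comp.add(u)
            q.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps

def girvan_newman_split(adj):
    # Remove highest-betweenness edges until the graph falls apart.
    adj = {u: set(vs) for u, vs in adj.items()}
    while len(components(adj)) < 2:
        (u, v), _ = max(edge_betweenness(adj).items(), key=lambda kv: kv[1])
        adj[u].discard(v)
        adj[v].discard(u)
    return components(adj)

# Two triangles joined by a single bridge edge (2-3): the bridge carries
# every cross-community shortest path, so it is removed first.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(girvan_newman_split(graph))  # → [{0, 1, 2}, {3, 4, 5}]
```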
Helpful post on the Stack Overflow blog discussing Python libraries such as numpy, pandas, matplotlib, and seaborn. There’s a YouTube video on the page where they explore a New York City housing dataset. It’s at the introductory level.
BertViz is a tool for visualizing attention in Transformer models, supporting all models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, etc.).
Includes a Colab 😎
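As a refresher on the quantity BertViz visualizes — per-head attention weights, i.e. the softmax of scaled query-key dot products — here's a minimal pure-Python sketch. The toy Q/K matrices are invented for illustration; this is not BertViz's code:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(Q, K, d):
    # Each query row attends over all key rows: softmax(QK^T / sqrt(d)).
    # These rows are exactly what attention visualizers render as heatmaps.
    weights = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights.append(softmax(scores))
    return weights

# Three toy "tokens" with dimension d=2.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
W = attention_weights(Q, K, 2)
for row in W:
    print([round(w, 2) for w in row])  # each row sums to 1
```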
Check out this awesome library if you are into topic modeling. It includes a zero-shot cross-lingual variant as well as a bag-of-words approach for various use cases.
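As a quick reminder of what a bag-of-words input looks like — each document becomes a vector of word counts over a shared vocabulary — here's a generic pure-Python sketch (not the library's API; the documents and helper name are made up):

```python
from collections import Counter

docs = ["the cat sat on the mat", "the dog chased the cat"]

# Shared vocabulary across the corpus, in a fixed (sorted) order.
vocab = sorted({w for d in docs for w in d.split()})

def bow(doc):
    # Count each vocabulary word's occurrences in the document.
    counts = Counter(doc.split())
    return [counts.get(w, 0) for w in vocab]

print(vocab)
for d in docs:
    print(bow(d))
```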
A new library from Hugging Face for pruning models, resulting in fewer parameters while maintaining accuracy. Their sparsity notebook can be found on the Super Duper NLP Repo.
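The post doesn't spell out HF's exact pruning method, but as a toy illustration of the general idea — zeroing out low-importance weights so most parameters drop away while the large ones survive — here's a hedged magnitude-pruning sketch in pure Python (the function name and thresholding rule are invented for this example, not taken from the library):

```python
def magnitude_prune(weights, sparsity):
    # Zero out roughly the smallest-magnitude `sparsity` fraction of weights.
    # Ties at the threshold may zero a few extra weights.
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.01, -0.5, 0.03, 0.9, -0.02, 0.4]
print(magnitude_prune(w, 0.5))  # → [0.0, -0.5, 0.0, 0.9, 0.0, 0.4]
```

In real libraries, pruning is usually applied per-layer during or after training and paired with fine-tuning to recover accuracy; this sketch only shows the masking step.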