Dataset of Diverse Text for Language Modeling, the Top 10 Computer Vision Papers of 2020, and a blog post on World Models and Artificial Curiosity in today’s Data Science Daily 📰
The Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets — both existing and newly constructed — many of which derive from academic or professional sources.
Twitter thread: https://twitter.com/nabla_theta/status/1345130409579794432
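As a quick illustration of how the corpus is organized: the Pile ships as zstd-compressed JSON-Lines shards, where each line is one document with a "text" field and a "meta" field naming which of the 22 subsets it came from. The sketch below is hypothetical and, for simplicity, assumes a shard already decompressed to plain .jsonl (reading the real .jsonl.zst files additionally needs a zstd library); the two sample documents are made up for illustration.

```python
import json
import os
import tempfile

# Two made-up stand-in documents in the Pile's per-line schema.
sample = [
    {"text": "Example academic abstract ...", "meta": {"pile_set_name": "ArXiv"}},
    {"text": "Example source file ...", "meta": {"pile_set_name": "Github"}},
]

# Write a tiny stand-in shard to disk, so the sketch is self-contained.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for doc in sample:
        f.write(json.dumps(doc) + "\n")
    path = f.name

def iter_documents(jsonl_path):
    """Stream (subset_name, text) pairs one document at a time,
    without loading the whole shard into memory."""
    with open(jsonl_path) as fh:
        for line in fh:
            doc = json.loads(line)
            yield doc["meta"]["pile_set_name"], doc["text"]

subsets = [name for name, _ in iter_documents(path)]
os.remove(path)
print(subsets)  # ['ArXiv', 'Github']
```

Streaming line by line matters here: at 825 GiB, the full corpus cannot be held in memory, so per-document iteration is the natural access pattern.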
TowardsAI has compiled a list of the top 10 Computer Vision papers published in 2020. It is a curated list of the latest breakthroughs in AI and CV, each with a clear video explanation, a link to a more in-depth article, and code (where applicable).
Watch here: https://youtu.be/CP3E9Iaunm4
Full article here: https://whats-ai.medium.com/top-10-computer-vision-papers-2020-aa606985f688
GitHub repository with all videos, articles, code, and references here: https://github.com/louisfb01/Top-10-Computer-Vision-Papers-2020
1990: Planning & Reinforcement Learning with Recurrent World Models and Artificial Curiosity
From the article:
What does all of this have to do with the seemingly elusive concepts of consciousness and self-awareness? My first deep learning machine of 1991 [UN0-UN3] emulates aspects of consciousness as follows. It uses unsupervised learning and predictive coding [UN0-UN3] [SNT] to compress observation sequences. A so-called “conscious chunker RNN” attends to unexpected events that surprise a lower-level so-called “subconscious automatiser RNN.” The chunker RNN learns to “understand” the surprising events by predicting them. The automatiser RNN uses a neural knowledge distillation procedure of 1991 [UN0-UN2] (see Sec. 2 of [MIR]) to compress and absorb the formerly “conscious” insights and behaviours of the chunker RNN, thus making them “subconscious.”
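The "compress and absorb" step the quote describes is a knowledge-distillation pattern: a student network is trained to reproduce the output distribution of a teacher. Below is a hypothetical minimal sketch of that pattern with plain linear-softmax models standing in for the chunker and automatiser RNNs; all names, dimensions, and hyperparameters are illustrative and not from the 1991 system.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Fixed "teacher" model standing in for the chunker RNN: it already
# "understands" the data and produces soft target distributions.
X = rng.normal(size=(64, 8))           # observation features
W_teacher = rng.normal(size=(8, 4))
targets = softmax(X @ W_teacher)       # teacher's soft predictions

# "Student" standing in for the automatiser RNN: starts from scratch
# and is trained by cross-entropy descent to match the teacher.
W_student = np.zeros((8, 4))
lr = 0.5
for _ in range(500):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - targets) / len(X)  # softmax cross-entropy gradient
    W_student -= lr * grad

# After training, the student's distribution tracks the teacher's:
# the chunker's "conscious" insight has been absorbed.
gap = np.abs(softmax(X @ W_student) - targets).max()
print(f"max per-class probability gap: {gap:.4f}")
```

Because the student here has the same model class as the teacher and the loss is convex, gradient descent drives the gap toward zero; in the RNN setting the same objective (match the teacher's predictions) is what transfers the chunker's behaviour into the automatiser.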
Reach out to us at [email protected] to get featured here. Learn data science and machine learning with free hands-on data science courses on Jovian.