NEO Share

Sharing The Latest Tech News


Daily Newsletter — 4th January 2021

Hemanth Janesh

A dataset of diverse text for language modeling, the top 10 computer vision papers of 2020, and a blog post on world models and artificial curiosity in today’s Data Science Daily 📰

The Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets — both existing and newly constructed — many of which derive from academic or professional sources.

Website: https://pile.eleuther.ai/

Paper: https://pile.eleuther.ai/paper.pdf

Twitter thread: https://twitter.com/nabla_theta/status/1345130409579794432
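The Pile is distributed as compressed JSON Lines shards, with each record carrying a `text` field and a `meta` field naming the subset it came from. A minimal stdlib-only reading sketch (the two-record shard below is synthetic, and real shards are zstandard-compressed, which is omitted here):

```python
import collections
import io
import json

# Each Pile record is one JSON object per line: a "text" field plus
# "meta" naming the subset ("pile_set_name") it was drawn from.
def read_pile_lines(stream):
    for line in stream:
        rec = json.loads(line)
        yield rec["text"], rec["meta"]["pile_set_name"]

# Synthetic two-record shard for illustration (not real Pile data).
shard = io.StringIO(
    '{"text": "def f(x): return x", "meta": {"pile_set_name": "GitHub"}}\n'
    '{"text": "We prove the lemma.", "meta": {"pile_set_name": "ArXiv"}}\n'
)

# Histogram of records per subset, e.g. for checking the corpus mix.
counts = collections.Counter(subset for _, subset in read_pile_lines(shard))
print(counts)
```

On a real shard you would wrap the file in a zstandard decompression stream before passing it to `read_pile_lines`.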

TowardsAI has compiled the top 10 computer vision papers published in 2020: a curated list of the latest breakthroughs in AI and computer vision, each with a clear video explanation, a link to a more in-depth article, and code where applicable.

Watch here: https://youtu.be/CP3E9Iaunm4

Full article here: https://whats-ai.medium.com/top-10-computer-vision-papers-2020-aa606985f688

GitHub repository with all videos, articles, code, and references here: https://github.com/louisfb01/Top-10-Computer-Vision-Papers-2020

1990: Planning & Reinforcement Learning with Recurrent World Models and Artificial Curiosity

From the article:

What does all of this have to do with the seemingly elusive concepts of consciousness and self-awareness? My first deep learning machine of 1991 [UN0-UN3] emulates aspects of consciousness as follows. It uses unsupervised learning and predictive coding [UN0-UN3] [SNT] to compress observation sequences. A so-called “conscious chunker RNN” attends to unexpected events that surprise a lower-level so-called “subconscious automatiser RNN.” The chunker RNN learns to “understand” the surprising events by predicting them. The automatiser RNN uses a neural knowledge distillation procedure of 1991 [UN0-UN2] (see Sec. 2 of [MIR]) to compress and absorb the formerly “conscious” insights and behaviours of the chunker RNN, thus making them “subconscious.”

Link: http://people.idsia.ch/~juergen/world-models-planning-curiosity-fki-1990.html
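The "compress and absorb" step described above is an early form of what is now called knowledge distillation: a student model is trained to reproduce a teacher's output distribution. A minimal, illustrative numpy sketch of that core idea (toy linear models and random data, standing in for the chunker/automatiser RNNs, not Schmidhuber's 1991 setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "teacher" (the chunker): a fixed random linear classifier whose
# soft output distributions are the distillation targets.
X = rng.normal(size=(256, 8))
W_teacher = rng.normal(size=(8, 3))
soft_targets = softmax(X @ W_teacher)

# "Student" (the automatiser) starts from zero and is trained to match
# the teacher via cross-entropy against the soft targets.
W_student = np.zeros((8, 3))
lr = 0.5

def distill_loss(W):
    p = softmax(X @ W)
    return -np.mean(np.sum(soft_targets * np.log(p + 1e-12), axis=1))

loss_before = distill_loss(W_student)
for _ in range(200):
    p = softmax(X @ W_student)
    grad = X.T @ (p - soft_targets) / len(X)  # softmax cross-entropy gradient
    W_student -= lr * grad
loss_after = distill_loss(W_student)

print(loss_before, loss_after)  # the loss shrinks as the student absorbs the teacher
```

In the article's terms, once the student matches the teacher, the formerly "conscious" predictions have become cheap, "subconscious" reflexes.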

Reach out to us at community@jovian.ai to get featured here. Learn data science and machine learning with free hands-on courses on Jovian.

Follow us on Twitter, LinkedIn, and YouTube to stay updated.

Filed Under: Machine Learning

Copyright © 2025 NEO Share
