Daily Data Science Tip #10

February 3, 2021 by systems

Photo by Rafael Pol on Unsplash

Why do we batch the dataset before training?

Richmond Alake

As a Machine Learning practitioner, you’ve probably wondered why it is standard practice to batch training data before feeding it to a neural network.

A straightforward answer is that training data fed to neural networks is batched mainly for memory optimisation. Placing a whole dataset, for example all 60,000 images of the MNIST training set, in a GPU’s memory at once is very expensive. You would probably run into the infamous “RuntimeError: CUDA error: out of memory”.

To avoid memory issues when training a neural network, large datasets are split into batches of, say, 16, 32, or 128 samples. The batch size depends on the memory capacity of your compute resource.
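As a concrete illustration, here is a minimal sketch of batching MNIST with PyTorch’s DataLoader. The original post does not name a framework, so PyTorch and torchvision are assumed here; the batch size of 32 is just one of the common choices mentioned above.

```python
# Minimal sketch (PyTorch/torchvision assumed): batch MNIST so that only
# one batch at a time is moved into GPU memory, not all 60,000 images.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load the 60,000-image MNIST training set (downloads on first run).
train_set = datasets.MNIST(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

# batch_size controls how many samples are loaded per training step;
# 32 is a common default, but it should be tuned to the device's memory.
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for images, labels in train_loader:
    # Only this batch (32 images) is transferred to the GPU.
    images, labels = images.to(device), labels.to(device)
    # ... forward pass, loss, and backward pass would go here ...
    break  # a single batch is shown for illustration
```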

Filed Under: Machine Learning
