TensorFlow GPU RAM Management

February 6, 2021 by Omkarsinghrajpurohit

Managing GPU RAM in TensorFlow can be tricky, but TensorFlow offers several options to control how it is allocated.

Problem: by default, TensorFlow automatically grabs all the RAM on all available GPUs the first time you run a graph, so you will not be able to start a second TensorFlow program while the first one is still running. If you try, you will get an error like the following:

E [….]/cuda_driver.cc:965] failed to allocate 3.66G from device: CUDA_ERROR_OUT_OF_MEMORY

1. One solution is to run each process on a different set of GPU cards:

$ CUDA_VISIBLE_DEVICES=0,1 python3 om.py

In another terminal:

$ CUDA_VISIBLE_DEVICES=2,3 python3 om1.py

Program 1 will see GPU cards 0 and 1, and program 2 will see GPU cards 2 and 3.
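If you prefer to control device visibility from inside the script rather than on the command line, you can set the environment variable before TensorFlow is imported. A minimal sketch (the device indices are just an example):

import os
# Must be set before importing TensorFlow; once TensorFlow has
# enumerated the GPUs, changing this variable has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
import tensorflow as tf  # this process now only sees GPU cards 0 and 1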

2. Another option is to tell TensorFlow to grab only a fraction of the memory. For example, to make TensorFlow grab only 40% of each GPU's memory, you create a ConfigProto object and set its gpu_options.per_process_gpu_memory_fraction option to 0.4:

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
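The fraction only takes effect once the config is passed to the session. A minimal sketch using the TF 1.x API assumed by the snippet above:

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # grab at most 40% of each GPU's memory
session = tf.Session(config=config)  # every GPU this process sees gets a 40% cap

With this setting, two such programs can share the same GPUs as long as their fractions add up to no more than the available memory.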

3. Yet another option is to tell TensorFlow to grab memory only when it actually needs it. To do this, you must set:

config.gpu_options.allow_growth = True
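As with the previous option, this only takes effect when the config is passed to the session. A minimal sketch (TF 1.x API; in TF 2.x the rough equivalent is tf.config.experimental.set_memory_growth):

import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.Session(config=config)

Note that memory grabbed this way is not released back to the operating system: TensorFlow only grows its allocation, it never shrinks it.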

Filed Under: Machine Learning
