The Nvidia RTX 3080 is a great choice for the price. It features the new NVIDIA Ampere architecture, 2nd-generation RT Cores, 3rd-generation Tensor Cores, and 10GB of GDDR6X memory on a 320-bit interface, which is great for deep learning.
But why the RTX 3080, and not the RTX 2080 Ti?
The CUDA core count, which directly impacts deep learning performance, jumps from 4,352 on the 2080 Ti to a whopping 8,704 on the RTX 3080. That's why the 3080 is the pick, and of course it's much cheaper than the $999 2080 Ti. But if you need multi-GPU support, I suggest you go for the 3090, which supports NVLink. (The 3090 is the only card in the RTX 30 series with NVLink/SLI support.)
For my build, I have used a GIGABYTE GeForce RTX 3080 GAMING OC. You can choose any RTX card from Asus, MSI, or GIGABYTE depending on availability. The Asus TUF and GIGABYTE GeForce RTX™ 3080 both have good thermals and great value for money. I chose the OC version because it has slightly higher clock speeds than the non-OC version.
If you get out-of-memory errors (Resource exhausted: OOM when allocating tensor), you can lower your batch size. If that does not help, you have to go for the 3090, so it's better to estimate your GPU memory needs first. If you go for multiple 3090s, I recommend cards with a blower-style cooler that exhausts heat out the back of the case; otherwise, when you stack two or more open-air cards inline, heat becomes a problem because their coolers dump it inside the case.
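The "lower the batch size until it fits" advice can be sketched as a framework-agnostic retry loop. Here `train_one_epoch` is a hypothetical stand-in for your real training step that simulates a GPU with room for batches of 32 or less; in practice the exception to catch would be your framework's OOM error (TensorFlow raises `tf.errors.ResourceExhaustedError`).

```python
# Sketch: halve the batch size until training fits in GPU memory.
# `train_one_epoch` is a hypothetical stand-in for a real training
# step; here it pretends any batch larger than 32 runs out of VRAM.

def train_one_epoch(batch_size):
    if batch_size > 32:          # simulate a GPU OOM for large batches
        raise MemoryError(f"OOM when allocating tensor (batch={batch_size})")
    return f"trained with batch_size={batch_size}"

def train_with_backoff(batch_size=256, min_batch=1):
    while batch_size >= min_batch:
        try:
            return train_one_epoch(batch_size)
        except MemoryError:
            batch_size //= 2     # lower the batch size and retry
    raise RuntimeError("Even the smallest batch does not fit; you need more VRAM")

print(train_with_backoff())
```

Starting from 256, the loop falls back through 128 and 64 before 32 succeeds; if even `min_batch` fails, that is the signal to step up to a 3090.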
The CPU's primary role here is data preprocessing, such as batch scheduling, along with ordinary OS tasks; the deep learning computation itself always runs on the GPU's CUDA cores. So if you get an Intel CPU instead, that's completely fine.
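The reason a mid-range CPU is enough can be seen in a small stdlib sketch: one thread does the CPU-side preprocessing while the consumer (standing in for the GPU) works on the previous batch, so the two overlap. The function names and the toy "normalize by 255" step are purely illustrative; real pipelines would use something like `tf.data` with prefetching.

```python
import threading
import queue

def preprocess(raw):
    # CPU-side work: normalize a toy "image batch" (illustrative only)
    return [x / 255.0 for x in raw]

def producer(batches, q):
    for raw in batches:
        q.put(preprocess(raw))   # CPU prepares the next batch...
    q.put(None)                  # sentinel: no more data

def consume(q):
    results = []
    while (batch := q.get()) is not None:
        results.append(sum(batch))   # ...while the "GPU" consumes this one
    return results

q = queue.Queue(maxsize=2)           # small buffer, like a prefetch queue
data = [[0, 255], [255, 255]]
t = threading.Thread(target=producer, args=(data, q))
t.start()
out = consume(q)
t.join()
print(out)  # [1.0, 2.0]
```

As long as `preprocess` keeps the queue full, the consumer never waits, which is why the GPU, not the CPU, sets the training speed.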
Because of the AMD processors' higher base clock speeds, I would go for a Ryzen 5 3000-series processor over the all-new 10th-gen Intel i5. If you want a 9th- or 8th-gen i7, that's also fine. But don't go older than that: RTX 30-series GPUs support PCIe 4.0, you will not find a matching motherboard for older CPUs, and your PC will be bottlenecked on an outdated PCIe generation.
You can find Ryzen 5 3600 on Amazon for $199.99.
As you can see, motherboards come in different sizes, and these standard form factors decide the type of case you need. Depending on your needs, check a given motherboard for the number of:
- DIMM slots for RAM
- PCIe slots for GPUs
- M.2 slots for NVMe drives
- IO ports.
If you are planning to add more GPUs in the future, make sure your motherboard has enough PCIe slots and that it's SLI/NVLink-ready.
For my build, I would go for a Gigabyte X570 Gaming X. It has solid VRMs, plenty of slots for upgrades, good thermal design, and XMP support.
RAM (memory) is one of the most important parts of a computer because a big share of the PC's performance and speed depends on the amount and speed of the RAM. The more RAM your CPU has access to, the faster it can process its tasks, up to a certain point; you have to find your system's sweet spot. For the Ryzen 5 3600, the step from 3200MHz to 3600MHz will not bring a big performance gain, so it's not worth the money. Go for 16GB or more if you need it. Clock speeds really matter here: get 3200MHz or faster memory that your processor and motherboard support, and activate the XMP profile to reach the advertised speeds; otherwise the RAM will run below 3000MHz.
For my build, I have gone with Corsair Vengeance LPX 16GB (2x8GB) RAM. This is the non-RGB version, which cuts out some unwanted expense.
I recommend an SSD over an HDD because SSDs are many times faster than HDDs and a lot quieter. We chose a 1TB SSD since datasets nowadays easily run to many gigabytes. Moreover, 1TB is sufficient for us because training and validation data are stored only temporarily and are removed after a model is fully trained and tested.
Rather than the SATA bus, NVMe drives use PCIe, which gives a big performance gain. The transfer protocol is NVMe instead of AHCI, allowing highly efficient parallel I/O. In the end, we are talking 2–3 GB/s here, though the numbers vary from model to model. This helps you feed your GPU at full speed.
My suggestion is to size your NVMe drive to your datasets. For my build, I have chosen a Gigabyte NVMe M.2 256GB. It's enough for the Ubuntu installation, deep learning tools like TensorFlow and Jupyter, and a dataset of around 100GB. If you can spend $119.99, you can go for the 1TB version.
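If you want a quick sanity check that your drive is in the right ballpark, a rough sequential-write test takes a few lines of stdlib Python. This is only a crude sketch, not a real benchmark (tools like `fio` are the proper way), and it measures whichever disk holds your temp directory:

```python
import os
import tempfile
import time

def sequential_write_speed(size_mb=64):
    """Write size_mb of zeros to a temp file and return a rough MB/s figure."""
    chunk = b"\0" * (1024 * 1024)            # 1 MiB chunk
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                 # force the data to disk
        elapsed = time.perf_counter() - start
        path = f.name
    os.remove(path)                          # clean up the temp file
    return size_mb / elapsed

print(f"~{sequential_write_speed():.0f} MB/s sequential write")
```

A SATA SSD should land in the low hundreds of MB/s, while a decent NVMe drive should report well over 1,000 MB/s on this kind of sequential write.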
Water cooling will not fit into our budget, but many air coolers will get the job done. AMD Ryzen CPUs come with a stock air cooler, so I will be using that. During training, the CPU is not stressed much since the heavy lifting happens on the GPU, so the stock air cooler is fine for our job.
For the PSU, Nvidia recommends 750W, so I will choose the Corsair TX750M, an 80+ Gold certified, semi-modular power supply. It is great value for the price. EVGA, SeaSonic, Cooler Master, and NZXT are also well-reputed PSU brands. Going semi-modular will save you some bucks.
Okay, this can be an issue. No matter how careful you are, sometimes a cable turns out slightly too short, or the RAM gets in the way of the CPU radiator you planned to mount at the top of the case. Minor issues will come up, and some can give you a hard time. Some PSU or motherboard manufacturers include cable extenders. You can also watch YouTube build videos and buy the same parts if you want to be sure of a smooth build experience. But in most cases, things work out pretty well.
The operating system we went with is Ubuntu. Ubuntu is highly flexible, and it takes far fewer resources to run than Windows; it uses less RAM, for example. Ubuntu also works smoothly with Docker.