Tut 1
- Why do we prefer using a GPU (like the A100) over a CPU for deep learning tasks?
- What is the DGX A100?
- What is CUDA?
- What is the T4 GPU?
- If you are accessing the DGX A100 remotely, what protocol are you likely using?
- What is the relationship between NVIDIA drivers, CUDA, and cuDNN? (see the first sketch after this list)
- If you are working on your local laptop without a powerful GPU, what cloud alternative is suggested in the texts?
- Are we using virtualization here or containerization?
- Where is this DGX A100 located?
- What is the role of Tensor Cores? (see the mixed-precision sketch after this list)
- What is nvidia-smi used for? (see the query sketch after this list)
- Why do we use the NVIDIA Container Toolkit instead of just standard Docker on the DGX? (see the container sketch after this list)
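
For the driver/CUDA/cuDNN question, a minimal sketch assuming PyTorch is installed as the framework: the NVIDIA driver exposes the GPU to the operating system, the CUDA toolkit provides the general GPU compute layer on top of the driver, and cuDNN provides optimized deep learning primitives on top of CUDA. PyTorch can report which versions of each layer it sees.

```python
import torch

# Driver -> CUDA -> cuDNN -> framework: PyTorch reports the layers it was built
# against and whether the driver currently exposes a usable GPU.
print("CUDA available:", torch.cuda.is_available())        # driver + runtime can see a GPU
print("CUDA version  :", torch.version.cuda)                # CUDA toolkit PyTorch was built with
print("cuDNN version :", torch.backends.cudnn.version())    # cuDNN bundled with PyTorch
if torch.cuda.is_available():
    print("GPU name      :", torch.cuda.get_device_name(0))  # e.g. an A100 on a DGX A100
```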
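
For the nvidia-smi item, a query sketch that calls the tool from Python, assuming nvidia-smi is on the PATH (it ships with the NVIDIA driver). The CSV query interface prints selected per-GPU fields such as the driver version, total memory, and utilization.

```python
import subprocess

# nvidia-smi reports per-GPU state; the CSV query interface is handy for scripting.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,driver_version,memory.total,utilization.gpu",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```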
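
For the Tensor Core item, a mixed-precision sketch in PyTorch: Tensor Cores accelerate matrix multiplications in reduced precision (FP16/BF16/TF32), and running a matmul under autocast lets the libraries dispatch to Tensor Core kernels on GPUs that have them (such as the A100 or T4). This only illustrates the mechanism; it is not a benchmark.

```python
import torch

# Under autocast, the matmul runs in FP16, which is what Tensor Cores accelerate.
if torch.cuda.is_available():
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        c = a @ b          # dispatched to a Tensor Core GEMM when supported
    print(c.dtype)         # torch.float16 inside the autocast region
```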
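
For the container question, a sketch assuming Docker 19.03+ with the NVIDIA Container Toolkit installed on the host: a standard Docker container cannot see the host GPUs, while the toolkit lets `--gpus all` mount the driver and device files into the container so nvidia-smi works inside it. The CUDA image tag below is only an example.

```python
import subprocess

# With the NVIDIA Container Toolkit installed, `--gpus all` exposes the host
# GPUs inside the container; without it, this nvidia-smi call would fail.
# nvidia/cuda:12.2.0-base-ubuntu22.04 is just an example CUDA base image.
subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.2.0-base-ubuntu22.04", "nvidia-smi"],
    check=True,
)
```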