Tut 1

  1. Why do we prefer using a GPU (like the A100) over a CPU for deep learning tasks? (A short code sketch after this list shows how to check which GPU Python can see.)
  2. What is the DGX A100?
  3. What is CUDA?
  4. What is the T4 GPU?
  5. If you are accessing the DGX A100 remotely, what protocol are you likely using?
  6. What is the relationship between NVIDIA drivers, CUDA, and cuDNN?
  7. If you are working on your local laptop without a powerful GPU, what cloud alternative is suggested in the texts?
  8. Are we using virtualization here, or containerization?
  9. Where is this DGX A100 located?
  10. What is the role of Tensor Cores?
  11. What is nvidia-smi used for?
  12. Why do we use the NVIDIA Container Toolkit instead of just standard Docker on the DGX?
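
A hands-on way to explore questions 1, 10 and 11 is to query the GPU from Python. The following is a minimal sketch, assuming PyTorch is installed in whatever environment (laptop, Colab, or DGX container) you are working in; it only checks which device CUDA exposes and runs a tiny computation on it.

    # Minimal sketch (assumes PyTorch is installed in your environment):
    # ask CUDA which GPU is visible and run a tiny computation on it.
    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")
        # On a DGX A100 node this prints something like "NVIDIA A100-SXM4-40GB";
        # on Google Colab's free tier it is often "Tesla T4".
        print("GPU:", torch.cuda.get_device_name(0))
        print("CUDA version PyTorch was built against:", torch.version.cuda)
    else:
        device = torch.device("cpu")
        print("No GPU visible; falling back to the CPU.")

    # A small matrix multiply on the chosen device. On Ampere GPUs such as
    # the A100, half-precision/TF32 matmuls like this can be routed through
    # Tensor Cores (question 10).
    x = torch.randn(1024, 1024, device=device)
    print((x @ x).shape)

For question 12: on a host set up with the NVIDIA Container Toolkit, a typical check is to run nvidia-smi inside a container, e.g. `docker run --rm --gpus all <some CUDA image> nvidia-smi`. The `--gpus` flag only works because the toolkit exposes the host's GPUs and driver to the container; plain Docker on its own cannot do this.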