I’m trying to wrap my head around the implications of Nvidia’s GPU sharing strategies:
- MIG
- Time Slicing
- MPS
Given how opaque I’ve found their docs on the subject, I’ve been piecing together my understanding of each by experimenting with each option and reading the relevant source code, e.g. Nvidia’s k8s device plugin.
The item I’m currently looking at is benchmarking each strategy. I ran 7 replicas of the same app in k8s for each of the four variants (the three strategies above, plus the default configuration with no sharing strategy enabled):
import os

# Set YOLOv8 to quiet mode
os.environ['YOLO_VERBOSE'] = 'False'

from prometheus_client import start_http_server, Histogram
from ultralytics import YOLO
import torch

start_http_server(8000)

device = torch.device("cuda")
model = YOLO("yolov8n.pt").to(device=device)

h = Histogram('gpu_stress_inference_yolov8_milliseconds_duration', 'Description of histogram', buckets=(1, 5, 10, 15, 20, 25, 30, 35, 40, 50, 75, 100, 150, 200, 500, 1000, 5000))

def run_model():
    results = model("https://ultralytics.com/images/bus.jpg")
    # print(model.device.type)
    h.observe(results[0].speed['inference'])

while True:
    run_model()
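(One caveat with the loop above: results[0].speed['inference'] is only the model-reported inference time. I also considered recording wall-clock latency around the whole call; a rough sketch of what I mean, reusing model and h from the snippet above, with the second histogram name purely illustrative:)

import time

wall = Histogram('gpu_stress_inference_yolov8_wall_milliseconds_duration', 'Wall-clock latency of the full call', buckets=(1, 5, 10, 15, 20, 25, 30, 35, 40, 50, 75, 100, 150, 200, 500, 1000, 5000))

def run_model_timed():
    start = time.perf_counter()
    results = model("https://ultralytics.com/images/bus.jpg")
    elapsed_ms = (time.perf_counter() - start) * 1000
    h.observe(results[0].speed['inference'])  # model-reported inference time (ms)
    wall.observe(elapsed_ms)                  # also includes image download, preprocessing and postprocessing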
The results are as follows:
Therefore, if inference speed is best with the default settings, and the default can already support multiple applications talking to one GPU, why bother with any other strategy? I understand their docs state that a strategy like MIG gives you memory isolation, i.e. one application sharing the GPU can’t bring down another, but putting that to one side, is there really any good reason to use these strategies if you’re prioritising performance?
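(On the memory point: my understanding is that under MIG each replica only sees its own slice, which should be visible from inside a pod with something like this untested sketch:)

import torch

# Assumption on my part: under MIG, the device a replica sees is the slice itself,
# so total_memory should reflect the MIG profile rather than the whole card.
props = torch.cuda.get_device_properties(0)
print(torch.cuda.get_device_name(0))
print(f"memory visible to this replica: {props.total_memory / 1024**3:.1f} GiB")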
To add to the confusion, if I do a matmul with two enormous matrices, there’s zero difference in performance between the strategies:
import torch
import time
from prometheus_client import start_http_server, Histogram

# Check that CUDA is available
if not torch.cuda.is_available():
    raise SystemError("CUDA is not available on this system")

device = torch.device("cuda")
torch.cuda.set_sync_debug_mode(debug_mode="warn")
torch.set_default_device(device)  # ensure we actually use the GPU and don't do the calculations on the CPU

h = Histogram('gpu_stress_mat_mul_seconds_duration', 'Description of histogram', buckets=(0.001, 0.005, 0.01, 0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 500.0, 1000.0))

def mat_mul(m1, m2):
    return torch.matmul(m1, m2)

# Perform fp16 matrix multiplications (which should use the Tensor Cores) indefinitely
def stress(matrix_size=16384):
    # Create random matrices on the GPU
    m1 = torch.randn(matrix_size, matrix_size, dtype=torch.float16)
    m2 = torch.randn(matrix_size, matrix_size, dtype=torch.float16)
    while True:
        start = time.time()
        output = mat_mul(m1, m2)
        print(output.any())  # forces a device-to-host copy, so the kernel has finished before the timer stops
        end = time.time()
        h.observe(end - start)

if __name__ == "__main__":
    start_http_server(8000)
    stress()
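(I’m aware that CUDA kernel launches are asynchronous and that the time.time() measurement above only works because print(output.any()) forces a sync before the timer stops; a CUDA-event version of the same loop, reusing h from above, would look roughly like this:)

def stress_with_events(matrix_size=16384):
    m1 = torch.randn(matrix_size, matrix_size, dtype=torch.float16)
    m2 = torch.randn(matrix_size, matrix_size, dtype=torch.float16)
    start_evt = torch.cuda.Event(enable_timing=True)
    end_evt = torch.cuda.Event(enable_timing=True)
    while True:
        start_evt.record()
        torch.matmul(m1, m2)
        end_evt.record()
        end_evt.synchronize()                              # wait for the kernel to finish
        h.observe(start_evt.elapsed_time(end_evt) / 1000)  # elapsed_time is in ms, histogram buckets are in seconds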
I must be missing something here, as their docs (e.g. for MPS) seem to imply that these strategies are better for GPU sharing.