Your self-hosted LLMs care more about your memory performance than about raw GPU compute.
GPU memory (VRAM), not GPU performance, is the critical limiting factor that determines which AI models you can run. Total VRAM requirements are typically 1.2-1.5x the model's weight size, due to the combined footprint of the weights, KV cache, and activation overhead.
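To make that 1.2-1.5x rule of thumb concrete, here is a minimal Python sketch. The function name, the fp16 default of 2 bytes per parameter, and the 1.35x default overhead are illustrative assumptions, not a measured formula.

```python
def estimate_vram_gb(params_billion: float,
                     bytes_per_param: float = 2.0,   # assumed fp16/bf16 weights
                     overhead: float = 1.35) -> float:
    """Rough total-VRAM estimate for serving an LLM.

    Weight footprint (params * bytes per param) multiplied by an
    overhead factor covering KV cache, activations, and runtime buffers.
    """
    weight_gb = params_billion * bytes_per_param  # e.g. 7B at fp16 -> ~14 GB of weights
    return weight_gb * overhead


# Illustrative check: a 7B model at fp16 across the 1.2-1.5x overhead range.
for overhead in (1.2, 1.35, 1.5):
    print(f"7B fp16, {overhead:.2f}x overhead: "
          f"~{estimate_vram_gb(7, 2.0, overhead):.1f} GB")
```

Under these assumptions, a 7B-parameter model in fp16 carries roughly 14 GB of weights and lands around 17-21 GB of total VRAM once the KV cache and activations are counted.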
Your gaming woes might be linked to overheating VRAM, not the GPU core.