⚙️ Homelab Update: GPU Upgrade for Local AI Workloads

A new piece of hardware just joined my homelab: an RTX 3090 with 24 GB of VRAM.

🧠 Why this matters: For most AI workloads, VRAM is the real bottleneck (see the quick sizing sketch after this list), whether you’re running:

  • Large Language Models (LLMs)
  • Image generation models
  • Embedding pipelines for semantic search
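
A quick back-of-envelope check (my own Python sketch, not from the post): weight memory is roughly parameter count × bytes per parameter, and that’s before activations and KV cache.

    # Rough VRAM estimate for model weights alone (illustration only).
    # Real usage is higher: activations, KV cache, and runtime overhead add on top.
    def weights_vram_gb(params_billions: float, bytes_per_param: float) -> float:
        return params_billions * 1e9 * bytes_per_param / 1024**3

    for params, precision, bpp in [(7, "fp16", 2), (13, "fp16", 2), (13, "4-bit", 0.5)]:
        print(f"{params}B @ {precision}: ~{weights_vram_gb(params, bpp):.1f} GB")

    # 7B @ fp16:  ~13.0 GB -> comfortable on a 24 GB card
    # 13B @ fp16: ~24.2 GB -> borderline; quantization helps
    # 13B @ 4-bit: ~6.1 GB -> plenty of headroom for context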

At the same time, GPU prices are rising again, driven by growing demand from data centers and AI infrastructure.

🚀 Why the RTX 3090? It doesn’t have to be the newest hardware. For local AI, the RTX 3090 is a price-to-performance king:

  • 24 GB VRAM unlocks entire classes of models and workflows
  • Excellent ecosystem support
  • More affordable than current-gen high-VRAM GPUs

Perfect fit for local, self-hosted AI experiments.

🔧 What I’m using it for: GPU passthrough to Proxmox VMs for local AI workflows.
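
For reference, a condensed host-side setup sketch (my own notes, not from the post; assumes an Intel host, and VM ID 100 is a placeholder):

    # /etc/default/grub -- enable IOMMU (use amd_iommu=on for AMD CPUs), then run update-grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules -- load the VFIO modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # Find the GPU's PCI address, then hand the whole device to the VM
    lspci -nn | grep -i nvidia
    qm set 100 --hostpci0 01:00,pcie=1

Inside the VM, the card then shows up as ordinary PCIe hardware and the standard NVIDIA driver stack applies.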

More experiments coming soon 👀