This guide walks you through setting up GPU infrastructure on Porter, from creating a GPU node group to deploying your first GPU-accelerated application. You can deploy GPU-enabled workloads on Porter by creating a fixed node group and selecting a GPU-enabled instance type. Note that this must be an additional node group, since the default node group in your cluster is reserved for CPU workloads.
GPUs are only supported on fixed node groups because cost optimization does not currently support GPU instances. This means you select a specific GPU instance type and Porter scales by adding or removing instances of that exact type.

Creating a GPU Node Group

1. Navigate to Infrastructure

From your Porter dashboard, click on the Infrastructure tab in the left sidebar.
2. Select your cluster

Click on Cluster to view your cluster configuration and existing node groups.
3. Add a node group

Click Add an additional node group to open the node group configuration panel.
4. Configure the GPU node group

Configure your GPU node group with the following settings:
| Setting | Description |
| --- | --- |
| Instance type | Select a GPU-enabled instance type (see table below) |
| Minimum nodes | The minimum number of nodes available at all times |
| Maximum nodes | The upper limit for autoscaling based on demand |
(Screenshot: fixed node group configuration)
GPU instances are significantly more expensive than standard instances.
5. Save and provision

Click Save to create the node group. Porter will provision the GPU nodes in your cluster. This may take a few minutes as GPU nodes often require additional driver installation.
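
If you want to confirm from the command line that the new nodes are actually schedulable, the sketch below lists nodes that advertise GPUs. It assumes you have kubectl access to the underlying cluster and that the NVIDIA device plugin exposes GPUs under the standard nvidia.com/gpu resource name; neither is Porter-specific, so adapt it to your setup.

```python
# Sketch: list cluster nodes that currently advertise allocatable GPUs.
# Assumes kubectl access to the cluster and the standard NVIDIA device
# plugin resource name (nvidia.com/gpu); adjust if your setup differs.
import json
import subprocess


def gpu_ready_nodes() -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "nodes", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    ready = []
    for node in json.loads(out)["items"]:
        allocatable = node["status"].get("allocatable", {})
        if int(allocatable.get("nvidia.com/gpu", "0")) > 0:
            ready.append(node["metadata"]["name"])
    return ready


if __name__ == "__main__":
    nodes = gpu_ready_nodes()
    print("GPU nodes ready:", nodes or "none yet (drivers may still be installing)")
```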

Deploying a GPU Application

Once your GPU node group is ready, you can deploy applications that use GPU resources.
1. Create or select your application

Navigate to your application in the Porter dashboard, or create a new one if you haven’t already.
2. Go to the Services tab

Click on the Services tab to view your application’s services.
3. Select the service

Click on the service that needs GPU access (e.g., your inference worker or training job).
4. Assign to the GPU node group

Under General, find the Node group selector and choose your GPU node group from the dropdown.
5. Configure GPU resources

Under Resources, configure your GPU requirements:
| Setting | Recommended Value |
| --- | --- |
| GPU | Number of GPUs needed (typically 1) |
| CPU | Match your workload needs (GPU instances have fixed CPU/RAM ratios) |
| RAM | Match your workload needs |
Request only the GPUs you need. Each GPU request reserves an entire GPU.
6. Save and deploy

Save your changes and redeploy the application. Porter will schedule your workload on the GPU node group.
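
As a quick sanity check once the new revision is live, you can have the service verify at startup that it actually received a GPU. The sketch below assumes a PyTorch-based workload; the framework is only an example, not something Porter requires.

```python
# Sketch: fail fast at startup if the service did not land on a GPU node.
# Assumes a PyTorch-based workload; adapt the check to your framework.
import sys

import torch


def assert_gpu() -> None:
    if not torch.cuda.is_available():
        sys.exit("No CUDA device visible; check the service's node group and GPU request.")
    count = torch.cuda.device_count()
    names = [torch.cuda.get_device_name(i) for i in range(count)]
    print(f"Scheduled with {count} GPU(s): {names}")


if __name__ == "__main__":
    assert_gpu()
```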

Troubleshooting

If your workload is stuck waiting to be scheduled, this usually means there are no available GPU nodes. Check the following (a diagnostic sketch follows this list):
  • The node group has scaled up (check Infrastructure → Cluster)
  • Your GPU request doesn’t exceed available GPUs on the instance type
  • The node group maximum nodes hasn’t been reached
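
If you have kubectl access to the cluster, a quick way to see why a workload isn't scheduling is to list Pending pods and read their scheduler events. The sketch below is illustrative; the namespace is an assumption and should be replaced with your application's namespace.

```python
# Sketch: list pods stuck in Pending and print their scheduler events.
# Assumes kubectl access; the namespace below is a placeholder.
import json
import subprocess

NAMESPACE = "default"  # replace with your application's namespace


def pending_pods(namespace: str) -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace,
         "--field-selector=status.phase=Pending", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [p["metadata"]["name"] for p in json.loads(out)["items"]]


for pod in pending_pods(NAMESPACE):
    # The Events section usually names the blocker, e.g. "Insufficient nvidia.com/gpu".
    described = subprocess.run(
        ["kubectl", "describe", "pod", pod, "-n", NAMESPACE],
        capture_output=True, text=True, check=True,
    ).stdout
    print(f"--- {pod} ---")
    print(described.split("Events:")[-1].strip())
```
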
GPU memory errors indicate that your model or batch size is too large (a retry sketch follows this list):
  • Reduce batch size
  • Use a larger GPU instance with more VRAM
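
One common mitigation is to catch the out-of-memory error and retry with a smaller batch. The sketch below assumes a recent PyTorch release (which exposes torch.cuda.OutOfMemoryError; older releases raise a plain RuntimeError), and run_step is a hypothetical stand-in for your own batch-processing function.

```python
# Sketch: retry a GPU step with a smaller batch after an out-of-memory error.
# Assumes PyTorch >= 1.13 (torch.cuda.OutOfMemoryError); `run_step` is a
# hypothetical stand-in for your own batch-processing function.
import torch


def run_with_backoff(run_step, batch, min_size: int = 1):
    size = len(batch)
    while size >= min_size:
        try:
            return run_step(batch[:size])
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # release cached blocks before retrying
            size //= 2                # halve the batch and try again
    raise RuntimeError("Batch does not fit in GPU memory even at the minimum size.")
```
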
GPU nodes take longer to start than CPU nodes due to driver initialization:
  • Keep minimum nodes at 1 for latency-sensitive workloads
  • Consider keeping a warm pool of nodes during peak hours