I am currently facing an issue with accessing the master of a private GKE cluster on Google Cloud Platform. Here’s the configuration I have:
Terraform code:

```hcl
resource "google_container_cluster" "private_cluster" {
  name                     = "name"
  location                 = var.zone
  network                  = var.gke_vpc_self_link    # self links assumed to be passed in as variables
  subnetwork               = var.gke_subnet_self_link
  enable_shielded_nodes    = true
  remove_default_node_pool = true
  initial_node_count       = 1

  private_cluster_config {
    enable_private_endpoint = true # control plane reachable only via its private IP
    enable_private_nodes    = true
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}
```
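Even once routing is solved, a private endpoint only accepts connections from authorized networks, so the VPN client range from the hub would also need to be allow-listed. A sketch of the extra block inside the cluster resource above (the 10.8.0.0/24 CIDR is just a placeholder for the Pritunl client range):

```hcl
  # Inside google_container_cluster.private_cluster; the CIDR below is a
  # hypothetical Pritunl client range and must match the actual setup.
  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "10.8.0.0/24"
      display_name = "pritunl-vpn-clients"
    }
  }
```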
Network Architecture:
- VPC Hub: Contains a VM with Pritunl VPN server.
- VPC Spoke: Contains the private GKE cluster.
- VPC Peering: Established between VPC Hub and VPC Spoke (Terraform sketch below).
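For reference, the hub-spoke peering corresponds roughly to the following Terraform (network and peering names are placeholders):

```hcl
# Hypothetical resource names; a peering must be created on both sides.
resource "google_compute_network_peering" "hub_to_spoke" {
  name         = "hub-to-spoke"
  network      = google_compute_network.vpc_hub.self_link
  peer_network = google_compute_network.vpc_spoke.self_link
}

resource "google_compute_network_peering" "spoke_to_hub" {
  name         = "spoke-to-hub"
  network      = google_compute_network.vpc_spoke.self_link
  peer_network = google_compute_network.vpc_hub.self_link
}
```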
Issue
I have a Pritunl VPN server running in the VPC Hub. The problem is that the control plane of the GKE cluster in the VPC Spoke is not reachable from the private IP assigned by the VPN in the VPC Hub. This is because VPC Network Peering in Google Cloud is not transitive: a VPN client routed into the VPC Hub can reach the VPC Spoke over the hub-spoke peering, but it cannot continue from there to the control plane endpoint, which sits behind a further GKE-managed connection to the spoke VPC. In effect, only sources in the spoke VPC itself can reach the master directly.
Requirements
- Access the GKE master from my local machine via the VPN in the VPC Hub.
- Avoid setting up a VPN in the VPC Spoke.
Tried Solutions
- VPC Peering: Established peering between VPC Hub and VPC Spoke, but peering is not transitive, so it does not provide transit routing (see the route-export sketch below).
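Even with custom-route exchange enabled on both sides of the peering, traffic still cannot hop hub → spoke → control plane, because route export/import never makes peering transitive. A minimal sketch of that configuration, with placeholder peering and network names:

```hcl
# Hypothetical names; exporting/importing custom routes on both peerings
# still does NOT allow hub -> spoke -> GKE-master transit, since VPC
# Network Peering is non-transitive regardless of exchanged routes.
resource "google_compute_network_peering_routes_config" "hub_to_spoke" {
  peering              = "hub-to-spoke" # assumed peering name
  network              = "vpc-hub"      # assumed network name
  export_custom_routes = true
  import_custom_routes = true
}

resource "google_compute_network_peering_routes_config" "spoke_to_hub" {
  peering              = "spoke-to-hub"
  network              = "vpc-spoke"
  export_custom_routes = true
  import_custom_routes = true
}
```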
Desired Solution
Looking for a method to enable access from my local machine (via the VPN in the VPC Hub) to the GKE master in the VPC Spoke without having to set up another VPN in the VPC Spoke.
What are the recommended practices or configurations to achieve this setup? Any suggestions on using a transit gateway, routes, or other methods to resolve this issue would be highly appreciated.
Environment Details
- Google Cloud Platform (GCP)
- GKE Cluster (Private)
- Pritunl VPN in VPC Hub
- VPC Peering between Hub and Spoke