Finer Kubernetes pod scheduling with topology spread constraints

While reviewing a PR to upgrade ingress-nginx, I got caught up thinking more about Kubernetes pod scheduling. Curiosity to find the optimal solution to meet the company’s resiliency requirements for the ingress controller led me to discover a fresh new K8s feature - Pod Topology Spread Constraints. In this blog post, I’m going to show you an example of how to fine-tune Kubernetes scheduling using these constraints to spread your workload in a more resilient way.

Default scheduling

I run a low-traffic beta workload as a Kubernetes deployment with 2 replicas, sitting in the task backlog to be optimized before going GA. The deployment, with low resource requests and limits, runs in a GKE cluster with 10+ nodes in a single node pool spanning 2 availability zones. To make it more resilient, I’d like to have every pod replica scheduled on a different K8s node and in a different availability zone. The Kubernetes scheduler initially schedules resources well, even without additional configuration. However, combined with other external factors (other workloads, node scale up/down events, preemptible node restarts, node-pool upgrades, …), pod replicas sometimes end up scheduled on the same node in a single availability zone - which is definitely not what I want for a resilient workload.
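As a quick side note, you can check which availability zone each node belongs to by listing the well-known zone label - the same label the examples below rely on:

$ kubectl get nodes -L topology.kubernetes.io/zone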

Let’s go straight to the example. Let’s have a GKE cluster with 2 nodes in each of 2 availability zones (4 nodes total) and our workload defined as a deployment with 2 replicas:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fine-spread-app
  namespace: edu
spec:
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fine-spread-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fine-spread-app
    spec:
      containers:
      - name: dummy
        image: k8s.gcr.io/pause:3.2
        resources:
          requests:
            cpu: 50m
            memory: 50Mi
          limits:
            cpu: 500m
            memory: 500Mi

Create a namespace, apply the manifest, and check the pods. Depending on the constellation of the stars, you might be (un)lucky enough to have all replicas scheduled on the same node.
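For completeness, the commands could look like this (assuming the manifest above is saved as fine-spread-app.yaml - a file name used here just for illustration):

$ kubectl create namespace edu
$ kubectl apply -f fine-spread-app.yaml

Checking where the pods landed: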

$ kubectl get pod -n edu -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME                             NODE
fine-spread-app-7589d49465-29wr9   gke-default-0c04c813-0d1z
fine-spread-app-7589d49465-8l7v7   gke-default-0c04c813-0d1z

If the node gke-default-0c04c813-0d1z goes down, all pod replicas are gone with it, making the deployment unavailable until the pods are rescheduled elsewhere. By default, this is outside your control. The good news is that you can easily prevent it.

Inter-pod anti-affinity

To ensure that multiple pod replicas are each scheduled on a different node, you can use inter-pod anti-affinity constraints. Let’s add them to the existing manifest (please check the referenced doc for details on how it works):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fine-spread-app
  namespace: edu
spec:
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fine-spread-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fine-spread-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - fine-spread-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: dummy
        image: k8s.gcr.io/pause:3.2

After applying the manifest, the pods get rescheduled to different nodes:

$ kubectl get pod -n edu -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME                             NODE
fine-spread-app-856975d9d4-2ptg8   gke-default-0c04c813-0d1z
fine-spread-app-856975d9d4-dp9sg   gke-default-0c04c813-7pfr

As long as you have more nodes with free resources available in the cluster than the requested number of replicas, this indeed helps the app’s resiliency. This is also what the PR mentioned in the intro was trying to address. Reviewing it, I got curious whether I could improve it further. As the cluster spreads across 2 different availability zones, can we schedule each pod replica in a different AZ? Yes, we can - as the doc explains, simply modify the podAntiAffinity constraint by changing the topologyKey to prevent scheduling the replicas in the same zone:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fine-spread-app
  namespace: edu
spec:
  replicas: 2
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fine-spread-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fine-spread-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - fine-spread-app
              topologyKey: topology.kubernetes.io/zone
      containers:
      - name: dummy
        image: k8s.gcr.io/pause:3.2

For the example deployment with 2 replicas, this solution fully meets the requirement to have the pods scheduled each on a different node and in a different AZ. But what if we need more replicas, e.g. 4? The podAntiAffinity example might again end up scheduling multiple pods on the same node. Searching for a solution, I first thought I would simply add another podAffinityTerm to the podAntiAffinity spec. However, checking the doc, I noticed there might be a better solution - a fresh new Kubernetes feature providing much more flexibility.

Pod Topology Spread Constraints

The fairly recent Kubernetes version v1.19 added a new feature called Pod Topology Spread Constraints to “control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.” This fits perfectly with this post’s goal of better controlling pod scheduling in our cluster.

Using Pod Topology Spread Constraints allows us to further improve the existing deployment - simply change the podAntiAffinity topologyKey back to kubernetes.io/hostname and add a topologySpreadConstraints section to the deployment’s template.spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fine-spread-app
  namespace: edu
spec:
  replicas: 4
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fine-spread-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fine-spread-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - fine-spread-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: dummy
        image: k8s.gcr.io/pause:3.2
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: fine-spread-app

Apply the manifest, and the pods are now scheduled according to the requirements (note that we have also increased the replica count to 4):

$ kubectl get pod -n edu -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
NAME                                 NODE
fine-spread-app-6c97884994-cptjm   gke-default-92685fea-81d3
fine-spread-app-6c97884994-f4w22   gke-default-0c04c813-rbwf
fine-spread-app-6c97884994-qr58f   gke-default-92685fea-7pfr
fine-spread-app-6c97884994-s4qj8   gke-default-0c04c813-0d1z
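
If you want to double-check the zone spread, one option (a small shell loop around kubectl, shown here just for illustration) is to print each pod’s node together with its zone label:

for node in $(kubectl get pod -n edu -o jsonpath='{.items[*].spec.nodeName}'); do
  # -L adds the zone label as an extra column; --no-headers keeps the output compact
  kubectl get node "$node" -L topology.kubernetes.io/zone --no-headers
done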

Wait, do we need to combine podAntiAffinity and topologySpreadConstraints? No!

Multiple topologySpreadConstraints

The TSC feature documentation [provides a clarifying comparison with _PodAffinity_/_PodAntiAffinity_](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#comparison-with-podaffinity-podantiaffinity):
> * For `PodAffinity`, you can try to pack any number of Pods into qualifying topology domain(s)
> * For `PodAntiAffinity`, only one Pod can be scheduled into a single topology domain.
>
> For finer control, you can specify topology spread constraints to distribute Pods across different topology domains - to achieve either high availability or cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly.

It means we can fully replace the `podAntiAffinity` spec with multiple `topologySpreadConstraints` items as the doc further explains: 
> When a Pod defines more than one topologySpreadConstraint, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.

So let's constrain:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fine-spread-app
  namespace: edu
spec:
  replicas: 4
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: fine-spread-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fine-spread-app
    spec:
      containers:
      - name: dummy
        image: k8s.gcr.io/pause:3.2
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: fine-spread-app
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: fine-spread-app

Apply the manifest and watch the pods being distributed as in the previous example - each on a different node, spread across the 2 availability zones. This is the level of pod scheduling control I wanted to achieve, and a practical example of how to spread a workload to make it more resilient.

Summary

Workload scheduling in Kubernetes is an interesting topic to explore if you want to better utilize your servers, make your workload more resilient, or prevent unexpected surprises. I wrote this post to show you a practical example of how to control scheduling using Pod (Anti-)Affinities and Pod Topology Spread Constraints. The former have been available since the early K8s versions, but sometimes they are not flexible enough for your needs. Pod Topology Spread Constraints, available since v1.19 when the EvenPodsSpread feature gate reached the GA stage, provide much more flexibility. There is, of course, much more to explore beyond the scope of this introductory post - check the referenced Kubernetes documentation to learn more.

Let me know if you liked the article or if you’re interested in reading more about different Kubernetes features! 👋

DIY global cloud VPN service using WireGuard and GCP

Do you need to secure your internet connection occasionally while travelling or connecting to the internet over untrusted networks (cafes, airports, …)? Tired of a growing number of subscriptions or of searching for a trusted VPN provider? Skilled enough to build your private VPN server, but don’t want to host the service on your home network?

I found myself in a similar situation. Then a couple of announcements came during the last weeks of January 2020:

  • “Linus pulled in net-next about a half hour ago. So WireGuard is now officially upstream. Yeah!” by Bruno Wolf III on the WireGuard mailing list.
  • “I am beyond excited to finally announce Secret Manager - a secure and convenient method for storing API keys, passwords, certificates, and other sensitive data on @GCPcloud. It’s available for everyone today in beta” by @sethvargo on Twitter

The news, combined with my intense affair with Google Cloud Platform (GCP), sparked an idea - let’s create a pool of short-lived WireGuard VPN servers in the GCP cloud with minimum effort and cost, starting a VPN instance in any supported location worldwide only when I need it. Most of the internet runs over HTTPS anyway, and unless I want to escape internet censorship or access a service not available in my current country, a VPN is no longer a daily necessity.

The idea

The idea is the following:

  1. Store both sensitive data (DNS API keys, WireGuard private keys,…) and WireGuard configuration inside the Google Secret Manager
  2. Create a free (US-only) or paid GCE VM instance without a static IP address
  3. Use a cloud-init config to install & set up the WireGuard VPN and update the DNS record automatically on VM boot
  4. Create multiple VMs (VPN instances) in various locations but run only 1 at a time to minimize costs
  5. Use the Cloud Console mobile app to start the instance in the region you currently need
  6. Enjoy

This article can introduce you to the following topics:

  • Install a WireGuard VPN server with minimal 3rd-party tooling and no complicated scripts
  • Store sensitive info required at runtime in a Google Secret Manager secure vault
  • Spawn a VPN server VM in an automated way in a location of your choice worldwide

WARNING

  • The article is intended for tech-savvy readers with basic experience with DNS & cloud providers, as it’s not going to provide you with a step-by-step guide for all the topics covered.
  • If you are searching for a solution that provides complete privacy, total anonymity and bullet-proof security, this provides none of those. Using a VPN is not a magic bullet, but it definitely brings a few benefits.
  • Consult a security professional about all the pros and cons of using a VPN.

Requirements

I built this concept using services I like for their simplicity and the features offered in their free plans: GCP as the cloud, Cloudflare as the DNS provider. Here is a complete list of the building blocks and skills recommended to try the concept:

  • your own domain name
  • DNS hosting for the domain that allows updating DNS records via API - I’m using Cloudflare’s Free Plan
  • Google Cloud Platform account
  • basic GCP experience - creating, starting and stopping a VM, creating an IAM account, and configuring firewall rules
  • (optional) Google Cloud SDK CLI installed - most of the implementation steps will provide a gcloud example
  • (optional) Shell scripting / Python basics - if you want to understand a few simple scripts and oneliners used in the examples
  • (optional) Cloud Console Mobile App - to control VPN instances from your smartphone

💡 Feel free to use providers and services based on your preference and experience.

Let’s play

Enough theory, let’s play. This is the agenda:

  • GCP quickstart
  • Create a limited GCP IAM Service Account
  • Create a DNS record
  • Create WireGuard configuration
  • Create firewall rules to allow the WireGuard traffic
  • Create WireGuard secrets using Google Secret Manager
  • Create a VM and your first VPN instance
  • Test, play, enjoy!

GCP quickstart

  1. Read the GCP overview if this domain is completely new for you
  2. Sign in to the GCP Cloud console using a Google account. If you don’t already have one, sign up for a new account.
  3. Create a new billing account (required only for new users). You must have a valid Cloud Billing Account to use GCP even if you are in your free trial period or only use Google Cloud resources covered by the Always Free program.
  4. Select or create a Cloud project
  5. Install the Cloud SDK CLI if you want to follow the gcloud examples in this article
  6. (Optional, but highly recommended) Set budget alerts to avoid any unexpected payments.

Create a limited GCP IAM service account

Create a new Cloud IAM service account and grant it only a role to access the Secret Manager secrets for the project. The service account will later be used to create every new VPN VM instance with a restricted identity. This is important for security reasons:

  • The default Compute Engine service account grants the instance more permissions than required. Google recommends that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job.
  • The role grants the instance rights only to access (view) secrets stored in the Secret Manager vault. The VM (or a potential attacker) will not be allowed to edit or list the secrets stored in the Secret Manager, nor to access any other resources using the API.

Steps:

  1. Enable the Secret Manager API for the project
  2. Create a service account using the console or the gcloud CLI:
    gcloud iam service-accounts create secret-accessor \
    --description "SA for accessing Secret Manager secrets" \
    --display-name "secret accessor"
    
  3. Grant the service account secretmanager.secretAccessor IAM role:
    gcloud projects add-iam-policy-binding project-123 \
    --member serviceAccount:secret-accessor@project-123.iam.gserviceaccount.com \
    --role roles/secretmanager.secretAccessor
    

Set up DNS

To set up the system and enable dynamic DNS updates, we need the following:

  • A DNS record
  • An API token and other provider-specific info (e.g. the Cloudflare Zone ID and the DNS record’s Cloudflare ID) to enable dynamic DNS updates

Steps:

  1. Create an A DNS record that VPN clients will use to connect to the VPN server. As we don’t know the IP address of the VM instance yet, point the record to any IP address, e.g. one from the private IPv4 ranges: vpn.example.com -> 172.27.27.2. Example instructions for adding a DNS record on Cloudflare
  2. Create an API token or any provider-specific key that will allow you to update the DNS record remotely via API.
  3. Get the other details required for the API requests to work - see the example lookups below
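For Cloudflare specifically, both the Zone ID and the DNS record ID can be looked up via the API. A minimal sketch, assuming the API token is exported as CF_API_TOKEN and the Zone ID returned by the first call as CF_ZONE_ID (variable names used here only for illustration):

# look up the Zone ID for your domain
curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=example.com" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json"

# then look up the A record's ID within that zone
curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records?type=A&name=vpn.example.com" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json"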

Create a WireGuard configuration

There are tons of how-to guides on the internet on how to create a WireGuard configuration that fits your needs.
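If you still need to generate the keypairs, the standard wg tooling is all it takes (the file names below are just an example):

# server keypair
wg genkey | tee server_private.key | wg pubkey > server_public.key
# one keypair per client (Alice, Bob, ...)
wg genkey | tee alice_private.key | wg pubkey > alice_public.key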

In the rest of this post, I will use the following WireGuard configuration in every VM instance created (wg0.conf):

[Interface]
Address = 172.30.0.1/24
PostUp = iptables -A FORWARD -i wg0 -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -s 172.30.0.0/24 -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -o wg0  -j ACCEPT; iptables -t nat -D POSTROUTING -s 172.30.0.0/24 -o ens4 -j MASQUERADE
ListenPort = 51820
PrivateKey = abcdef123456serverPrivateKey==

# Client Alice
[Peer]
PublicKey = abcdef123456aliceKey=
AllowedIPs = 172.30.0.2/32

# Client Bob
[Peer]
PublicKey = abcdef123456bobKey=
AllowedIPs = 172.30.0.3/32

Explanation of this configuration:

  • WireGuard server listens on UDP port 51820
  • creates iptables NAT & forwarding firewall rules on start
  • binds to the ens4 network interface, which is created and enabled by default on the ubuntu-minimal-1910 VM image from the ubuntu-os-cloud GCE family that I use for this setup.
  • uses the following keypair for the VPN server: abcdef123456serverPrivateKey==/abcdef123456serverPublicKey==
  • configures 2 peers/clients - Alice (IP: 172.30.0.2/32) and Bob (IP: 172.30.0.3/32)

Store config and secrets in the Secret Manager vault

Secret Manager is a pretty cool new product in the GCP portfolio (currently in Beta) that complements the existing Cloud KMS solution for data encryption. It allows storing sensitive data encrypted at rest using AES-256. Customer-managed encryption keys are not yet supported.

Let’s create and store the following secrets for this example setup:

  1. WireGuard configuration created in the previous section: gcloud beta secrets create wg_config --data-file wg0.conf --replication-policy automatic
  2. Cloudflare API token: echo -n 'my_cloudflare_api_token==' | gcloud beta secrets create wg_cf_api_token --data-file - --replication-policy automatic
  3. Cloudflare Zone ID: echo -n 'my_cloudflare_zone_id_123456' | gcloud beta secrets create wg_cf_zone_id --data-file - --replication-policy automatic
  4. Hostname of the VM that WireGuard clients will connect to: echo -n 'vpn.example.com' | gcloud beta secrets create wg_hostname --data-file - --replication-policy automatic

As an alternative, the Secret Manager page in the Cloud Console can be used to create the secrets if you prefer a GUI.
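To verify that a secret was stored correctly, you can read it back - this is the same call the VM will use later at boot:

gcloud beta secrets versions access latest --secret wg_hostname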

Create a cloud-init configuration

cloud-init is a tool that helps install & configure a cloud (GCE) VM into the desired state. In short, this cloud-init example:

  • installs required packages - iptables & wireguard
  • enables IP forwarding
  • sets up a script executed on every boot to:
    • read and store WireGuard config from the Secret Manager
    • read and use other sensitive data we saved in the Secret Manager
    • create or update Cloudflare DNS record with the VM’s public IP address using Cloudflare API on boot
  • updates OS packages
  • reboots the VM after the install is finished

#cloud-config
package_upgrade: true
packages:
  - iptables
  - wireguard
write_files:
  - content: |
      net.ipv4.ip_forward=1
    path: /etc/sysctl.d/99-wireguard.conf
    permissions: '0644'
  - content: |
      #!/usr/bin/env bash
      set -e

      # get latest wireguard configuration from the secret manager and store it locally
      gcloud beta secrets versions access latest --secret wg_config > /etc/wireguard/wg0.conf
      chmod 0600 /etc/wireguard/wg0.conf

      # get other sensitive data
      hostname="$(gcloud beta secrets versions access latest --secret wg_hostname)"
      cf_api_token="$(gcloud beta secrets versions access latest --secret wg_cf_api_token)"
      cf_zone_id="$(gcloud beta secrets versions access latest --secret wg_cf_zone_id)"

      # create or update the Cloudflare DNS record
      if ip=$(curl -fs ifconfig.me); then
              echo "Checking if DNS record $hostname exists in Cloudflare"
              dns_check=$(curl -sX GET "https://api.cloudflare.com/client/v4/zones/${cf_zone_id}/dns_records?type=A&name=${hostname}" \
              -H "Authorization: Bearer $cf_api_token" \
              -H "Content-Type: application/json" \
              | python3 -c "import sys, json; f=json.load(sys.stdin); print('{};{}'.format(f['result'][0]['content'],f['result'][0]['id'])) if 'result' in f else print('EE')")
              IFS=';' read -ra response <<< "${dns_check}"
              if [[ "${response[0]}" == "${ip}" ]]; then
                      echo "DNS record ${hostname} OK: ${ip}. No need to update."
              else
                      echo "Updating existing A record $hostname to $ip"
                      curl -o /dev/null -fsX PUT "https://api.cloudflare.com/client/v4/zones/${cf_zone_id}/dns_records/${response[1]}" \
                      -H "Authorization: Bearer ${cf_api_token}" \
                      -H "Content-Type: application/json" \
                      -d "{\"type\": \"A\", \"name\": \"${hostname}\", \"content\": \"${ip}\", \"proxied\": false}"
              fi
      else
              echo "Problem detecting current IP address. Exiting"
      fi
    path: /usr/local/bin/wg_dns_updater.sh
    permissions: '0755'
  - content: |
      [Unit]
      Description=wg-dns-updater script
      After=network.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/usr/local/bin/wg_dns_updater.sh

      [Install]
      WantedBy=multi-user.target
    path: /etc/systemd/system/wg-dns-updater.service
    permissions: '0644'
runcmd:
  - [ sysctl, -p, /etc/sysctl.d/99-wireguard.conf ]
  - [ systemctl, daemon-reload ]
  - [ systemctl, enable, --now, --no-block, wg-quick@wg0.service ]
  - [ systemctl, enable, --now, wg-dns-updater.service ]
power_state:
  mode: reboot
  delay: 1
  message: Rebooting after installation

I recommend that you review the config and change it to fit your personal needs.
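Depending on your cloud-init version, you can also lint the file locally before using it - a quick syntax/schema check rather than a full test:

# newer cloud-init releases
cloud-init schema --config-file cloud-init-config.yaml
# older releases ship the same check under the devel subcommand
cloud-init devel schema --config-file cloud-init-config.yaml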

Create a GCE VM

Now the best part - gluing everything together. I will create multiple VMs using the cloud-init config in the regions I need. I use the following bash script to create the instances:

#!/usr/bin/env bash
# Usage: ./vm-create.sh [instance-name] [zone]
NAME=${1:-vpn-ue1b}
ZONE=${2:-us-east1-b}
# Notes:
# - f1-micro is powerful enough to run a WireGuard server
# - Ubuntu 19.10 (ubuntu-minimal-1910) provides WireGuard in the universe repo
# - user-data points to the prepared cloud-init config
# - the prepared secret-accessor service account is attached to the instance
# - the "wg" network tag is used by the firewall rules
gcloud compute instances create "$NAME" \
  --machine-type f1-micro \
  --image-family ubuntu-minimal-1910 \
  --image-project ubuntu-os-cloud \
  --metadata-from-file user-data=cloud-init-config.yaml \
  --scopes cloud-platform \
  --service-account secret-accessor@project-123.iam.gserviceaccount.com \
  --tags wg \
  --zone "$ZONE"

And then execute the script:

./vm-create.sh vpn-ue1b us-east1-b
./vm-create.sh vpn-uc1b us-central1-b
./vm-create.sh vpn-uw1b us-west1-b

It’s important to remember that the DNS record is updated on every VM boot. Therefore, shut down all VMs after creation and start only the single VM in the region you currently need.
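Stopping and starting instances from the CLI is a one-liner per instance, for example:

# stop an instance you no longer need running
gcloud compute instances stop vpn-ue1b --zone us-east1-b
# ...and start the one in the region you need right now
gcloud compute instances start vpn-uc1b --zone us-central1-b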

Pro tips:

  • You can have up to 3 VMs created in the US zones and still qualify for the free tier, as long as only a single one is running at a time.
  • Use the --preemptible argument when creating the VM to have it automatically shut down within 24 hours. The only con is that a preemptible instance is excluded from the free tier, so you will be charged for running it.
  • WARNING: only 1 GB of network egress from North America to all region destinations per month (excluding China and Australia) is included in the free tier.

Configure firewall

Create a firewall rule to enable incoming traffic to udp/51820 to the WireGuard VM(s): gcloud compute firewall-rules create wg --direction IN --target-tags wg --allow udp:51820 --source-ranges 0.0.0.0/0

As the VM instances are tagged with a network tag, this allows applying firewall rules and routes to a specific instance or set of instances.

Test VPN connection

Let’s now test if the VPN connection works. Install a WireGuard client on a platform of your choice and configure it to connect to the VPN server:

[Interface]
Address = 172.30.0.2/32
PrivateKey = abcdef123456alicePrivateKey==

[Peer]
PublicKey = abcdef123456serverPublicKey==
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
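Once the tunnel is up (e.g. wg-quick up wg0 on a Linux client), a quick way to verify it works is to check the handshake and your public egress IP:

# show the latest handshake and transfer counters
sudo wg show
# your public IP should now be the VPN server's address
curl ifconfig.me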

Install the Cloud Console Mobile app

I recommend installing the Cloud Console mobile app to start and stop the WireGuard server instances from your smartphone. You can thus fire up any of the pre-installed instances in the region you currently need - a very handy thing!

Final notes

In this post, I have shown you how to build a self-managed, on-demand VPN infrastructure ready to tunnel your private internet traffic using WireGuard and Google Cloud Platform. You can now own and run the whole infrastructure on demand with minimum effort and cost, and escape the VPN subscription and fake-review hell.

If you like the article or have any feedback, feel free to let me know or share the post.

Also, consider donating to the WireGuard project to help with the development of this great OSS tool ❤️

Hello, the world of words!

Every blog should have some proper introduction. So this is it - my own creative playground. I will try to occasionally put down thoughts and share knowledge on mostly tech-related topics that exceed 280 characters. Let’s see how it will work and how much joy it will spark 💥.

Happy reading!
