I would really like to have one device on my tailnet act as the exit node for all the other devices on the tailnet. However, most VPNs make this really difficult. Is there any way to do it? I've read it's possible with split tunnelling, but ProtonVPN (which I use) doesn't support that. I just installed Alpine Linux on my RPi 4B and would like to use it as my exit node. Does anyone have any tips for how this could be done?

  • zzzzzz@lemmy.ml · 1 year ago

    I have solved this problem! The trick is to use two Docker containers:

    1. Gluetun (https://github.com/qdm12/gluetun): set this up to connect to your VPN.
    2. Tailscale (https://tailscale.com/kb/1282/docker/): set this to use the Gluetun network.

    Here is an example docker-compose.yml:

    version: "3"
    services:
      gluetun:
        image: qmcgaw/gluetun
        container_name: gluetun
        # container_name must be set to allow external containers to connect to gluetun.
        # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-container-to-gluetun.md#external-container-to-gluetun
        restart: unless-stopped
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun:/dev/net/tun
        volumes:
          - ./gluetun:/gluetun
        environment:
          - VPN_SERVICE_PROVIDER=airvpn
          - VPN_TYPE=wireguard
          - WIREGUARD_PRIVATE_KEY=xxx
          - WIREGUARD_PRESHARED_KEY=xxx
          - WIREGUARD_ADDRESSES=xxx
          - WIREGUARD_MTU=1320
          - SERVER_COUNTRIES=United States
          # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup
          # Timezone for accurate log times
          - TZ=America/New_York
          # Server list updater
          # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
          - UPDATER_PERIOD=24h
    
      tailscale:
        container_name: tailscale
        cap_add:
          - NET_ADMIN
          - NET_RAW
        volumes:
          - ./tailscale/var/lib:/var/lib
          - ./tailscale/state:/state
          - /dev/net/tun:/dev/net/tun
        network_mode: "service:gluetun"
        restart: unless-stopped
        environment:
          - TS_HOSTNAME=airvpn-exit-node
          - TS_AUTHKEY=xxxxxxxx
          - TS_EXTRA_ARGS=--login-server=https://example.com --advertise-exit-node
          - TS_NO_LOGS_NO_SUPPORT=true
          - TS_STATE_DIR=/state
        image: tailscale/tailscale
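
    Once the stack is up, the node still has to be allowed as an exit node on your coordination server (Tailscale admin console, or the equivalent if you run Headscale), and then selected from a client. A rough sketch of the commands involved, assuming the hostname airvpn-exit-node from TS_HOSTNAME above:

    # bring up both containers
    docker compose up -d
    # confirm the node joined the tailnet and is advertising itself
    docker exec tailscale tailscale status
    # on any other device in the tailnet, route all traffic through it
    tailscale set --exit-node=airvpn-exit-node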
    
    • Mr. Forager@lemmy.world (OP) · 1 year ago

      Wow! You know what, I was just thinking about using Gluetun for this before I went to bed last night, and then I wake up to this gem of a message!! 😅 Well done sir, I'll be cooking this up ASAP!

      • zzzzzz@lemmy.ml · 1 year ago

        Let me know how it works out for you! I'm happy to be able to share this. I was very pleased with myself but had no one to tell haha. I actually have several copies of this set up with each Gluetun instance connected to different countries. Then, changing country is as easy as changing your tailnet exit node!
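
        If you want to try the multi-country setup, it's just more copies of the same two services in the compose file, each with its own names, state directory and country. A rough sketch of what a second pair might look like (the -ch names, Switzerland, and the hostname are only placeholders):

        # added under the same services: block as above
        gluetun-ch:
          image: qmcgaw/gluetun
          container_name: gluetun-ch
          restart: unless-stopped
          cap_add:
            - NET_ADMIN
          devices:
            - /dev/net/tun:/dev/net/tun
          environment:
            # same credentials as the first instance, only the country changes
            - VPN_SERVICE_PROVIDER=airvpn
            - VPN_TYPE=wireguard
            - WIREGUARD_PRIVATE_KEY=xxx
            - WIREGUARD_ADDRESSES=xxx
            - SERVER_COUNTRIES=Switzerland

        tailscale-ch:
          image: tailscale/tailscale
          container_name: tailscale-ch
          restart: unless-stopped
          network_mode: "service:gluetun-ch"
          cap_add:
            - NET_ADMIN
            - NET_RAW
          volumes:
            # each instance needs its own state directory
            - ./tailscale-ch/state:/state
            - /dev/net/tun:/dev/net/tun
          environment:
            - TS_HOSTNAME=airvpn-ch-exit-node
            - TS_AUTHKEY=xxxxxxxx
            - TS_EXTRA_ARGS=--advertise-exit-node
            - TS_STATE_DIR=/state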

        • Mr. Forager@lemmy.world (OP) · 1 year ago

          Awesome stuff dude! I will totally do this too 😁 Worst thing is I'm already using Gluetun, and I'm ashamed I didn't think of using it for this before… But I'm honestly gonna have to donate some money to the developer of Gluetun, as it's just so awesome.

    • RandomlyRight@sh.itjust.works · 3 months ago (edited)

      For anyone trying this: make sure you don't still have “- TS_USERSPACE=false” in your YAML from previous experimentation. After removing it, this works for me too.

      The documentation says to add sysctl entries; in Docker Compose that can be done like so:

      tailscale:
        sysctls:
          - net.ipv4.ip_forward=1
          - net.ipv6.conf.all.forwarding=1
      

      But it does not seem to make a difference for me. Does anyone know why these would not be required in this specific setup?