alysnnix/devproxy

 ██████╗ ███████╗██╗   ██╗██████╗ ██████╗  ██████╗ ██╗  ██╗██╗   ██╗
 ██╔══██╗██╔════╝██║   ██║██╔══██╗██╔══██╗██╔═══██╗╚██╗██╔╝╚██╗ ██╔╝
 ██║  ██║█████╗  ██║   ██║██████╔╝██████╔╝██║   ██║ ╚███╔╝  ╚████╔╝
 ██║  ██║██╔══╝  ╚██╗ ██╔╝██╔═══╝ ██╔══██╗██║   ██║ ██╔██╗   ╚██╔╝
 ██████╔╝███████╗ ╚████╔╝ ██║     ██║  ██║╚██████╔╝██╔╝ ██╗   ██║
 ╚═════╝ ╚══════╝  ╚═══╝  ╚═╝     ╚═╝  ╚═╝ ╚═════╝╚═╝  ╚═╝   ╚═╝

Automatic Docker port conflict resolution via per-project loopback IPs.

The Problem

When running multiple Docker Compose projects simultaneously, services like PostgreSQL and Redis bind to the same default host ports (5432, 6379), causing conflicts. You end up remapping ports in every docker-compose.yml and trying to remember which port belongs to which project.
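
For example, two otherwise-unrelated Compose files (hypothetical names) both claiming the default Postgres port:

```yaml
# acme/docker-compose.yml
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"   # binds host port 5432

# widgets/docker-compose.yml -- identical mapping, so `docker compose up`
# in widgets/ fails with "port is already allocated" while acme is running
```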

How It Works

devproxy assigns a unique loopback IP to each Docker Compose project and forwards TCP traffic on standard ports:

acme.localhost:5432    -> acme's PostgreSQL
acme.localhost:6379    -> acme's Redis
widgets.localhost:5432 -> widgets' PostgreSQL
widgets.localhost:6379 -> widgets' Redis

No changes to your existing docker-compose files. Each project uses a memorable, consistent endpoint (project.localhost:standard-port) regardless of what host port Docker assigns.
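
With the daemon running, standard clients simply target the per-project endpoint. Assuming a project named acme whose Compose file defines Postgres and Redis services:

```
psql -h acme.localhost -p 5432 -U postgres
redis-cli -h acme.localhost -p 6379 ping
```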

Quick Start (NixOS)

The recommended way to run devproxy is via the NixOS module:

{
  inputs.devproxy.url = "github:alysnnix/devproxy";

  # In your host configuration:
  imports = [ inputs.devproxy.nixosModules.default ];
  services.devproxy.enable = true;
}

The module automatically configures systemd-resolved DNS delegation, capabilities, and the systemd service.

Quick Start (Manual)

Build from source and run:

go build -o devproxy ./cmd/devproxy
sudo mkdir -p /run/devproxy
sudo ./devproxy daemon

You will also need to configure systemd-resolved to delegate .localhost queries to devproxy. Add the following to /etc/systemd/resolved.conf (or a drop-in file under /etc/systemd/resolved.conf.d/), then restart systemd-resolved:

[Resolve]
DNS=127.0.53.53
Domains=~localhost

The daemon requires CAP_NET_ADMIN (loopback IP management) and CAP_NET_BIND_SERVICE (DNS on port 53).
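
If you would rather not run the daemon via sudo, granting the two capabilities on the binary is a common alternative (a sketch; note that file capabilities are lost each time the binary is rebuilt):

```
sudo setcap 'cap_net_admin,cap_net_bind_service=+ep' ./devproxy
./devproxy daemon
```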

CLI Commands

Command                   Description
devproxy daemon           Run the daemon (normally started via systemd)
devproxy status           List active projects, IPs, and port mappings (--json for machine-readable output)
devproxy down             Stop the daemon and clean up all state
devproxy cleanup          Purge stale state (loopback IPs, DNS) without starting the daemon
devproxy doctor           Validate the entire chain: Docker socket, DNS delegation, systemd-resolved, loopback IPs
devproxy windows-setup    Generate a PowerShell script for Windows integration
devproxy windows-cleanup  Generate a PowerShell script to remove all Windows-side configuration

Windows Integration

Windows-native apps (DBeaver, Chrome, etc.) cannot access WSL2 loopback IPs. devproxy provides a Windows integration path using a Microsoft KM-TEST Loopback Adapter with netsh portproxy forwarding.

One-Time Setup

  1. Install the loopback adapter: Open Device Manager -> Add legacy hardware -> Network adapters -> Microsoft -> KM-TEST Loopback Adapter

  2. Find the adapter name:

     Get-NetAdapter | Where-Object { $_.InterfaceDescription -like '*Loopback*' }

  3. Add project IPs to the adapter:

     New-NetIPAddress -InterfaceAlias "YOUR_ADAPTER_NAME" -IPAddress 10.42.x.y -PrefixLength 32

  4. Generate the full setup script: Run devproxy windows-setup in WSL2, then execute the generated PowerShell script as Administrator. This creates portproxy rules, adds IPs, and updates the Windows hosts file.

After setup, acme.localhost:5432 works identically in DBeaver on Windows and psql in WSL2.
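
Under the hood, the generated script issues netsh portproxy rules of roughly this shape (illustrative only; the real script substitutes the current WSL2 eth0 IP and the host port Docker assigned):

```powershell
netsh interface portproxy add v4tov4 `
  listenaddress=10.42.x.y listenport=5432 `
  connectaddress=<wsl2-eth0-ip> connectport=<docker-host-port>
```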

Maintenance

Re-run devproxy windows-setup when:

  • New projects are added or removed
  • Containers restart (Docker may assign new host ports)
  • WSL2 reboots (the eth0 IP changes)

The generated script is idempotent -- it cleans up old rules before creating new ones.

Run devproxy windows-cleanup to remove everything: adapter IPs, portproxy rules, and hosts file entries.

Architecture

Docker Socket --> devproxy daemon --> 1. Assign loopback IP (127.X.Y.1)
                                      2. Register in embedded DNS
                                      3. Start TCP forwarder (if needed)

Components:

  • Docker Watcher -- monitors container start/die events via the Docker socket, extracts Compose project names and port mappings
  • IP Manager -- deterministic IP assignment from project name using FNV-1a hash (range: 127.10.1.1 -- 127.254.254.1, ~62k slots)
  • DNS Resolver -- embedded DNS server on 127.0.53.53:53 resolving *.localhost to project IPs
  • TCP Forwarder -- pure Go TCP forwarding from project-ip:container-port to 127.0.0.1:docker-host-port
  • State -- in-memory project state, rebuilt from running containers on startup
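
The IP Manager's deterministic assignment can be sketched as follows. The FNV-1a hash and the 127.10.1.1 -- 127.254.254.1 range come from the description above; the exact hash-to-octet mapping here is an assumption for illustration, and devproxy's real scheme may differ in detail:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// projectIP hashes a Compose project name with FNV-1a and maps the
// result into 127.X.Y.1 with X in 10..254 and Y in 1..254 (~62k slots).
// Same name in, same IP out -- no state required.
func projectIP(project string) string {
	h := fnv.New32a()
	h.Write([]byte(project))
	slot := h.Sum32() % (245 * 254) // 245 X values * 254 Y values
	x := 10 + slot/254
	y := 1 + slot%254
	return fmt.Sprintf("127.%d.%d.1", x, y)
}

func main() {
	fmt.Println(projectIP("acme"))
	fmt.Println(projectIP("acme")) // same name, same IP
	fmt.Println(projectIP("widgets"))
}
```

Because the IP is a pure function of the name, a project keeps its address across daemon restarts and container churn.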

CLI commands communicate with the daemon via HTTP over a Unix socket at /run/devproxy/devproxy.sock.

How DNS Works

devproxy runs an embedded DNS server on 127.0.53.53:53. Rather than editing /etc/hosts (which can be overwritten by NetworkManager, VPN scripts, or NixOS rebuilds), it uses systemd-resolved delegation:

DNS=127.0.53.53
Domains=~localhost

This is surgical: only *.localhost queries go to devproxy. All other DNS traffic is completely unaffected. If devproxy crashes, only .localhost resolution breaks -- the rest of the system continues normally.

The .localhost TLD is used instead of .local because .local is reserved for mDNS (RFC 6762), while .localhost is guaranteed to resolve to loopback by RFC 6761.
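
To check the delegation end to end, compare a direct query against the embedded server with one routed through systemd-resolved (assuming a project named acme is running):

```
dig +short @127.0.53.53 acme.localhost
resolvectl query acme.localhost
```

If the first answers but the second does not, the DNS=/Domains= delegation is not in effect.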

Development

# Run tests
go test ./internal/... ./cmd/... -v

# Integration tests (requires Docker + root)
sudo go test -tags=integration ./integration/ -v

# Build with Nix
nix build
