Exposing Docker Containers to your LAN

Monday, the twenty-first of March, A.D. 2022

A while back I had occasion to make a number of Docker containers directly accessible on the LAN, i.e. without all the usual ceremony of port-forwardism that Docker requires. In retrospect I made it a lot more complicated than it had to be, but I wanted to document the process anyway, because you never know when that sort of thing might come in handy.

Aside: You Probably Don’t Want This

In my case, the reason for doing this was so that I could expose multiple different services that all wanted to bind the same port. In other words, given that I was going to be hosting more than one HTTP-based application, I didn’t want to have to remember (and type out all the time) a bunch of different ports to distinguish between the services I wanted to talk to. DNS is great, but it only points to IP addresses[1], after all.

[1] Well, SRV records can include ports, but browsers don’t pay attention to those.

That said, had I only realized it at the time, there’s a much better way to accomplish this than exposing entire containers to the LAN, and one that’s much less… questionable from a security standpoint: just bind multiple IPs on the host. Docker allows you to specify which IP address to bind when forwarding a port to a container, so you can forward e.g. 192.168.50.21:80 to App 1, and 192.168.50.22:80 to App 2, and neither the apps nor the users need ever worry their pretty little heads about a thing. This is better than exposing the container directly: containerized applications generally expect to be pretty isolated from a networking point of view, with external traffic only hitting the one or two ports that they specify as their window to the outside world. So if some packaged application has to run its own Redis server[2], it might not take the extra step of binding only to localhost, and congratulations, now anyone on the LAN can read your session cookies or whatever.[3]

[2] Because some people just can’t help jamming Redis into every app they write; it’s like a spinal reflex or something.

[3] Alternatively you can do what I did: set up a shared Redis server for a bunch of different applications, in Docker of course, and then knowingly expose that to the entire LAN, and damn the torpedoes. I cannot legally recommend this course of action.
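As a sketch, that multiple-IP approach looks something like this. The interface name, addresses, and image names here are all hypothetical stand-ins, and the `ip addr` commands need root:

```shell
# Give the host a couple of extra addresses on its LAN interface
# (eth0 and 192.168.50.x are examples; substitute your own)
ip addr add 192.168.50.21/24 dev eth0
ip addr add 192.168.50.22/24 dev eth0

# Bind each app's port 80 to a different host address
docker run -d -p 192.168.50.21:80:80 app1/image:tag
docker run -d -p 192.168.50.22:80:80 app2/image:tag
```

Point a DNS A record at each address and the whole thing becomes invisible to users. Note that addresses added this way won’t survive a reboot unless you also persist them in your distro’s network configuration.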

The caveat here, of course, is that you need to be sure the IP addresses you use aren’t going to be stolen out from under you by somebody’s iPad or something next time it connects to the network. This is easy if you control the DHCP server, and either easy or impossible if you don’t. For reasons I’ve never fully understood, but which probably boil down to leaving room for people to do exactly this sort of thing, many standard DHCP configurations assign IPs from just a portion of the available range. .100 is a common starting point in a /24 network, so you can usually expect that .2-.99[4] will be available for you to work your will upon.

[4] Someday I’m going to set up a network where the router is at, like, .233 or something instead of .1, just to freak out the one or two people who might ever notice.
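That gap below the pool is easy to enumerate, if you want a sanity check. Here 192.168.50.0/24 is a hypothetical subnet with the router at .1 and the DHCP pool starting at .100:

```shell
subnet=192.168.50                   # hypothetical /24; router at .1
free=$(seq -f "$subnet.%g" 2 99)    # everything below the DHCP pool
echo "$free" | head -n 1            # 192.168.50.2
echo "$free" | tail -n 1            # 192.168.50.99
```

That’s 98 addresses to hand out statically without ever colliding with the DHCP server.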

The worse solution (exposing containers directly to the LAN) has this same caveat, so it’s just worse in every way; there’s really no advantage, except that maybe it’s lower-overhead, since not as much forwarding of packets needs to take place. So yeah, probably just don’t, unless your containerized application really needs Layer 2 access to the network, like it’s an intrusion detection system and needs to keep an eye on broadcast traffic or something.

Anyway

With that all out of the way, having hopefully convinced you that this is almost never a good idea, here’s how to do it:

docker network create \
    -d ipvlan \
    --subnet 192.168.50.0/24 \
    --gateway 192.168.50.1 \
    -o parent=eth0 \
    lan

docker run --network lan --ip 192.168.50.24 some/image:tag

That’s it! You’re done, congratulations. (Obviously the --subnet, --gateway, and -o parent values should be fed values appropriate to your network.)
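To check that it worked, try reaching the container from another machine on the LAN. The address here is the example IP from the `docker run` above, and the `curl` assumes the container actually serves HTTP on port 80:

```shell
# Run these from a different machine on the LAN, not the Docker host
ping -c 1 192.168.50.24
curl -s http://192.168.50.24/   # assuming an HTTP service on port 80
```

You can also confirm what Docker thinks is going on with `docker network inspect lan`, which will list the attached containers and their addresses.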

This isn’t actually what the first draft of this post said. Initially I was going to suggest using the macvlan driver, and then go into a whole spiel about how if you do this and you also want the host to be able to talk to its containers, then you have to create another (non-Docker-managed) macvlan interface in bridge mode, then route an IP range or two via that interface, as described here.
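For the record, that macvlan workaround is roughly the following. This is a sketch under assumptions: the interface name mvlan0, the spare host address, and the container address are all hypothetical, and everything needs root:

```shell
# Host-side macvlan interface in bridge mode, so the host can
# exchange traffic with macvlan containers attached to eth0
ip link add mvlan0 link eth0 type macvlan mode bridge
ip addr add 192.168.50.2/32 dev mvlan0   # spare address for the host side
ip link set mvlan0 up

# Route the container's address (or a whole range) via that interface
ip route add 192.168.50.24/32 dev mvlan0
```

The point of the extra interface is that the kernel won’t hairpin traffic from the host out through the parent interface back to its own macvlan children, so you give the host its own foot in the macvlan world and route through that.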

ipvlan is a lot easier, though, and gives you almost exactly the same result. The only difference is that with macvlan, Docker will actually make up a MAC address for the virtual interface and respond to ARP queries and so on with that, while with ipvlan it just uses the host’s MAC. I suspect that’s actually another argument for ipvlan, as I think I remember reading that multiple MAC addresses on one physical interface is considered a Bad Sign by some network-watchdog types of things.

I’m really not sure why I ended up going for macvlan in my own case. Maybe ipvlan was a later invention, so the guides I came across weren’t aware of it? Anyway, it’s there, and it works a lot better than macvlan for most use cases, so it’s almost certainly what you should use[5].

[5] In the event that you need to use either of them, that is. Which you probably don’t.

So there you have it. You can dump containers on your LAN, and they will (from a networking standpoint) behave as if they were their own machines. But you probably don’t want to.