My Overkill Home Network
I've been meaning to write this up for a while. People ask what my home network looks like, and the answer is "probably more than it needs to be" — but I enjoy it, which is justification enough.
This post covers the physical setup, how I've segmented things, and why. I'll talk about the homelab side separately.
Importantly, this is not professional advice; please don't treat it as such.
Design Goals
Before getting into the specifics, here's what I actually wanted:
- Tinker freely — I run a lot of software I wouldn't necessarily trust, and I want to be able to do that without it becoming a problem.
- Segmentation — things I trust and things I don't shouldn't share the same network. Simple principle, takes a bit of effort to actually implement properly.
- Reduced attack surface — if something gets compromised, the blast radius should be limited.
- Cheap IoT stays away from things I care about — a £12 smart plug shouldn't have a route to my NAS.
- End User just works — the other half doesn't care about any of this. If something stops working, I hear about it immediately. In short: "Keep the WiFi working".
From my own perspective — authenticated, or on my own machine — I want to be able to reach anything else fairly easily, but not the other way around. This isn't about bypassing the auth running on the services themselves, it's about making lateral movement harder if something gets compromised. I accept that convenience involves tradeoffs, so I layer defences rather than relying on any single control. The network itself? I treat it as potentially compromised. The firewall should be doing meaningful work.
A bit of paranoia is healthy. But I try not to let it become the point — the goal is a network I can actually use.
Hardware
The physical side runs entirely on Unifi. I replaced a UDM-SE not long ago with a UCG Fiber, and the reason is worth explaining because it's a good illustration of how the details matter.
One of my uplinks is a 1.6Gbps BT/EE line on PPPoE. PPPoE adds overhead, and that overhead has real consequences at high throughput — especially when you're running IDS/IPS on top of it. The UDM-SE would get to around 1.4Gbps with everything disabled, which sounds fine until you remember that the entire point of having a gateway with threat detection is actually running it. With IDS/IPS enabled, real throughput dropped to well under 1Gbps on that line. The other uplink is Virgin Media on DHCP, which never had this problem — DHCP doesn't carry that same overhead. The UCG Fiber handles both lines properly with everything enabled. Problem solved, at the cost of replacing hardware that was otherwise working fine (and, by some miracle, only losing £40 on 4-year-old hardware on eBay).
Camera recording moved to a Unifi Instant NVR, which took over the NVR role the UDM-SE previously handled. The UCG Fiber does support recording to an NVMe/M.2 SSD, but I was not going to use one for continuous recording; I'm not made of money.
Switching is a mix of USW-24 Pro, USW Pro Max 24 PoE, a few USW Flex Mini 2.5G units, and a USW Flex 2.5G PoE. The PoE side powers most of the access points, with the minis behind TVs or at my desk.
When I moved in, I hardwired as many rooms as possible — one or two ethernet drops per room depending on size and usage. That's the foundation. Wireless is supplementary, not a replacement for cable where cable is practical.
Wireless
Four SSIDs, none of which I'll name specifically. Here's how they break down:
End User is the main trusted network. WPA3 Enterprise only — 5GHz and 6GHz, no 2.4GHz. If the device doesn't support WPA3 Enterprise, it doesn't belong here.
That last point causes occasional grief. Older Samsung TVs are a good example — some nominally support WPA3, but not with Protected Management Frames enabled. PMF is a meaningful part of what WPA3 actually brings to the table in terms of security; without it you're getting the label but not much of the substance. So the choice is: weaken the security profile of the network to accommodate one device, or put that device somewhere more appropriate. It goes on the IoT VLAN. In my view, this is the correct answer, even if it's mildly irritating every time.
Guest is isolated, WPA2/WPA3, internet access — plus a few deliberate holes. Primarily for things like AirPlay, where guests reasonably expect to be able to cast to a screen without needing to be on the main network (I enjoy not having to explain how AirPlay works to non-technical people). Punching those specific holes while keeping everything else blocked involves mDNS proxying between VLANs, which I'll come back to, because it deserves its own discussion.
IoT (Internet) covers devices that need cloud connectivity to function. WPA2, 2.4GHz and 5GHz. No access inward, outbound to the internet only where the firewall permits.
IoT (Local) is for devices that need local communication but have no legitimate reason to touch the internet (smart bulbs via LocalTuya, Matter over WiFi). Same security profile, but outbound internet is blocked at the firewall. Some devices complain about this in ways that need working around — DNS being the most common one — but it's a solvable problem.
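The DNS workaround usually amounts to a destination NAT rule: anything on that VLAN trying to reach an external resolver gets quietly redirected to the local one. Unifi expresses this through its own UI rather than raw rules, but as an nftables-style sketch (the interface name and addresses are hypothetical):

```
# Sketch only: redirect any outbound DNS from the IoT (Local) VLAN to the
# local resolver, so a device with a hardcoded 8.8.8.8 still resolves.
# "vlan40" and 10.0.40.1 are placeholder names, not my real config.
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat; policy accept;
    iifname "vlan40" udp dport 53 dnat to 10.0.40.1
    iifname "vlan40" tcp dport 53 dnat to 10.0.40.1
  }
}
```

Devices doing DNS-over-HTTPS to a hardcoded endpoint are a harder problem; those either get a firewall block and a grumpy retry loop, or a rethink about whether they belong on this VLAN at all.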
Access Points
Two U7 Pros cover the main living spaces indoors. A U7 Pro Wall is mounted in the conservatory, covering the garden (overkill). On the driveway there's a U6-LR — it's IP-rated for outdoor use, which is why it's out there, and the practical benefit is that it reaches the car. Useful for OTA updates, which is a sentence that would have sounded odd a few years ago. In the garage, a U7-Lite — I really don't need 6GHz coverage in a space where I'm not going to be doing anything wireless-intensive.
Channels and Widths
6GHz runs at 160MHz channels (all of them available in the UK). I'm aware that this makes me a bad neighbour in spectrum terms, but right now there are exactly zero other devices using 6GHz around me. I'll revisit this if that changes, but for now the throughput is worth it.
5GHz runs 80MHz, with transmit powers tweaked per-AP. The primary indoor APs are on DFS channels — the dynamic frequency selection space that most consumer gear tends to avoid because of the radar detection requirement. Around here the DFS space is consistently quiet; I haven't had a meaningful DFS event trigger in a long time. It's worth surveying your own area before committing to this, but if it's empty, there's no good reason not to use it.
The driveway U6-LR is the exception. It runs 40MHz, and specifically on a channel from the allocation (and power level) designated for outdoor use in UK regulation. This is a detail that's easy to overlook — the 5GHz band has specific sub-bands with different indoor/outdoor permissions, and using an indoor-only channel on an outdoor AP is technically a regulatory violation even if no one's likely to notice.
2.4GHz runs 20MHz. That's the right answer for an urban environment and I'm not going to argue about it.
802.1x and Camera Encryption
Both wired and wireless networks use 802.1x for authentication. Devices that don't present valid credentials get placed into a restricted zone with no useful access. The practical effect is that plugging an unknown device into a wall port — including any external cable runs — doesn't give you meaningful network access. You're authenticated or you're nowhere useful.
Camera streams are encrypted end-to-end. Some of the camera cabling runs externally, and the idea of an unencrypted video stream sitting on a cable that someone could physically access is uncomfortable enough to be worth fixing. The likelihood of someone actually doing that is low. The cost of encrypting it is zero. Easy decision.
VLANs and Segments
Rather than one flat network, everything sits in its own VLAN with firewall rules controlling what can talk to what. The rough segments:
Management is infrastructure-only. Network devices, access points, switches. Nothing else communicates here, and nothing from other VLANs should be reaching it under normal operation.
End User is for trusted devices — my machines, family devices, anything I'd consider properly within the trusted perimeter. Not to say this can get everywhere, holes are still explicitly punched. But this is the place I want to feel "flat".
Security covers cameras and the NVR. Isolated from most other segments; the cameras talk to the NVR, and very little else is permitted.
Media is for TVs, streaming sticks, and similar. These things need internet access but have no good reason to reach internal services, with the exception of Home Assistant (for HomeKit).
Network Services is the bare metal layer where services actually run.
Network Services - Virtual is where containers, virtual machines, and other virtualized services live. Kept separate from the bare metal VLAN deliberately — more on this below.
IoT (Internet) and IoT (Local) match the wireless SSIDs of the same name. Wired IoT devices land in one of these depending on whether they have any legitimate need for internet access.
Guest is the hotspot zone — internet access, plus specific exceptions for things like AirPlay. Isolated from everything else.
Firewall Approach
The general principle is: more trusted zones can reach less trusted ones where there's a reason, but not the reverse. IoT can't initiate connections into Network Services. Guest can't reach End User. Internal things can reach services fairly freely, but services don't reach back into internal zones unprompted. One big exception: the IoT devices are truly isolated from anything else internal except Home Assistant (or their controllers). Tuya gear is set up with LocalTuya and cannot phone home (except for certain stubborn devices).
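As a mental model — not the actual Unifi rule syntax, and with zone names simplified — the whole thing is a default-deny matrix: a flow is permitted only if the (source, destination) pair is explicitly listed.

```python
# Sketch of the zone-to-zone policy as a default-deny matrix.
# Zone names are illustrative; the real rules live in the Unifi firewall UI,
# and "home_assistant" here stands in for a single-host exception.
ALLOWED = {
    ("end_user", "network_services"),   # trusted -> services, fairly freely
    ("end_user", "media"),
    ("end_user", "iot_local"),
    ("iot_local", "home_assistant"),    # the big exception described above
    ("guest", "internet"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Default deny: permitted only if explicitly listed."""
    return (src, dst) in ALLOWED

# Less trusted zones can't initiate into more trusted ones.
assert not is_allowed("guest", "network_services")
assert not is_allowed("iot_internet", "network_services")
assert is_allowed("end_user", "media")
```

The useful property of thinking about it this way is that adding a device forces the question "which pairs does this actually need?" rather than "what do I need to block?".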
On top of zone-to-zone rules, there's country-level blocking on inbound traffic, and I make use of the threat intelligence and cybersecurity rulesets the gateway provides. Traffic flow logs get ingested into monitoring — I want to actually know what's happening on the network rather than assume the rules are working.
DNS
DNS runs via an AdGuard instance on a VPS that's geographically very close — within about 1ms of me (OVH London). The gateway uses it as an encrypted upstream, so DNS queries leave the house encrypted rather than in plaintext. All VLANs use this, so ad, tracker, malicious-domain, and geo-restriction blocking is network-wide rather than per-device. The exception is local services, whose records are configured directly on the Unifi gateway.
Getting this right means traffic stays inside the network where it should, and services remain accessible even if external resolution has issues. This becomes very relevant when my ability to turn lights on and off depends on HomeKit reaching Home Assistant, and Home Assistant reaching my bulbs (via Zigbee, or sometimes LocalTuya). The real test here is to pull out the modem and see whether basic home automations still work. If they don't, I've failed.
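The split itself is simple resolver selection: local zones answer on the gateway, everything else goes to the encrypted upstream. A sketch, with made-up zone names and addresses:

```python
# Sketch: resolver selection for split-horizon DNS.
# Zone names, the gateway address, and the upstream are all hypothetical.
LOCAL_ZONES = ("home.arpa", "lan")
GATEWAY_DNS = "192.168.1.1"           # answers local records itself
UPSTREAM_DOH = "adguard.example.net"  # hypothetical AdGuard-on-VPS upstream

def pick_resolver(hostname: str) -> str:
    # Naive suffix check, good enough for a sketch.
    if hostname.rstrip(".").endswith(LOCAL_ZONES):
        return GATEWAY_DNS            # never leaves the house
    return UPSTREAM_DOH               # leaves the house, but encrypted

assert pick_resolver("nas.home.arpa") == GATEWAY_DNS
assert pick_resolver("example.com") == UPSTREAM_DOH
```

The modem-pull test is exactly this logic in the failure case: every name the automations depend on has to land in the first branch.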
mDNS — A Brief Tangent
mDNS is the mechanism behind things like AirPlay. Devices announce themselves, other devices listen, things find each other. On a flat network it's seamless. On a segmented network, it's a persistent source of frustration.
The problem is that multicast doesn't cross VLAN boundaries by default. If your phone is on End User and the Apple TV is on Media, they can't hear each other's mDNS announcements. The obvious fix — collapsing everything onto one network — rather defeats the point. The actual fix is mDNS reflection, where something on the network selectively repeats announcements across VLAN boundaries.
Unifi has built-in mDNS reflection support, but you can't easily say "reflect mDNS from Media to End User, but not to Guest, except for these specific service types so that AirPlay works from Guest to this one Apple TV, but not to the NAS." Getting that level of granularity requires more configuration and some trial and error about what actually needs to propagate where.
The Guest AirPlay case is a good example of where this gets awkward. I want guests to be able to cast to a screen — it's a reasonable thing to expect. I don't want them discovering everything else on the network in the process. Threading that needle involves selectively punching holes in what is otherwise an isolated VLAN, which is fine in principle but fiddly in practice.
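Conceptually, the selective reflection I want is a filter over (source VLAN, destination VLAN, service type). The AirPlay service types below are the real mDNS names; the policy table itself is invented for illustration:

```python
# Sketch: which mDNS service types get reflected between which VLAN pairs.
# "_airplay._tcp.local" and "_raop._tcp.local" are the genuine AirPlay
# service types; the VLAN names and the policy table are illustrative.
REFLECT = {
    ("media", "end_user"): {"*"},  # trusted devices can discover everything
    ("media", "guest"): {"_airplay._tcp.local", "_raop._tcp.local"},
}

def should_reflect(src_vlan: str, dst_vlan: str, service: str) -> bool:
    """Repeat an announcement across a VLAN boundary only if permitted."""
    allowed = REFLECT.get((src_vlan, dst_vlan), set())
    return "*" in allowed or service in allowed

# Guests can discover the Apple TV, but not (say) an SMB announcement.
assert should_reflect("media", "guest", "_airplay._tcp.local")
assert not should_reflect("media", "guest", "_smb._tcp.local")
```

That last assertion is the whole point: the guest VLAN hears exactly the announcements needed for casting and nothing else.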
I could write an entire post just on mDNS. Maybe I will at some point. For now: if you're planning a segmented network and you have any Apple devices, smart TVs, or smart home kit, set aside dedicated time for mDNS wrangling. It will take longer than you expect. You might decide it's not worth it, conclude that discovery isn't inherently a bad thing, and just enable reflection across everything. I'm not an expert, and it's not down to me to tell you how to build your stuff; that's a perfectly good option.
Containers and Isolation
Containers run in the Network Services - Virtual VLAN. By default, each container is isolated from the others — no inter-container communication unless it's been explicitly permitted. Where containers do need to talk to each other, that happens via Docker Swarm, which communicates over the bare metal Network Services VLAN. Nothing talks to anything it doesn't need to.
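In Docker Compose terms, the isolation looks roughly like giving each service its own network and only joining networks where communication is intended. A sketch, with made-up service names (not my actual stack):

```yaml
# Sketch: per-service networks, joined only where communication is intended.
# Image names and network names are placeholders.
services:
  proxy:
    image: traefik:v3
    networks: [edge, app_net]     # the only service that spans both
  app:
    image: registry.example/myapp # hypothetical
    networks: [app_net]           # reachable from the proxy, nothing else
  db:
    image: postgres:16
    networks: [app_net]           # app <-> db is intended, so they share
networks:
  edge: {}                        # outward-facing
  app_net:
    internal: true                # no route to the internet from here
```

The `internal: true` flag is the Compose-level analogue of the IoT (Local) VLAN: the containers can talk to their permitted peers, but there's no path out.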
I don't treat network isolation as the only layer here. There are forward-auth proxies in front of all services — the assumption being that the network could be compromised, and a service should still require proper authentication before it gives anything useful back.
Remote Access
Two approaches, depending on what I'm trying to reach.
Tailscale is the primary mesh. It spans the homelab, personal devices, and remote nodes across various cloud providers. From my perspective, whether something is physically in my house or running on a VPS, it's all on the same network — Tailscale makes it feel flat. Devices that aren't running Tailscale natively are reachable via subnet routing from a host that is. The tailnet also has exit nodes configured, which is useful when I'm on an untrusted connection and want traffic routed somewhere sensible.
WireGuard is available via the gateway as a more traditional VPN option. It works, it's fast, and it's there when I need it. I use iOS VPN On Demand when I'm out and about on untrusted WiFi; this VPN lands functionally in my End User zone. Am I really concerned about public WiFi snooping? Not really, especially with DNS over HTTPS enabled. But I've got this stuff here, so I may as well use it; secretly I get joy from seeing my bandwidth usage graphs go up.
My preference where possible though is to expose services via Cloudflare Tunnels with Cloudflare Access in front, authenticated against Azure AD. The reason is straightforward: I don't want to roll my own authentication. Cloudflare and Microsoft each have teams dedicated to auth security; I have me, occasionally, when I have time. Delegating the authentication boundary to them gives me a much better baseline than anything I'd build and maintain myself. It also means not punching inbound holes in the firewall — the tunnel is outbound, and Cloudflare proxies traffic through it. From a firewall perspective, that's a significantly smaller attack surface than an exposed VPN endpoint. There are obviously some scenarios where this isn't practical, and other reverse proxies with OAuth/OIDC integrations are required.
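The tunnel side is a small cloudflared config. Something like the following (the tunnel ID, hostname, and port are placeholders), with the Access policy enforced on Cloudflare's side before traffic ever reaches the ingress rules:

```yaml
# Sketch of a cloudflared config.yml; IDs and hostnames are placeholders.
tunnel: 00000000-0000-0000-0000-000000000000
credentials-file: /etc/cloudflared/tunnel.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # internal service, still behind forward auth
  - service: http_status:404         # catch-all: refuse anything unmatched
```

The catch-all matters: anything not explicitly routed gets a 404 rather than falling through to something unintended.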
The layered approach means a compromise at any one level doesn't immediately mean everything is exposed. But for the authentication boundary specifically, delegation to a big player beats DIY, and that's a hill I'll die on.
The Not-So-Great Parts
It's overkill. Genuinely. A simpler setup would serve most people perfectly well. I do this because I find it interesting.
mDNS remains an annoyance. Every new device or new segment involves working out what needs to discover what, and whether the proxying is passing the right announcements. It gets more familiar with experience but never becomes straightforward.
Older devices cause friction. The Samsung TV WPA3 situation is representative of a broader pattern — consumer electronics have long lifespans, and security standards evolve faster than product cycles. I make deliberate placement decisions about devices that "work" but don't meet the bar for a particular segment. It's the right call, and it's mildly annoying every time.
Local-only IoT is a commitment. Some devices that claim to support local control still try to phone home constantly and behave strangely when they can't reach the internet. DNS tricks and firewall rules handle most of it, but knowing what each device is actually doing requires monitoring that most people don't have set up — which is another reason to have it.
Things break occasionally. More moving parts means more failure modes. The self-imposed constraint of keeping End User reliable is what prevents this from getting too experimental where it actually matters.
A disclaimer! I wrote this up, and have used an LLM (Sonnet 4.5, for those that are interested) to do formatting, basic stylistic changes, headers and breaking up of text, but the content is all me.