EnigmaCurry 6 hours ago

> Many home routers try to preserve the source port in external mappings. This is a property called “equal delta mapping” – it won’t work on all routers but for our algorithm we’re sacrificing coverage for simplicity.

It is precisely this point that has flummoxed me when connecting my p2p WireGuard config[1] with a friend who uses a pfSense router: no matter what we tried, pfSense always chose a random source port.

But in the simple case this blog outlines, if both ends use the same source port, this method punches through two firewalls effortlessly.

[1] https://blog.rymcg.tech/blog/linux/wireguard_p2p/
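The simple case can be sketched with plain UDP sockets (a minimal illustration, not the linked WireGuard setup; the port number and peer address are whatever both sides agree on out of band):

```python
import socket

def punch(local_port, peer_addr, attempts=5, timeout=1.0):
    """Simultaneous UDP hole punch using a fixed source port.

    Both peers call this with the *same* local_port. Each outbound
    packet creates a mapping in the local NAT; if the NAT preserves
    the source port, the peer's packets match that mapping and are
    let through. Returns the first payload received, or None.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(timeout)
    try:
        for _ in range(attempts):
            sock.sendto(b"punch", peer_addr)  # opens/refreshes the NAT mapping
            try:
                data, _addr = sock.recvfrom(1024)
                return data  # a packet got through: the hole is open
            except socket.timeout:
                continue  # retry; early packets are often dropped
        return None
    finally:
        sock.close()
```

Each side runs something like punch(51820, (other_public_ip, 51820)); because both peers send first, each NAT sees outbound traffic before the peer's inbound packets arrive.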

hdgvhicv an hour ago | parent | next [-]

In my experience, Cisco ASA does source-port persistence by default (falling back to a random port when it can't), FortiGates can do it (in various ways depending on version, although the fallback method in the port maps doesn't work), and Juniper SRXs can't, unless you guarantee a 1:1 mapping.

jonathanlydall 6 hours ago | parent | prev | next [-]

Does your friend setting up port forwarding on their pfSense not help in your scenario?

EnigmaCurry 5 hours ago | parent [-]

Yes, that solves it completely. But the exercise was to do it without a port forward.

hdgvhicv an hour ago | parent [-]

You're getting into birthday paradox territory: throw a few hundred packets in each direction and one will get through.

This has a good diagram to understand the options:

https://rajsinghtech.github.io/claude-diagrams/diagrams/netw...
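The "few hundred packets" intuition can be checked with a back-of-the-envelope model (assuming, simplistically, that both NATs draw their ports uniformly at random from the full 16-bit space):

```python
import math

def hit_probability(n, port_space=65536):
    """Approximate chance that two independent random sets of n ports
    (one chosen by each NAT) share at least one port -- the standard
    birthday-style collision estimate 1 - e^(-n^2 / port_space)."""
    return 1.0 - math.exp(-n * n / port_space)
```

For n = 300 packets per side this gives roughly a 75% chance of a collision, so a couple of rounds of a few hundred packets each make success very likely.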

getcrunk 3 hours ago | parent | prev [-]

AI ANSWER: (lightly edited)

The Solution (Static Port)

To fix this without a permanent port forward, you must enable Static Port in pfSense's Outbound NAT. This doesn't open a hole to the world; it simply tells pfSense: "When this internal IP sends UDP traffic, do not rewrite the source port."

Navigate to Firewall > NAT > Outbound.

Switch to Hybrid Outbound NAT (if not already).

Add a rule at the top:

Interface: WAN

Protocol: UDP

Source: [Friend's WireGuard Internal IP/Port]

Destination: [Your Public IP]

Translation: Check Static Port.
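For reference, the GUI steps above correspond roughly to pf's static-port option on a NAT rule; a sketch only (the interface name, addresses, and port are placeholders, not values from this thread):

```
# pf.conf sketch -- em0, 10.0.0.5, 51820, and 198.51.100.9 are placeholders
nat on em0 proto udp from 10.0.0.5 port 51820 to 198.51.100.9 -> (em0) static-port
```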

craftkiller 9 minutes ago | parent | next [-]

This is against the HN guidelines:

> Don't post generated comments or AI-edited comments. HN is for conversation between humans.

https://news.ycombinator.com/newsguidelines.html

Boltgolt 2 hours ago | parent | prev | next [-]

We can all run this through our LLM of choice, so why post this?

lxgr 3 hours ago | parent | prev [-]

Did you validate this solution yourself?

getcrunk 3 hours ago | parent [-]

No, hence the all-caps AI disclaimer. But it seems plausible.

nneonneo 3 hours ago | parent | next [-]

Lord, we're how many years into using LLMs, and people still don't understand that their whole shtick is to produce the most plausible output - not the most correct output?

The most plausible output might be correct, or it might be utter bullshit hallucinations that only sound correct; the only way to tell is to actually try it or cross-reference primary sources. Unless you do, the AI answer is worthless.

The reason why they're getting so good at code now is that they can check their output by running and testing it; if you're just prompting questions into a chatbot and then copying their output verbatim to a comment, you're not adding any meaningful value.

anovikov 2 hours ago | parent [-]

Exactly! This is what LLMs do: they bullshit you by coming across as extremely knowledgeable, but as soon as you understand 5% of the topic you realise you've been blatantly lied to.

lxgr an hour ago | parent [-]

Even if you get 70% blatant lies and 30% helpful ideas, if you can cheaply distinguish the two due to domain expertise, is that not still an extremely useful tool?

But to the point of this thread: If you can't validate their output at all, why would you choose to share it? This was even recently added to this site's guidelines, I believe.

lxgr an hour ago | parent | prev [-]

But then why make this comment at all, even despite the disclaimer? Anyone can prompt an LLM. What's your contribution to the conversation?

To be clear, I use LLMs to gut check ideas all the time, but the absolute minimum required to share their output, in my view, is verification (can you vouch for the generated answer based on your experience or understanding), curation (does this output add anything interesting to the conversation people couldn't have trivially prompted themselves and are missing in their comments), and adding a disclaimer if you're at all unsure about either (thanks for doing that).

But you can't skip any of these, or you're just spreading slop.