Table of Contents
- Outgoing Connections
- Required Ports and Interfaces
- Firewall Rules Table
- NodePort Services
- Related Documentation
- How to Enable `br_netfilter` on Ubuntu Linux
- How to Enable `br_netfilter` on Red Hat Enterprise Linux (RHEL)
Although this guide uses IP addresses from the 10.0.0.0/24 subnet for internal networking in its configuration examples, you can also use other subnets and address ranges for the servers in your Kubernetes cluster.
Firewall Configuration
Outgoing Connections
To successfully complete the setup and view location details in PhotoPrism®, please ensure that your firewall or HTTP/HTTPS proxy server allows outgoing connections to the following hosts:
- dl.photoprism.app
- my.photoprism.app
- cdn.photoprism.app
- maps.photoprism.app
- setup.photoprism.app
- places.photoprism.app
- places.photoprism.xyz
In addition, access to the following hosts should be allowed for pulling the required images from Docker Hub:
- auth.docker.io
- registry-1.docker.io
- index.docker.io
- dseasb33srnrn.cloudfront.net
- production.cloudflare.docker.com
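To confirm that these outgoing connections are allowed, you can probe the hosts from one of your servers, for example with curl. This is a minimal sketch; some hosts may return a redirect or error status for the root path, which still confirms connectivity, and you can extend the list with the remaining CDN hosts as needed:

```bash
# prints the HTTP status for each host, or 000 if it cannot be reached
for host in dl.photoprism.app my.photoprism.app cdn.photoprism.app \
    maps.photoprism.app setup.photoprism.app places.photoprism.app \
    places.photoprism.xyz auth.docker.io registry-1.docker.io index.docker.io; do
  curl -s -o /dev/null -w "%{http_code} $host\n" --connect-timeout 5 "https://$host"
done
```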
Required Ports and Interfaces
1. Internal Communication (10.0.0.0/24)
Depending on your specific configuration, some or all of the following ports should be open to enable communication between the cluster nodes over the internal (private) network:
- TCP 22: Secure Shell (SSH)
- TCP/UDP 53: Domain Name System (DNS)
- TCP/UDP 443: HTTPS, QUIC
- TCP/UDP 2049: NFS v4 (Network File System)
- TCP 3306: MariaDB Database Server
- TCP/UDP 6443: Kubernetes API
- TCP/UDP 8472: Flannel VXLAN (network overlay)
- TCP 10250: Kubelet API
- TCP 2379-2380: etcd (only for etcd nodes/control plane)
- TCP 10251-10252: kube-scheduler, kube-controller-manager (control plane only)
Note: Since administrators or custom applications may require access to additional ports, it is recommended not to apply firewall rules that restrict communication within the cluster.
2. Public Interfaces
Allow only the following ports for incoming traffic on public internet or intranet interfaces (as needed):
- TCP 22: SSH (for administration; restrict by IP as much as possible)
- TCP 80: HTTP Ingress (e.g. for redirects and certificate validation)
- TCP/UDP 443: HTTPS/QUIC Ingress (to access the application and admin interface)
- TCP 6443: Kubernetes API (for external access if required; restrict by IP if possible)
Firewall Rules Table
Interface | Port(s) | Protocol | Source | Purpose |
---|---|---|---|---|
Internal (10.0.0) | allow all | ICMP, TCP, UDP | Admin, Kubernetes, MariaDB | Cluster operation |
Public | n/a | ICMP | Intranet, Internet | Path MTU Discovery |
Public | 22 | TCP | Admin IPs, VPN | Secure Shell (SSH) |
Public | 80 | TCP | Intranet, Internet | HTTP Ingress (optional) |
Public | 443 | TCP, UDP | Intranet, Internet | HTTPS/QUIC Ingress |
Public | 6443 | TCP | Admin IPs, VPN | Kubernetes API |
Note: Never expose the Flannel (8472), etcd (2379–2380) or Kubelet API (10250) ports on a public interface.
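For illustration, the rules in this table could be implemented with ufw on Ubuntu roughly as follows. This is only a sketch: the interface names ens4 (internal) and eth0 (public) as well as the admin network 203.0.113.0/24 are assumptions that you must replace with your actual values:

```bash
sudo ufw default deny incoming
# internal interface: allow all cluster traffic (ufw permits ICMP by default via before.rules)
sudo ufw allow in on ens4
# public interface: only the ports listed above
sudo ufw allow in on eth0 from 203.0.113.0/24 to any port 22 proto tcp    # SSH, admin IPs/VPN only
sudo ufw allow in on eth0 to any port 80 proto tcp                        # HTTP ingress (optional)
sudo ufw allow in on eth0 to any port 443 proto tcp                       # HTTPS ingress
sudo ufw allow in on eth0 to any port 443 proto udp                       # QUIC ingress
sudo ufw allow in on eth0 from 203.0.113.0/24 to any port 6443 proto tcp  # Kubernetes API, admin IPs/VPN only
sudo ufw enable
```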
NodePort Services
The NodePort range (30000–32767, TCP/UDP) is the default port range that Kubernetes uses to publish NodePort services. This means that if services are exposed using the NodePort type (e.g. by Rancher), traffic can reach them via any node’s public or internal IP on a port in this range:
Interface | Port(s) | Protocol | Source | Purpose |
---|---|---|---|---|
Internal (10.0.0) | 30000-32767 | TCP/UDP | Cluster nodes | NodePort services |
Public | 30000-32767 | TCP/UDP | As needed only | Public NodePort svc |
If Traefik is the only ingress controller used to expose applications (through ports 80 and 443 only), the NodePort range does not need to be opened externally. In this case, NodePort ports should only be accessible on the internal interface.
However, if workloads (or Rancher features such as the dashboard, monitoring or ingress controllers) are exposed via NodePort, you need to allow inbound traffic to these ports, but only on the interface(s) where the services should be accessible:
- Typical: Open the NodePort range on the internal interface (10.0.0.x) for inter-node communication.
- Optional: Open specific NodePort ports on the public interface to expose individual services externally; for security reasons, avoid exposing the entire range to the world (see the sketch below).
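With ufw, for example, this could look as follows (again only a sketch; the interface names ens4/eth0 and the sample port 30443 are assumptions):

```bash
# allow the full NodePort range on the internal interface (assumed name: ens4)
sudo ufw allow in on ens4 to any port 30000:32767 proto tcp
sudo ufw allow in on ens4 to any port 30000:32767 proto udp
# expose only a single service's NodePort (assumed: 30443) on the public interface (assumed name: eth0)
sudo ufw allow in on eth0 to any port 30443 proto tcp
```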
Related Documentation
- k3s Network Requirements
- Rancher - Ports Requirements
- Flannel Networking
- Traefik - Exposing Services
- Kubernetes - Publishing Services (ServiceTypes)
- Kubernetes - Communication between Nodes and the Control Plane
Linux Network Configuration
Name Server Configuration
If your servers have a Linux distribution with `systemd` installed, it is recommended that you disable `systemd-resolved` before proceeding with the installation to avoid Rancher or Kubernetes name resolution problems:

```bash
sudo systemctl disable systemd-resolved --now
```

Make sure to also delete the symbolic link to the `systemd-resolved` configuration:

```bash
sudo rm /etc/resolv.conf
```

Once `systemd-resolved` is disabled, you can manually configure name resolution in `/etc/resolv.conf`, for example:
```
nameserver 8.8.8.8
nameserver 8.8.4.4
options edns0 trust-ad
search .
```
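You can then verify that name resolution works, for example:

```bash
# should print one or more IP addresses
getent hosts dl.photoprism.app
```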
If the server is on a private network without a public IP address, you can set up a forwarding nameserver and configure it in `/etc/resolv.conf` as shown in this example (make sure to replace `10.0.0.2` with the actual nameserver IP address):

```
nameserver 10.0.0.2
```
Configuring a System-wide Proxy
To use a proxy service, you can add the following lines to `/etc/environment` (replace `10.0.0.2` with the actual proxy server address):
```
HTTP_PROXY="http://10.0.0.2:3128"
HTTPS_PROXY="http://10.0.0.2:3128"
FTP_PROXY="http://10.0.0.2:3128"
NO_PROXY="localhost,10.0.0.0/8,127.0.0.0/8,172.16.0.0/12,192.168.0.0/16,::1"
```
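Since `/etc/environment` is only read when a session starts, log in again and verify that the variables are set, for example:

```bash
env | grep -i _proxy
```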
On Debian/Ubuntu, create or edit `/etc/apt/apt.conf.d/95proxies` to include the following:
```
Acquire::http::Proxy "http://10.0.0.2:3128";
Acquire::https::Proxy "http://10.0.0.2:3128";
```
The proxy settings for Snap on Ubuntu Linux can be changed with the following commands:
```bash
sudo snap set system proxy.http="http://10.0.0.2:3128"
sudo snap set system proxy.https="http://10.0.0.2:3128"
```
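You can confirm the settings with the corresponding get commands:

```bash
sudo snap get system proxy.http
sudo snap get system proxy.https
```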
Linux Kernel Modules
The `br_netfilter` Linux kernel module is crucial for Kubernetes networking, especially when using network plugins (such as Flannel, Calico, or Cilium) that rely on bridged networking. Without the module, packets traveling across Linux bridges bypass `iptables` by default, so firewall rules, NAT, and port forwarding won't apply to inter-pod or pod-to-service traffic. You must therefore ensure that the `br_netfilter` module is loaded before installing Rancher or Kubernetes cluster nodes:
How to Enable `br_netfilter` on Ubuntu Linux
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo modprobe br_netfilter
sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
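Once the module is loaded, you can optionally verify that bridged packets will traverse iptables by checking the corresponding kernel parameters, which only exist while `br_netfilter` is loaded:

```bash
# both values should be 1 so that bridged traffic passes through iptables/ip6tables
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```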
Running the command `sudo lsmod` will show a list of all currently loaded kernel modules, which should now include `br_netfilter`.
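To check for the module directly, you can filter the output:

```bash
sudo lsmod | grep br_netfilter
```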
Also ensure that your servers have the latest security updates installed:
```bash
sudo apt update
sudo apt dist-upgrade
```
Then, restart your servers before proceeding with the installation.
How to Enable `br_netfilter` on Red Hat Enterprise Linux (RHEL)
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo modprobe br_netfilter
Running the command `sudo lsmod` will show a list of all currently loaded kernel modules, which should now include `br_netfilter`.
You should also ensure that the latest security updates are installed. Then, restart your servers before proceeding with the installation.
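On RHEL, for example, both steps can be performed as follows:

```bash
# install the latest updates, then reboot before continuing
sudo dnf upgrade --refresh
sudo reboot
```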
Rancher Cluster Setup
Registering a New Cluster Node
To add a new node to a Rancher-managed cluster, run the following command on the server you want to add (replace the Rancher server hostname and authentication token, as well as the node's external and internal IP addresses):
```bash
curl -fL https://[server_hostname]/system-agent-install.sh | sudo sh -s - \
  --server https://[server_hostname] --label 'cattle.io/os=linux' \
  --token [token] \
  --address [external_ip] --internal-address [internal_ip] \
  --etcd --controlplane --worker
```
After running the script, you can use the following command to view the logs and check whether the node is connecting to the server:
```bash
sudo tail -f /var/log/syslog
```
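On distributions that do not write to /var/log/syslog (such as RHEL), you can follow the agent's systemd journal instead:

```bash
sudo journalctl -u rancher-system-agent -f
```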
If the service fails to start due to an incorrect `token`, a wrong server hostname, or another configuration problem, you can resolve these issues and then restart the service with the following command:

```bash
sudo systemctl restart rancher-system-agent
```
The following script removes the Rancher agent from the server, so you can start from scratch or use the server for other purposes:
```bash
sudo rancher-system-agent-uninstall.sh
```
Note: Be sure to remove the node from the cluster in the Rancher web interface before uninstalling the agent, shutting down the server permanently, or reinstalling its operating system.
PhotoPrism® Documentation
For more information on specific features, services and related resources, please refer to the other documentation available in our Knowledge Base and User Guide: