WebSockets BigIP F5 Load Balancer Config
Lately I’ve been playing a lot with the WebSocket protocol and ran into an interesting issue when deploying the socket server to production behind a BigIP F5 load balancer. I noticed that after the WebSocket auth handshake, the Upgrade requests started to bounce back and forth between the production nodes. Why didn’t the WebSocket connection “stick” to a specific node? After researching a bit, I found that a persistence profile needed to be configured.
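For context, a WebSocket connection starts life as a plain HTTP request that asks to be upgraded; once a node accepts the upgrade, all traffic for that connection has to keep flowing to that same node. A typical client handshake looks roughly like this (the host and path are placeholders, and the key is the RFC 6455 sample value):

GET /socket HTTP/1.1
Host: <your websocket host>
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

If the load balancer sends follow-up requests for the same client to a different node, that node has no knowledge of the established WebSocket session, which is exactly the bouncing I was seeing.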
BigIP F5 Types of persistence
Destination address affinity persistence
Also known as sticky persistence, destination address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the destination IP address of a packet.
Source address affinity persistence
Also known as simple persistence, source address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the source IP address of a packet.
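If you would rather not modify the built-in profiles, custom profiles of either type can be defined in bigip.conf style roughly like this (a sketch; the profile names and the one-hour timeout are assumptions, not taken from our setup):

# Custom destination address affinity profile, inheriting from the built-in dest_addr
ltm persistence dest-addr /Common/ws_dest_addr {
    defaults-from /Common/dest_addr
    timeout 3600
}
# Custom source address affinity profile, inheriting from the built-in source_addr
ltm persistence source-addr /Common/ws_source_addr {
    defaults-from /Common/source_addr
    timeout 3600
}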
In our case we needed to set up a source address affinity persistence profile. The Upgrade TCP requests will all “stick” to a specific node and use that node for all communication unless it goes down, in which case the load balancer falls back to another node in the pool.
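If the virtual server already exists, the persistence profile can be attached with a single tmsh command along these lines (a sketch, assuming the built-in /Common/source_addr profile and the redacted virtual server name from the config below):

# Attach source address affinity persistence to the existing virtual server
tmsh modify ltm virtual /Common/<VIP> persist replace-all-with { /Common/source_addr { default yes } }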
Here is a portion of the config, with our specifics redacted:
ltm virtual /Common/<VIP> {
    destination /Common/<ip:port>
    ip-protocol tcp
    mask 255.255.255.255
    persist {
        /Common/source_addr {
            default yes
        }
    }
    pool /Common/<Server Pool>
    profiles {
        /Common/CLIENTSSL-<wildcard host> {
            context clientside
        }
        /Common/HTTPS-SSL-X-Forwarded-Proto-XFF { }
        /Common/tcp { }
    }
    rules {
        # Define your specific iRules here - encryption, etc.
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}
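Once the persistence profile is in place, you can check the stickiness from tmsh; the command below lists the active persistence records so you can confirm a given client keeps hitting the same pool member:

# Show active persistence records (client address -> pool member mapping)
tmsh show ltm persistence persist-records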
This is mainly for future reference, but I thought I’d write a blog post for others to read in case they run into the same issue.
BigIP F5 Links: