How to run Keycloak in HA on Kubernetes
How to set up Keycloak, the open source Identity and Access Management solution, in HA on Kubernetes.
Introduction and key concepts
In this post we'll see the key concepts that you should keep in mind while deploying a Keycloak cluster on top of Kubernetes. If you want deeper knowledge of Keycloak, we recommend you check out the references at the end of this post. For a more practical point of view, you can check out our fury-kubernetes-keycloak module, which implements the concepts described in this article.
Keycloak is a complex system built in Java that runs on top of the Wildfly application server. In short, it is an authentication framework that provides applications with user federation and single sign-on (SSO) capabilities.
We invite you to check the official site or Wikipedia for a more detailed explanation.
Running Keycloak
Keycloak is a stateful system that uses two data sources to run:
- A database: used to persist permanent data, such as users' information.
- A datagrid cache: used to cache persistent data from the database and also to save short-lived and frequently-changing metadata, such as user sessions. It is implemented using Infinispan, which is usually much faster than a database; however, the data saved in Infinispan is not permanent and is not expected to persist across cluster restarts.
It can operate in four different modes:
- Standalone: one and only one instance of Keycloak. Uses the standalone.xml configuration file.
- Standalone Clustered (High Availability): you are responsible for keeping the configuration of all the instances synchronized yourself. It uses the standalone-ha.xml configuration file, and you additionally need to set up:
- A shared database connection
- A load balancer in front
- Domain Clustered: running a cluster in standalone clustered mode can quickly become aggravating as the cluster grows in size. Every time you need to make a configuration change, you have to perform it on each node in the cluster. Domain mode solves this problem by providing a central place to store and publish configuration. It uses the domain.xml configuration file.
- Cross-Datacenter Replication Mode: for when you need to run Keycloak in a cluster across multiple data centers, most typically using data center sites that are in different geographic regions. When using this mode, each data center has its own cluster of Keycloak servers.
This article will concentrate on the second type, Standalone Clustered, and we will also talk about the Cross-Datacenter Replication mode, the two modes that make sense in a Kubernetes environment. Luckily, in Kubernetes, we don't have problems synchronizing the configuration of several pods (Keycloak nodes), so the Domain Clustered mode doesn't add much to our use case.
Please note that the word cluster within the previous definitions and throughout the rest of this article refers to a group of Keycloak nodes working together and does not necessarily refer to a Kubernetes cluster.
Standalone Clustered Keycloak
To run Keycloak in standalone clustered mode, we need to:
- Configure a shared external database
- Set up a load balancer
- Supply a private network that supports IP multicast
The external database configuration is out of scope for this blog post; let's assume that there's a database up and running and that we have an endpoint to connect to it. We just have to add environment variables to our deployment pointing to that endpoint, as in the sketch below.
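A minimal sketch, assuming the official jboss/keycloak image and a PostgreSQL database reachable through a Kubernetes Service (all hostnames and Secret names below are illustrative placeholders):

# Excerpt of the Keycloak container spec: database connection via environment variables
env:
  - name: DB_VENDOR
    value: "postgres"
  - name: DB_ADDR
    value: "keycloak-db.database.svc.cluster.local"   # placeholder Service DNS name
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: "keycloak"
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-credentials   # placeholder Secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-credentials
        key: password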
To better understand how Keycloak works in a high availability (HA) cluster formation, it is important to know that it relies heavily on Wildfly's clustering capabilities. Wildfly uses several subsystems, some of them for load balancing and some of them for fail-over. The load balancing features make sure that application availability is not impacted by a node being overloaded, and the fail-over features make sure that the application stays available even if some nodes go out of service. Some of these subsystems are:
- mod_cluster: used together with Apache as an HTTP load balancer that relies on TCP multicast for node discovery by default. It can be replaced by an external load balancer.
- infinispan: a distributed cache that uses a JGroups channel as its transport layer. It can additionally use the HotRod protocol to communicate with an external Infinispan cluster to persist the cache contents.
- jgroups: provides group communication support for HA services in the form of JGroups channels. Named channel instances allow application peers in a cluster to communicate as a group, in such a way that the communication satisfies defined properties (e.g. reliable, ordered, failure-sensitive).
In the following two subsections we analyze the remaining items: the load balancer and the network requirements.
Load balancer
While setting up the load balancer and ingress controller in your Kubernetes cluster, it is important to keep the following in mind:
A few features in Keycloak rely on the fact that the remote address of the HTTP client connecting to the authentication server is the real IP address of the client machine. Configure the load balancer and the ingress to properly set the X-Forwarded-For and X-Forwarded-Proto HTTP headers and to preserve the original HOST HTTP header. Recent versions of ingress-nginx (>0.22.0) disable this by default.

Enable the proxy-address-forwarding Keycloak flag by setting the environment variable PROXY_ADDRESS_FORWARDING to true, so that Keycloak knows it is running behind a proxy.
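As an example, assuming ingress-nginx as the ingress controller, the forwarded headers and the Keycloak flag could be configured roughly like this (resource names are illustrative):

# ingress-nginx controller ConfigMap: trust and forward the client's real IP and protocol
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"

# Keycloak container spec excerpt: tell Keycloak it is running behind a proxy
env:
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"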
Enable sticky sessions on your ingress. Keycloak uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with one owner by default; in other words, a particular session is saved on just one cluster node and the other nodes have to look the session up remotely if they want to access it.

Contrary to what is stated in the documentation, in our experience making the session sticky on the AUTH_SESSION_ID cookie turned out to be problematic: Keycloak entered an infinite redirect loop. We recommend using a different cookie name for the sticky session, as in the example below.
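With ingress-nginx, cookie-based session affinity can be enabled with annotations like the following; the cookie name is an arbitrary choice of ours, the important part is that it is not AUTH_SESSION_ID:

# Ingress resource excerpt: sticky sessions based on a dedicated cookie
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "KC_ROUTE"    # any name other than AUTH_SESSION_ID
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"    # illustrative value, in seconds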
Keycloak appends the name of the node that served the request first to the AUTH_SESSION_ID cookie, and because each node in an HA configuration uses the same database, each of them needs a separate and unique node identifier for managing its database transactions. It is recommended to set both jboss.node.name and jboss.tx.node.id in the JAVA_OPTS to a value that is unique for each node; you can use, for example, the pod's name. Note that if you go with the pod's name there's an underlying hard limit of 23 characters on the length of these jboss variables, so it might be better to use a StatefulSet rather than a Deployment.
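A minimal sketch of how the pod's name can be injected into the JAVA_OPTS, assuming a StatefulSet named keycloak, so the generated pod names (keycloak-0, keycloak-1, ...) stay well under the 23-character limit:

# Keycloak container spec excerpt: unique node identifiers derived from the pod name
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: JAVA_OPTS
    # remember to keep any other options you already pass through JAVA_OPTS in this same value
    value: >-
      -Djboss.node.name=$(POD_NAME)
      -Djboss.tx.node.id=$(POD_NAME)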
Another side effect of the default single-owner cache configuration is that if one pod is deleted or restarted, its cache content will be lost. To counter this, we recommend setting the cache owners count for all the different caches to at least 2, so there will always be a second copy of each entry. One way of achieving this is to run a Wildfly CLI script at pod startup by mounting the following file in the container's /opt/jboss/startup-scripts/ folder:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
echo * Setting CACHE_OWNERS to "${env.CACHE_OWNERS}" in all cache-containers
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=actionTokens:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${env.CACHE_OWNERS:1})
run-batch
stop-embedded-server
and then set the CACHE_OWNERS environment variable to the desired owners count.
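For instance, the CLI snippet above could be shipped in a ConfigMap and mounted into the startup-scripts folder like this (resource names are illustrative):

# Keycloak pod spec excerpt: mount the CLI script and set the desired owners count
spec:
  containers:
    - name: keycloak
      env:
        - name: CACHE_OWNERS
          value: "2"
      volumeMounts:
        - name: startup-scripts
          mountPath: /opt/jboss/startup-scripts
  volumes:
    - name: startup-scripts
      configMap:
        name: keycloak-startup-scripts   # ConfigMap containing the CLI script above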
Private network with IP multicast support
If you are using Weave Net as your CNI plugin, multicast support works out of the box and your Keycloak nodes should discover each other as soon as they come up.
If you don't have IP multicast support in your Kubernetes cluster networking, you can configure JGroups to use other protocols for the nodes’ discovery.
One option is to use KUBE_DNS, which uses a headless service to discover the Keycloak nodes; you just have to tell JGroups which service name it should use to discover the nodes.

Another option is to use the KUBE_PING discovery method, which uses the Kubernetes API to discover the Keycloak nodes (you'll need to set up a serviceAccount with list and get permissions on the pods resource and configure the pods to use that serviceAccount).

The JGroups discovery method is configured by setting the JGROUPS_DISCOVERY_PROTOCOL and JGROUPS_DISCOVERY_PROPERTIES environment variables. For the KUBE_PING option you can select your pods using a namespace and labels.
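As a rough sketch, on recent jboss/keycloak images these two options map to the dns.DNS_PING and kubernetes.KUBE_PING JGroups protocols; the service name, namespace, and labels below are placeholders:

# Option 1: DNS-based discovery through a headless service
env:
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "dns.DNS_PING"
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: "dns_query=keycloak-headless.keycloak.svc.cluster.local"

# Option 2: discovery through the Kubernetes API (requires the serviceAccount described above)
env:
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "kubernetes.KUBE_PING"
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: "namespace=keycloak,labels=app=keycloak"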
⚠️ If you are using the multicast discovery method and running two or more different Keycloak clusters on the same Kubernetes cluster (let's say one in the production namespace and another in the staging namespace), then the Keycloak nodes from one namespace will be able to join the nodes from the other namespace. You should use a unique multicast address for each Keycloak cluster; this can be done by setting the flags jboss.default.multicast.address and jboss.modcluster.multicast.address in the JAVA_OPTS environment variable, for example:
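A sketch of what this could look like, with arbitrary example addresses picked from the multicast range (keep the rest of your JAVA_OPTS options in the same value):

# Keycloak cluster in the production namespace
env:
  - name: JAVA_OPTS
    value: >-
      -Djboss.default.multicast.address=230.0.0.10
      -Djboss.modcluster.multicast.address=230.0.0.11

# Keycloak cluster in the staging namespace: a different pair of addresses
env:
  - name: JAVA_OPTS
    value: >-
      -Djboss.default.multicast.address=230.0.0.20
      -Djboss.modcluster.multicast.address=230.0.0.21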
Cross-datacenter replication
Image from Keycloak's documentation.
Communication details
Keycloak uses multiple, separate clusters of Infinispan caches. Each data center has its own Keycloak cluster, composed of the Keycloak nodes of that data center. There is no direct communication between the Keycloak nodes of one data center and the Keycloak nodes of the other data center.
Keycloak nodes use an external Java Data Grid (JDG, i.e. a cluster of Infinispan servers) for communication across data centers. This communication is done using the Infinispan HotRod protocol.
The Infinispan caches on the Keycloak side must be configured with the remoteStore attribute to ensure that data is saved to the remote cache. There is a separate Infinispan cluster between the JDG servers, so the data saved on JDG1 on site1 is replicated to JDG2 on site2.
Finally, the receiving JDG server notifies the Keycloak servers in its cluster through Client Listeners, which are a feature of the HotRod protocol. Keycloak nodes on site2 then update their Infinispan caches, and the particular user session is visible on the Keycloak nodes on site2 as well.
For some caches, it is even possible not to back up at all and to completely skip writing data to the Infinispan server. To set this up, do not use the remote-store element for the particular cache on the Keycloak side (in the standalone-ha.xml file); the corresponding replicated-cache element is then also not needed on the Infinispan server side.
Server cache configuration
Keycloak has two types of caches:
- Local cache: sits in front of the database to decrease the load on the database and to decrease overall response times. Realm, client, role, and user metadata is kept in this type of cache. It does not use replication even if you are in a cluster with more Keycloak servers; if an entry is updated, an invalidation message is sent to the rest of the cluster and the entry is evicted. See the work cache below.
- Replicated cache: handles user sessions, offline tokens, and keeps track of login failures so that the server can detect password phishing and other attacks. The data held in these caches is temporary, stored in memory only, but possibly replicated across the cluster.
Infinispan caches
Authentication sessions
In Keycloak we have the concept of authentication sessions. There is a separate Infinispan cache called authenticationSessions used to save data during the authentication of a particular user. Requests from this cache usually involve only a browser and the Keycloak server, not the application. Here we can rely on sticky sessions, and the authenticationSessions cache content does not need to be replicated across data centers, even if you are in Active/Active mode.
Action tokens
We also have the concept of action tokens, which are typically used for scenarios when the user needs to confirm an action asynchronously by email. For example, during the forget password flow the actionTokens Infinispan cache is used to track metadata about the related action tokens, such as which action token was already used, so it can't be reused a second time. This cache usually needs to be replicated across data centers.
Caching and invalidation of persistent data
Keycloak uses Infinispan to cache persistent data to avoid many unnecessary requests to the database. Caching improves performance; however, it adds another challenge: when one Keycloak server updates any data, all the other Keycloak servers in all data centers need to be aware of it, so they can invalidate that data in their caches. Keycloak uses local Infinispan caches called realms, users, and authorization to cache persistent data.
We use a separate cache, work, which is replicated across all data centers. The work cache itself does not cache any real data; it is used only for sending invalidation messages between cluster nodes and data centers. In other words, when data is updated, the Keycloak node sends the invalidation message to all the other cluster nodes in the same data center and also to all the other data centers. After receiving the invalidation notice, every node then invalidates the appropriate data in its local cache.
User sessions
There are Infinispan caches called sessions, clientSessions, offlineSessions, and offlineClientSessions, all of which usually need to be replicated across data centers. These caches are used to save data about user sessions, which are valid for the length of a user's browser session. The caches must handle the HTTP requests from the end user and from the application. As described above, sticky sessions cannot be reliably used in this case, but we still want to ensure that subsequent HTTP requests can see the latest data. For this reason, the data is usually replicated across data centers.
Brute force protection
Finally, the loginFailures cache is used to track data about failed logins, such as how many times a user entered a bad password. The details are described in the Keycloak documentation. It is up to the admin whether this cache should be replicated across data centers: replication is needed to have an accurate count of login failures, while not replicating this data can save some performance. So, if performance is more important than accurate counts of login failures, the replication can be avoided.
While deploying your Infinispan cluster, you should add Keycloak's cache definitions to Infinispan's configuration file:
<replicated-cache-configuration name="keycloak-sessions" mode="ASYNC" start="EAGER" batching="false">
</replicated-cache-configuration>
<replicated-cache name="work" configuration="keycloak-sessions" />
<replicated-cache name="sessions" configuration="keycloak-sessions" />
<replicated-cache name="offlineSessions" configuration="keycloak-sessions" />
<replicated-cache name="actionTokens" configuration="keycloak-sessions" />
<replicated-cache name="loginFailures" configuration="keycloak-sessions" />
<replicated-cache name="clientSessions" configuration="keycloak-sessions" />
<replicated-cache name="offlineClientSessions" configuration="keycloak-sessions" />
It is important that the Infinispan cache cluster is up and running before starting your Keycloak cluster.
Then, you'll have to configure the remoteStore for Keycloak's caches. This can be done with a startup CLI script, as we did previously to set the CACHE_OWNERS variable: save the following snippet to a file and mount it in Keycloak's /opt/jboss/startup-scripts/ folder:
embed-server --server-config=standalone-ha.xml --std-out=echo
batch
echo *** Update infinispan subsystem ***
/subsystem=infinispan/cache-container=keycloak:write-attribute(name=module, value=org.keycloak.keycloak-model-infinispan)
echo ** Add remote socket binding to infinispan server **
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-cache:add(host=${remote.cache.host:localhost}, port=${remote.cache.port:11222})
echo ** Update replicated-cache work element **
/subsystem=infinispan/cache-container=keycloak/replicated-cache=work/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=work, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/replicated-cache=work:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache sessions element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=sessions, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache offlineSessions element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=offlineSessions, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache clientSessions element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=clientSessions, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=clientSessions:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache offlineClientSessions element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=offlineClientSessions, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineClientSessions:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache loginFailures element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
remote-servers=["remote-cache"], \
cache=loginFailures, \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache actionTokens element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=actionTokens/store=remote:add( \
passivation=false, \
fetch-state=false, \
purge=false, \
preload=false, \
shared=true, \
cache=actionTokens, \
remote-servers=["remote-cache"], \
properties={ \
rawValues=true, \
marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, \
protocolVersion=${keycloak.connectionsInfinispan.hotrodProtocolVersion} \
} \
)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=actionTokens:write-attribute(name=statistics-enabled,value=true)
echo ** Update distributed-cache authenticationSessions element **
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=statistics-enabled,value=true)
echo *** Update undertow subsystem ***
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
run-batch
stop-embedded-server
Remember to set the JAVA_OPTS so that Keycloak points to your Infinispan cluster's Hot Rod service (remote.cache.host and remote.cache.port) and to set the site name (jboss.site.name), for example:
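A minimal sketch of those settings, assuming an Infinispan Hot Rod endpoint exposed by a Service at infinispan.infinispan.svc.cluster.local:11222 and a data center called site1 (all names are placeholders; keep the rest of your JAVA_OPTS options in the same value):

# Keycloak container spec excerpt: point Keycloak to the external Infinispan cluster
env:
  - name: JAVA_OPTS
    value: >-
      -Dremote.cache.host=infinispan.infinispan.svc.cluster.local
      -Dremote.cache.port=11222
      -Djboss.site.name=site1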
References and additional documentation
- Keycloak Server Installation Guide
- Wildfly High Availability Guide
- Infinispan User Guide
- Keycloak Official Docker Image
- Infinispan Official Docker Image
If you are searching for a way to run Keycloak on Kubernetes, we invite you to try our fury-kubernetes-keycloak module, and if you also need to set up a Kubernetes cluster, we invite you to consider our open-source Kubernetes Fury Distribution.