FAQ on deploying Multiple SSO Agents with Multiple Domain Controllers
03/26/2020
Description
FAQ on deploying Multiple SSO Agents with Multiple Domain Controllers
Resolution
Do I have to add all Read-Only Domain Controllers (RODCs) and Windows Domain Controllers (WDCs) to each SSO agent's domain controller list to ensure that the SSO agents will see all security logins?
It is not recommended. Having multiple SSO agents read from the same DC or set of DCs simply increases the load on that DC, with each agent building an identical database of the users logged into it. Having two agents configured with the same DCs does give redundancy should an agent go down, so it is a good idea to have two agents per set of DCs, but more than two is not. It is better to spread the load across the agents by having them read from different DCs. So, for example, with 8 DCs and 6 agents you could set it up so that:
- Agents 1 & 2 read from DCs A, B and C.
- Agents 3 & 4 read from DCs D, E and F.
- Agents 5 & 6 read from DCs G and H.
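The distribution above can be sketched as a simple partitioning of the DCs across agent pairs. This is a minimal illustration of the recommended layout, not a SonicWall tool; the helper name is hypothetical.

```python
# Sketch: evenly partition DCs across agent pairs so that each pair of
# agents reads the same, non-overlapping set of domain controllers.
# assign_dcs_to_pairs is a hypothetical helper, not part of any SSO API.

def assign_dcs_to_pairs(dcs, num_agents):
    pairs = num_agents // 2                   # agents are deployed in pairs
    chunk = -(-len(dcs) // pairs)             # ceiling division: DCs per pair
    assignment = {}
    for i in range(pairs):
        members = (2 * i + 1, 2 * i + 2)      # agent numbers in this pair
        assignment[members] = dcs[i * chunk:(i + 1) * chunk]
    return assignment

dcs = list("ABCDEFGH")                        # the article's 8 DCs
print(assign_dcs_to_pairs(dcs, 6))
# {(1, 2): ['A', 'B', 'C'], (3, 4): ['D', 'E', 'F'], (5, 6): ['G', 'H']}
```

This reproduces the article's example: agents 1 & 2 cover A, B, C; agents 3 & 4 cover D, E, F; agents 5 & 6 cover G and H.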
So it is recommended that you pair the agents to read the same sets of domain controllers. Do not mix the agents into overlapping coverage, as that increases the amount of redundant work with no added benefit.
The “1-2-1 Group” scheme is better than the “Different Pairing” scheme. The reason is that the appliance creates its groups of agents based on which agents are configured for the same DCs. In the first case there are two groups (DCs A,B and DCs C,D), but in the second case there are four groups (DCs A,D; A,C; B,D; and B,C). So in the first case the appliance covers all DCs in just two requests, while in the second case it can take three requests to find a user who is on DC B, and the list of users on DC A will have been checked twice before that user is found.
“1-2-1 Group” – each pair of agents talks to the same group of DCs:
- Agent 1 talks to DC A, B
- Agent 2 talks to DC A, B
- Agent 3 talks to DC C, D
- Agent 4 talks to DC C, D
“Different Pairing” – Agents do not share the same DC group.
- Agent 1 talks to DC A, D
- Agent 2 talks to DC A, C
- Agent 3 talks to DC B, D
- Agent 4 talks to DC B, C
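Since the appliance forms one group per distinct set of DCs that agents are configured to read (as described above), the two schemes produce different group counts. A minimal sketch of that grouping, with illustrative names only:

```python
# Sketch: the appliance creates one agent group per distinct DC set.
# Fewer groups means fewer requests to cover all DCs on a lookup miss.
# dc_groups is an illustrative helper, not appliance code.

def dc_groups(agent_configs):
    # agent_configs: {agent_number: frozenset of DC names it reads}
    groups = {}
    for agent, dcs in agent_configs.items():
        groups.setdefault(dcs, []).append(agent)
    return groups

one_two_one = {1: frozenset("AB"), 2: frozenset("AB"),
               3: frozenset("CD"), 4: frozenset("CD")}
different   = {1: frozenset("AD"), 2: frozenset("AC"),
               3: frozenset("BD"), 4: frozenset("BC")}

print(len(dc_groups(one_two_one)))  # 2 groups: two requests cover all DCs
print(len(dc_groups(different)))    # 4 groups: up to four requests needed
```

The "1-2-1 Group" configuration collapses four agents into two groups, whereas "Different Pairing" yields four groups and therefore more lookup rounds.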
What kind of delays will we see if a user being handled by an SSO agent has to traverse a list of 12 domain controllers before discovering that the logon event is on the 12th domain controller?
Extremely small. Because the agent has already built an internal database of logged-in users, it simply looks the user up there and replies to the appliance immediately, so no delay is incurred while the agent queries anything. And it is precisely because this per-agent delay is so small that it is better to have the appliance run down a list of agent groups than to have one agent read all the users from all the DCs.
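The agent's immediate reply can be pictured as a plain in-memory lookup. This is a hypothetical sketch of the data structure involved, not the agent's actual implementation:

```python
# Sketch: the agent continuously reads DC security logs into an
# in-memory table, so answering an appliance query is a dictionary
# lookup rather than a walk of the domain controllers. Names are
# illustrative only.

logged_in = {}  # workstation IP -> username, filled as logon events arrive

def record_logon(ip, user):
    logged_in[ip] = user

def identify(ip):
    # O(1) lookup regardless of how many DCs the agent reads
    return logged_in.get(ip)

record_logon("10.0.0.15", "jsmith")  # event seen on, say, the 12th DC
print(identify("10.0.0.15"))         # jsmith -- no per-query DC traversal
```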
Can we control the order in which the SSO agents access the domain controllers listed on them?
The appliance treats the domain controllers as groups, each DC group being the DCs that are read by one agent or one set of agents. The DC groups are built dynamically in the appliance: it is not configured with any information about them, but instead learns it from the agents and creates the groupings as they report back. The DC groups are then accessed in the order in which the appliance created them on hearing back from the agents at startup. All things being equal that tends to be the order in which the agents are configured, but it can be affected by network delays, distance to each agent, the speed of each agent's PC, and so on, so the order is not guaranteed.
The agent groupings are shown in the Single Sign On section of the TSR, and they are accessed in the order shown. In this example DC 192.168.168.4 will be tried first, then DC 192.168.168.3, and if those fail to identify the user then NetAPI will be tried:
Agent 1 @ 192.168.168.3, state = up, protocol version = 4 (supported = 4)
- User ID mechanisms: NetAPI
Agent 2 @ 192.168.168.92, state = up, protocol version = 4 (supported = 4)
- User ID mechanisms: DC Logs
Agent 3 @ 192.168.168.94, state = up, protocol version = 4 (supported = 4)
- User ID mechanisms: DC Logs + NetAPI
Agent group 1:
- domain controllers: 192.168.168.4
- agents: 192.168.168.94
Agent group 2:
- domain controllers: 192.168.168.3
- agents: 192.168.168.92
Agent group 3 (default):
- domain controllers: none
- agents: 192.168.168.3
If all agents that talk to a particular group of DCs are disabled, that DC group is deleted on the appliance. If they are subsequently re-enabled, the DC group is re-created and becomes the last one accessed, so this does give a way to control the ordering. If I disable and then re-enable agent 192.168.168.94, the above order changes to:
Agent group 1:
- domain controllers: 192.168.168.3
- agents: 192.168.168.92
Agent group 2:
- domain controllers: 192.168.168.4
- agents: 192.168.168.94
Agent group 3 (default):
- domain controllers: none
- agents: 192.168.168.3
So now DC 192.168.168.4 will be tried after DC 192.168.168.3.
Note that if agents go down (by becoming unresponsive) the DC groups do not change, so once you have set them up as you want them, they will stay that way. If the appliance is rebooted or an HA pair fails over, however, the disable/re-enable procedure must be repeated to restore the ordering.
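The disable/re-enable trick described above amounts to removing a group from an ordered list and appending it on re-creation. A minimal sketch under that assumption, using the article's IP addresses; this models the behavior only and is not appliance code:

```python
# Sketch: DC groups live in an ordered list on the appliance. Disabling
# every agent of a group deletes the group; re-enabling the agents
# re-creates the group at the end of the list, which is what makes the
# access order controllable. Illustrative data structure only.

group_order = [("192.168.168.4", ["192.168.168.94"]),   # group 1: tried first
               ("192.168.168.3", ["192.168.168.92"])]   # group 2

def disable_then_reenable(dc):
    """Delete the group for `dc`, then re-create it at the end of the list."""
    global group_order
    entry = next(g for g in group_order if g[0] == dc)
    group_order = [g for g in group_order if g[0] != dc]  # deleted on disable
    group_order.append(entry)                             # re-created last

disable_then_reenable("192.168.168.4")
print([dc for dc, _ in group_order])
# ['192.168.168.3', '192.168.168.4'] -- DC .4 is now tried after DC .3
```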