UiPath Orchestrator Guide

Hardware Requirements

For a Production environment, it is highly recommended to provide one dedicated server for each role:

  • One server for the Orchestrator web application.
  • One server for SQL Server Database Engine.
  • One server for Elasticsearch and Kibana.

For a Demo, PoC, Development, UAT, or Test environment, two machines can be used: install Elasticsearch on the same server as the Orchestrator web application, and keep SQL Server on its own machine. A maximum of 100 Unattended Robots running simultaneously is assumed.

The same hardware requirements as for Production can be used for Development and Test.

Support up to 250 Unattended Robots

Web Application Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <20              | 4                     | 4        | 100      |
| <50              | 4                     | 4        | 100      |
| <100             | 4                     | 4        | 150      |
| <200             | 4                     | 4        | 200      |
| <250             | 4                     | 4        | 200      |

In terms of HDD, consider that the internal IIS logs take an average of 4 GB per day. Plan for disk space maintenance so that old log files are deleted.

Note:

For more than 200 Robots, increase the number of SQL connections in the web.config file to 200. To do this, add Max Pool Size=200 to the connection string, so that it looks something like this:
<add name="Default" providerName="System.Data.SqlClient" connectionString="Server=SQL4142;Integrated Security=True;Database=UiPath;Max Pool Size=200;" />

SQL Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <20              | 4                     | 8        | 100      |
| <50              | 4                     | 8        | 200      |
| <100             | 4                     | 8        | 300      |
| <200             | 8                     | 8        | SSD 400  |
| <250             | 8                     | 16       | SSD 400  |

Disk space requirements highly depend on:

  • Whether work queues are used. If they are, disk usage depends on the average number of transactions added daily/weekly and on the size of each transaction (number of fields, size of each field).
  • The retention period for successfully processed queue items (the customer should implement their own retention policy).
  • Whether messages logged by the Robots are stored in the database. If they are, a filter can be applied so that only specific log levels are stored in the DB (for example, store messages with log level Error and Critical in the DB, and store messages with log level Info, Warn, and Trace in Elasticsearch).
  • The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
  • The retention period for old logged messages (the customer should implement their own retention policy).
  • The logging level set on the Robot. For example, if the Robot's logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.
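The DB-vs-Elasticsearch split described above can be sketched with NLog-style logger rules (Orchestrator's logging is configured in its web.config). This is an illustration only: the target names below (robotLogsElastic, robotLogsDb) are placeholders, not Orchestrator's actual target names.

```xml
<!-- Sketch only: route all robot messages (Info and above) to Elasticsearch,
     but store only Error-level and higher messages in the SQL database.
     Target names are hypothetical placeholders. -->
<rules>
  <logger name="Robot.*" minlevel="Info"  writeTo="robotLogsElastic" />
  <logger name="Robot.*" minlevel="Error" writeTo="robotLogsDb" />
</rules>
```

Note that NLog calls its highest severity level Fatal; Orchestrator surfaces it as Critical.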

Elasticsearch Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <20              | 4                     | 4        | 100      |
| <50              | 4                     | 4        | 100      |
| <100             | 4                     | 8        | 150      |
| <200             | 4                     | 12       | 200      |
| <250             | 4                     | 12       | 200      |

Disk space requirements depend on:

  • The retention period (the customer should implement their own retention policy).
  • The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
  • The logging level set on the Robot. For example, if the Robot's logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.

Note:

For more than 50 Robots, you need to instruct the Java Virtual Machine used by Elasticsearch to use 50% of the available RAM, by setting both the -Xms and -Xmx arguments to half of the total amount of memory. This is done either through the ES_JAVA_OPTS environment variable or by editing the jvm.options file.
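For example, on a machine with 24 GB of RAM (an assumed size, for illustration only), the heap would be set to 12 GB:

```shell
# Option 1: environment variable (Linux/macOS shell syntax shown;
# on Windows use `setx ES_JAVA_OPTS "-Xms12g -Xmx12g"` instead).
# Takes effect the next time the Elasticsearch service starts.
export ES_JAVA_OPTS="-Xms12g -Xmx12g"

# Option 2: edit config/jvm.options and set both values to the same size:
#   -Xms12g
#   -Xmx12g
```

Keeping -Xms and -Xmx equal avoids heap resizing pauses at runtime.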

Support Between 250 and 500 Unattended Robots

Web Application Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <300             | 8                     | 8        | 100      |
| <400             | 8                     | 8        | 120      |
| <500             | 16                    | 8        | 150      |

Note:

For more than 400 Robots it is recommended to increase the number of CPU Cores to 16.

SQL Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <300             | 16                    | 32       | SSD 400  |
| <400             | 16                    | 32       | SSD 500  |
| <500             | 16                    | 32       | SSD 600  |

Note:

For SQL Server Standard Edition, 16 CPU cores is the maximum that edition will use. For a virtual machine, ensure these cores are allocated as 4 virtual sockets with 4 cores each (not as 2 sockets with 8 cores or 8 sockets with 2 cores). For Enterprise Edition, any socket/core combination that yields 16 cores works.

For more than 300 Robots, consider not storing all logged messages in the SQL Server database: store only the messages with log level Error and Critical in the DB, and store all messages (including Error and Critical) in Elasticsearch.

Elasticsearch Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| <300             | 4                     | 12       | 500      |
| <400             | 4                     | 16       | 600      |
| <500             | 4                     | 16       | 600      |

Support for Over 500 Unattended Robots

If Orchestrator needs to support more than 500 Robots running simultaneously, you need to deploy two or more Orchestrator nodes in a farm behind a Load Balancer. Each node should meet the hardware requirements for the number of Robots it serves by request from the Load Balancer. Remember, however, that SQL Server remains a single machine (even with Always On Availability Groups, the Primary Replica serves all the I/O requests). Therefore you need to:

  • Increase the RAM on the SQL Server to 64GB.
  • Store ONLY Error and Critical log levels from the Robot in the DB.

SQL Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
|------------------|-----------------------|----------|----------|
| 500              | 16                    | 64       | SSD 800  |

For SQL Server Standard Edition, 16 CPU cores is the maximum that edition will use. For a virtual machine, ensure these cores are allocated as 4 virtual sockets with 4 cores each (not as 2 sockets with 8 cores or 8 sockets with 2 cores). For Enterprise Edition, any socket/core combination that yields 16 cores works.

Network Load Balancer

A Network Load Balancer (a software NLB such as F5 BIG-IP, NGINX, Zen Load Balancer, or HAProxy) is required when Orchestrator is installed on multiple servers in a High Availability configuration.

The minimum requirement is a Layer 7 HTTP Load Balancer configured with a Round Robin algorithm and no affinity or sticky sessions.
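As an illustration only, a minimal NGINX configuration meeting these requirements might look like the following. Host names and certificate paths are placeholders; NGINX uses Round Robin by default when no other balancing method is specified in the upstream block.

```nginx
# Hypothetical sketch: two Orchestrator nodes behind an NGINX reverse proxy.
upstream orchestrator_nodes {
    # No ip_hash/sticky directive: requests rotate round-robin across nodes.
    server orch-node1.example.local:443;
    server orch-node2.example.local:443;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/orchestrator.crt;
    ssl_certificate_key /etc/nginx/certs/orchestrator.key;

    location / {
        proxy_pass https://orchestrator_nodes;
    }
}
```

A real deployment also needs to proxy WebSocket upgrade headers for SignalR traffic and to match your certificate setup; treat the above as a starting point, not a complete configuration.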

TCP Ports

Communication between the Robots and Orchestrator takes place on the default HTTPS port, 443. The communication port can be specified at installation time and can also be changed afterwards.

Orchestrator communicates with Elasticsearch by default on port 9200 (configurable after the Elasticsearch installation). If Elasticsearch and Orchestrator are installed on different computers, port 9200 should be opened for inbound access on the computer where Elasticsearch is installed. The firewall rule can use filters to allow access from Orchestrator only.

The Kibana web plugin listens by default on port 5601 (configurable after the Kibana installation). This port needs to be opened if Kibana must be accessible from other computers in the network, not only from the server where Orchestrator is installed.

If the High Availability deployment model (with at least 2 Orchestrator nodes) is chosen, Redis (an in-memory database used for caching and as the SignalR store) listens by default on port 6379. The Redis port should be opened for inbound access on the machine where Redis is installed.

The SQL Server port (1433 by default) needs to be opened to give the web application access to the database.
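After opening the ports above, a quick way to verify that each one is reachable from a given machine is a plain TCP connect check. The sketch below uses only the Python standard library; the host names in the commented examples are placeholders for your own servers.

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example checks against the default ports from this guide:
# port_open("orchestrator.example.local", 443)   # Orchestrator HTTPS
# port_open("elastic.example.local", 9200)       # Elasticsearch
# port_open("elastic.example.local", 5601)       # Kibana
# port_open("sql.example.local", 1433)           # SQL Server
```

A successful connect only proves the port is open; it does not validate certificates or application health.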

You can also check out hardware requirements for Studio and Robot.
