A job is the execution of a process on one or more Robots. After you create a process (by deploying a package to an environment), the next step is to execute it by creating a job.
When creating a new job, you can either assign it to specific Robots or allocate it dynamically.
Jobs assigned to specific Robots have the advantage of execution priority over dynamically allocated jobs; dynamic allocation, however, lets you execute the same process multiple times on whichever Robot becomes available first. Specifically, dynamically allocated jobs are placed in a pending state in the environment workload, and as soon as a Robot becomes Available, it executes the indicated process according to your input.
If you define several jobs to run the same process multiple times, the jobs accumulate in the environment queue and are executed whenever a Robot becomes available.
Using the Allocate Dynamically option, you can execute a process up to 10,000 times in a single job.
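The two allocation modes can be sketched as request payloads for Orchestrator's StartJobs endpoint (`/odata/Jobs/UiPath.Server.Configuration.OData.StartJobs`). This is a minimal sketch, assuming the Orchestrator REST API; the release key and Robot IDs below are hypothetical placeholders.

```python
import json

RELEASE_KEY = "00000000-0000-0000-0000-000000000000"  # hypothetical process release key

# Strategy "Specific": run the process on explicitly chosen Robots.
specific_payload = {
    "startInfo": {
        "ReleaseKey": RELEASE_KEY,
        "Strategy": "Specific",
        "RobotIds": [101, 102],  # hypothetical Robot IDs
    }
}

# Strategy "JobsCount" (Allocate Dynamically): queue N pending executions
# and let whichever Robots become Available pick them up, up to 10,000.
dynamic_payload = {
    "startInfo": {
        "ReleaseKey": RELEASE_KEY,
        "Strategy": "JobsCount",
        "JobsCount": 3,
    }
}

print(json.dumps(dynamic_payload, indent=2))
```

With "JobsCount", the three executions above are placed in the environment queue as pending jobs, mirroring the behavior described for dynamic allocation.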
Jobs can be assigned manually, from the Jobs page, or on a preplanned basis, from the Schedules page.
The Jobs page displays all jobs: those that were executed, those still running, and those in a pending state, regardless of whether they were started manually or through a schedule. Jobs started on Attended Robots from the Robot tray are also displayed here, with Agent shown as the source.
On this page, you can manually start a job, provide its input parameters (if the process is configured with any), or display its output parameters. Additionally, you can Stop or Kill a job and display the logs it generated with a single click. More details are available in the Job Details window, in case you need to troubleshoot faulted jobs.
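Supplying input parameters and stopping or killing a job can also be done programmatically. The payload sketches below assume the Orchestrator REST API, where input arguments are passed as a JSON string and jobs are stopped via `/odata/Jobs/UiPath.Server.Configuration.OData.StopJobs`; the release key, argument name, and job ID are hypothetical.

```python
import json

# Starting a job with input parameters: Orchestrator expects
# "InputArguments" as a JSON *string*, not a nested object.
start_payload = {
    "startInfo": {
        "ReleaseKey": "00000000-0000-0000-0000-000000000000",  # hypothetical
        "Strategy": "JobsCount",
        "JobsCount": 1,
        "InputArguments": json.dumps({"in_InvoiceId": "INV-42"}),  # hypothetical argument
    }
}

# Stopping or killing running jobs: "SoftStop" asks the workflow to stop
# gracefully, while "Kill" terminates the execution immediately.
stop_payload = {
    "strategy": "SoftStop",  # or "Kill"
    "jobIds": [2001],        # hypothetical job ID
}

print(start_payload["startInfo"]["InputArguments"])
```

Either payload would be sent as the JSON body of a POST request to the corresponding endpoint, authenticated against your Orchestrator tenant.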
If you start a job on multiple High-Density Robots from the same Windows Server machine, the selected process is executed by each specified Robot at the same time. An instance of each execution is created and displayed on the Jobs page.
For example, in the following screenshot, you can see the same process running on four different Robots that have an identical start time.
If you start multiple jobs on the same Robot, the first one is executed, while the others are placed in a queue, in a pending state. The Robot executes the queued jobs in order, one after the other.
For example, in the following screenshot, you can see that three different jobs were started on the same Robot. The first job is running, while the others are in a pending state.
If a Robot goes offline while executing a job, the execution is picked up from where it left off when the Robot comes back online.
If you start the same process on the same Robot multiple times while the first job has not finished executing, only the second job is placed in the queue.
If you start a job on multiple Robots from the same machine that is not running Windows Server, the selected process is executed only by the first Robot, and the rest fail. An instance of each execution is still created and displayed on the Jobs page.
If you are using High-Density Robots and RDP is not enabled on that machine, each time you start a job the following error is displayed: “A specified logon session does not exist. It may already have been terminated.” To learn how to set up your machine for High-Density Robots, see the About Setting Up Windows Server for High-Density Robots page.
For faulted unattended jobs, if the process had the Enable Recording option switched on, you can download the corresponding execution media to inspect the last moments of the execution before failure.
The Download Recording option is only displayed on the Jobs window if you have View permissions on Execution Media.
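Execution media can also be retrieved over the API. This is only a sketch: the endpoint name below is an assumption about the Orchestrator REST API and may differ in your version, and the tenant URL and job ID are hypothetical.

```python
# Build the download URL for a faulted job's execution media.
# DownloadMediaByJobId is an assumed endpoint name; verify it against
# your Orchestrator version's API reference before relying on it.
base = "https://orchestrator.example.com"  # hypothetical tenant URL
job_id = 2001                              # hypothetical faulted job ID

url = (
    f"{base}/odata/ExecutionMedia/"
    f"UiPath.Server.Configuration.OData.DownloadMediaByJobId(jobId={job_id})"
)
print(url)
```

As with the Download Recording button, the authenticated caller would need View permissions on Execution Media for such a request to succeed.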