
The objective of this tutorial is to create a cluster of Runtimes that share the same delivery schedules.

Since the Stambia Runtime's scheduler is based on Quartz, it natively supports schedule clustering.

The only requirement is to create the configuration file properly.



1. Obviously, you'll need multiple runtimes to create the cluster.

For this example, we have set up 5 runtimes on the same machine, each listening on a different port.

Refer to this article to learn how to change the runtime's default ports.

2. An RDBMS schema in which all the shared information about the schedules will be stored.

3. Using the same log database for all the runtimes is not required, but it is advisable in order to centralize the logs.

Refer to this article to learn how to change the log database.


Creating the necessary tables

The first step is to create the tables that will be used by the schedulers.

For this, sample scripts for the most common databases are provided under <stambiaRuntime>/scripts/scheduler/*.sql. Execute the script corresponding to your database in the schema in which you want to create the tables.
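Once the script has run, a quick sanity check can confirm that the scheduler tables exist. A minimal sketch on Oracle, assuming the scripts create the tables with the STB_ prefix used by the engineScheduler.properties example in this article:

```sql
-- List the Quartz scheduler tables created by the script (Oracle).
-- The STB_ prefix is an assumption based on the tablePrefix used in this
-- article's configuration; adjust it if your script uses another prefix.
SELECT table_name
FROM user_tables
WHERE table_name LIKE 'STB\_%' ESCAPE '\'
ORDER BY table_name;
```

You should see tables such as STB_TRIGGERS, STB_LOCKS, and STB_SCHEDULER_STATE, which Quartz uses to store schedules and coordinate the cluster.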


Configuring Quartz scheduling

Now it is possible to configure Quartz to run in clustered mode instead of standalone mode.

To do so, you have to create a file named engineScheduler.properties under <stambiaRuntime>/properties.

Note: A sample file is provided by default in any Stambia Runtime installation.

Example with an Oracle database:

# Configure Main Scheduler Properties
# all the runtime installations must share the same instanceName

org.quartz.scheduler.instanceName = MyClusteredScheduler
# this parameter must be set to AUTO
org.quartz.scheduler.instanceId = AUTO

# Configure ThreadPool
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool

org.quartz.threadPool.threadCount = 50
org.quartz.threadPool.threadPriority = 5

# Configure JobStore
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX

org.quartz.jobStore.useProperties = false
org.quartz.jobStore.tablePrefix = STB_
# this parameter must be set to true
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
# use stambiaRuntime as the dataSource name
org.quartz.jobStore.dataSource = stambiaRuntime

# Configure Datasources
# Specify the connection to the Quartz tables

org.quartz.dataSource.stambiaRuntime.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.stambiaRuntime.URL = jdbc:oracle:thin:@
org.quartz.dataSource.stambiaRuntime.user = STAMBIA
org.quartz.dataSource.stambiaRuntime.password = STAMBIA
org.quartz.dataSource.stambiaRuntime.maxConnections = 5
org.quartz.dataSource.stambiaRuntime.validationQuery=select 0 from dual

You can then copy this file into every runtime installation that is part of the cluster, and start (or restart) each runtime.


  • When deploying a cluster on separate machines, make sure that the clocks are synchronized. Use third-party software (such as an NTP client) to ensure this.
  • In the Configure Datasources part, set the parameters according to the database/schema in which the tables from the script have been created.
  • You may also need to adjust some jobStore parameters to match your RDBMS product.


Additional properties for Microsoft SQL Server:

  • org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.MSSQLDelegate
  • org.quartz.jobStore.selectWithLockSQL=SELECT * FROM {0}LOCKS UPDLOCK WHERE LOCK_NAME = ?
  • org.quartz.jobStore.lockHandler.class = org.quartz.impl.jdbcjobstore.UpdateLockRowSemaphore
  • org.quartz.jobStore.acquireTriggersWithinLock = true



Now, any schedule you create, modify, or remove will automatically be shared among all the runtimes.

Every runtime sharing the same cluster configuration will get the schedules created in the cluster, even if the runtime is not started or doesn't exist at the time the schedule is created.
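To verify which runtimes have actually joined the cluster, you can query the Quartz scheduler-state table. A minimal sketch, assuming the STB_ table prefix from the configuration above (LAST_CHECKIN_TIME is stored as a numeric epoch timestamp in milliseconds):

```sql
-- List the cluster members and when they last checked in.
-- Table and column names follow the standard Quartz schema, with the STB_
-- prefix from this article's configuration; adjust the prefix if yours differs.
SELECT INSTANCE_NAME, LAST_CHECKIN_TIME, CHECKIN_INTERVAL
FROM STB_SCHEDULER_STATE;
```

Each running runtime should appear as one row, checking in roughly every clusterCheckinInterval milliseconds (20 seconds in the example configuration).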


Note that there is another way to use the scheduler, with native load balancing and failover.

Refer to this article for further information.


