8.1 Server Instances
8.1.1 VMS Clustering Comparison
8.1.2 Considerations
8.1.3 Configuration
8.1.4 Status
8.2 Server Environments
WASD instances and environments are two distinct mechanisms for supporting multiple WASD server processes on a single system.
Server instances are multiple, cooperating server processes providing the same set of configured resources.
Server environments are multiple, independent server processes providing differently configured resources.
The term instance is used by WASD to describe an autonomous server process. WASD will support multiple server processes running on a single system, alone or in combination with multiple server processes running across a cluster. This is not the same as supporting multiple virtual servers (see Virtual Services of WASD Configuration). When multiple instances are configured on a single system they cooperate to distribute the request load between themselves and share certain essential resources such as accounting and authorization information.
The approach WASD has used in providing multiple instance serving may be compared in many ways to VMS clustering.
A cluster is often described as a loosely-coupled, distributed operating environment where autonomous processors can join, process and leave (even fail) independently, participating in a single management domain and communicating with one another for the purposes of resource sharing and high availability.
Similarly WASD instances run in autonomous, detached processes (across one or more systems in a cluster) using a common configuration and management interface, aware of the presence and activity of other instances (via the Distributed Lock Manager and shared memory), sharing processing load and providing rolling restart and automatic "fail-through" as required.
On a multi-CPU system there are performance advantages to having processing available for scheduling on each CPU. WASD employs AST (I/O) based processing and was not originally designed to support VMS kernel threading. Benchmarking has shown this to be quite fast and efficient even when compared to a kernel-threaded server (OSU) across 2 CPUs. The advantage of multiple CPUs for a single multi-threaded server also diminishes where a site frequently activates scripts, since these (potentially) require a CPU each for processing. Where a system has many CPUs (and to a lesser extent with only two and few script activations) WASD's single-process, AST-driven design scales more poorly. Running multiple WASD instances addresses this.
Of course load sharing is not the only advantage of multiple instances …
When multiple WASD instances are executing on a node and a restart is initiated only one process shuts down at a time. Others remain available for requests until the one restarting is again fully ready to process them itself, at which point the next commences restart. This has been termed a rolling restart. Such behaviour allows server reconfiguration on a busy site without even a small loss of availability.
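For example (a sketch only, assuming the HTTPD /DO= command-line control described in 9. Server Administration; the same action is available from the Server Administration page), a restart could be requested from DCL, and with multiple instances configured it then proceeds as the rolling restart described above:

  $ HTTPD /DO=RESTART   ! each instance restarts in turn, the others keep serving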
When multiple instances are executing on a node and one of them exits for some reason (resource exhaustion, bugcheck, etc.) the other(s) will continue to process requests. Of course, requests in progress on the failing instance at the time are disconnected (this contrasts with the rolling restart behaviour described above). If the former process has actually exited (in contrast to just the image) a new server process will automatically be created after a few seconds.
The term fail-through is used rather than failover because one server does not commence processing as another ceases. All servers are constantly active, with those remaining immediately and automatically taking all requests in the absence of any one (or more) of them.
Of course "there is no such thing as a free lunch" and supporting multiple instances is no exception to this rule. To coordinate activity between and access to shared resources, multiple instances use low-level mutexes and the VMS Distributed Lock Manager (DLM). This does add some system overhead and a little latency to request processing, however as the benchmarks indicate increases in overall request throughput on a multi-CPU system easily offset these costs. On single CPU systems the advantages of rolling restart and fail-through need to be assessed against the small cost on a per-site basis. It is to be expected many low activity sites will not require multiple instances to be active at all.
When managing multiple instances on a single node it is important to remember that requests are distributed between the processes round-robin, and that this needs to be taken into account when debugging scripts, using the Server Administration page, the likes of WATCH, etc. (see 8.1 Server Instances).
If not explicitly configured only one instance is created. The configuration directive [InstanceMax] allows multiple instances to be specified (see Global Configuration of WASD Configuration). When this is set to an integer that many instances are created and maintained. If set to "CPU" then one instance per system CPU is created. If set to "CPU-integer" then that many fewer instances than CPUs are created (e.g. "CPU-1" provides one instance for all but one CPU). The current limit on instances is eight, although this is somewhat arbitrary. As with all requests, Server Administration page access is automatically shared between instances. There are occasions when consistent access to a single instance is desirable. This is provided via an admin service (see Service Configuration of WASD Configuration).
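For illustration only (the directive and its values are as described above; the file involved would typically be the global configuration file, commonly HTTPD$CONFIG.CONF, and "#" introduces a comment in WASD configuration files), the directive might take one of the following forms:

  # exactly two instances
  [InstanceMax] 2
  # one instance per system CPU
  [InstanceMax] CPU
  # one fewer instance than the number of CPUs
  [InstanceMax] CPU-1

Only one such setting would be present at a time, and a changed value would normally be picked up at the next server restart.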
When executing, each server process has its instance number appended to the "WASD" process name, and associated scripting processes are named accordingly. This example shows such a system:
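As a generic DCL idiom (not WASD-specific, and merely a sketch), the server and scripting processes on a running system can be listed by filtering SHOW SYSTEM output for the process name prefix:

  $ PIPE SHOW SYSTEM | SEARCH SYS$INPUT "WASD"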
The instance management infrastructure distributes basic status data to all instances on the node and/or cluster. The intent is to provide an easily comprehended snapshot of multi-instance/multi-node WASD processing status. The data comprises:
The data are constrained to these items due to the need to accommodate them within a 64 byte lock value block for cluster purposes. Single node environments do not utilise the DLM, each instance updating its table entry directly.
Each node has a table with an entry for every other instance in that WASD environment. Instance data are updated once every minute, so any instance with data older than one minute is no longer behaving correctly. This could be due to some internal error, or because the instance no longer exists (e.g. it has been stopped, exited or otherwise is no longer executing). An entry for an instance that no longer exists is retained indefinitely, until a /DO=STATUS=PURGE is performed (removing all such expired entries) or a /DO=STATUS=RESET (removing all entries and allowing those currently executing to repopulate the instance data over the next minute).
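These controls are issued from the command line; as a sketch (assuming the usual HTTPD /DO= control verb, adjusted to local conventions):

  $ HTTPD /DO=STATUS=PURGE   ! remove expired (stale) instance entries
  $ HTTPD /DO=STATUS=RESET   ! remove all entries; executing instances repopulate them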
These status data are accessible via command-line and in-browser reports, intended for larger WASD installations, primarily those operating across multiple nodes in a cluster. With the data held in common across the cluster, any of the other nodes can provide a per-cluster history even if one or more nodes become completely non-operational.
This is an example report on a 132 column terminal display. Due to screen width constraints the date/time omits the year field of the date.
This example CLI report shows a single node on which a single instance was started, the configuration was changed to three instances, and a restart brought those three instances into processing. The configuration was then returned to a single instance and the existing three instances were restarted the previous day, leaving the original single instance processing. That instance was last (re)started some 54 minutes ago (a normal exit status showing) and its status was last updated some 41 seconds ago. Note that the three instances shown with their white-space struck-through with hyphens are stale, having last been updated 1 day ago. Entries older than three minutes are displayed in this format to differentiate them from current entries.
The same report on an 80 column terminal. Note that the explicit date/time has been omitted, leaving only the period since the event occurred.
Where multiple instances exist, or have existed, and the terminal page size is greater than 24 lines, HTTPDMON displays an equivalent of the 80 column report at the bottom of the display.
Similarly, the Server Admin report (9. Server Administration) shows an HTML equivalent of the 80 column report immediately below the control and time panels.
WASD server environments allow multiple, distinctly configured environments to execute on a single system. Generally, WASD's unlimited virtual servers and multiple-account scripting eliminate the need for multiple execution environments to kludge such requirements. However there may be circumstances that make this desirable; regression and forward-compatibility testing come to mind.
See Server Environments in WASD Installation for detailed information on maintaining multiple installations of WASD.