TL;DR mea culpa
The preceding posting, 'Throttle revisited plus CPU exhaustion', was published
for the valid reason that such circumstances do occur.
https://wasd.vsm.com.au/info-WASD/2025/0038
Where demand outstrips available resources, adjustments to operating
parameters need to be made, and these are some of the approaches that may
mitigate those circumstances. Often it is not possible just to cut a chunk
off, as suggested and undertaken in that document.
And if frequenters of the DECUServe.org site, both web visitors and
interactive users, have noticed some patchiness in both availability and
performance, please lay some of the blame on the harvesters pounding at the
DECUServe.org plantation (pushing the metaphor a little too hard, I concede).
I am also about to employ the old adage regarding not seeing the wood for the
trees.
https://wasd.vsm.com.au/info-WASD/2025/0004
Instead of stepping back and considering the pounding DECUServe.org was
receiving, my first response was (as usual) to assume the code was deficient
in some way, and to start looking at and (most often) tinkering with the
works. Yup, no wonder the lofty heights of Enterprise Architect escaped
me.꙳꙳ Code first, then look to see how the design falls out of that.
Hmmm, sounds kinda modern.
So, I began addressing the potential reasons that WASD was not coping on a
4xCPU 2GB ES40-class system (once quite respectable). I decline to admit how
long I laboured before it dawned that WASD was CPU bound under the current
onslaught. The solution has been done-to-death in the previous posting so we
won't go there again. Instances worked brilliantly! When the server image
wasn't ACCVIOing, getting into endless loops, and generally exhibiting all
the symptoms of repeated TIAs. I had tinkered it almost into a care home.
Hmmm. Instances seemed the processing solution. How to recover? Roll back
to a stable version. Looking at the version log, six months seemed enough.
Build and restart. Two instances. Stayed up just like a bought one. One
day, two, three days were enough to convince me. Of course there were
bugfixes to be (re)applied. And a couple of feature requests. And my own
potpourri of improvements. One or two at a time, supplemented by two days
of continuous up-time. Then a refactor of one more optimisation, in place
since VAX days ... which of course caused its own share of subtle trouble
over a week or two (thanks GB for pointing me in the right direction).
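For anyone wanting to try the same medicine: multiple WASD instances are (if
memory serves) enabled with the [InstanceMax] directive in the server
configuration file. A sketch only, not checked against the current release;
consult the WASD configuration documentation for the exact syntax and
behaviour on your version:

```
! HTTPD$CONFIG fragment (sketch; verify against the WASD documentation)
! Run two server processes so requests can be spread across CPUs;
! some versions also accept a value of CPU to match the CPU count.
[InstanceMax] 2
```

Two instances were enough here on a 4xCPU box; more instances buy more
parallelism only while there are spare CPUs to run them on.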
Then some bright spark (HG) made the point that the system is littered with
'conan' (The Librarian꙳꙳꙳) scripts, providing LOTS of cross-linked VMS
Help, text library content, and the like, to literally *thousands* of
crawlers across the globe. Should we shut these down and see what happens?
Seems so elementary. Nice to have help at your fingertips. But not
essential. And it would appear an easy target for pointless harvesting.
So, as discussed late in the previous posting, that is what was done.
Now back to a single instance and plenty of spare resource. Help? Who needs it.
┊ 0 25 50 75 100
┊ + - - - - + - - - - + - - - - + - - - - +
┊ 00025D24 WASD:80 21 ▒▒▒▒▒▒▒▒
┊ 00000422 MULTINET_SERVER 1
┊ 00011928 SSHD 0004A PTD
┊ 00000435 NTP_SERVER
┊ 00000412 ACME_SERVER
8< snip 8<
┊ + - - - - + - - - - + - - - - + - - - - +
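For completeness, shutting down the 'conan' scripts amounts to refusing to
map their paths. A sketch using WASD mapping rules, where the actual paths
are assumptions (they will differ per site); check HTTPD$MAP conventions in
the WASD documentation before copying:

```
! HTTPD$MAP fragment (sketch; paths are hypothetical examples)
! Refuse requests for the 'conan' help/library scripts rather than
! letting thousands of crawlers grind through the cross-linked content.
fail /cgi-bin/conan*
fail /conan/*
```

A `fail` rule rejects the request outright, which is cheap; the CPU cost of
actually running the script for every crawler hit is what was hurting.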
Should first have looked for the wood.
Mea culpa.
꙳꙳ I was attending a global conference in another part of the country and,
while walking back to my digs, struck up a conversation with three other
attendees. I was much further along in my working life than these young
people, who looked like recent grads. To a man, they were all Enterprise
Architects.
I thought to myself, "these must be the year's crème de la crème". When I
started after graduation I got to sit in on code reviews. System Analysts
(old fashioned term, I know) led ethereal lives consumed by interview
records, use cases, flowcharting tools, and shooting the occasional rapids.
꙳꙳꙳ With due acknowledgement to Weird Al Yankovic.
This item is one of a collection at
https://wasd.vsm.com.au/other/#occasional