Subject: [Info-WASD] DDoS in the Real (WASD) World
From: Mark.Daniel@wasd.vsm.com.au
Reply-to: info-wasd@vsm.com.au
Date: Tue, 21 Nov 2023 15:35:10 +1030
To: info-WASD@vsm.com.au

TL;DR sometimes impossible to tell the difference.

Saturday I received via the EISNER webmaster email address a note thanking me
for hosting the server (actually I just manage the web) but "Please fix the
user guide links (currently 404) https://eisner.decus.org/online/guide".
Hmmm.  It was working when originally set up (7 years ago).  A simple click in
the email would confirm.  However I had trouble connecting.  After repeated
attempts it sprang into life and looked OK.  Hmmm again.  Repeated attempts.
Looked at the Server Admin / Activity Report and there was a heap happening.

Decided to look more closely and SSHed in.  HttpdMon showed connections and
requests bolting along at the permitted concurrent maximum of 100.  A huge
variety of origin hosts.  Some repeated.  Not seen this before.  A DDoS
attack?  After the earlier post regarding HTTP/2-mediated attacks, and with
most of the requests using HTTP/2, that was my assumption (although it didn't
seem to be employing known HTTP/2 vectors -- e.g.
https://datatracker.ietf.org/doc/html/rfc7540#section-10.5).
Had the earlier post been noticed and considered an invitation?

Went to use WATCH to observe the request stream.  Trouble connecting again.
When finally reconnected, used WATCH to observe the default connection
activity plus Request [x]Header.  A significant number of requests contained
the user agents

  HackerNews/1516 CFNetwork/1474 Darwin/23.0.0
  Dart/3.1 (dart:io)
  http.rb/3.1 (Mastodon/1.5.1)

with the first an understandable red flag, along with a lot of reportedly
"android"-based agents.  Hmmm yet again.  The rest reported more commodity
agents.  Significantly, all observed requests were for /online/ URIs.
Nothing of relevance in any Referer: field.  HttpdMon geolocation data
indicated the connections were globally distributed.

This was an obviously organised onslaught that, due to the resultant
difficulties in connecting to EISNER, amounted to a Denial of Service attack
and, given the global extent, a Distributed DoS.

How and why?
~~~~~~~~~~~~
Later, after looking further, I came to a conclusion.  But more on that
shortly.  First, how it was tackled.

WASD v12.2 has extended the functionality of the #WASD_CONFIG_GLOBAL [Accept]
and [Reject] directives (and EISNER is a field test).  WASD v12.2 will be
released Real Soon Now (tm) (shortly after the recently released X86 X7.4-843
C compiler is re-released as a bug-fixed version).  While a detailed article
on this functionality is planned before the v12.2 release, the gist of the
changes is that the server stores an IP address and *immediately* disconnects
that IP should repeated reconnects occur.  The entry expires after a specified
period (one hour by default).  A very low-cost mechanism.
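
Purely to illustrate the idea (this is not WASD source and the names are
invented), a minimal Python sketch of an expiring reject list keyed on IP
address:

  import time

  # Hypothetical sketch only; not WASD code.
  REJECT_EXPIRY = 3600      # seconds an address stays rejected (1 hour default)
  _rejected = {}            # ip -> time the address was added

  def reject(ip, now=None):
      """Record an address so subsequent connects can be dropped immediately."""
      _rejected[ip] = now if now is not None else time.time()

  def is_rejected(ip, now=None):
      """True if the address is listed and the entry has not yet expired."""
      now = now if now is not None else time.time()
      added = _rejected.get(ip)
      if added is None:
          return False
      if now - added > REJECT_EXPIRY:
          del _rejected[ip]  # expired; forget the address
          return False
      return True

The check amounts to a single hash lookup per connect, which is one reason
such a mechanism can be very low cost.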

The onslaught having been underway for some 5 hours, the initial response was
to add multiple rules such as

  #WASD_CONFIG_MAP
  if (user-agent:"*hackernews*") pass * 418

to immediately disconnect the request *and* pass the IP address to the
[Reject] functionality.  In the attached image EISNER_DDoS_12.png the effect
of this is noticeable as an immediate dip at the 6 hour mark (mid-graph).
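
Sketching that handoff in the same hypothetical Python (the mapping rule above
is real WASD configuration; the function below is purely illustrative): a
matching User-Agent gets an immediate 418 and its address goes onto the
expiring reject list, so further connects are dropped at accept time.

  from fnmatch import fnmatch

  BLOCKED_AGENTS = ["*hackernews*"]     # patterns mirroring the mapping rules

  def screen_request(client_ip, user_agent):
      """Return an HTTP status to answer immediately, or None to proceed."""
      for pattern in BLOCKED_AGENTS:
          if fnmatch(user_agent.lower(), pattern):
              reject(client_ip)         # feed the reject list sketched above
              return 418                # terminate this request immediately
      return None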

Then, with the possibility of an HTTP/2-mediated DDoS in mind,

  #WASD_CONFIG_GLOBAL
  [Http2Protocol]  disabled

was set and the server restarted, seen as the first vertical line.  After 30
minutes this had made little difference to overall traffic and HTTP/2 was
re-enabled, seen as the second vertical line.

The image shows traffic slowly declining over the next 4 hours.  Both the
DDoS and the decline continued over the following 12 hours until the system
was back to a more usual traffic profile.  See attachment EISNER_DDoS_72.png.

Did the EISNER remedial action contribute to the decline?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Not by much, would be my estimate.  The Activity graph shows a couple of
obvious dents, so there was some effect.  But it eventually died away of its
own accord.  Not much can be done by the target itself against a systematic,
global bombardment.

Why did the DDoS seemingly lose momentum?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Likely it wasn't -- a DDoS, that is.  Though difficult to tell the difference
when the end result appears much the same!  The following day I searched
using my go-to engine for 'eisner.decus.org' over the previous 24 hours.
It returned a small number of hits, including

  https://www.podbean.com/site/EpisodeDownload/PB1500B4ACQWIP

  "Top daily Hacker News for November 18, 2023"

which contained, amongst a collage of many other links,

  https://eisner.decus.org/online/

The podcast was published at or about [18/Nov/2023:05:44:57 -0400], i.e.
sometime after 09:00 GMT Saturday.  The number of (eisner.decus.org:443)
access log records for that date, by hour beginning:

  02:00       7
  03:00       9
  04:00      13
  05:00     202
  06:00   7,344
  07:00   6,313
  08:00   6,472

For the whole day there were 71,347.  The day before (Friday) totalled 359,
the day after (Sunday) 10,394 and today (EISNER time, Monday) some 3,240.
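
For anyone curious how such a breakdown might be produced, a small Python
sketch (assuming the access log uses the usual [dd/Mon/yyyy:hh:mm:ss zone]
timestamp; the file name is hypothetical):

  from collections import Counter
  import re

  # match the bracketed timestamp, e.g. [18/Nov/2023:06:15:42 -0400]
  STAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}):(\d{2}):\d{2}:\d{2} [+-]\d{4}\]")

  def hourly_counts(path):
      """Count access log records per (date, hour)."""
      counts = Counter()
      with open(path, errors="replace") as log:
          for line in log:
              match = STAMP.search(line)
              if match:
                  counts[match.group(1), match.group(2)] += 1
      return counts

  for (date, hour), n in sorted(hourly_counts("access.log").items()):
      print(f"{date} {hour}:00  {n:>7,}")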

Talking points via Generative AI with accompanying links.  Hmmmmm...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[~01:55 into the podcast]

  "DECUServe is offering a treasure-trove of free consulting services.
   It's a platform rich in technical solutions and a history of hosting
   technical conferences.  Whether you are looking for peer-to-peer
   networking or access to a vast knowledge base, DECUServe is your
   go-to, with command-line and browser access to boot."

Nary a mention of [Open]VMS!

(Based on the EISNER spiel, I have low confidence in the other items.)

Take-away: sometimes impossible to tell a deliberate attack from simply being
dumped on.

PS. Undid mapping rules consigning users of certain agents to the abyss.

PPS. The preponderance of HTTP/2 was most probably a result of the commodity,
HTTP/2-capable agents actually performing the accesses.  Though undoubtedly
some/many of these were aggregators (e.g. the three listed above).

This item is one of a collection at
https://wasd.vsm.com.au/other/#occasional
