
This is not the Black Swan. The next incident is not necessarily unimaginable; you might well be able to conceive of the scenario in advance.
It’s more that we have to stop using past failures to predict future ones.
Dr Richard Cook taught me this in his seminal paper “How Complex Systems Fail”, which has its own website. All complex systems are broken all the time, and every now and then the brokenness lines up (the Swiss cheese model) to take the system down. The next combination of factors to do that is highly unlikely to resemble the last one.