Missing the availability requirements is a failure.
Safety is not only about human lives, but also about health and property (including e.g. critical financial losses or reputational damage). The present incident has obviously caused considerable damage. We can only hope that the rest of the system does not suffer from similar omissions, and that it is not mere coincidence that even worse events have not occurred.
Yeah of course, but success/failure is also not binary. There are degrees of failure: low-consequence availability issues, high-consequence availability issues, loss of operational safety, and 'never events' (e.g. significant loss of life). In this case the system suffered the second of those. It seems reasonable that design choices may accept that type of failure in order to prevent the latter ones in the list.
The first part of this argument is semantics - how do we define failure. The second part is IMHO more important - what decisions are taken with regard to the behavior of subsystems and how they influence overall system degradation. In this case the overall design prevented any loss of operational safety, which, to me, is a success.