By putting a burden on people who know something is “morally questionable,” you are discouraging people from taking actions which might reveal such knowledge to them. In the hypothetical you posted, the manager is not in a position to avoid this knowledge.
By your logic, an activist that knows about the conditions inside the plant is also responsible for the safety of the workers, and no action other than successfully protecting the workers will satisfy that responsibility. In this framework, it’s better for an outsider to actively avoid learning about the working conditions because it might impose a duty that’s impossible to fulfill.
This is a tortured misreading of what I’m saying - and regardless, it’s addressed by my argument.
> you are discouraging people from taking actions which might reveal such knowledge to them
If you are deliberately and knowingly avoiding that knowledge to escape culpability then that’s morally equivalent to having the knowledge but failing to act. A police sergeant who says “get a confession, and if you rough him up don’t tell me about it” might be protecting themselves from legal responsibility but certainly not moral responsibility.
Again: there isn’t a loophole here. Constructing a loophole is itself immoral.
> By your logic, an activist that knows about the conditions inside the plant is also responsible for the safety of the workers
No - the manager has substantially more capacity to act than the activist. There is some fuzziness - if the activist is deliberately suppressing information about conditions inside the plant, then they do share responsibility. But if the activist is merely ineffective then it’s not reasonable to blame them for injuries or fatalities. This is not a question that’s amenable to algorithms or flowcharts, which seems to be giving you some difficulty.
> and no action other than successfully protecting the workers will satisfy that responsibility.
This is not what my logic said at all! It’s such a ridiculous misreading that I wonder if you are arguing in good faith. I didn’t say the manager had to successfully protect the worker, but that they had to try. If machine maintenance is the responsibility of another team then perhaps the manager could do everything right and still have a worker killed. No one human can solve every problem that might come up. But if the manager knew there was a risk and did nothing, not even taking a look, then they share moral responsibility with the team actually responsible for maintaining the equipment. There is a huge difference between “the firefighters failed to stop the fire because it got too dangerous” and “the firefighters failed to stop the fire because they went out drinking,” even if in the latter case the fire was hopeless.
Again - you really can’t lawyer or logic your way out of moral responsibility, even if it works on a legal or social level. And no, there isn’t a foolproof algorithm for determining moral questions like this. But if your approach is “where are the gaps in this moral framework that lets me get away with bad things?” then you are not interested in acting morally.
I think the OP is concerned with the question "how do I create incentive-compatible moral codes", not "how do I behave given a moral code". In economics there are lots of cases where these two questions turn out to be very different, in a pretty counterintuitive way (except with "laws" replacing "moral codes").
Ultimately, you can try to build a perfect theoretical moral framework, but if it disagrees too much with how people feel about it in aggregate, it won't take (and you'll do more wrong than good by trying to force-feed it to people). This was, for example, one of the bigger failings of communism - it turned out that the concept of private ownership is close to fundamental to humans[0], so the then-new system failed as soon as people with guns stopped enforcing it.
--
[0] - Don't know why, but my pet hypothesis is that it's fundamental to each of us to be able to think about some things as "it's mine, I control it, I'm responsible for what happens to it". It might have been necessary for survival - if there are things your life depends on, you want absolute say over what happens to them, so that they are always there when you need them the most.
If I understand you correctly, moral culpability arises independently of your knowledge (or lack thereof) of the actual situation. Instead, it comes from your ability to effect change; even if the outcome was unforeseen (or unforeseeable?), you are responsible for the difference between the actual outcome and the best outcome that could have been achieved had you acted differently.
What I don’t understand is how awareness of potential responsibility or the act of grappling with moral questions factors into this model.