This is not like any other moral theory that you have ever seen. I am not trying to discover a moral principle that can be plugged into an argument to provide normative conclusions. In fact, I think that all attempts to do so have failed and will continue to fail. (I think that philosophers who talk about “reflective equilibrium” effectively concede that the method does not work and simultaneously announce that they are going to continue to use it even though it does not, which strikes me as irrational.)
Instead, my project involves the reverse engineering of moral agents who have the intuitions that people usually have in the trolley cases: moral agents who think that they do not have to sacrifice themselves to save a larger number, that they must not sacrifice someone else to save a larger number, and that they should direct the trolley so that it kills one instead of five. Moreover, the moral agents in question have to be produced by the standard processes of evolution by variation and natural selection.
My reverse-engineered moral agents have foundational attitudes with content, which are extended through a disposition to acknowledge some others and a consequent commitment to acknowledge all others, and which are constrained by three types of hierarchically organized desire-dependence. The development of moral agents results in the development of real moral facts, which are a perfectly ordinary sort of fact, and the development of a natural moral community.
The result of the reverse engineering is a form of intuitionism because it explains why moral agents have intuitions and when their intuitions are reliable, not because it uses intuitions to justify normative results. There is no need to do so because the account has empirical implications, implications that are confirmed. The reverse-engineering approach makes ethics scientific.
Copyright Brian Zamulinski.