I like your idea of “thinking backwards.” It reminds me of Prolog and Alexander Wissner-Gross's definition of intelligence, quoting from Wikipedia: “intelligence doesn't like to get trapped.” Thus, using AI to help us avoid certain undesired scenarios (the “trap”) would be an intelligent thing to do. The movie Minority Report also comes to mind.
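To make the Prolog connection concrete: backward chaining starts from a goal (the “trap”) and works backwards through rules until it bottoms out in known facts. Here is a minimal propositional sketch in Python; the facts, rules, and names are all invented for illustration, not any real system:

```python
# Toy backward-chaining sketch (propositional, Prolog-style).
# All facts and rules below are invented for illustration.

RULES = {
    # goal: list of alternative bodies; each body is a list of subgoals
    "trapped": [["no_exit", "pursued"]],
    "no_exit": [["door_locked"], ["surrounded"]],
}
FACTS = {"door_locked", "pursued"}

def prove(goal):
    """Prove `goal` by recursively reducing it to known facts."""
    if goal in FACTS:
        return True
    # Succeed if every subgoal of some alternative body is provable.
    return any(all(prove(sub) for sub in body)
               for body in RULES.get(goal, []))

print(prove("trapped"))  # True: door_locked => no_exit, and pursued holds
```

A trap-avoiding system would then, in principle, try to falsify at least one premise on every path that proves the undesired goal.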
However, I do believe this sort of system is inherently impossible to create, as it is akin to the halting problem or, more broadly, to predicting the future. The world is simply too complex for a system to reasonably predict, or identify a root cause for, a given future condition, even when that condition is already a past event from our current vantage point. To make such a system any less impossible, some degree of bias would be needed to force some sort of convergence, which would be an ethically questionable decision. By the way, I guess this whole point also hinges on the world being somewhat deterministic.
In the end, I guess this entire discussion boils down to how much faith we place in ethics. As I understand it, you firmly believe there is a completely correct ethical path and that, if an AI is trained to always take it, it will never go wrong. I, on the other hand, am more inclined to believe there is no such universally correct ethical path, or that, if there is one, we are nowhere near being able to find it, or at the very least to recognize it if it were shown to us.
Socialists believe the greater good stands above all individuals, while libertarians praise individual freedom above all else, including the greater good. Which one is right? To this very day, being racist is not a crime in many countries, being sexist is closer to being the default than the exception, and, in many places, homosexuality is punishable by death. I don't know where you live, but you can be sure that half of what is ok today might be unethical tomorrow. How can we even tell what ethics an AI should follow if our very own is flawed?