First and foremost, reasoning backwards is completely doable within certain limitations. For instance, the Prolog language can do it in reasonable time on a subset of first-order logic. However, keep in mind that this is a deterministic domain with perfect information (meaning nothing is hidden or unknown).
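To make the "limited domains" point concrete, here is a minimal backward-chaining sketch in Python, in the spirit of what Prolog does, restricted to propositional Horn clauses (the rules, facts, and representation are mine, made up for illustration):

```python
# Rules map a head to alternative bodies; each body is a list of subgoals.
# This toy knowledge base is acyclic and finite, so the search terminates.
RULES = {
    "mortal": [["human"]],
    "human": [["greek"]],
}
FACTS = {"greek"}

def prove(goal):
    """Reason backwards: to prove `goal`, try to prove the body of some
    rule whose head is `goal`, recursing until we bottom out at facts."""
    if goal in FACTS:
        return True
    for body in RULES.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("mortal"))   # True: mortal <- human <- greek (a known fact)
print(prove("unicorn"))  # False: no rule or fact supports it
```

The key is that every subgoal check here finishes in finite time, which is exactly the property that breaks down in the examples below.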
Beyond these domains, backtracking becomes increasingly intractable and, ultimately, impossible. In general, backtracking is akin to the halting problem, because you will always end up needing some piece of information that might take infinitely long to arrive. Consider the following example:
When you send a packet through the internet, you expect a follow-up packet telling you that your packet arrived safely. However, until you get this acknowledgment, you are in a limbo of "either my packet (or the acknowledgment) was lost, or I just haven't waited long enough". Of course, you can just resend it over and over until you get an acknowledgment of delivery (which is what computers do), but that doesn't solve the underlying issue: you solved "sending a message", but you are still clueless about whether any of the prior messages arrived. You can't even tell, in general, how many attempts you need to make until one succeeds. This is the classic Two Generals' Problem, and it has the same flavor as the halting problem.
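A toy simulation of that resend loop (the channel, the loss probability, and the function names are all made up for illustration):

```python
import random

def send(packet):
    """Simulated lossy channel: returns True if an ACK came back.
    Either the packet or the ACK may be lost (30% chance here)."""
    return random.random() > 0.3

def send_reliably(packet):
    """Resend until acknowledged. Each individual attempt may fail, so
    no fixed number of attempts is guaranteed to suffice in advance;
    the loop terminates only with probability 1, not by any bound."""
    attempts = 1
    while not send(packet):
        attempts += 1
    return attempts
```

Note that even when `send_reliably` returns, it only tells you the *last* attempt was acknowledged; it says nothing about which earlier attempts actually arrived, which is the "underlying issue" above.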
So, back to the backtracking problem. Say you are in a state N and you picture a future state F, and you want your AI system to tell you how to avoid ever getting from N to F. For instance, I want you to answer this message, so I want to avoid the state in which you didn't. I go to this AI and say, "AI, please tell me how to not get ignored by Chuan". The AI starts thinking and reasoning, but it never answers, because it can't really tell whether you will or will not answer me: it might be that you did (and the answer was lost in transit), or that you just didn't (despite all of the AI's effort), or that we just haven't waited long enough (so keep waiting).
Less formally, backtracking relies on trying out possibilities, and, for it to work, every possibility must be testable in a finite amount of time. The moment this property is lost, you cannot backtrack anymore. And, in general, there are far more undecidable things than decidable ones.
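To see why finite testability is the load-bearing property, here is a classic backtracking search, using subset-sum as the example (my choice of problem, purely illustrative): every branch either succeeds or fails in finite time, so the search as a whole terminates.

```python
def subset_sum(nums, target, chosen=()):
    """Find a subset of `nums` summing to `target` by backtracking.
    Returns the subset as a list, or None if no subset works."""
    if target == 0:
        return list(chosen)          # success: this branch checks out
    if not nums or target < 0:
        return None                  # dead end: backtrack
    first, rest = nums[0], nums[1:]
    # Branch 1: include the first number; if that fails, undo and...
    found = subset_sum(rest, target - first, chosen + (first,))
    if found is not None:
        return found
    # Branch 2: ...try excluding it instead.
    return subset_sum(rest, target, chosen)

print(subset_sum([3, 9, 8, 4], 12))   # [3, 9]
print(subset_sum([3, 9, 8], 100))     # None
```

Each recursive call shrinks `nums`, so every branch bottoms out. Replace any of these finite checks with "wait for an acknowledgment that may never come" and the whole scheme collapses.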
You might try to find a hole in this, like branching to evaluate, in parallel, both the assumption that something will happen and the assumption that it will not. However, this adds uncertainty to the AI's answer in most cases, and leaves the problem unsolvable in others. For instance, say that at some point the AI needs to know a numeric result of some kind. The resulting value could be 1, or 2, or 3.14, or 0.00001, etc. There are infinitely many possibilities, so you can't branch on all of them.
Hope this clarifies a bit why backtracking is akin to the halting problem (because it is akin to predicting the future).