MDPs are not independent of the past; rather, the future starting from the current state is independent of the past. That is, the probability of the next state given all previous states is the same as the probability of the next state given only the current state.
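Formally, writing $S_t$ for the state at time $t$, the Markov property says

$$P(S_{t+1} \mid S_t, S_{t-1}, \dots, S_0) = P(S_{t+1} \mid S_t).$$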
Any state representation that encodes the full history is an MDP, because looking the history up (encoded in your state) is not the same as looking back at previous states, so the Markov property holds. The problem is that this causes an explosion of states: the state must encode every possible trajectory, which is infeasible most of the time.
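Here is a minimal sketch of that idea (the function name and the observation encoding are illustrative, not from any library): folding the whole trajectory into the state makes it Markov by construction, but the count of distinct states grows combinatorially.

```python
# Hypothetical sketch: making a non-Markov process Markov by folding
# the full history into the state itself.

def history_state(observations):
    """The 'state' is the entire trajectory observed so far.

    Because the state already contains everything that has happened,
    the next state's distribution cannot depend on anything further
    back, so the Markov property holds trivially.
    """
    return tuple(observations)  # tuples are hashable, usable as dict keys

# The cost: the state space explodes.  With k possible observations
# per step, after t steps there are k**t possible history-states.
k, t = 10, 20
print(f"distinct history-states of length {t}: {k ** t}")  # 10**20
```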
What if I define my state as the current "main" state + previous decisions?
For example, in Poker the "main" state would be my cards and the pot, plus all previous information about the game.
Yes, it is a Markov Decision Process: since the previous decisions are part of the state, the Markov property holds by construction.
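As a sketch of such an augmented state (field names like `hole_cards` and `past_actions` are hypothetical; a real Poker state would need more detail), each transition appends the decision to the history carried inside the state:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical sketch of a history-augmented Poker state.
@dataclass(frozen=True)
class PokerState:
    hole_cards: Tuple[str, ...]    # the "main" state: my cards...
    pot: int                       # ...and the current pot
    past_actions: Tuple[str, ...]  # previous decisions, part of the state

    def after(self, action: str, new_pot: int) -> "PokerState":
        """Transition: the new state appends the action to the history,
        so the next state depends only on this state and the action."""
        return PokerState(self.hole_cards, new_pot,
                          self.past_actions + (action,))

s0 = PokerState(hole_cards=("Ah", "Kd"), pot=10, past_actions=())
s1 = s0.after("raise", new_pot=30)
print(s1.past_actions)  # ('raise',)
```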