A DeepStack-style algorithm only requires a way of approximating equilibrium counterfactual values for the subtrees at the leaves of the lookahead trees built from each of its decision points. So if I'm acting at the beginning of the pre-flop and I only have the time and memory to look ahead to the start of the flop, then I need to approximate the equilibrium counterfactual values at the root of each subtree that begins at the start of the flop.
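To make the idea concrete, here is a schematic sketch of depth-limited lookahead with an approximate value function at the leaves. The tree representation, the node fields, and `value_fn` are all illustrative assumptions, not DeepStack's actual data structures, and for simplicity the sketch backs values up max/min-style, whereas DeepStack actually re-solves the lookahead tree with CFR:

```python
# Schematic sketch only: dict-based nodes and `value_fn` are invented
# for illustration; DeepStack runs CFR over its lookahead tree instead
# of the max/min backup used here.

def lookahead_value(node, depth, value_fn):
    """Approximate the value of `node`, looking ahead `depth` plies."""
    if node["terminal"]:
        return node["payoff"]       # exact payoff at true terminals
    if depth == 0:
        return value_fn(node)       # approximate value where lookahead is cut off
    values = [lookahead_value(child, depth - 1, value_fn)
              for child in node["children"]]
    return max(values) if node["player"] == 0 else min(values)
```

The key point the sketch shares with DeepStack is the `depth == 0` branch: subtrees past the lookahead horizon are never expanded, only summarized by the value approximator.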
The version of DeepStack studied in the Science paper used three neural networks: one for values at the end of the pre-flop, one at the beginning of the flop, and one at the beginning of the turn. These, along with the lookahead tree data structures, had to fit into the 8 GB of GPU memory on the GTX 1080. Training, although not needed at play time, used millions of examples (ten million each for the pre-flop and turn networks, and one million for the flop network).
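The training itself is ordinary supervised regression: inputs describing the situation at the subtree root, targets being the counterfactual values a solver produced for it. The toy below illustrates only that framing; the single-weight "network", the synthetic data, and the learning rate are all invented stand-ins, nothing like the real architectures or datasets:

```python
# Toy illustration of the supervised-regression framing: everything
# here (one weight, synthetic targets y = 3*x, learning rate) is an
# invented stand-in for the real networks and solver-generated data.

# Synthetic dataset: feature x, "solver output" target y = 3*x.
data = [(k / 10.0, 3.0 * k / 10.0) for k in range(-50, 51)]

w = 0.0                                  # single trainable weight
for _ in range(200):                     # gradient descent on mean squared error
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

# w approaches 3.0, the weight used to generate the targets.
```

The expensive part in practice is not this fitting loop but generating the targets, since each training example requires solving a subgame.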
We have an example DeepStack implementation on GitHub that plays Leduc hold'em if you want to look at some of the programming details in greater depth. Leduc hold'em is a small toy poker game that is commonly used in the poker research community.
The DeepStack algorithm arises out of a mathematically rigorous approach to approximating Nash equilibria in two-player, zero-sum, imperfect information games. Most strong poker AIs to date attempt to approximate a Nash equilibrium to one degree or another. So rather than analyzing poker specifically, we usually analyze the properties of imperfect information games more generally to get a general algorithm, and then apply those techniques to poker as a specific example of an imperfect information game.
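As a minimal illustration of that general machinery, here is regret matching, a building block of the CFR-family algorithms used for equilibrium approximation in two-player, zero-sum games. Rock-paper-scissors stands in for a full game: in self-play, each player's *average* strategy converges toward the Nash equilibrium, which for RPS is uniform. This is a sketch of the idea, not DeepStack's actual solver:

```python
# Sketch of regret matching in self-play; RPS is a stand-in example,
# not DeepStack's actual solver.

# PAYOFF[a][b] is player 0's payoff for actions (a, b); player 1 gets
# the negative (zero-sum). Actions: 0 = rock, 1 = paper, 2 = scissors.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / 3] * 3

def self_play(iters):
    """Run regret-matching self-play; return player 0's average strategy."""
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # perturbed start, off-equilibrium
    strat_sum = [0.0, 0.0, 0.0]
    for _ in range(iters):
        s0, s1 = strategy(regrets[0]), strategy(regrets[1])
        # Expected payoff of each pure action against the opponent's strategy.
        u0 = [sum(PAYOFF[a][b] * s1[b] for b in range(3)) for a in range(3)]
        u1 = [sum(-PAYOFF[a][b] * s0[a] for a in range(3)) for b in range(3)]
        ev0 = sum(s0[a] * u0[a] for a in range(3))
        ev1 = sum(s1[b] * u1[b] for b in range(3))
        for a in range(3):
            regrets[0][a] += u0[a] - ev0   # regret for not having played a
            regrets[1][a] += u1[a] - ev1
            strat_sum[a] += s0[a]
    return [x / iters for x in strat_sum]
```

Running `self_play(100000)` produces an average strategy close to (1/3, 1/3, 1/3), even though the per-iteration strategies cycle. CFR extends this same regret-minimization idea from one decision to every information set of a sequential game.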