The short answer is that Hawkins' vision has yet to be implemented in a widely accessible way, particularly the indispensable parts related to prediction.
The long answer is that I read Hawkins' book a few years ago and was excited by the possibilities of Hierarchical Temporal Memory (HTM). I still am, despite a few reservations about some of his philosophical musings on the meaning of consciousness, free will and other such topics. I won't elaborate on those misgivings here because they're not germane to the main, overwhelming reason why HTM nets haven't succeeded as much as expected to date: to my knowledge, Numenta has implemented only a truncated version of his vision, leaving out most of the prediction architecture that plays such a critical role in his theories. As Gerod M. Bonhoff put it in an excellent thesis [1] on HTMs:
"In March of 2007, Numenta released what they claimed was a “research
implementation” of HTM theory called Numenta Platform for Intelligent
Computing (NuPIC). The algorithm used by NuPIC at this time is called
“Zeta1.” NuPIC was released as an open source software platform and
binary files of the Zeta1 algorithm. Because of licensing, this paper
is not allowed to discuss the proprietary implementation aspects of
Numenta’s Zeta1 algorithm. There are, however, generalized
concepts of implementation that can be discussed freely. The two most
important of these are how the Zeta 1 algorithm (encapsulated in each
memory node of the network hierarchy) implements HTM theory. To
implement any theory in software, an algorithmic design for each
aspect of the theory must be addressed. The most important design
decision Numenta adopted was to eliminate feedback within the
hierarchy and instead choose to simulate this theoretical concept
using only data pooling algorithms for weighting. This decision is
immediately suspect and violates key concepts of HTM. Feedback,
Hawkins’ insists, is vital to cortical function and central to his
theories. Still, Numenta claims that most HTM applicable problems can
be solved using their implementation and proprietary pooling
algorithms."
I am still learning the ropes in this field and cannot say whether Numenta has since scrapped this approach in favor of a full implementation of Hawkins' ideas, especially the all-important prediction architecture. Even if it has, this design decision probably delayed adoption by many years. That's not a criticism per se; perhaps the computational costs of tracking prediction values and updating them on the fly were too much to bear at the time, on top of the ordinary costs of processing neural nets, leaving Numenta no path except half-measures like its proprietary pooling mechanisms.

Nevertheless, the best research papers I've read on the topic since then have chosen to reimplement the algorithms rather than rely on Numenta's platform, typically because of the missing prediction features. Cases in point include Bonhoff's thesis and Maltoni's technical report for the University of Bologna Biometric System Laboratory [2]. In neither case, however, is there readily accessible software for putting their variant HTMs to immediate use (as far as I know). The gist of all this is that, as with G.K. Chesterton's famous maxim about Christianity, "HTMs have not been tried and found wanting; they have been found difficult, and left untried." Since Numenta left out the prediction steps, I assume they would be the main stumbling blocks awaiting anyone who wants to code Hawkins' full vision of what an HTM should be.
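For what it's worth, Bonhoff's application was detecting anomalous network activity, and in terms of the toy node above (again, a hypothetical illustration, not his actual method) a prediction-based anomaly check reduces to comparing the fed-back prediction against what actually arrives:

```python
# Hypothetical usage in the spirit of Bonhoff's anomaly-detection setting
# (not his actual method): flag inputs that contradict the prediction.
rng = np.random.default_rng(1)
stream = [rng.random(8) for _ in range(50)]  # stand-in for traffic features

node = PredictiveNode(n_patterns=4, n_inputs=8)
for t, x in enumerate(stream):
    expected = node.predict()   # feedback: what should come next?
    actual = node.activate(x)   # feedforward: what actually came
    if expected is not None and expected != actual:
        print(f"t={t}: possible anomaly (expected {expected}, got {actual})")
```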
[1] Bonhoff, Gerod M., 2008, Using Hierarchical Temporal Memory for Detecting Anomalous Network Activity. Presented in March 2008 at the Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio.
[2] Maltoni, Davide, 2011, Pattern Recognition by Hierarchical Temporal Memory. DEIS Technical Report published April 13, 2011. University of Bologna Biometric System Laboratory: Bologna, Italy.