Is AIXI really a big deal in artificial general intelligence research?
Yes, it is a significant theoretical contribution to AGI. AFAIK, it is the most serious attempt to build a theoretical framework or foundation for AGI. Similar works include Schmidhuber's Gödel Machines and the SOAR architecture.
AIXI is an abstract and non-anthropomorphic framework for AGI that builds on the reinforcement learning field, but without some of the usual assumptions (for example, the Markov and ergodicity assumptions; the latter guarantees that the agent can easily recover from any mistakes it made in the past). Even though some optimality properties of AIXI have been proved, it is (Turing) uncomputable (it cannot be run on a computer), so it is of very limited practical usefulness. Nonetheless, in Hutter's book Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability (2005), where several properties of AIXI are rigorously proved, a computable but intractable variant of AIXI, AIXItl, is also described. Furthermore, in the paper A Monte Carlo AIXI Approximation (2009), by Joel Veness et al., a computable and tractable approximation of AIXI is introduced. So, there have been some attempts to make AIXI practically useful.
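To give a concrete (if slightly informal) picture of why it is uncomputable, here is a sketch of the AIXI action-selection rule, roughly as it appears in Hutter's work. At time $t$, the agent picks the action that maximizes the total reward up to a horizon $m$, averaged over all environment programs $q$ (run on a universal Turing machine $U$) that are consistent with the interaction history, each weighted by $2^{-\ell(q)}$, where $\ell(q)$ is the length of $q$ (so shorter programs count more, in the spirit of Solomonoff induction and Occam's razor):

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \ldots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here the $a$'s are actions, the $o$'s are observations and the $r$'s are rewards. The inner sum ranges over all programs consistent with the history, which is what makes the exact computation impossible: roughly speaking, AIXItl restricts it to programs of bounded length and runtime, while the Monte Carlo approximation replaces the exact expectimax with sampling-based tree search over a learned model.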
The article What is AIXI? — An Introduction to General Reinforcement Learning (2015), by Jan Leike, who is one of the contributors to the development and evolution of the AIXI framework, gives a gentle introduction to the AIXI agent. See also The AIXI Architecture at the Stanford Encyclopedia of Philosophy for a possibly gentler introduction to AIXI.
Can it be thought of as a central concept for the field?
Yes, the introduction of AIXI and the related research have contributed to the evolution of the AGI field. There have been several discussions and published papers since its introduction in 2000 by Hutter in the paper A Theory of Universal Artificial Intelligence based on Algorithmic Complexity.
See e.g. section 7, "Examples of Superintelligences", of the paper Artificial General Intelligence and the Human Mental Model (2012), by Roman V. Yampolskiy and Joshua Fox. See also https://wiki.lesswrong.com/wiki/AIXI, which discusses a few problems related to AIXI that need to be solved or avoided in future AGI frameworks. Furthermore, see also this and this article.
If so, why don't we have more publications on this subject (or maybe we have and I'm not aware of them)?
There have been several publications, mainly by Marcus Hutter and associated researchers. You can see Marcus Hutter's publications on the following web page: http://www.hutter1.net/official/publ.htm.
If you are interested in contributing to this theory, there are several ways. If you are mathematically well educated, you can attempt to solve some of the problems described here (which are also mentioned in Hutter's 2005 book cited above). You can also contribute new approximations of the AIXI agent, or improvements to the existing ones. Finally, you can build a new AGI framework that avoids the problems associated with the AIXI framework. See also the projects promoted by Hutter. Before attempting to introduce a new framework (provided you are capable of it), it may be a good idea to also take into account e.g. Gödel Machines and related work.
I think that this theory has not attracted more people probably because it is highly technical and mathematical (it is not easy to understand unless you have a solid background in reinforcement learning, probability theory, etc.). I also think that most people in the AI community are not very interested in theories; they are mainly guided by practical and useful results.