Many people doing research on machine learning, a large and fundamental subfield of AI that overlaps heavily with statistics, or using machine learning to solve problems, do not have a formal or solid background in statistics (for example, they may not know exactly what the central limit theorem (CLT) states).
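To make the CLT concrete, here is a minimal simulation sketch (the distribution, sample size, and number of trials are arbitrary choices for illustration): averages of many i.i.d. draws from a non-normal distribution, such as the uniform distribution on [0, 1], are approximately normally distributed around the true mean.

```python
import random
import statistics

random.seed(1)
n, trials = 50, 2000

# Draw 'trials' independent samples of size n from Uniform(0, 1)
# and record the sample mean of each one.
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

# Uniform(0, 1) has mean 0.5 and variance 1/12, so the CLT predicts the
# sample mean is approximately Normal(0.5, sqrt(1/12/n)) for large n.
print(f"mean of sample means: {statistics.fmean(means):.3f}")
print(f"sd of sample means:   {statistics.stdev(means):.3f}")
```

The printed standard deviation should be close to the CLT prediction of sqrt(1/(12 n)), roughly 0.041 for n = 50, even though no individual draw is normally distributed.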
So, in my view, you don't need to learn everything about statistics to do research on an AI topic such as machine learning, but you do need to understand the basics (at least a full introductory college-level course on probability theory and statistics), and the more you know, the better.
More specifically, if you don't know what the CLT or the law of large numbers states, you will not fully understand many of the things that are going on. At the same time, you will find many research papers (published at ML conferences and in journals) that do not even mention hypothesis testing. Still, it's important to have an idea of what a sample, the sample mean, the sample variance, the likelihood, maximum likelihood estimation (MLE), and Bayes' theorem are. In fact, MLE is widely used in machine learning, but probably not many people doing or using ML could explain precisely what the likelihood function is.
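As a sketch of what the likelihood function actually is (the Gaussian model, fixed variance, and grid search below are illustrative assumptions, not the only way to do MLE): the likelihood is the probability density of the observed data viewed as a function of the parameter, and the MLE is the parameter value that maximizes it. For i.i.d. Gaussian data with known variance, the MLE of the mean coincides with the sample mean.

```python
import math
import random

random.seed(0)
# Simulated data: 1000 draws from a Gaussian with true mean 5 and sd 2.
data = [random.gauss(5.0, 2.0) for _ in range(1000)]

def log_likelihood(mu, xs, sigma=2.0):
    """Gaussian log-likelihood of the data, viewed as a function of mu."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

# Evaluate the log-likelihood on a grid of candidate means (3.00 to 7.00
# in steps of 0.01) and pick the maximizer.
grid = [i / 100 for i in range(300, 701)]
mle = max(grid, key=lambda mu: log_likelihood(mu, data))

sample_mean = sum(data) / len(data)
print(f"grid MLE: {mle:.2f}, sample mean: {sample_mean:.2f}")
```

Up to the grid resolution, the two printed numbers agree, which is exactly the textbook result that the sample mean maximizes the Gaussian likelihood.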
Finally, in my opinion, having a formal and solid (though not necessarily extensive) background in statistics should be a prerequisite for doing research in machine learning (you really need to know what the likelihood function is!), a field that some people have called applied/computational statistics or even glorified statistics, but not necessarily for merely using machine learning to solve a problem. Moreover, there are other areas of AI that do not make use of statistics, but ML is probably the most prominent area of AI. So, if you hate statistics, you may not like AI, and particularly ML, but maybe you will change your opinion about statistics once you understand what, e.g., neural networks are or are not capable of doing.