Yes, PAC learning can be relevant in practice. There's an area of research combining PAC learning and Bayesian learning, called PAC-Bayesian (or PAC-Bayes) learning, where the goal is to derive PAC-like bounds for Bayesian estimators.
For example, Theorem 1 (McAllester's bound) of the paper A primer on PAC-Bayesian learning (2019) by Benjamin Guedj, which provides a nice overview of the topic, gives a bound that can be used to design Bayesian estimators. An advantage of PAC-Bayes is that you get bounds on the generalization ability of the Bayesian estimator, so you do not necessarily need to evaluate your estimator on a test dataset. Sections 5 and 6 of the paper cover real-world applications of PAC-Bayes in detail.
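To make this concrete, here is a minimal sketch of how such a bound could be evaluated numerically. It assumes the standard form of McAllester's bound (with probability at least $1 - \delta$, the expected true risk of the Gibbs classifier is at most the empirical risk plus $\sqrt{\frac{\mathrm{KL}(\rho \| \pi) + \ln(2\sqrt{m}/\delta)}{2m}}$); check the statement in the paper, as conventions vary slightly across references. The function name and example numbers are mine, for illustration only.

```python
import math

def mcallester_bound(emp_risk, kl, m, delta):
    """Upper bound on the expected true risk of the Gibbs classifier.

    emp_risk: empirical risk of the posterior rho on the training sample
    kl:       KL divergence KL(rho || pi) between posterior and prior
    m:        number of training samples
    delta:    confidence parameter (the bound holds with prob. >= 1 - delta)
    """
    complexity = (kl + math.log(2 * math.sqrt(m) / delta)) / (2 * m)
    return emp_risk + math.sqrt(complexity)

# Hypothetical numbers: a posterior close to the prior (small KL) and a
# large sample give a bound only slightly above the empirical risk.
print(mcallester_bound(emp_risk=0.10, kl=5.0, m=10_000, delta=0.05))
```

Note how the bound degrades as the posterior moves away from the prior (larger KL) and tightens as the sample size $m$ grows, which is the trade-off PAC-Bayesian algorithms optimize.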
See e.g. Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a Learning Algorithm (2015) by P. Germain et al. for a specific application of PAC-Bayes. There's also the related Python implementation.
See also these related slides and this blog post (by the same author and John Shawe-Taylor) that will point you to their video tutorials about the topic.
The VC dimension can also be useful in practice. For example, in the paper Model Selection via the VC Dimension (2019), M. Mpoudeu et al. describe a method for model selection based on the VC dimension.