I just started learning about AI and have been reading "Foundations of Machine Learning" by Mehryar Mohri so that I can try to write my own algorithms. A question came up recently: can I create a machine learning algorithm that can reasonably solve high-dimensional problems?
For example, say I want to find a local maximum of $Y$ within a specified range, where $Y$ is a function of $x_1, \dots, x_{30}$. I call each $x_i$ a function rather than just a variable because it depends on all of the others: for any $n$, $x_n$ is a function of every $x_i$ with $i \neq n$. So the algorithm has 30 dimensions it can alter, and each variable is also a function of the others, as in the toy sketch below.
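To make that concrete, here is a minimal sketch of the kind of setup I have in mind. The coupling matrix and the form of $Y$ are invented purely for illustration (they are not my actual problem), and I'm just using a standard local optimizer from SciPy to find a local maximum inside a box:

```python
# Toy version of the setup: Y depends on x_1..x_30, and each x_i is tied to
# the others through a coupling matrix, so the search is over all 30
# dimensions jointly. (The coupling and Y are made up for illustration.)
import numpy as np
from scipy.optimize import minimize

D = 30
rng = np.random.default_rng(0)
A = rng.normal(size=(D, D))          # hypothetical coupling between the x_i
A = (A + A.T) / 2                    # symmetrize so interactions are mutual

def Y(x):
    # Each coordinate "sees" all the others through A @ x,
    # which is the interdependence described above.
    return np.sum(np.sin(x) * (A @ x)) - 0.1 * np.sum(x**2)

bounds = [(-5.0, 5.0)] * D           # the "specified range" for each variable
x0 = rng.uniform(-5, 5, size=D)      # random starting point inside the range

# Find a local maximum of Y by minimizing -Y with a bounded local optimizer.
res = minimize(lambda x: -Y(x), x0, method="L-BFGS-B", bounds=bounds)
print("local maximum of Y ≈", -res.fun)
```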
I looked online for information about dimensionality issues in AI and found a good, simple article outlining why model accuracy decreases as dimensionality increases. The article is a couple of years old, though, and I wanted to know whether research since then has found a machine learning method that gets around this problem. If not, what are some ways to interpolate or minimize the error, besides a huge amount of training data or the article's recommendation of dimensionality reduction prior to training?
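For reference, "dimensionality reduction prior to training" as I understand it looks roughly like the sketch below. This is my assumption of a typical pipeline (made-up data, scikit-learn's PCA followed by a simple regressor), not something taken from the article itself:

```python
# Minimal sketch of reducing 30 input dimensions before fitting a model.
# Data, component count, and the downstream model are assumptions for
# illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))        # 500 samples of the 30 variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.normal(size=500)

# Project the 30 inputs down to 5 principal components, then fit on those.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```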
PS - Please delve into the applied analysis that underlies whatever answer you may have!