Most practical research in AI involving neural networks deals with higher-dimensional tensors. Tensors of up to three dimensions are easy to imagine spatially.
When I asked the question *How do researchers imagine vector space?* on Mathematics Stack Exchange, I received responses such as these:
Response #1:
I personally view vector spaces as just another kind of algebraic object that we sometimes do analysis with, along the lines of groups, rings, and fields.
Response #2:
In research mathematics, linear algebra is used mostly as a fundamental tool, often in settings where there is no geometric visualization available. In those settings, it is used in the same way that basic algebra is, to do straightforward calculations.
Response #3:
Thinking of vectors as tuples or arrows or points and arrows... is rather limiting. I generally do not bother imagining anything visual or specific about them beyond what is required by the definition... they are objects that I can add to one another and that I can "stretch" and "reverse" by multiplying by a scalar from the scalar field.
In short, mathematicians generally treat vectors as objects in a vector space rather than relying on the popular beginner's imagery of points or arrows in space.
A similar question on our site also recommends not trying to visualize higher dimensions, and instead treating dimensions as degrees of freedom.
I know only two kinds of treatments regarding tensors:
Imagining at most up to three-dimensional tensors spatially.
Treating tensors as objects having a shape attribute of the form $n_1 \times n_2 \times n_3 \times \cdots \times n_d$
Most of the time I prefer the first approach, but I have difficulty with it when I try to understand code that uses higher-dimensional tensors. I am not accustomed to the second approach, although I think it is sufficient for understanding all the required operations on tensors.
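To make the second approach concrete, here is a minimal sketch (using NumPy, and an assumed batch-of-images layout) of how array libraries expose tensors purely through a shape attribute, where each axis is a labeled degree of freedom rather than a spatial direction:

```python
import numpy as np

# A hypothetical 4-D tensor: a batch of RGB images laid out as
# (batch, height, width, channels). No spatial picture of 4-D is needed;
# each axis just has a name and a length.
x = np.zeros((32, 28, 28, 3))

print(x.shape)  # (32, 28, 28, 3)
print(x.ndim)   # 4

# Operations are stated per-axis: average over batch, height, and width,
# leaving one value per channel.
mean_per_channel = x.mean(axis=(0, 1, 2))
print(mean_per_channel.shape)  # (3,)
```

Reading code this way means tracking how each operation transforms the shape tuple, which works identically for 2, 4, or 10 dimensions.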
I want to know:
- How do researchers generally treat tensors?
- If it is the second approach I mentioned: is it possible to understand all high-dimensional tensor-related tasks with it?