
AI algorithms involving neural networks can use tensor-specific hardware. Are there any other artificial intelligence algorithms that could benefit from performing many tensor calculations in parallel? Are there any other computer science algorithms (not part of AI) that could benefit from many tensor calculations in parallel?

Also have a look at TensorApplications and Application Theory.

nbro
bob smith

1 Answer


There are a lot of other potential applications. It's a good idea to start with GPU-related problems, since GPUs essentially perform a slightly wider set of operations, slightly more slowly. Some possible problems where TPUs might be advantageous are:

  • Shaders, which are algorithms for rendering graphics in one style or another. Since computer graphics can be understood as mostly linear algebra, it is natural to view shading as operations over tensors.

  • Physical simulations, which again involve multiplication of vectors by a series of matrices.

  • Options pricing, which again involves the multiplication of vectors by a series of matrices, especially when more complex derivatives are priced and the lattice becomes multi-dimensional (see the sketch after this list).
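
To make the "multiplication of vectors by a series of matrices" pattern above concrete, here is a minimal sketch (mine, not from the original answer) of binomial option pricing written as a chain of matrix-vector products. It uses JAX, which can compile such code for TPUs; all parameter values and names are illustrative.

    # Minimal sketch (illustrative, not from the original answer): pricing a
    # European call on a binomial lattice as repeated matrix-vector products,
    # the kind of workload that maps naturally onto tensor hardware.
    import jax.numpy as jnp
    from jax import jit

    N = 64                        # number of lattice steps
    S0, K = 100.0, 100.0          # spot price and strike (illustrative values)
    r, sigma, T = 0.05, 0.2, 1.0  # rate, volatility, maturity (illustrative)
    dt = T / N
    u = jnp.exp(sigma * jnp.sqrt(dt))    # up factor (Cox-Ross-Rubinstein)
    d = 1.0 / u                          # down factor
    p = (jnp.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = jnp.exp(-r * dt)              # one-step discount factor

    # Terminal payoffs: node i has seen i down-moves, so its price is
    # S0 * u^(N - i) * d^i.
    i = jnp.arange(N + 1)
    payoff = jnp.maximum(S0 * u ** (N - i) * d ** i - K, 0.0)

    # One backward-induction step as a bidiagonal matrix: row i combines the
    # discounted up-node (i) and down-node (i + 1) values of the next step.
    M = disc * (p * jnp.eye(N + 1) + (1.0 - p) * jnp.eye(N + 1, k=1))

    @jit
    def price(payoff, M):
        v = payoff
        for _ in range(N):  # N matrix-vector products in sequence
            v = M @ v
        return v[0]         # entry 0 holds the time-0 option value

    print(price(payoff, M))

After applying M a total of N times, entry 0 of the vector holds the time-0 price; a multi-dimensional lattice would turn M into a higher-rank tensor, which is exactly where tensor hardware helps.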

Within AI, there are many other algorithms that are optimized to work with GPUs and that could be modified to work with tensor-specific hardware.
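
As one illustrative sketch (my example, not taken from the answer above): the assignment step of k-means clustering reduces to a single large matrix multiplication, via the identity $\|x - c\|^2 = \|x\|^2 + \|c\|^2 - 2\,x \cdot c$.

    # Hedged illustration (mine): the k-means assignment step as one big
    # matrix multiplication, which is what tensor hardware accelerates.
    import jax.numpy as jnp
    from jax import jit

    @jit
    def assign_clusters(X, C):
        # X: (n_points, dim) data, C: (k, dim) centroids.
        # The -2 * X @ C.T cross term dominates the cost.
        sq_dists = (
            jnp.sum(X ** 2, axis=1, keepdims=True)  # ||x||^2, shape (n, 1)
            + jnp.sum(C ** 2, axis=1)               # ||c||^2, shape (k,)
            - 2.0 * X @ C.T                         # cross term, shape (n, k)
        )
        return jnp.argmin(sq_dists, axis=1)         # nearest centroid per point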

There does not yet seem to be much work on optimizing these other problems for Tensor Processing Units, but TPUs are also not very old. It took several years after inexpensive consumer GPUs became available before we started to see their widespread use in AI. I suspect we will see more TPU-tailored code for other problems soon.

John Doucette
  • What about RL? I am a beginner, but as far as my understanding goes, in RL we have to look through multiple paths to get some value. Is this multi-path lookup not parallelizable on multiple cores/GPUs? –  May 06 '19 at 05:01
  • I think that might also be an application, although I think the advantages of doing so are probably small. In most RL algorithms (e.g. TD-$\lambda$), the change we make in our estimate of the value of one state will depend on the _changes_ we make to the estimates for other states. This means there is an inherently sequential nature to RL updates. I think you might be able to come up with a parallel approach, but it's not as obviously useful as the other domains that were mentioned. – John Doucette May 06 '19 at 19:57
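
For concreteness, the sequential dependence described in the last comment can be seen in the standard tabular TD($\lambda$) update (textbook form, not quoted from the thread):

$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad e_t(s) = \gamma \lambda\, e_{t-1}(s) + \mathbf{1}[s = s_t], \qquad V(s) \leftarrow V(s) + \alpha\, \delta_t\, e_t(s).$$

Each error $\delta_t$ is computed from value estimates that earlier updates have already modified, so the steps cannot simply be run side by side.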