Before general-purpose GPUs existed (when GPUs were designed only for DirectX/OpenGL), consumers figured out how to load two-dimensional data structures into GPU texture memory and use the GPUs' primitive compute resources for arbitrary calculations.
Although the most common consumer TPUs (such as the Pixel 6's Tensor) are designed only for high-level environments (such as PyTorch or TensorFlow), is it possible to pass them general-purpose memory and reuse the TPUs' neural-network primitives (data reduction, convolution, dot products, matrix multiplication, transposes) in your own custom neural networks (or non-neural-network computational programs)?
The assistant suggests that this is possible, but gives no information about the actual function calls needed to do it; it just suggests looking into JAX + XLA.
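For concreteness, here is a minimal sketch of what the JAX + XLA route might look like. Nothing below comes from the original answer: it is an illustration that expresses a small non-neural-network computation purely in the primitives listed above (matrix multiplication, transpose, reduction) and lets XLA compile it for whatever accelerator backend JAX can see. Note that JAX's TPU backend targets Cloud TPUs; an on-device mobile TPU like the Pixel 6's is normally reached through TensorFlow Lite delegates instead, so this shows the programming model rather than a way to drive the Pixel's chip directly.

```python
import jax
import jax.numpy as jnp

# A small "custom calculus" written only in the primitives named above:
# matrix multiplication, transpose, and reduction.
def custom_step(a, b):
    c = jnp.dot(a, b)            # matrix multiply (maps to the TPU's matrix unit)
    c = c + jnp.transpose(c)     # transpose plus element-wise add
    return jnp.sum(c, axis=0)    # reduction over one axis

# jax.jit traces the Python function once and hands the trace to XLA,
# which compiles it for whichever backend is available (TPU, GPU, or CPU).
compiled = jax.jit(custom_step)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
a = jax.random.normal(k1, (128, 128))
b = jax.random.normal(k2, (128, 128))

print(jax.devices())           # shows the backend the computation will run on
print(compiled(a, b).shape)    # (128,)
```

The point of the sketch is that nothing in `custom_step` is a neural network: JAX exposes the hardware's matmul/reduce/transpose primitives as ordinary array operations, and XLA decides how to map them onto the accelerator.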