Introducing PyTorch Monarch

@loiseaujc and @jeremie.vandenplas are right: it’s not a language issue. It’s a matter of building out the libraries to do the work, making it easy for people to build their own neural networks and perform automated hyperparameter optimisation with available tools like Weights & Biases, and optimising the code so that it runs at speeds comparable to PyTorch.

Most of the libraries, such as neural-fortran, fiats, and athena, already have the standard optimisers implemented (e.g. Adam and SGD). @loiseaujc, it’s interesting you mention that; I am currently in the process of implementing automatic differentiation in athena. It works for some of the layers (such as fully connected); I’m just battling to fix some memory issues.
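For what it’s worth, the backward pass for a fully connected layer comes down to three small products. Here is a minimal pure-Python sketch of the idea (the function names and calling convention are illustrative only, not athena’s actual API):

```python
# Manual reverse-mode differentiation for a dense layer y = W x + b.
# Pure Python lists for clarity; a real library would use arrays.

def dense_forward(W, b, x):
    """Compute y_i = sum_j W[i][j] * x[j] + b[i].
    Also return x, which the backward pass needs as a cached input."""
    y = [sum(Wij * xj for Wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return y, x

def dense_backward(W, x, dy):
    """Given dL/dy, return the gradients dL/dW, dL/db, dL/dx.
    dL/dW is an outer product, dL/db passes dy through,
    and dL/dx multiplies dy by the transpose of W."""
    dW = [[dyi * xj for xj in x] for dyi in dy]
    db = list(dy)
    dx = [sum(W[i][j] * dy[i] for i in range(len(dy)))
          for j in range(len(x))]
    return dW, db, dx
```

Caching the forward-pass inputs is also where the memory pressure usually comes from, so that may be related to the issues you’re seeing.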
