I’d expect any code generated by LLMs to be very fragile at best, and seriously wrong at worst, especially in the context of numerical methods. The reasons were discussed in this thread:
Specifically, this post by AI scholar Gary Marcus illustrates the fundamental limitations of stochastic prediction as an approach to knowledge.
That, in a nutshell, is why we should never trust pure LLMs: even under carefully controlled conditions, with massive amounts of directly relevant training data, they still fail to reliably learn even the most basic linear functions.
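
For concreteness, here is a minimal sketch of the kind of probe that claim refers to, under my own assumptions rather than the exact setup in the linked post. `ask_model` is a hypothetical placeholder, not any particular provider's API; the test is simply whether in-context examples of y = 2x + 1 let the model extrapolate exactly to an input far outside the examples.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call; swap in a real API."""
    return "?"  # placeholder answer so the sketch runs end-to-end


def probe_linear(slope: float = 2.0, intercept: float = 1.0) -> None:
    # In-context examples drawn from y = slope * x + intercept.
    examples = [(x, slope * x + intercept) for x in range(1, 6)]

    # A query well outside the example range, so pattern-matching on nearby
    # pairs is not enough; the model has to have induced the rule itself.
    query = 1000
    expected = slope * query + intercept

    prompt = "Continue the pattern:\n"
    prompt += "\n".join(f"f({x}) = {y:g}" for x, y in examples)
    prompt += f"\nf({query}) ="

    answer = ask_model(prompt)
    print(f"model said {answer!r}, exact value is {expected:g}")


if __name__ == "__main__":
    probe_linear()
```

A classical fit (least squares, or even two points and a ruler) passes this test trivially; the point of the post above is that stochastic next-token prediction does not.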