Expressions like this are evaluated as exp(log(a)*1.0/3.0), or perhaps exp(log(a)/3.0) with some optimization. Unless there is something wrong with the exp() and log() evaluations, where would the error arise?
Of course, efficiency is a separate issue. There may be faster ways to compute specific values, including cube roots, but however they are evaluated, they should all agree to within the last bit or two, which is all you can expect from floating-point arithmetic. Is there data anywhere that shows otherwise?
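For what it's worth, here is a minimal C sketch of the kind of check I have in mind: it computes a cube root three ways (pow(a, 1.0/3.0), the "naive" exp(log(a)/3.0), and cbrt(a)) and counts how far apart the results are in ULPs. The sample values and the ulp_diff helper are just illustrative assumptions, not anything from a particular libm.

```c
/* Sketch: compare three ways of computing a cube root and report the
 * disagreement in units in the last place (ULPs). Assumes positive inputs. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* ULP distance between two positive doubles: IEEE 754 doubles of the same
 * sign order the same way their bit patterns do when read as integers. */
static int64_t ulp_diff(double x, double y)
{
    int64_t ix, iy;
    memcpy(&ix, &x, sizeof ix);
    memcpy(&iy, &y, sizeof iy);
    return ix > iy ? ix - iy : iy - ix;
}

int main(void)
{
    const double samples[] = { 2.0, 10.0, 1e-3, 7.5e20, 123456.789 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        double a = samples[i];
        double via_pow  = pow(a, 1.0 / 3.0);   /* library power function */
        double via_exp  = exp(log(a) / 3.0);   /* the "naive" evaluation */
        double via_cbrt = cbrt(a);             /* dedicated cube root    */
        printf("a = %-12g  pow vs exp/log: %2lld ULP   pow vs cbrt: %2lld ULP\n",
               a,
               (long long)ulp_diff(via_pow, via_exp),
               (long long)ulp_diff(via_pow, via_cbrt));
    }
    return 0;
}
```

If the claim above holds, the printed ULP differences should stay at 0, 1, or 2 for reasonable inputs; anything much larger would be the kind of counterexample I'm asking about. (Note that pow(a, 1.0/3.0) uses the rounded constant 1.0/3.0, so a bit or two of disagreement with cbrt(a) is expected, not a defect.)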