Can floating-point literals be adapted by the compiler to a double-precision variable?
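
For concreteness, here is a minimal sketch of the situation the question seems to describe, assuming a Fortran context (the program name and the `dp` kind parameter are illustrative). A default-kind literal assigned to a double-precision variable is converted at the assignment, but by then it only carries single precision; a literal with an explicit kind suffix keeps full double precision.

```fortran
program literal_precision
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  real(dp) :: x, y

  ! Default-kind literal: converted to double precision at assignment,
  ! but it was parsed as single precision, so extra digits are already lost.
  x = 0.1

  ! Kind-suffixed literal: parsed directly as double precision.
  y = 0.1_dp

  print *, x   ! prints roughly 0.100000001490116...
  print *, y   ! prints roughly 0.100000000000000...
end program literal_precision
```
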

2 posts were merged into an existing topic: Simple Generics