A Floating Point Precision Of X
This is not a code review request, so I have created the thread here. I have an assignment which is as follows: Write a program which: 1. initializes a variable a with 123, and
Solution 1:
From the format string mini-language specification:
The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'. For non-number types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The precision is not allowed for integer values.
So yes, it is the number of digits after the decimal point.
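A short sketch illustrating the quoted rules, using an arbitrary example value:

```python
value = 123.456789

# With 'f', the precision is the number of digits after the decimal point.
print(f"{value:.2f}")      # → 123.46

# With 'g', the precision is the total number of significant digits,
# counted before and after the decimal point.
print(f"{value:.2g}")      # → 1.2e+02

# For non-number types such as strings, the precision truncates the
# field content to at most that many characters.
print(f"{'floating':.5}")  # → float
```

Note that attempting a precision on an integer, e.g. `f"{123:.2d}"`, raises a `ValueError`, as the last sentence of the quote states.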