The most significant digit is on the left (in English-language notation); the least significant digit is on the right.

Logik wrote: ↑Fri Mar 22, 2019 11:08 am
Do you think you have a framework in which to quantify the concept of 'precision' without ending up in circularities?

Scott Mayers wrote: ↑Fri Mar 22, 2019 10:41 am
Ten (10) would be expressed as 1.0 x 10¹, but is only 'precise' to ±0.5. That is less precise. 18 m/s² would simply be false as a gravitational field acceleration rate at sea level, unless some measure other than "m" for meters and "s" for seconds were being used. Those units matter in the expression too.
The general form of 1.0 x 10¹ is: significand * base ^ exponent
10 = 1.0 x 10¹
11 = 1.1 x 10¹
16 = 1.6 x 10¹
20 = 2.0 x 10¹
Because 1.1 rounds DOWN to 1.0, it follows that 11 = 10 at this precision.
Because 1.6 rounds UP to 2.0, it follows that 16 = 20 at this precision.
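The rounding above can be sketched in Python (my own illustration, not from the thread; the helper name `to_one_sig_fig` is made up for this example):

```python
import math

def to_one_sig_fig(n):
    """Express n to one significant digit: round the mantissa of n = m * 10^e."""
    exponent = math.floor(math.log10(abs(n)))  # e in n = m * 10^e
    mantissa = n / 10 ** exponent              # m, in the range [1, 10)
    return round(mantissa) * 10 ** exponent

print(to_one_sig_fig(10))  # 10
print(to_one_sig_fig(11))  # 10  (1.1 rounds down to 1.0)
print(to_one_sig_fig(16))  # 20  (1.6 rounds up to 2.0)
print(to_one_sig_fig(20))  # 20
```

So at one significant figure, 11 and 10 collapse to the same value, as do 16 and 20.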
Accuracy concerns the most significant digits, where precision concerns the least significant ones.
Which byte order is used depends on your electronic architecture: see little-endian vs. big-endian. This applies to the storage of integers as well as floats. Float formats also differ among themselves: usually (but not necessarily) one integer field represents only the fraction part and another the exponent. For example, 103.849034 is normalized to 0.103849034 exp3, converted to binary, and so on.
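The byte-order difference can be seen directly with Python's `struct` module (a small demonstration of my own, not from the thread):

```python
import struct

# The same 32-bit integer packed in the two byte orders.
n = 0x01020304
print(struct.pack('<I', n).hex())  # little-endian: 04030201
print(struct.pack('>I', n).hex())  # big-endian:    01020304

# The same holds for floats: an IEEE 754 double is the same bits,
# just stored in reversed byte order on the other architecture.
x = 103.849034
print(struct.pack('<d', x).hex())  # little-endian bytes
print(struct.pack('>d', x).hex())  # big-endian: the reverse sequence
```

Either way the value is identical; only the order in which the bytes sit in memory changes.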
What is the purpose here? What confuses you about the measure of gravity? I showed you that you CAN use the UNIVERSAL gravitational constant if it bugs you that the local one varies. But I don't think you know the difference.