I recently came across a concrete example of a long-standing best practice: when working with values that are sensitive to decimal precision, always use precise data types like DECIMAL instead of FLOAT or REAL.

The following example clearly demonstrates that using a FLOAT in mathematical calculations may change the outcome of the computation. For financial calculations, this would mean incorrect reporting of revenues.

```sql
USE tempdb;
GO

DECLARE @floatValue FLOAT = 1.5,
        @decimalValue DECIMAL(5,2) = 1.5,
        @multiplicationFactor INT = 159;

SELECT @floatValue AS FloatValue,
       @decimalValue AS DecimalValue,
       (@multiplicationFactor * (@floatValue / 100)) AS FloatBasedValue,
       (@multiplicationFactor * (@decimalValue / 100)) AS DecimalBasedValue,
       ROUND((@multiplicationFactor * (@floatValue / 100)), 2) AS RoundedFloatBasedValue,
       ROUND((@multiplicationFactor * (@decimalValue / 100)), 2) AS RoundedDecimalBasedValue;
```

The output of this simple computation is shown in the screenshot below:

*Impact of using imprecise datatypes in mathematical computations*

As can clearly be seen, the computations performed using an imprecise datatype (FLOAT) result in a rounding error. When compounded over multiple derived calculations, this can have a serious impact on the end result.
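To see how such errors compound, here is a minimal Python sketch (Python's `float` behaves like SQL Server's FLOAT, and `decimal.Decimal` like DECIMAL); the 1,000 iterations are an illustrative assumption, not a figure from the post:

```python
# Minimal sketch: repeating the post's 159 * (1.5 / 100) calculation many
# times. Binary floating point cannot represent 0.015 exactly, so a tiny
# per-step error accumulates; Decimal arithmetic stays exact.
from decimal import Decimal

float_total = 0.0
decimal_total = Decimal("0")

for _ in range(1000):
    float_total += 159 * (1.5 / 100)               # FLOAT-style arithmetic
    decimal_total += 159 * (Decimal("1.5") / 100)  # DECIMAL-style arithmetic

print(decimal_total)  # exactly 2385.000
print(float_total)    # close to, but not exactly, 2385.0
```

The single-step difference is far below the display precision, which is exactly why such drift goes unnoticed until the totals are reconciled.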

The conclusion, therefore, is simple: when working with financial calculations, always use precise data types.


Until we meet next time,

*Be courteous. Drive responsibly.*


**Jeremy Singer:** A more general rule is: use the correct representation for the domain for which you are solving a problem. Think carefully about what sorts of computations you are likely to do, and especially about what aggregates you will do.

In the financial domain, results need to be significant at least to the nearest dollar, and often to the cent. It's important to think carefully about the use cases for the data, and especially about significance in the results.

In bioinformatics, as well as other sciences, a problem is that measurements for a field can have differences in size of many orders of magnitude. In such cases, a common practice (especially for computing probabilities) is to work with log values. In such cases, float or double datatypes are a better choice. For example, when talking about gene expression levels, very often what is important is the change in expression of a particular gene under different conditions, not a comparison with the expression values of other genes. In such a case, where one gene’s expression value may be many orders of magnitude different, normalizing calculations by using logs can prevent results getting washed out by the absolute value differences.
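Jeremy's point about working in log space can be sketched in Python (the expression levels below are hypothetical numbers chosen purely for illustration):

```python
# Hedged sketch with made-up expression levels: comparing fold changes in
# log2 space puts genes of vastly different absolute abundance on the same
# footing. Floating point is the natural representation here.
import math

gene_a = (5.0, 10.0)  # low-abundance gene: doubles between conditions
gene_b = (1e6, 2e6)   # high-abundance gene: also doubles

def log2_fold_change(before, after):
    """Change in expression in log2 units: +1 means a doubling."""
    return math.log2(after) - math.log2(before)

# Both genes show the same biological signal (a log2 fold change of ~1.0),
# even though their absolute expression values differ by five orders of
# magnitude.
print(log2_fold_change(*gene_a))
print(log2_fold_change(*gene_b))
```

A fixed-scale DECIMAL could not even store both raw values without either overflowing or losing the small one, which is the "washing out" the comment describes.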


**nakulvachhrajani (post author):** Thank you for taking the time to read through the post and share your valuable comments!

Yes, the domain plays a very important role in the choice of data types, and the example you shared from the biomedical and life sciences domain is an ideal case that justifies the very existence of FLOAT/DOUBLE.

Thanks again, and have a wonderful rest of the day!


**Carlton D'Souza:** I think it would be better to say: always ensure you use the correct precision when opting for DECIMAL over FLOAT. For example, using the very same data types as you have used in the blog post, it can be shown that a DECIMAL of insufficient precision is just as prone to rounding errors:

```sql
DECLARE @floatValue FLOAT = 1.5,
        @decimalValue DECIMAL(5,2) = 1.5;

SELECT (1 / @floatValue) * 1.5 AS float_result1,
       (1 / @decimalValue) * 1.5 AS decimal_result1;
```

```
float_result1    decimal_result1
---------------  ---------------
1                0.9999990
```

```sql
SELECT (1 / @floatValue) * 3.0 AS float_result1,
       (1 / @floatValue) * 3 AS float_result2,
       (1 / @decimalValue) * 3.0 AS decimal_result1,
       (1 / @decimalValue) * 3 AS decimal_result2;
```

```
float_result1  float_result2  decimal_result1  decimal_result2
-------------  -------------  ---------------  ---------------
2              2              1.9999980        1.999998
```
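The same effect can be reproduced outside SQL Server. In this Python sketch, a `quantize` step stands in for the narrow intermediate scale of the DECIMAL division above (an analogy only, not SQL Server's exact precision/scale arithmetic):

```python
# Sketch: once the intermediate reciprocal is forced down to six decimal
# places -- standing in for an insufficiently precise DECIMAL -- the final
# product can no longer round-trip back to 1.
from decimal import Decimal, ROUND_DOWN

SIX_PLACES = Decimal("0.000001")

# 1 / 1.5 = 0.666666..., truncated to the fixed six-decimal scale.
reciprocal = (1 / Decimal("1.5")).quantize(SIX_PLACES, rounding=ROUND_DOWN)

result = reciprocal * Decimal("1.5")
print(reciprocal)  # 0.666666
print(result)      # 0.9999990 -- matching the comment's DECIMAL output
```

The FLOAT version recovers exactly 1 here only because 1.5 happens to divide cleanly in binary; the lesson is that *both* families of types need their precision chosen deliberately.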
