1 Ax.sql.Decimal
Primitive numeric types are useful for storing single values in memory. However, when performing calculations with double and float types, rounding problems can appear because the binary representation in memory does not map exactly to the decimal value.
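As a minimal illustration using plain JavaScript numbers (which are IEEE 754 doubles), the rounding artifact can be seen without any database involvement:
<script>
    // 0.1 and 0.2 have no exact binary representation,
    // so their sum carries a small rounding error.
    console.log(0.1 + 0.2);          // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);  // false
</script>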
Databases provide a special type called DECIMAL to perform arbitrary precision calculations. When data of this type is returned from the database to the application, it is mapped to the class java.math.BigDecimal.
The following example uses doubles and Decimal to perform the same operation, leading to different results.
<script>
    var row = Ax.db.executeQuery(`
        SELECT CAST(0.02 AS DECIMAL) a,
               CAST(0.03 AS DECIMAL) b
          FROM systables
         WHERE tabid = 1
    `).toOne();

    // A: using JavaScript double precision arithmetic
    console.log("Double precision");
    console.log(0.02 - 0.03);
    console.log(row.a - row.b);

    // B: using BigDecimal arithmetic
    console.log("BigDecimal precision");
    console.log(new Ax.sql.Decimal("0.02").subtract(new Ax.sql.Decimal("0.03")));
    console.log(row.a.subtract(row.b));
</script>
Double precision
-0.009999999999999998
-0.009999999999999998
BigDecimal precision
-0.01
-0.01
Notice that Decimal values returned from the database must be operated on using their own methods (add, subtract, div, mul, etc.). Otherwise, they are converted to double before the operation is performed.
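As a minimal sketch, reusing the values and the subtract method from the example above, the difference between the two styles can be reproduced without a database round trip:
<script>
    var a = new Ax.sql.Decimal("0.02");
    var b = new Ax.sql.Decimal("0.03");

    // Using the Decimal method keeps BigDecimal precision
    console.log(a.subtract(b));   // -0.01

    // Using the "-" operator converts both operands to double first,
    // so the rounding artifact reappears
    console.log(a - b);           // -0.009999999999999998
</script>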