DECIMAL (also written as NUMERIC in many dialects) is an exact numeric data type that stores every digit of a value rather than a binary approximation. It lets you declare the total number of digits (precision) and how many of those digits appear to the right of the decimal point (scale). Because values are stored as fixed-point numbers, DECIMAL is the preferred choice for financial data, currency amounts, and any scenario where binary floating-point rounding errors cannot be tolerated. If scale is omitted, most engines default it to 0. If both precision and scale are omitted, the database falls back to an implementation-specific default (often DECIMAL(10,0)). Values that exceed the declared precision are rejected with an error, and assignments with more fractional digits than the declared scale are rounded or rejected, depending on the dialect.
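A minimal sketch of these rules in practice, using a hypothetical prices table (table and column names are illustrative, and the exact rejection or rounding behavior depends on the engine and its strictness settings):

CREATE TABLE prices (
    item_id INT,
    amount  DECIMAL(10, 2)    -- up to 10 digits total, 2 of them after the decimal point
);

INSERT INTO prices VALUES (1, 12345678.99);   -- fits: 8 integer digits + 2 fractional digits
INSERT INTO prices VALUES (2, 123456789.00);  -- 9 integer digits exceed the 8 allowed, so most engines reject the value
INSERT INTO prices VALUES (3, 19.999);        -- 3 fractional digits: rounded to 20.00 or rejected, depending on the dialect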
precision (integer) - total number of significant digits allowed (1-38 in most systems)
scale (integer) - number of digits to the right of the decimal point (0 to precision)
Related: NUMERIC, FLOAT, DOUBLE PRECISION, MONEY, CAST, ROUND (CAST and ROUND are shown in the sketch below)
Standard: SQL-92
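As a brief, hedged illustration of how the related CAST and ROUND functions interact with precision and scale (the rounding mode applied is engine-specific):

SELECT CAST(3.14159 AS DECIMAL(5, 2));              -- 3.14: the value is coerced to scale 2
SELECT ROUND(123.4567, 2);                          -- 123.46, though the return type varies by engine
SELECT CAST(ROUND(123.4567, 2) AS DECIMAL(10, 2));  -- pins both the rounded value and the declared type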
If you omit scale, most databases default it to 0, making DECIMAL behave like an integer with the specified precision.
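A short sketch of the scale-0 default (whether the fractional part is rounded or rejected is dialect-specific; names are illustrative):

CREATE TABLE counters (
    hits DECIMAL(10)           -- equivalent to DECIMAL(10, 0): no fractional digits
);

INSERT INTO counters VALUES (1234.7);   -- stored as 1235 by engines that round, rejected in strict modes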
Engines typically store DECIMAL as a scaled integer or as packed binary-coded decimal (BCD). Implementation details differ, but every representation preserves the exact decimal digits.
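For example, under the scaled-integer scheme a DECIMAL(7,2) value such as 12345.67 is held as the unscaled integer 1234567 paired with the scale 2, i.e. 1234567 × 10⁻², which is why arithmetic stays exact as long as results fit within the declared precision.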
DECIMAL and NUMERIC are interchangeable in practice. The SQL standard draws only a hair-thin distinction (NUMERIC must provide exactly the declared precision, while DECIMAL may provide at least that much), and many databases (e.g., PostgreSQL) implement them identically, so choose whichever is conventional in your codebase.
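For instance, the following sketch (table name illustrative) declares one column with each keyword; in PostgreSQL, describing the table reports both columns as numeric(12,2):

CREATE TABLE ledger (
    debit  DECIMAL(12, 2),
    credit NUMERIC(12, 2)      -- same underlying type as the DECIMAL column above
);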
DECIMAL columns store signed values by default, so negative amounts are accepted. Use CHECK constraints if you need only positive values.
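A minimal sketch of restricting a DECIMAL column to positive values with a CHECK constraint (names are illustrative; note that some engines historically parsed but did not enforce CHECK):

CREATE TABLE payments (
    payment_id INT,
    amount     DECIMAL(12, 2) CHECK (amount > 0)   -- rejects zero and negative amounts where CHECK is enforced
);

INSERT INTO payments VALUES (1, -5.00);   -- fails the CHECK constraint on enforcing engines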