NUMERIC (often interchangeable with DECIMAL) stores exact fixed-point numbers by allocating a fixed total number of digits (precision) and a subset of those digits to the right of the decimal point (scale). Because values are stored as scaled integers with an implicit decimal separator, arithmetic on NUMERIC avoids the rounding errors common in floating-point types such as FLOAT or DOUBLE. NUMERIC is ideal for monetary amounts, inventory counts, and other values that require predictable accuracy. The valid precision range and the internal storage size vary by database; the SQL standard leaves the maximum implementation-defined, and 38 digits is a common ceiling. If a value inserted into a NUMERIC column exceeds the declared precision or scale, the statement fails or, in some dialects, the value is silently rounded, so choosing precision and scale explicitly matters for data integrity.
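The scaled-integer storage model described above can be sketched in Python. This is a hypothetical illustration of the semantics, not any database's actual code; the helper names to_numeric and from_numeric are invented for the example:

```python
from decimal import Decimal

def to_numeric(value: str, precision: int, scale: int) -> int:
    """Encode a decimal string as the unscaled integer a NUMERIC(p, s)
    column would hold: the value shifted left by `scale` digits."""
    shifted = Decimal(value).scaleb(scale)
    if shifted != shifted.to_integral_value():
        raise ValueError(f"{value} has more than {scale} fractional digits")
    unscaled = int(shifted)
    if len(str(abs(unscaled))) > precision:
        raise ValueError(f"{value} exceeds NUMERIC({precision},{scale})")
    return unscaled

def from_numeric(unscaled: int, scale: int) -> Decimal:
    """Decode the unscaled integer back into the exact decimal value."""
    return Decimal(unscaled).scaleb(-scale)

# 12.34 in a NUMERIC(10, 2) column is held as the integer 1234, so
# addition and subtraction reduce to plain (exact) integer arithmetic.
raw = to_numeric("12.34", precision=10, scale=2)
print(raw)                   # 1234
print(from_numeric(raw, 2))  # 12.34
```

Because every stored value shares the same scale, sums and differences never lose digits; only the precision bound can be exceeded.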
Parameters:
precision (INTEGER) - Total number of significant digits (1-38, database dependent)
scale (INTEGER) - Number of digits to the right of the decimal point (0 to precision)

Related: DECIMAL, INTEGER, FLOAT, MONEY, CAST, ROUND
Standard: SQL-92
What is the difference between NUMERIC and DECIMAL?
Both are exact fixed-point types in the SQL standard. The standard's one distinction is that NUMERIC must use exactly the declared precision, while DECIMAL may provide at least that precision. Most databases implement the two identically, but check your documentation for storage limits.
Do I have to specify precision and scale?
No. If you omit both, the database uses its default (often unlimited or implementation-defined precision). If you supply precision but not scale, scale defaults to 0, giving an integer-like column.
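That integer-like behavior can be sketched with Python's decimal module. Whether a database rounds or rejects fractional input varies by dialect; half-up rounding is assumed here for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Inserting 123.67 into a column declared NUMERIC(5) -- scale omitted,
# so scale = 0 -- keeps no fractional digits. Quantizing to a whole
# number models the dialects that round rather than reject.
stored = Decimal("123.67").quantize(Decimal("1"), rounding=ROUND_HALF_UP)
print(stored)  # 124
```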
Why use NUMERIC instead of FLOAT?
FLOAT is an approximate binary type and can introduce rounding errors. NUMERIC guarantees exact representation within the declared precision and scale, which is critical for money and counts.
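The difference is easy to reproduce with Python, whose built-in float is binary floating point (like FLOAT/DOUBLE) and whose decimal.Decimal is exact (like NUMERIC):

```python
from decimal import Decimal

# Binary floating point only approximates most decimal fractions,
# so small errors appear and accumulate.
print(0.1 + 0.2 == 0.3)             # False
print(sum(0.1 for _ in range(10)))  # 0.9999999999999999

# Exact decimal arithmetic carries the digits verbatim.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
print(sum(Decimal("0.1") for _ in range(10)))             # 1.0
```

Summing ten 0.1 charges to anything other than exactly 1.0 is the classic failure mode that makes FLOAT unsuitable for monetary columns.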
What is the maximum precision?
Many databases cap precision at 38 digits, but some allow more (e.g., PostgreSQL up to 1000); the SQL standard leaves the maximum implementation-defined. Always check your system's limits.