DEC is a standards-compliant synonym for DECIMAL that stores exact (not floating-point) numbers. You supply a precision, the total number of significant digits, and an optional scale, the number of those digits that appear to the right of the decimal point. If scale is omitted, it defaults to 0. Internally, databases use fixed-length or variable-length packed representations to guarantee exactness, making DEC ideal for currency, inventory counts, and other calculations that cannot tolerate rounding error. Precision limits vary by vendor (e.g., up to 38 in Snowflake and SQL Server, 65 in MySQL, and 1000 in PostgreSQL). Values that exceed the declared precision raise an error, and most systems reject a declaration whose scale is greater than its precision (PostgreSQL 15 and later are a notable exception). DEC columns participate fully in indexes, constraints, and arithmetic expressions without implicit conversion to floating types.
Parameters:
- precision (integer): total number of digits (required)
- scale (integer): digits to the right of the decimal point (optional, default 0)

Related types: DECIMAL, NUMERIC, NUMBER, MONEY, FLOAT, REAL

Standard: SQL-92
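A minimal sketch of a typical declaration. The table and column names are hypothetical; the syntax itself is standard SQL and should run unchanged on PostgreSQL, MySQL, and SQL Server:

```sql
-- price: 10 significant digits total, 2 after the decimal point,
-- i.e., values from -99999999.99 to 99999999.99
CREATE TABLE invoices (
    id       INTEGER PRIMARY KEY,
    price    DEC(10, 2) NOT NULL,  -- exact fixed-point, suitable for currency
    quantity DEC(5)                -- scale omitted, so it defaults to 0 (whole numbers)
);

INSERT INTO invoices (id, price, quantity) VALUES (1, 19999.99, 3);

-- A value with more integer digits than precision - scale allows is rejected:
-- INSERT INTO invoices (id, price, quantity) VALUES (2, 123456789.99, 1);
```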
DEC is merely a shorthand alias for DECIMAL. Both keywords behave identically in every major SQL database.
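For instance, the two definitions below are interchangeable, and most systems report the DEC column back as DECIMAL (or numeric) in their catalog metadata. The table names are illustrative only:

```sql
CREATE TABLE t1 (amount DEC(10, 2));      -- shorthand...
CREATE TABLE t2 (amount DECIMAL(10, 2));  -- ...for exactly this
```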
Precision is the total number of digits; scale is how many of them appear after the decimal point. For example, DEC(8,3) can store up to 99999.999: five digits before the decimal point and three after.
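A short sketch of those limits in practice, using a hypothetical measurements table:

```sql
CREATE TABLE measurements (reading DEC(8, 3));

INSERT INTO measurements VALUES (99999.999);    -- accepted: the maximum DEC(8,3) value
-- INSERT INTO measurements VALUES (100000.0);  -- rejected: 6 integer digits exceed
--                                              -- precision - scale = 5
```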
Some databases allow bare DEC, applying vendor-specific defaults: scale becomes 0, and precision becomes, for example, 10 in MySQL or 18 in SQL Server, while PostgreSQL treats the column as unconstrained. Explicit precision is recommended for portability.
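To illustrate, assuming the vendor defaults just described (table names are hypothetical):

```sql
-- Bare DEC: MySQL treats this as DEC(10,0), SQL Server as DEC(18,0),
-- and PostgreSQL as an unconstrained numeric.
CREATE TABLE orders (order_total DEC);

-- Portable equivalent: spell out precision and scale explicitly.
CREATE TABLE orders_portable (order_total DEC(18, 2));
```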
DEC uses a fixed-point representation, so calculations stay exact within the declared precision and scale, avoiding the binary rounding drift seen with FLOAT.
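A quick way to observe the difference, assuming a dialect where FLOAT is a binary double (true of PostgreSQL and SQL Server; MySQL 8.0.17 and later also accept CAST AS FLOAT):

```sql
SELECT CAST(0.1 AS DEC(10, 2)) + CAST(0.2 AS DEC(10, 2));  -- exactly 0.30
SELECT CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT);            -- 0.30000000000000004
                                                           -- (display varies by client)
```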