BIT_LENGTH is a scalar string function defined in the SQL standard. It evaluates its single argument and returns an integer giving the number of bits required to store that value; because one byte is eight bits, the result equals OCTET_LENGTH(expression) * 8. If the argument is NULL, the function returns NULL.

The function accepts character strings (CHAR, VARCHAR, TEXT) and binary strings (BLOB, BYTEA, VARBINARY) without modifying their content. In multibyte encodings, BIT_LENGTH still counts physical storage bytes, not characters, so a UTF-8 string containing non-ASCII symbols yields more bits than its visible length suggests. BIT_LENGTH does not trim trailing spaces. The precision of the result is implementation-dependent but generally fits in a standard INTEGER.
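The relationship between BIT_LENGTH, OCTET_LENGTH, and NULL can be sketched in Python. This is an illustrative helper, not a library function, and it assumes UTF-8 storage for character strings:

```python
def bit_length(value):
    """Mimic SQL BIT_LENGTH: storage bytes * 8, with NULL propagation."""
    if value is None:
        return None                     # SQL: NULL in, NULL out
    if isinstance(value, str):
        value = value.encode("utf-8")   # assumption: strings stored as UTF-8
    return len(value) * 8               # OCTET_LENGTH(value) * 8

print(bit_length("abc"))     # 24 (3 bytes * 8)
print(bit_length(b"\x00"))   # 8 (binary strings are measured the same way)
print(bit_length(None))      # None (models SQL NULL)
```

The same helper handles character and binary input, mirroring how the SQL function treats both string families uniformly.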
BIT_LENGTH was introduced in the SQL-92 standard and was first implemented in early PostgreSQL releases.
BIT_LENGTH works on any character or binary string type. Cast other data types to TEXT or BLOB before using it.
Trailing spaces count toward the result. The function measures raw storage length, so the space padding in a CHAR value is included in the bit count.
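A short sketch of the CHAR-padding effect, simulating fixed-width storage in Python (the padding itself is the assumption here; actual padding depends on the column definition):

```python
# Simulate storing 'abc' in a CHAR(5) column: the value is space-padded.
padded = "abc".ljust(5)                  # 'abc  ' - 5 characters
bits = len(padded.encode("utf-8")) * 8   # every padding byte is counted
print(bits)                              # 40, not the 24 bits of 'abc' alone
```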
SQL Server does not provide BIT_LENGTH; use DATALENGTH(expression) * 8 there to obtain the bit length.
In multibyte encodings such as UTF-8, some characters occupy more than one byte. BIT_LENGTH counts every byte, so the bit length exceeds the character count.
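The character-count versus byte-count gap can be demonstrated directly. Here 'cafe' and 'café' have the same character length, but the accented form occupies an extra byte in UTF-8:

```python
for s in ["cafe", "café"]:
    octets = len(s.encode("utf-8"))          # physical storage bytes
    print(s, len(s), octets * 8)
# cafe: 4 characters -> 32 bits
# café: 4 characters -> 40 bits ('é' takes 2 bytes in UTF-8)
```

This is why CHAR_LENGTH and BIT_LENGTH can diverge for the same value: the former counts characters, the latter counts physical storage.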