Use MySQL’s JSON functions and operators to extract, transform, and filter data stored in JSON columns or strings.
Parsing JSON lets you keep flexible schemas while still querying individual attributes efficiently. You can filter rows, join tables, and build reports without exporting data to an application layer.
Use JSON_EXTRACT(), ->, and ->> to fetch JSON values. JSON_UNQUOTE() strips the surrounding quotes from an extracted JSON string, while JSON_SET() and JSON_REPLACE() modify data in place. The JSON_TABLE() function turns JSON arrays into relational rows.
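As a quick sketch of how the extraction operators differ, assuming a hypothetical Customers table with a JSON address column (the table and sample row are illustrative):

```sql
-- Hypothetical table used for illustration
CREATE TABLE Customers (
  id      INT PRIMARY KEY,
  name    VARCHAR(100),
  address JSON
);

INSERT INTO Customers VALUES
  (1, 'Ada', '{"city": "Portland", "country": "US"}');

-- -> keeps JSON quoting; ->> unquotes to plain text
SELECT address->"$.city"  AS quoted_city,  -- returns "Portland" with quotes
       address->>"$.city" AS plain_city    -- returns Portland as text
FROM Customers;
```

Note that `col->>path` is shorthand for `JSON_UNQUOTE(JSON_EXTRACT(col, path))`.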
Suppose customer addresses are stored in Customers.address (a JSON column). Retrieve the city:
SELECT id, name, address->>"$.city" AS city
FROM Customers
WHERE address->>"$.country" = 'US';
When an order's metadata JSON contains an items array, use:
SELECT JSON_EXTRACT(metadata, '$.items[0].product_id') AS first_item
FROM Orders
WHERE id = 42;
JSON_TABLE() converts each array element into a row, which is ideal for aggregations.
SELECT o.id AS order_id, jt.product_id, jt.quantity
FROM Orders o,
JSON_TABLE(o.metadata,
'$.items[*]' COLUMNS(
product_id INT PATH '$.product_id',
quantity INT PATH '$.quantity')) AS jt;
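Building on the query above, a sketch of an aggregation over the expanded rows, assuming the same Orders.metadata shape (totals per product are illustrative):

```sql
-- Total quantity ordered per product, summed across all orders
SELECT jt.product_id,
       SUM(jt.quantity) AS total_quantity
FROM Orders o,
     JSON_TABLE(o.metadata,
       '$.items[*]' COLUMNS(
         product_id INT PATH '$.product_id',
         quantity   INT PATH '$.quantity')) AS jt
GROUP BY jt.product_id;
```

Because JSON_TABLE() yields ordinary rows, everything downstream (GROUP BY, HAVING, joins) works exactly as it would against a relational table.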
Index frequently parsed paths with virtual generated columns. Validate JSON on insert to avoid malformed data, and prefer ->> for strings so you can skip manual unquoting.
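A minimal sketch of the generated-column approach, assuming the Customers table with a JSON address column; the column and index names are illustrative:

```sql
-- Extract the country into a virtual column, then index it
ALTER TABLE Customers
  ADD COLUMN country VARCHAR(2)
    GENERATED ALWAYS AS (address->>"$.country") VIRTUAL,
  ADD INDEX idx_customers_country (country);

-- Filters on the generated column can now use the index
SELECT id, name FROM Customers WHERE country = 'US';
```

The column is VIRTUAL, so it costs no storage; only the index is materialized.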
Using the wrong operator: -> returns quoted JSON; cast or use ->> for plain text.
Ignoring nulls: if the key is missing, the result is NULL; coalesce when necessary (COALESCE(field, default)).
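For instance, a sketch of defaulting a missing key, assuming the Customers table with a JSON address column (the "state" key is illustrative):

```sql
-- Rows whose address lacks a "state" key yield NULL; default to 'N/A'
SELECT id,
       COALESCE(address->>"$.state", 'N/A') AS state
FROM Customers;
```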
Remember: -> returns JSON, ->> returns text, and JSON_TABLE produces rows. Index the paths you query often and keep JSON documents small for performance.
Light parsing is fast, but heavy use without indexes can degrade performance, so lean on generated columns and proper indexing. Partial updates are also supported: JSON_SET() and JSON_REPLACE() modify only the targeted path without rewriting the entire document.
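A brief sketch of an in-place update, again assuming the illustrative Customers table with a JSON address column:

```sql
-- JSON_SET adds the path if missing or replaces it if present;
-- JSON_REPLACE only changes keys that already exist
UPDATE Customers
SET address = JSON_SET(address, '$.city', 'Seattle')
WHERE id = 1;
```

Use JSON_SET() when the key may not exist yet, and JSON_REPLACE() when you want to guarantee you never introduce a new key by accident.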