Removing Duplicate Rows in Pandas

Galaxy Glossary

How do you remove duplicate rows in pandas DataFrames?

Removing duplicate rows in pandas refers to the process of eliminating repeated observations in a DataFrame, typically by using the drop_duplicates() method.


Description


Learn the theory and practice of eliminating duplicate rows in pandas—from the basics of drop_duplicates() to advanced, production-grade techniques.

Overview

Duplicate records are a silent threat to data quality. Whether they arise from system glitches, bad joins, or user error, duplicates can distort aggregates, inflate counts, and generally derail your analytics. In the pandas ecosystem, the DataFrame.drop_duplicates() method is the primary tool for excising these anomalies. Mastering this function—and the related best practices—will help ensure your analyses are both accurate and performant.

Why Removing Duplicates Matters

Data engineers and analysts use pandas as a staging ground for exploratory analysis, feature engineering, and even production pipelines. A single duplicate row can:

  • Skew metrics. Summations or averages double-count observations.
  • Pollute machine-learning models. Redundant samples bias the learner toward repeated patterns.
  • Trigger integrity violations. Downstream databases or BI tools often enforce uniqueness constraints.
  • Confuse stakeholders. Two otherwise-identical rows that differ only in timestamp raise questions about data lineage.

Therefore, duplicate removal is a cornerstone of any data-quality checklist.

Core Concepts

The drop_duplicates() Method

DataFrame.drop_duplicates() scans the DataFrame for identical rows—or identical subsets of columns—and returns a DataFrame in which duplicates have been removed. Key parameters include:

  • subset: Columns to consider when identifying duplicates.
  • keep: Which duplicate to keep—"first" (default), "last", or False (drop all duplicates).
  • inplace: If True, mutate the DataFrame instead of returning a copy.
  • ignore_index: If True, reindex the result from 0.

Under the hood, pandas builds a hash table keyed on each row (or on the specified subset of columns) and flags any row whose key has already been seen. This makes the operation O(n) on average, enabling fast execution even on large data sets.
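
As a quick illustration of these parameters on a small made-up frame:

import pandas as pd

toy = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 1, 2]})

print(toy.duplicated())                        # [False, True, False]: the second "a" row is flagged
print(toy.drop_duplicates(keep=False))         # keep=False drops both "a" rows, leaving only "b"
print(toy.drop_duplicates(ignore_index=True))  # duplicates removed and the index reset to 0..n-1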

Full-Row vs. Column-Subset Deduplication

Not all duplicates are equal. Sometimes an entire row is duplicated; at other times, only a business-key column is repeated. Using the subset parameter, you can drop duplicates based on a subset of columns while preserving additional contextual data in the remaining columns.

# Drop rows where "user_id" + "event_time" repeat
clean_df = df.drop_duplicates(subset=["user_id", "event_time"])

Choosing Which Duplicate to Keep

The keep parameter controls which record is preserved. Typical strategies include:

  • Keep first: Retain the earliest occurrence, e.g., the first load.
  • Keep last: Preserve the most recent update.
  • Drop all: Remove every instance of the duplicate, useful when duplicates indicate corrupted data.
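
As a sketch, given a DataFrame df with user_id and event_time columns (introduced in the walk-through below) already sorted chronologically:

first_seen  = df.drop_duplicates(subset=["user_id"], keep="first")  # earliest row per user
most_recent = df.drop_duplicates(subset=["user_id"], keep="last")   # latest row per user
unique_only = df.drop_duplicates(subset=["user_id"], keep=False)    # discard every user that repeats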

Practical Walk-Through

Sample Dataset

import pandas as pd

data = [
    {"user_id": 1, "country": "US", "event_time": "2024-06-21 10:01", "amount": 25},
    {"user_id": 1, "country": "US", "event_time": "2024-06-21 10:01", "amount": 25},  # duplicate
    {"user_id": 2, "country": "FR", "event_time": "2024-06-21 11:15", "amount": 42},
    {"user_id": 3, "country": "US", "event_time": "2024-06-21 11:59", "amount": 13},
    {"user_id": 2, "country": "FR", "event_time": "2024-06-21 11:15", "amount": 42},  # duplicate
]

df = pd.DataFrame(data)

1. Remove Perfect Duplicates

df_no_dupes = df.drop_duplicates()
print(df_no_dupes)

Result:

   user_id country        event_time  amount
0        1      US  2024-06-21 10:01      25
2        2      FR  2024-06-21 11:15      42
3        3      US  2024-06-21 11:59      13

2. Deduplicate on Business Keys

Suppose you consider user_id the primary key and want to keep the latest event.

latest = (
    df.sort_values("event_time")
      .drop_duplicates(subset=["user_id"], keep="last")
      .reset_index(drop=True)
)
print(latest)
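
Running this on the sample data yields:

   user_id country        event_time  amount
0        1      US  2024-06-21 10:01      25
1        2      FR  2024-06-21 11:15      42
2        3      US  2024-06-21 11:59      13

Because each duplicate pair here is identical, the latest row per user is simply the unique row; with real data, keep="last" would select the most recent version of each user's record.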

3. Drop All Duplicates

# Remove any user_id appearing more than once
no_repeats = df[~df.duplicated(subset=["user_id"], keep=False)]

4. In-Place Operations

You can also mutate the DataFrame directly by setting inplace=True. Note that this is a convenience rather than a guaranteed memory saving, since pandas may still build an intermediate copy internally:

df.drop_duplicates(inplace=True)

Best Practices

1. Profile Before You Drop

Quantify duplicates with df.duplicated().sum() or value_counts() to avoid accidental data loss.
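
For example, using the sample DataFrame from the walk-through above:

exact_dupes = df.duplicated().sum()                   # count of full-row duplicates
key_dupes = df.duplicated(subset=["user_id"]).sum()   # duplicates on the business key
print(f"{exact_dupes} exact duplicates, {key_dupes} repeated user_ids")
print(df["user_id"].value_counts().head())            # which keys repeat most often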

2. Always Sort Intelligently

When keeping "first" or "last," define the sort order explicitly so the result does not depend on load order, which may be arbitrary.
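
A minimal sketch using the walk-through columns; kind="mergesort" requests a stable sort, so rows with equal keys keep their existing relative order:

stable_latest = (
    df.sort_values(["user_id", "event_time"], kind="mergesort")
      .drop_duplicates(subset=["user_id"], keep="last")
)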

3. Preserve Raw Data

Store the untouched DataFrame or a snapshot (e.g., Parquet) before destructive operations for auditability.
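
One lightweight pattern (the file name is illustrative; to_parquet requires pyarrow or fastparquet to be installed):

df.to_parquet("transactions_raw.parquet")  # immutable snapshot for audits
clean = df.drop_duplicates()               # deduplicate the returned copy, not the snapshot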

4. Pipeline Integration

In production ETL pipelines, encapsulate deduplication logic in a reusable function and validate the outcome with assertions.
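
A minimal sketch of such a utility; the function name, signature, and error message are illustrative:

import pandas as pd

def dedupe(frame: pd.DataFrame, keys: list[str]) -> pd.DataFrame:
    """Drop duplicates on the given key columns, keeping the last occurrence."""
    out = frame.drop_duplicates(subset=keys, keep="last").reset_index(drop=True)
    assert not out.duplicated(subset=keys).any(), "deduplication left repeated keys"
    return out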

5. Log Your Actions

Record the number of rows removed and the criteria applied, ideally in a metadata table or monitoring system.
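
For instance, with the standard logging module (the key column follows the walk-through example):

import logging
logging.basicConfig(level=logging.INFO)

before = len(df)
df = df.drop_duplicates(subset=["user_id"])
logging.info("Removed %d of %d rows (key: user_id)", before - len(df), before)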

Common Mistakes to Avoid

Misunderstanding "keep" Default

By default, keep="first" retains the earliest duplicate. If the most recent record is desired, you must sort accordingly and set keep="last".

Forgetting Subset

Omitting subset means rows must match on every column to count as duplicates, so records that repeat on the business key but differ in an incidental column (for example, an ingestion timestamp) silently survive. Conversely, deduplicating on too narrow a subset discards rows that differ in the columns you left out. Choose the key deliberately.
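
For example, given a hypothetical ingestion_ts column that differs between otherwise-identical rows:

import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1],
    "amount": [25, 25],
    "ingestion_ts": ["2024-06-21 10:01", "2024-06-21 10:02"],
})
print(len(events.drop_duplicates()))                    # 2: the full-row check misses the repeat
print(len(events.drop_duplicates(subset=["user_id"])))  # 1: the business-key check catches it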

Relying on In-Place Without Backups

Using inplace=True without version control or a copy puts you at risk of irreversible changes.

Advanced Techniques

Conditional Deduplication

Sometimes duplicates should be removed only under certain conditions—e.g., rows created within the same minute. Combine boolean masks with .duplicated():

event_min = pd.to_datetime(df["event_time"]).dt.floor("min")  # normalize to minute precision
mask = (
    (event_min == pd.Timestamp("2024-06-21 10:01")) &
    df.duplicated(subset=["user_id"], keep=False)
)
pruned = df[~mask]

Distributed Deduplication with Dask

For >50 million rows, use Dask DataFrame’s .drop_duplicates(), which mirrors pandas syntax but performs out-of-core, parallel execution.
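
A minimal sketch, assuming the rows are spread across many CSV files (requires dask; the file pattern and key column are illustrative):

import dask.dataframe as dd

ddf = dd.read_csv("transactions_*.csv")                   # lazily reads all matching files
deduped = ddf.drop_duplicates(subset=["transaction_id"])  # same API as pandas
result = deduped.compute()                                # materialize as a pandas DataFrame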

Pandas vs. SQL Approaches

SQL offers SELECT DISTINCT or window functions for deduplication. If your data lives in a database, consider removing duplicates at the source to minimize network IO. Modern SQL editors like Galaxy make crafting and testing these queries painless, after which you can pull already-clean data into pandas.
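
A sketch of that pattern, with an illustrative connection string and table name (this form of pd.read_sql requires SQLAlchemy):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@host/analytics")  # illustrative DSN
clean = pd.read_sql("SELECT DISTINCT user_id, event_time, amount FROM events", engine)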

Real-World Use Case

A fintech company receives daily CSV exports of transactions. Due to upstream retries, some days include repeated records. An Airflow DAG ingests the CSVs into a pandas pipeline:

def load_transactions(path: str) -> pd.DataFrame:
    raw = pd.read_csv(path, parse_dates=["event_time"])
    clean = (
        raw.sort_values("event_time")
           .drop_duplicates(subset=["transaction_id"], keep="last")
           .reset_index(drop=True)
    )
    assert clean["transaction_id"].duplicated().sum() == 0
    return clean

The cleaned DataFrame is then pushed to a data warehouse. This simple yet robust approach has saved the company from double-counting millions of dollars in volume reports.

Conclusion

Removing duplicate rows in pandas is more than a single method call; it’s a deliberate process involving profiling, choosing proper keys, and validating results. By following the best practices outlined here, you’ll safeguard data quality and maintain analytic integrity.

Next Steps

Embed deduplication as a reusable utility in your codebase, and—if you query databases—consider leveraging Galaxy’s AI-assisted SQL editor to deduplicate at the source before data even touches pandas.

FAQ

What does drop_duplicates() do in pandas?

It returns a DataFrame with duplicate rows removed based on all columns or a specified subset. The original DataFrame remains unchanged unless inplace=True.

How do I keep the last occurrence of duplicates?

Sort your DataFrame chronologically and call drop_duplicates(keep="last"). Pandas will retain the final duplicate.

Does drop_duplicates() modify the DataFrame in place?

Only if you set inplace=True. Otherwise, it returns a new DataFrame.

How is handling duplicates in pandas different from SQL?

SQL often uses SELECT DISTINCT or window functions; pandas uses drop_duplicates(). The logic is similar, but pandas operates in-memory, whereas SQL works inside the database. Tools like Galaxy help bridge the gap by letting you craft deduplication SQL before moving data to pandas.

Why Removing Duplicate Rows in Pandas is important

Duplicates distort analytics, bias machine-learning models, and violate data-integrity constraints. Efficiently removing them with DataFrame.drop_duplicates() is essential for reliable data pipelines and accurate insights.

Removing Duplicate Rows in Pandas Example Usage


df.drop_duplicates(subset=["user_id", "event_time"], keep="last", inplace=True)
