Find and remove duplicates

Find duplicate data

EXCEL

Finding duplicate rows based on one column
Finding duplicate rows based on multiple columns

Cygwin

  • uniq command on Cygwin (Windows) or Linux: uniq -d <file.txt> > <duplicated_items.txt>[1] (note: uniq only reports adjacent duplicate lines, so sort the input first, e.g. sort <file.txt> | uniq -d > <duplicated_items.txt>)

MySQL

Finding duplicate rows based on one column

Find the duplicated data in one column[2]

-- Generate test data.
CREATE TABLE `table_name` (
  `id` int(11) NOT NULL,
  `content` varchar(5) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `table_name` (`id`, `content`) VALUES
(1, 'apple'),
(2, 'lemon'),
(3, 'apple');

ALTER TABLE `table_name`
  ADD PRIMARY KEY (`id`);

-- Find duplicated data
SELECT `content`, COUNT(*) count 
FROM `table_name` 
GROUP BY `content` 
HAVING count > 1;
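-- Note: referring to the select alias `count` in HAVING is a MySQL
-- extension; standard SQL would require HAVING COUNT(*) > 1.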

SELECT tmp.* FROM 
( 
  SELECT `content`, count(*) count FROM `table_name` GROUP BY `content` 
) tmp 
WHERE tmp.count >1;
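-- With the test data above, both queries return a single row:
--   content = 'apple', count = 2
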
Finding duplicate rows based on multiple columns

Using CONCAT to combine multiple columns, e.g. column_1 and column_2 (see the note on separators after the queries below)

SELECT count(*) count, CONCAT(`column_1`, `column_2`) 'key'
FROM `table_name`
GROUP BY CONCAT(`column_1`, `column_2`)
HAVING count > 1;

or

SELECT tmp.key FROM
(
  SELECT count(*) count, CONCAT(`column_1`, `column_2`) 'key'
  FROM `table_name`
  GROUP BY CONCAT(`column_1`, `column_2`)
) tmp
WHERE tmp.count >= 2;
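
Note that CONCAT without a separator can merge distinct rows, e.g. CONCAT('ab', 'c') and CONCAT('a', 'bc') both produce 'abc'. A safer sketch (same table and column placeholders as above) adds a separator with CONCAT_WS, or simply groups by both columns:

-- Group by both columns directly, no concatenation needed
SELECT `column_1`, `column_2`, COUNT(*) count
FROM `table_name`
GROUP BY `column_1`, `column_2`
HAVING count > 1;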

Other cases

For counting purposes: count the id values (type: int) that appear in both table_a and table_b

SELECT count(DISTINCT(id)) FROM table_a WHERE id IN
(
   SELECT DISTINCT(id) FROM table_b
);
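
The same count can also be written as a join (a sketch using the same table and column names as above):

SELECT COUNT(DISTINCT a.id)
FROM table_a a
INNER JOIN table_b b ON a.id = b.id;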

Google Spreadsheet

Deduplicate

  • GNU Coreutils: sort invocation (Linux, or Cygwin on Windows). More details on Merge multiple plain text files.
    • To remove duplicate lines:
      • sort -us -o <output_unique.file> <input.file> works even on a large text file (GB)[4]
      • cat <input.file> | grep <pattern> | sort | uniq processes the text line by line and prints the unique lines that match a specified pattern. Equivalent to these steps: (1) cat <input.file> | grep <pattern> > <tmp.file> (2) sort <tmp.file> | uniq
    • Ignore the first n line(s) (keep them as-is) & remove duplicate lines from the rest[5][6][7]
      • (1) ignore the first line: (head -n 1 <file> && tail -n +2 <file> | sort -us) > newfile
      • (2) ignore the first two lines: (head -n 2 <file> && tail -n +3 <file> | sort -us) > newfile

Counting the number of duplicate occurrences

MySQL: find the number of duplicate occurrences between list_a & list_b, which use the same primary key column named id

  • SELECT count(DISTINCT(`id`)) FROM `list_a` WHERE `id` IN (SELECT DISTINCT(`id`) FROM `list_b`) ;

Excel:

Other

  • Watch for symbol variants that can hide duplicates, e.g. data-mining vs. data_mining (see the sketch below)
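
A minimal sketch of checking for this in MySQL, assuming a hypothetical table `tags` with a text column `name`: normalize the separator before grouping, so that variants such as data-mining and data_mining are counted as the same value.

-- `tags` and `name` are hypothetical; adjust to your own schema.
SELECT REPLACE(`name`, '_', '-') AS normalized, COUNT(*) count
FROM `tags`
GROUP BY normalized
HAVING count > 1;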

References