Find Duplicate File in System - Problem

Given a list paths of directory info strings, each containing a directory path and all the files (with their contents) in that directory, return all the duplicate files in the file system in terms of their paths.

You may return the answer in any order.

A group of duplicate files consists of at least two files that have the same content.

A single directory info string in the input list has the following format:

"root/d1/d2/.../dm f1.txt(f1_content) f2.txt(f2_content) ... fn.txt(fn_content)"

It means there are n files (f1.txt, f2.txt, ..., fn.txt) with contents (f1_content, f2_content, ..., fn_content), respectively, in the directory "root/d1/d2/.../dm".

Note that n >= 1 and m >= 0. If m = 0, it means the directory is just the root directory.
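
As a quick illustration, here is a minimal parsing sketch in Python. It is only a sketch of the format above, and the helper name parse_info is ours, not part of the problem: it splits one directory info string into the directory path and a list of (file_name, content) pairs.

    def parse_info(info):
        # Split one directory info string into (directory, [(file_name, content), ...]).
        parts = info.split(" ")
        directory, files = parts[0], []
        for token in parts[1:]:
            # token looks like "f1.txt(f1_content)"
            open_paren = token.index("(")
            name = token[:open_paren]
            content = token[open_paren + 1:-1]  # text between '(' and ')'
            files.append((name, content))
        return directory, files

    # parse_info("root/a 1.txt(abcd) 2.txt(efgh)")
    # -> ("root/a", [("1.txt", "abcd"), ("2.txt", "efgh")])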

The output is a list of groups of duplicate file paths. Each group contains the paths of all files that share the same content.

A file path is a string that has the following format: "directory_path/file_name.txt"

Input & Output

Example 1 — Basic Duplicate Detection
$ Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(abcd)", "root/c/d 4.txt(efgh)", "root 4.txt(efgh)"]
Output: [["root/a/2.txt","root/c/d/4.txt","root/4.txt"],["root/a/1.txt","root/c/3.txt"]]
💡 Note: Files with content 'efgh': root/a/2.txt, root/c/d/4.txt, root/4.txt. Files with content 'abcd': root/a/1.txt, root/c/3.txt
Example 2 — No Duplicates
$ Input: paths = ["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(ijkl)"]
Output: []
💡 Note: All files have unique content, so no duplicates exist
Example 3 — Single Directory
$ Input: paths = ["root 1.txt(same) 2.txt(same) 3.txt(different)"]
Output: [["root/1.txt","root/2.txt"]]
💡 Note: Only files with 'same' content are duplicates: root/1.txt and root/2.txt

Constraints

  • 1 ≤ paths.length ≤ 2 × 10⁴
  • 1 ≤ sum of all paths[i].length ≤ 5 × 10⁵
  • paths[i] consists of English letters, digits, '/', '.', '(', ')', and ' '.
  • You may assume no files or directories share the same name in the same directory.
  • You may assume each given directory info represents a unique directory.
  • A single blank space separates the directory path and file info.

Visualization

[Diagram: directory tree for Example 1. root/ contains a/ (1.txt(abcd), 2.txt(efgh)), c/ (3.txt(abcd) and d/ with 4.txt(efgh)), and 4.txt(efgh).]

Algorithm Steps
  1. Parse Each Path: split each entry into its directory and the file(content) tokens.
  2. Extract Content: take the text between '(' and ')'.
  3. Group by Content: HashMap mapping content → list of full file paths.
  4. Filter Duplicates: keep only groups with size ≥ 2.

HashMap structure for Example 1:
  "abcd" → root/a/1.txt, root/c/3.txt
  "efgh" → root/a/2.txt, root/c/d/4.txt, root/4.txt

Key Insight: Use the file content as the hash key so that files with identical content map to the same bucket, then discard every bucket holding only one file.

Time: O(N·L), where N is the total number of files and L is the average content length. Space: O(N·L) for storing all paths and contents.
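
Putting those steps together, here is a sketch of the hash-map grouping approach in Python. The function name findDuplicate mirrors the usual signature for this problem; treat it as an illustrative implementation rather than a reference one. The demo at the bottom runs Example 1 (group order may differ from the listing above, which the problem allows).

    from collections import defaultdict

    def findDuplicate(paths):
        groups = defaultdict(list)  # content -> list of full file paths
        for info in paths:
            directory, *files = info.split(" ")
            for token in files:
                open_paren = token.index("(")
                name = token[:open_paren]
                content = token[open_paren + 1:-1]
                groups[content].append(directory + "/" + name)
        # Keep only contents shared by at least two files
        return [group for group in groups.values() if len(group) >= 2]

    paths = ["root/a 1.txt(abcd) 2.txt(efgh)", "root/c 3.txt(abcd)",
             "root/c/d 4.txt(efgh)", "root 4.txt(efgh)"]
    print(findDuplicate(paths))
    # [['root/a/1.txt', 'root/c/3.txt'], ['root/a/2.txt', 'root/c/d/4.txt', 'root/4.txt']]

Grouping by the raw content string is fine at these constraints; for real files one would typically group by a hash of the file bytes (for example MD5 or SHA-256) instead of holding full contents in memory.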