Step 4: Unveiling the Duplicates

Identifying Duplicates Made Easy

After painstakingly counting the frequency of each word, finding duplicates becomes a cakewalk. All we have to do is sift through our occurrences HashMap and pinpoint the words that have a frequency of 2 or more. These are our coveted duplicates.
Code Snippet for Identifying Duplicates

Here's how you can extract the duplicates from our occurrences HashMap:

ArrayList<String> duplicates = new ArrayList<>();
for (Map.Entry<String, Integer> entry : occurrences.entrySet()) {
    if (entry.getValue() > 1) {
        duplicates.add(entry.getKey());
    }
}
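If you prefer the Stream API (Java 8+), the same filter can be expressed as a pipeline. A minimal, self-contained sketch; the class name and the sample counts standing in for our occurrences map are just for illustration:

```java
import java.util.*;
import java.util.stream.Collectors;

public class DuplicateFilter {
    public static void main(String[] args) {
        // Sample counts standing in for the occurrences map built in the previous step
        Map<String, Integer> occurrences = Map.of("original", 2, "string", 2, "once", 1);

        List<String> duplicates = occurrences.entrySet().stream()
                .filter(e -> e.getValue() > 1)   // keep words seen more than once
                .map(Map.Entry::getKey)
                .sorted()                        // deterministic order for printing
                .collect(Collectors.toList());

        System.out.println(duplicates); // [original, string]
    }
}
```

The `sorted()` step is optional; it is only there because map iteration order is not guaranteed.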
Analyzing the Complexity of Our Final Solution

Time Complexity Revisited

Let n be the number of words in our input string s.

- Populating the occurrences HashMap takes O(n) time.
- Traversing the occurrences HashMap to identify duplicates also takes O(n) time.
Summing these up, our algorithm works in linear O(n)
time, which is remarkably efficient.
Space Complexity Revisited

We've used a HashMap (occurrences) to store the frequency of each word, and a list (duplicates) to store the duplicate words. In the worst case, every word in the sentence is unique, so the map grows to n entries, making our space complexity linear, O(n).
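To see that worst case concretely, consider a sentence in which no word repeats: the map ends up with one entry per word, while the duplicates list stays empty. A small sketch (the sentence and class name are illustrative):

```java
import java.util.*;

public class WorstCaseSpace {
    public static void main(String[] args) {
        // Every word is unique, so the map grows to n entries
        String s = "every word here appears once";
        Map<String, Integer> occurrences = new HashMap<>();
        for (String word : s.toLowerCase().split(" ")) {
            occurrences.put(word, occurrences.getOrDefault(word, 0) + 1);
        }
        System.out.println(occurrences.size()); // 5 entries for 5 words
        // No count exceeds 1, so the duplicates list would remain empty
    }
}
```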
The Complete Solution

Putting the counting and filtering steps together:
import java.util.*;

public class Main {
    public static void main(String[] args) {
        String s = "Original String Original String";
        String[] split_s = s.toLowerCase().split(" ");

        // Count how many times each word appears
        Map<String, Integer> occurrences = new HashMap<>();
        for (String word : split_s) {
            if (!occurrences.containsKey(word)) {
                occurrences.put(word, 1);
            } else {
                occurrences.put(word, occurrences.get(word) + 1);
            }
        }

        // Collect every word that appears more than once
        List<String> dupes = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : occurrences.entrySet()) {
            if (entry.getValue() > 1) {
                dupes.add(entry.getKey());
            }
        }
        System.out.println(dupes);
    }
}
OUTPUT

[original, string]

(Both duplicated words are printed; since HashMap iteration order is not guaranteed, they may appear in either order.)
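As a side note, the counting loop can be tightened with Map.merge (Java 8+), which handles the first-occurrence case for you. A sketch of the same program under that idiom; the class name is illustrative, and the duplicates are sorted so the printed order is deterministic:

```java
import java.util.*;

public class MainWithMerge {
    public static void main(String[] args) {
        String s = "Original String Original String";

        Map<String, Integer> occurrences = new HashMap<>();
        for (String word : s.toLowerCase().split(" ")) {
            // merge inserts 1 for a new key, otherwise adds 1 to the existing count
            occurrences.merge(word, 1, Integer::sum);
        }

        List<String> dupes = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : occurrences.entrySet()) {
            if (entry.getValue() > 1) {
                dupes.add(entry.getKey());
            }
        }

        Collections.sort(dupes); // HashMap order is not guaranteed
        System.out.println(dupes); // [original, string]
    }
}
```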