Step 4: Unveiling the Duplicates
Identifying Duplicates Made Easy
After painstakingly counting the frequency of each word, finding duplicates becomes a cakewalk. All we have to do is sift through our occurrences dictionary and pinpoint the words that have a frequency of 2 or more. These are our coveted duplicates.
Code Snippet for Identifying Duplicates
Here's how you can extract the duplicates from our occurrences dictionary:
List<string> duplicates = new List<string>();
foreach (KeyValuePair<string, int> occurrence in occurrences)
{
    if (occurrence.Value > 1)
    {
        duplicates.Add(occurrence.Key);
    }
}
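If you prefer LINQ, the same filter can be written more compactly. This is a sketch of an equivalent alternative, not part of the original solution, and it assumes a using System.Linq; directive at the top of the file:

// Equivalent LINQ version: keep entries seen more than once, project their keys.
List<string> duplicates = occurrences
    .Where(pair => pair.Value > 1)
    .Select(pair => pair.Key)
    .ToList();

Both versions do the same single pass over the dictionary; the loop is more explicit, while the LINQ chain reads closer to the English description of the step.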
Analyzing the Complexity of Our Final Solution
Time Complexity Revisited
Let n be the number of words in our input string s.
- Populating the occurrences dictionary takes O(n) time.
- Traversing the occurrences dictionary to identify duplicates also takes O(n) time.
Summing these up, our algorithm runs in linear O(n) time, which is remarkably efficient.
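Here is a minimal sketch that checks this linear behavior empirically. The FindDuplicates helper, the vocabulary size, and the input lengths are all illustrative choices rather than part of the original solution; on an O(n) algorithm, doubling n should roughly double the elapsed time:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public class Program {
    // Same algorithm as above: count word frequencies, then collect duplicates.
    static List<string> FindDuplicates(string s) {
        var occurrences = new Dictionary<string, int>();
        foreach (string word in s.ToLower().Split(' ')) {
            occurrences[word] = occurrences.TryGetValue(word, out int count) ? count + 1 : 1;
        }
        var duplicates = new List<string>();
        foreach (var pair in occurrences) {
            if (pair.Value > 1) {
                duplicates.Add(pair.Key);
            }
        }
        return duplicates;
    }

    public static void Main() {
        // Build inputs of n words drawn from a small vocabulary, so duplicates exist.
        foreach (int n in new[] { 100_000, 200_000, 400_000 }) {
            string s = string.Join(" ", Enumerable.Range(0, n).Select(i => "word" + (i % 1000)));
            var sw = Stopwatch.StartNew();
            FindDuplicates(s);
            sw.Stop();
            Console.WriteLine($"n = {n}: {sw.ElapsedMilliseconds} ms");
        }
    }
}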
Space Complexity Revisited
We've used a dictionary (occurrences) to store the frequency of each word, and a list (duplicates) to store the duplicate words. In the worst case, every word in the sentence is unique, so the dictionary holds one entry per word, making our space complexity linear, O(n).
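To make the worst case concrete, here's a small sketch (the sentences are made up for illustration) showing that the dictionary's size tracks the number of distinct words:

using System;
using System.Collections.Generic;

public class Program {
    public static void Main() {
        // Worst case for space: every word is unique, so the dictionary holds n entries.
        string allUnique = "alpha beta gamma delta";
        // Opposite extreme: one distinct word, so the dictionary holds a single entry.
        string allSame = "echo echo echo echo";

        foreach (string s in new[] { allUnique, allSame }) {
            var occurrences = new Dictionary<string, int>();
            foreach (string word in s.ToLower().Split(' ')) {
                occurrences[word] = occurrences.TryGetValue(word, out int count) ? count + 1 : 1;
            }
            Console.WriteLine($"\"{s}\": {occurrences.Count} dictionary entries for 4 words");
        }
    }
}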
using System;
using System.Collections.Generic;

public class Program {
    public static void Main() {
        string s = "Original String Original String";
        // Normalize case so "Original" and "original" count as the same word.
        string[] split_s = s.ToLower().Split(' ');

        // Count how many times each word occurs.
        Dictionary<string, int> occurrences = new Dictionary<string, int>();
        foreach (string word in split_s) {
            if (!occurrences.ContainsKey(word)) {
                occurrences[word] = 1;
            } else {
                occurrences[word] += 1;
            }
        }

        // Collect every word that appears more than once.
        List<string> duplicates = new List<string>();
        foreach (KeyValuePair<string, int> occurrence in occurrences) {
            if (occurrence.Value > 1) {
                duplicates.Add(occurrence.Key);
            }
        }

        Console.WriteLine(string.Join(", ", duplicates));
    }
}
OUTPUT
original, string