Step 4: Unveiling the Duplicates
Identifying Duplicates Made Easy
After painstakingly counting the frequency of each word, finding duplicates becomes a cakewalk. All we have to do is sift through our occurrences HashMap and pinpoint the words that have a frequency of 2 or more. These are our coveted duplicates.
Code Snippet for Identifying Duplicates
Here's how you can extract the duplicates from our occurrences HashMap:

std::vector<std::string> duplicates;
for (auto& kv : occurrences) {
    if (kv.second > 1) {
        duplicates.push_back(kv.first);
    }
}
Analyzing the Complexity of Our Final Solution
Time Complexity Revisited
Let n be the number of words in our input string s.

- Populating the occurrences HashMap took O(n) time.
- Traversing the occurrences HashMap to identify duplicates also takes O(n) time.

Summing these up, our algorithm runs in linear O(n) time, which is remarkably efficient.
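To make those two passes concrete, here is a minimal sketch that wraps them in a single function (the name findDuplicateWords is just for illustration, not part of the lesson's code). Each pass touches every word or map entry once, so the total work stays proportional to n, assuming average-case O(1) hash-map operations:

#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative helper: one O(n) pass to count words, one O(n) pass to collect duplicates.
std::vector<std::string> findDuplicateWords(const std::string& s) {
    std::unordered_map<std::string, int> occurrences;
    std::stringstream ss(s);
    std::string word;
    while (ss >> word) {                  // pass 1: count every word
        occurrences[word]++;
    }

    std::vector<std::string> duplicates;
    for (const auto& kv : occurrences) {  // pass 2: keep words seen more than once
        if (kv.second > 1) {
            duplicates.push_back(kv.first);
        }
    }
    return duplicates;
}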
Space Complexity Revisited
We've used a HashMap (occurrences) to store the frequency of each word, and a list (duplicates) to store the duplicate words. In the worst case, every word in the sentence is unique, so occurrences holds n entries, making our space complexity linear O(n).
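As a quick sanity check of that worst case, assuming the findDuplicateWords sketch above: a sentence of n distinct words fills the map with n entries while the duplicates list comes back empty.

// Worst case for space: every word is unique, so the map stores one entry per word
// and no duplicates are reported.
std::vector<std::string> result = findDuplicateWords("every word here is different");
// result is empty; internally the map held 5 entries, one per distinct word.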
Putting it all together, here's the complete program:

#include <algorithm>
#include <cctype>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

using namespace std;

int main() {
    string s = "Original String Original String";

    // Lowercase the string so "Original" and "original" count as the same word.
    transform(s.begin(), s.end(), s.begin(), ::tolower);

    // Count how many times each word occurs.
    stringstream ss(s);
    string word;
    unordered_map<string, int> occurrences;
    while (ss >> word) {
        occurrences[word]++;
    }

    // Collect every word that appears more than once.
    vector<string> dupes;
    for (auto& kv : occurrences) {
        if (kv.second > 1) {
            dupes.push_back(kv.first);
        }
    }

    // Print the duplicates (unordered_map iteration order is unspecified).
    for (const string& dupe : dupes) {
        cout << dupe << endl;
    }
    return 0;
}
OUTPUT
original
string
(The two words may print in either order, since unordered_map does not guarantee iteration order.)