Step 4: Unveiling the Duplicates
Identifying Duplicates Made Easy
After painstakingly counting the frequency of each word, finding duplicates becomes a cakewalk. All we have to do is sift through our occurrences HashMap and pinpoint the words that have a frequency of 2 or more. These are our coveted duplicates.
Code Snippet for Identifying Duplicates
Here's how you can extract the duplicates from our occurrences HashMap:
duplicates := []string{}
for word, count := range occurrences {
    if count > 1 {
        duplicates = append(duplicates, word)
    }
}
Analyzing the Complexity of Our Final Solution
Time Complexity Revisited
Let n be the number of words in our input string s.
- Populating the occurrences HashMap takes O(n) time.
- Traversing the occurrences HashMap to identify duplicates also takes O(n) time.
Summing these up, our algorithm runs in linear O(n) time, which is remarkably efficient.
Space Complexity Revisited
We've used a HashMap (occurrences) to store the frequency of each word, and a list (duplicates) to store the duplicate words. In a worst-case scenario, each word in the sentence is unique, making our space complexity linear O(n).
package main

import (
    "fmt"
    "strings"
)

func main() {
    s := "Original String Original String"
    split_s := strings.Fields(strings.ToLower(s))
    occurrences := make(map[string]int)
    for _, word := range split_s {
        occurrences[word]++
    }
    var dupes []string
    for k, v := range occurrences {
        if v > 1 {
            dupes = append(dupes, k)
        }
    }
    fmt.Println(dupes)
}
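The same logic can also be factored into a reusable helper, sketched below; the findDuplicates name is ours, and the result is sorted here only to make the output deterministic (map iteration order in Go is randomized):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// findDuplicates returns the lowercased words that appear more than
// once in s, sorted alphabetically for stable output.
func findDuplicates(s string) []string {
	// Step 1: count the frequency of each word.
	occurrences := make(map[string]int)
	for _, word := range strings.Fields(strings.ToLower(s)) {
		occurrences[word]++
	}
	// Step 2: collect the words with a frequency of 2 or more.
	duplicates := []string{}
	for word, count := range occurrences {
		if count > 1 {
			duplicates = append(duplicates, word)
		}
	}
	sort.Strings(duplicates)
	return duplicates
}

func main() {
	fmt.Println(findDuplicates("the quick fox and the lazy fox")) // prints: [fox the]
}
```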