Step 4: Unveiling the Duplicates

Identifying Duplicates Made Easy

Now that we've painstakingly counted the frequency of each word, finding duplicates is a cakewalk. All we have to do is sift through our occurrences HashMap and pinpoint the words that appear two or more times. These are our coveted duplicates.

Code Snippet for Identifying Duplicates

Here's how you can extract the duplicates from our occurrences HashMap:

// Collect every word whose count in the occurrences map is greater than 1.
ArrayList<String> duplicates = new ArrayList<>();
for (Map.Entry<String, Integer> entry : occurrences.entrySet()) {
    if (entry.getValue() > 1) {
        duplicates.add(entry.getKey());
    }
}
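
For context, here is a minimal, self-contained sketch showing how this extraction step fits together with the counting step from the earlier part of the walkthrough. The findDuplicates method name, the sample sentence, the whitespace split, and the use of Map.merge for counting are illustrative assumptions, not part of the original snippet.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateWords {

    // Returns the words that appear more than once in s.
    // Assumes words are separated by whitespace and comparison is case-sensitive.
    static List<String> findDuplicates(String s) {
        // Earlier step: count how often each word occurs.
        Map<String, Integer> occurrences = new HashMap<>();
        for (String word : s.trim().split("\\s+")) {
            occurrences.merge(word, 1, Integer::sum);
        }

        // Step 4: keep only the words whose count exceeds 1.
        List<String> duplicates = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : occurrences.entrySet()) {
            if (entry.getValue() > 1) {
                duplicates.add(entry.getKey());
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicates("the cat saw the dog chase the cat"));
        // Example output (HashMap does not guarantee order): [cat, the]
    }
}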

Conclusion and Complexity

Analyzing the Complexity of Our Final Solution

Time Complexity Revisited

Let n be the number of words in our input string s.

  1. Populating the occurrences HashMap takes O(n) time, since each word is processed once.
  2. Traversing the occurrences HashMap to identify duplicates also takes O(n) time, because the map holds at most n distinct words.

Summing these up, our algorithm runs in linear O(n) time, which is remarkably efficient.

Space Complexity Revisited

We've used a HashMap (occurrences) to store the frequency of each word and a list (duplicates) to store the duplicate words. In the worst case, every word in the sentence is distinct, so the occurrences map holds n entries, giving us linear O(n) space overall.
