The Problem Statement
One of the classic algorithm problems frequently encountered in interviews is the Two Sum problem: find two indices in an array such that the elements at those indices sum to a given target. It seems simple, but the real depth lies in optimizing the solution. The problem is usually stated like this:
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
You can return the answer in any order.
- Example 1:
│ Input: nums = [2,7,11,15], target = 9
│ Output: [0,1]
│ Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].
- Example 2:
│ Input: nums = [3,2,4], target = 6
│ Output: [1,2]
- Example 3:
│ Input: nums = [3,3], target = 6
│ Output: [0,1]
Constraints:
- 2 <= nums.length <= 10^4
- -10^9 <= nums[i] <= 10^9
- -10^9 <= target <= 10^9
- Only one valid answer exists.
Follow-up: Can you come up with an algorithm that is less than O(n^2) time complexity?
At first glance this isn’t difficult: you could loop through the array and, for each element, run a second loop to look for a matching pair.
The Naïve Solution: A Double Loop Approach
The most intuitive solution involves a nested loop where we compare every possible pair of elements in the array.
While this will work:
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        for i in range(len(nums)):
            # Only look at later elements, so no pair is checked twice
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]
You’d be left with a very slow solution.
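Restated as a standalone function (the name is just for illustration), the nested-loop approach returns the expected indices for the examples above:

```python
from typing import List

def two_sum_naive(nums: List[int], target: int) -> List[int]:
    # Same nested-loop idea as the class above, as a plain function
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return [i, j]

print(two_sum_naive([2, 7, 11, 15], 9))  # [0, 1]
print(two_sum_naive([3, 2, 4], 6))       # [1, 2]
```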
Analysis
Time Complexity: O(n^2)
For every element, you iterate through the rest of the array, resulting in quadratic growth.
Space Complexity: O(1)
No extra space is used beyond the input array.
Performance
On an array with 10,000 elements, this approach performs roughly n(n−1)/2 ≈ 5 × 10^7 pair comparisons.
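A quick sanity check of that operation count:

```python
n = 10_000
operations = n * (n - 1) // 2  # each unordered pair is checked at most once
print(operations)  # 49995000, i.e. roughly 5 * 10**7
```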
This becomes computationally expensive for larger arrays. Let’s explore more efficient approaches.
So why don’t we explore another type of data structure?
Optimized Approach: Using a Hash Table
The key insight to optimize is leveraging a hash table to store already-visited numbers and their indices. This allows checking for the required complement in constant time.
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        num_dict = {}  # maps each value seen so far to its index
        for i, num in enumerate(nums):
            complement = target - num
            if complement in num_dict:
                return [num_dict[complement], i]
            num_dict[num] = i
Analysis
Time Complexity: O(n)
Each lookup and insertion in the hash table is O(1) on average, making the entire solution linear in the array size.
Space Complexity: O(n)
The hash table grows linearly with the number of elements in the array.
Performance
For an array with 10,000 elements, the hash table approach requires roughly 10,000 operations.
Why the Hash Table Works
The hash table approach works because of the identity target = num + complement, i.e. complement = target − num.
Instead of re-checking all pairs, the algorithm checks if the complement of the current number exists in the hash table. If found, we have our solution.
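To see this in action, here is a short trace on Example 2 (nums = [3, 2, 4], target = 6); the helper name and print statements are purely illustrative:

```python
from typing import List

def traced_two_sum(nums: List[int], target: int) -> List[int]:
    num_dict = {}
    for i, num in enumerate(nums):
        complement = target - num
        # Show what the algorithm is looking for and what it has seen
        print(f"i={i}: num={num}, need {complement}, seen={num_dict}")
        if complement in num_dict:
            return [num_dict[complement], i]
        num_dict[num] = i

print(traced_two_sum([3, 2, 4], 6))  # [1, 2]
```

At i = 2 the needed complement 2 is already in the table (stored at index 1), so the search stops without re-checking any earlier pair.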
Handling Edge Cases
- Negative Numbers: Both solutions handle negative numbers seamlessly because the arithmetic logic applies universally.
- Duplicates: The constraint that “only one valid answer exists” simplifies handling duplicates.
- Large Arrays: The hash table approach stays fast for arrays with millions of elements, at the cost of O(n) memory.
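The negative-number and duplicate cases can be checked directly, with the hash-table method restated as a standalone function:

```python
from typing import List

def two_sum(nums: List[int], target: int) -> List[int]:
    num_dict = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in num_dict:
            return [num_dict[complement], i]
        num_dict[num] = i

print(two_sum([-3, 4, 90, 7], 4))  # [0, 3]  negative numbers: -3 + 7 == 4
print(two_sum([3, 3], 6))          # [0, 1]  duplicates: second 3 finds the first
```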
Empirical Benchmarks
Let’s benchmark both solutions in Python with a random input (the class names here are just labels for the two implementations above):

import random
import time
from typing import List

class NaiveSolution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return [i, j]

class OptimizedSolution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        num_dict = {}
        for i, num in enumerate(nums):
            complement = target - num
            if complement in num_dict:
                return [num_dict[complement], i]
            num_dict[num] = i

# Random array of size 10,000; the target is built from the last two
# elements so a solution is guaranteed (and sits near the naïve worst case)
nums = [random.randint(-10**9, 10**9) for _ in range(10**4)]
target = nums[-2] + nums[-1]

start = time.time()
NaiveSolution().twoSum(nums, target)
print(f"Naïve Approach Time: {time.time() - start:.4f}s")

start = time.time()
OptimizedSolution().twoSum(nums, target)
print(f"Optimized Approach Time: {time.time() - start:.4f}s")
Expected Results
Naïve Approach: ≈ 2–3 seconds for 10,000 elements.
Optimized Approach: ≈ 0.01–0.03 seconds for 10,000 elements.
Follow-Up: Less Than O(n^2) Complexity?
The hash table approach already achieves O(n). To improve further:
- Sorted Array (Two Pointer Technique): If the array is sorted, use two pointers to converge toward the target sum. Code:
from typing import List

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Sort (index, value) pairs by value so the original indices survive
        pairs = sorted(enumerate(nums), key=lambda x: x[1])
        left, right = 0, len(pairs) - 1
        while left < right:
            current_sum = pairs[left][1] + pairs[right][1]
            if current_sum == target:
                return [pairs[left][0], pairs[right][0]]
            elif current_sum < target:
                left += 1
            else:
                right -= 1
Analysis
Time Complexity: O(n log n) (for sorting) + O(n) (for pointer traversal).
Space Complexity: O(n) (to maintain indices).
- Parallel Processing: On very large datasets, you can parallelize hash table construction and searches across multiple threads.
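A minimal sketch of that parallel idea, assuming the hash-table method above (function and variable names are illustrative): the value-to-index map is built once, then worker threads each scan their own slice of indices against the shared map.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List, Optional, Tuple

def parallel_two_sum(nums: List[int], target: int,
                     workers: int = 4) -> Optional[Tuple[int, int]]:
    # Build the value -> index map once, sequentially (O(n))
    index_of = {num: i for i, num in enumerate(nums)}

    def search(chunk: range) -> Optional[Tuple[int, int]]:
        # Each worker checks complements for its own slice of indices
        for i in chunk:
            j = index_of.get(target - nums[i])
            if j is not None and j != i:
                return (i, j)
        return None

    step = max(1, len(nums) // workers)
    chunks = [range(s, min(s + step, len(nums)))
              for s in range(0, len(nums), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for found in pool.map(search, chunks):
            if found is not None:
                return found
    return None

print(parallel_two_sum([2, 7, 11, 15], 9))  # (0, 1)
```

Note that in CPython the GIL limits the real speedup for pure-Python workers; the pattern pays off mainly with process pools or GIL-free runtimes, so treat this as a structural sketch rather than a guaranteed win.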
Our observations so far
The Two Sum problem teaches fundamental optimization techniques:
- Trade-offs between time and space complexity.
- Using hash tables for efficient lookups.
- Thinking critically about constraints to reduce unnecessary computations.
In practice, the hash table method balances simplicity and performance, making it a go-to solution for similar problems.