# 474. Ones and Zeroes

Given an array of binary strings and two integers m and n, find the maximum number of strings you can form using at most m 0s and n 1s. Each 0 and 1 can be used at most once.

Note:

The given numbers of 0s and 1s will both not exceed 100.
The size of given string array won't exceed 600.

Example 1:

Input: Array = {"10", "0001", "111001", "1", "0"}, m = 5, n = 3
Output: 4

Explanation: At most 4 strings can be formed using the 5 0s and 3 1s: "10", "0001", "1", "0".

Example 2:

Input: Array = {"10", "0", "1"}, m = 1, n = 1
Output: 2

Explanation: You could form "10", but then you'd have nothing left. Better form "0" and "1".

# Solution

Approach 1: DP -- a 0/1 knapsack where each string has value 1 and a two-dimensional weight (its count of 0s and its count of 1s), and the capacity is (m, n).
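To make the "two-dimensional weight" idea concrete, here is a minimal sketch (using Example 1's input) of the encoding step that turns each string into a (zeros, ones) weight pair:

```python
import collections

# Each string becomes a (zeros, ones) pair -- its 2-D knapsack weight.
strs = ["10", "0001", "111001", "1", "0"]
encodings = [(collections.Counter(s)['0'], collections.Counter(s)['1']) for s in strs]
print(encodings)  # [(1, 1), (3, 1), (2, 4), (0, 1), (1, 0)]
```

With m = 5 and n = 3, the pair (2, 4) for "111001" can never fit (it needs 4 ones), while the other four pairs together use exactly 5 zeros and 3 ones, giving the answer 4.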

# Code (Python)

Approach 1:

from typing import List
import collections

class Solution:
    def findMaxForm(self, strs: List[str], m: int, n: int) -> int:
        encodings = []
        for string in strs:
            counter = collections.Counter(string)
            encodings.append((counter['0'], counter['1']))

        # knapsack problem where the weights are 2 dimensional
        # max_nums[i][zero][one]: max strings to form with the first i items that use this many zeroes and ones
        
        '''
        max_nums = [[[0] * (n + 1) for _ in range(m + 1)] for _ in range(len(strs) + 1)]
        
        for i in range(1, len(max_nums)):
            for zero in range(0, len(max_nums[0])): # a string can have 0 zeroes or ones
                for one in range(0, len(max_nums[0][0])):
                    max_nums[i][zero][one] = max_nums[i-1][zero][one]
                    if zero >= encodings[i-1][0] and one >= encodings[i-1][1]:
                        max_nums[i][zero][one] = max(max_nums[i][zero][one], 1 + max_nums[i-1][zero-encodings[i-1][0]][one-encodings[i-1][1]])

        return max_nums[-1][-1][-1]
        '''
        # space optimization
        max_nums = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, len(strs) + 1):
            for zero in range(m, -1, -1):
                for one in range(n, -1, -1):
                    if zero >= encodings[i-1][0] and one >= encodings[i-1][1]:
                        max_nums[zero][one] = max(max_nums[zero][one], 1 + max_nums[zero-encodings[i-1][0]][one-encodings[i-1][1]])
        return max_nums[-1][-1]
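As a quick sanity check, the space-optimized version can be run against both examples. This is a self-contained restatement of the solution above (with the inner loops starting at each string's weight, which is equivalent to the `if` guard):

```python
from typing import List
import collections

class Solution:
    def findMaxForm(self, strs: List[str], m: int, n: int) -> int:
        # Encode each string as its (zeros, ones) weight pair.
        encodings = [(collections.Counter(s)['0'], collections.Counter(s)['1']) for s in strs]
        # max_nums[zero][one]: max strings formable with zero 0s and one 1s available.
        max_nums = [[0] * (n + 1) for _ in range(m + 1)]
        for zeros, ones in encodings:
            # Iterate capacities downward so each string is used at most once.
            for zero in range(m, zeros - 1, -1):
                for one in range(n, ones - 1, -1):
                    max_nums[zero][one] = max(max_nums[zero][one],
                                              1 + max_nums[zero - zeros][one - ones])
        return max_nums[m][n]

print(Solution().findMaxForm(["10", "0001", "111001", "1", "0"], 5, 3))  # 4
print(Solution().findMaxForm(["10", "0", "1"], 1, 1))  # 2
```

Time complexity is O(L * m * n) for L strings (plus O(S) to count characters over total string length S), and space is O(m * n).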

# Code (C++)

Approach 1:
