# C++ program to count the minimum number of binary decimals needed to represent n

Suppose we have a number n. A number is called a binary decimal if it is a positive integer whose decimal digits are all either 0 or 1. For example, 1001 (one thousand and one) is a binary decimal, while 1021 is not. We have to represent n as a sum of some (not necessarily distinct) binary decimals, and then compute the smallest number of binary decimals required.

So, if the input is like n = 121, then the output will be 2, because this can be represented as 110 + 11 or 111 + 10.

## Steps

To solve this, notice that each binary decimal contributes at most 1 to every digit position of the sum, so at least as many summands as the largest decimal digit of n are needed; the greedy construction above shows that many always suffice. Hence the answer is simply the maximum digit of n, and we will follow these steps −

```
ans := -1
while n > 0, do:
   ans := maximum of ans and (n mod 10)
   n := n / 10
return ans
```

## Example

Let us see the following implementation to get a better understanding −

```cpp
#include <bits/stdc++.h>
using namespace std;

int solve(int n) {
   int ans = -1;
   while (n > 0) {
      ans = max(ans, n % 10);  // track the largest decimal digit seen so far
      n /= 10;                 // drop the last digit
   }
   return ans;
}

int main() {
   int n = 121;
   cout << solve(n) << endl;
   return 0;
}
```

## Input

121

## Output

2