So I took the on-campus Google hiring test yesterday, and when I saw the two problems I had to solve in an hour, I was really excited since they looked doable. However, things took a turn when I actually started to think about optimizations. Anyway, here goes:

**Problem 1**

We have an array A of N elements, and we will be asked Q queries. Each query is a single integer X, and we have to tell whether there exists an index i in the array such that the **bitwise AND** of A[i] and X equals 0. If such an index exists, print YES; otherwise print NO.

**Constraints** : 1<=N<=1e5, 1<=Q<=1e5, 1<=A[i]<=1e5

**Problem 2**

We have a binary string S of length N, and we have to find the number of substrings (that is, contiguous ranges i to j) in which the frequency of 1 is strictly greater than the frequency of 0.

**Constraints** : 1<=N<=1e6

I have spent a lot of time on them but could not come up with an optimal approach for either. I realize that the 2nd problem can be solved in O(NlogN) in a fashion similar to how we solve LIS in O(NlogN), but other than that, I am clueless about both problems. Any help will be appreciated, thanks!

A few hints for problem 2 in your post:

**Hint 1:** What happens if you replace the 0 values with -1 values?

**Answer:** The problem reduces to counting the number of subarrays with a strictly positive (greater than 0) sum. How can you count this?

**Hint 2:** Consider building a prefix-sum array. This is an array where position $$$i$$$ contains the sum of the first $$$i$$$ elements. For example, the prefix-sum array of $$$[3, 5, 6]$$$ is $$$[0, 3, 3+5, 3+5+6]$$$, or $$$[0, 3, 8, 14]$$$. How can you use this to find the answer?

**Answer:** If for some $$$L$$$ and $$$R$$$ with $$$L < R$$$ the sum of the first $$$R$$$ elements is greater than the sum of the first $$$L$$$ elements, then the sum of the elements in between is positive (and this means there are more 1s than 0s). Just count the number of such pairs. How can you do this?

**Hint 3:** How can you count inversions? How can you modify this to solve this problem?

**Answer:** It's exactly the same as counting inversions, except instead of counting pairs where the first item is bigger, you count pairs where it is smaller.
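Putting the hints together, here is a sketch of the counting with a Fenwick tree (my own illustration, not the hint author's code: '0' is treated as -1, prefix sums are shifted by N+1 so indices stay positive, and `countSubstrings` is a hypothetical name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count substrings of a binary string with strictly more 1s than 0s by
// counting pairs (L, R), L < R, with pref[L] < pref[R].
// Prefix sums lie in [-N, N], so shifting by N+1 maps them to 1..2N+1.
long long countSubstrings(const string& s) {
    int n = s.size();
    vector<int> bit(2 * n + 2, 0);  // Fenwick tree over shifted prefix sums
    auto update = [&](int i) { for (; i <= 2 * n + 1; i += i & -i) bit[i]++; };
    auto query  = [&](int i) { long long r = 0; for (; i > 0; i -= i & -i) r += bit[i]; return r; };

    long long ans = 0;
    int pref = 0;
    update(pref + n + 1);            // insert the empty prefix, pref[0] = 0
    for (char c : s) {
        pref += (c == '1') ? 1 : -1; // treat '0' as -1
        ans += query(pref + n);      // earlier prefixes strictly smaller
        update(pref + n + 1);
    }
    return ans;
}
```

Each update and query is O(log N), so the whole count is O(N log N).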

I used a segment tree to find the number of prefixes with sum strictly less than the current one.

Yeah, that's the way we do LIS in O(NlogN). But is there any simpler method?

I simply made an array of size 2*N: the first N indices store frequencies of negative sums, and the next N store positive sums.

Then at each index i I called query(0, N+cur-1), where cur is the current prefix sum, and added +1 at the present index.

Could not think of anything else during the challenge.

Can you please share your code if you don't mind? I need to know how exactly you did it. Sorry sir, but I am a noob here.

Could you import ordered set on that compiler?

Yes, I imported gnu_pbds.

I think socho has explained this part.

You have two indices `l` and `r`, and you have to find the number of ordered pairs `(l, r)` such that `l < r` and `a[l] < a[r]`. This can be turned into the known problem of counting inversions: total ways = (n choose 2) - (ways such that `l < r` and `a[l] >= a[r]`). You can solve the latter using merge sort. You can also count inversions using a policy-based data structure (ordered set) in O(n log n).
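A sketch of the merge-sort route, counting the pairs where the earlier element is strictly smaller directly instead of subtracting inversions from n choose 2 (`countSmallerPairs` is an illustrative name, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count ordered pairs (l, r) with l < r and a[l] < a[r] in O(n log n)
// using the same merge-sort machinery as inversion counting.
long long countSmallerPairs(vector<long long> a) {
    long long cnt = 0;
    function<void(int, int)> rec = [&](int lo, int hi) {
        if (hi - lo <= 1) return;
        int mid = (lo + hi) / 2;
        rec(lo, mid);
        rec(mid, hi);
        vector<long long> tmp;
        tmp.reserve(hi - lo);
        int i = lo, j = mid;
        while (i < mid && j < hi) {
            if (a[i] < a[j]) {
                cnt += hi - j;          // every remaining right element exceeds a[i]
                tmp.push_back(a[i++]);
            } else {
                tmp.push_back(a[j++]);  // ties go right so equal pairs are not counted
            }
        }
        while (i < mid) tmp.push_back(a[i++]);
        while (j < hi) tmp.push_back(a[j++]);
        copy(tmp.begin(), tmp.end(), a.begin() + lo);
    };
    rec(0, (int)a.size());
    return cnt;
}
```

Apply it to the prefix-sum array (with 0 replaced by -1) to get the answer to problem 2.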


Can you explain your solution and how we can calculate this efficiently?

To find how many smaller values there are, you can find detailed explanations here.

Yes, there is. Every next prefix increases or decreases by 1, so you can use the answer of the previous element and add/subtract the frequency of a specific value. This can be done in O(n).
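A sketch of that O(n) idea, under the assumption that '0' counts as -1: since consecutive prefix sums differ by exactly 1, a running counter of "earlier prefixes strictly below the current one" can be adjusted with frequency counts instead of a tree query (`countSubstringsLinear` is an illustrative name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n) count of substrings with more 1s than 0s. Because consecutive
// prefix sums differ by 1, the number of earlier prefixes strictly below
// the current one changes only by the frequency of a single value.
long long countSubstringsLinear(const string& s) {
    int n = s.size();
    vector<long long> freq(2 * n + 1, 0);  // freq[v + n] = occurrences of prefix value v
    int cur = 0;                           // current prefix sum ('0' counts as -1)
    long long below = 0;                   // earlier prefixes strictly less than cur
    long long ans = 0;
    freq[n] = 1;                           // the empty prefix has sum 0
    for (char c : s) {
        if (c == '1') {
            below += freq[cur + n];        // prefixes equal to cur fall strictly below
            cur++;
        } else {
            cur--;
            below -= freq[cur + n];        // prefixes equal to the new cur stop being below
        }
        ans += below;                      // substrings ending here with positive sum
        freq[cur + n]++;
    }
    return ans;
}
```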

I did the exact same thing, but since the prefix sums are not increasing, how do we find the number of positive-sum subarrays in less than O(n^2)? Edit: thanks for taking the time to explain, really appreciate it.

Won't it overflow when pre[i] goes beyond 1e9? How do we deal with that in a Fenwick tree?

**Will it overflow?** No. Each value is now 1 or -1. If all values were $$$-1$$$, the minimum we reach would be $$$-N$$$; on the other hand, if all values were $$$1$$$, the maximum we reach would be $$$N$$$. So the inputs are bounded between $$$-N$$$ and $$$N$$$. To avoid negative indices, you can just "shift" all indices up by $$$N$$$ ($$$N+1$$$ for a Fenwick tree, because its indices start at 1) in your data structure: store the value for $$$-N$$$ at index 1, $$$-N+1$$$ at index 2, and so on.

Sir, in my implementation I have used a prefix sum. So if all elements are 1, the sum of 1e6 numbers is nothing but the sum of the first 1e6 natural numbers, and it will overflow. Is my implementation correct? What am I missing here, sir?

The sum of all elements in your modified (replace 0 with -1) array is at most $$$N$$$. You don't need the prefix sum on your prefix sum array, that's what you seem to be doing.

Ok, thanks sir.

Continuing after Hint 1: I think we do not have to count inversions. We could just use the two-pointer method to calculate the number of subarrays with sum greater than or equal to X.

Problem 1: `a[i] & x == 0` iff `a[i]` is a submask of `~x`. Rephrasing the problem: you're given a mask and want to know whether the array contains a submask of it. Use sum-over-submasks DP to pre-compute the answer for all `~x` values at once, then answer each query in $$$\mathcal{O}(1)$$$ by accessing this pre-computed array.

Another approach would be a dp where dp(i) denotes the smallest element of A such that i & dp(i) = 0, or -1 if no such element exists. Updates can be done as dp[i] = dp[i ^ 2^j], where j is the highest set bit of i. The base case is to compute dp(i) separately for all i that are powers of 2, which can be done easily.

Both the problems are well known :).
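A minimal sketch of the sum-over-submasks precomputation described above (my own illustration, not the commenter's code: 17 bits cover A[i] <= 1e5, and `SubmaskOracle` is a hypothetical name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// SOS DP for problem 1: precompute, for every 17-bit mask, whether the
// array contains a submask of it. Then some A[i] & X == 0 iff some A[i]
// is a submask of ~X.
struct SubmaskOracle {
    static const int B = 17;                // 2^17 = 131072 > 1e5
    vector<char> can;                       // can[m] = 1 if some element is a submask of m
    SubmaskOracle(const vector<int>& a) : can(1 << B, 0) {
        for (int x : a) can[x] = 1;         // each element is a submask of itself
        for (int b = 0; b < B; b++)         // standard SOS sweep, bit by bit
            for (int m = 0; m < (1 << B); m++)
                if (m >> b & 1) can[m] |= can[m ^ (1 << b)];
    }
    // Does there exist an index i with a[i] & x == 0?
    bool query(int x) const {
        return can[~x & ((1 << B) - 1)];
    }
};
```

Precomputation is O(B·2^B + N) with B = 17; each query is then a single array lookup.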

Not to me.

Problem 1 can be solved using a trie.

Can you explain more? Split on every bit, and then answer each query in O(32)?

Yup.

Not really. You could have solved this with a trie if the operation were bitwise XOR instead of bitwise AND, because with AND, when the current bit of X is 0 you can go either way in the trie, so you might end up traversing the whole trie for each query.

The key observation for this problem is that A[i] <= 1e5; then do something like sum-over-subsets DP to preprocess the answer for every possible query.

Yeah, you are right. Didn't notice that case.

In reasonable time, ~ n·log(n)? I doubt it. It's not XOR.

Off topic: I have seen another set in which the first question was a significantly easier stack question and the second question was the same. So is some kind of normalization done during selection? Or do people from the same college get the same set (which would be kind of pointless)?

I don't know whether any normalization is done, but everybody gets a different set.

Problem 1 is the same as this one: https://mirror.codeforces.com/contest/165/problem/E The only difference is that the array itself forms the queries.

The first one is a basic trie question: you need to find the prefix of your choice. If the number has a 1, find a 0; else if it's a 0, then look for a 1 or a 0.

The second one is also a standard question called inversion count. We can make a prefix-sum array of the string by converting the 0s to -1 and keeping 1 as 1; then it can be solved using merge sort, a BIT, an AVL tree, an ordered set, and I don't know if there are more ways.

Won't the time complexity of the first one be O(n^2) in the worst case with your solution?

Wouldn't looking for a 1 or 0 in the trie increase the time complexity? Because now, instead of checking a single path in the trie, in the worst case (X is all zeroes) you would have to traverse the whole trie, which takes O(n log(A)) per query.

I have a very easy approach for problem 1 which no one has mentioned.

We'll create a vector of sets in which v[i] contains the set of possible elements when we consider only the rightmost i bits of every element.

Now for X, let j be the index of the most significant bit of X (from the right). We query v[j] for the complement of X; if it exists, the answer is yes, otherwise no.

Proof: if X & Y == 0, then after removing the bits which are not present in X, Y becomes the complement of X. If Y < X, we pad Y with zeros.

I think this approach will work, feel free to point out any mistakes you find.

Could you explain the time complexity?

I thought of a funny solution to problem 2, and it ended up being O(N) after some optimization (which I think is the complexity you were aiming for).

I'll define 0 to be -1 instead, which makes the calculations easier. Make an array V of size 2*n+1, where we store the number of sequences that end at the index we're iterating over and have a certain sum. Initially we take the position in that array representing sum 0 to be n. Say we have filled this array from the start of the string up to i-1. Consider these two cases:

If S[i] = 1: it's as if the "zero point" of the sequences from j to i-1 "went down", because we are adding 1 to all the suffixes that come before it. In addition, we do V[newzeropoint+1]++.

If S[i] = 0: like the previous case, it's as if the "zero point" went up, and in addition V[newzeropoint+1]++.

At any given index of the initial string, the number of subarrays that end at that point is the sum of all V[i] such that i > zeropoint. We could do that in O(log n) with a Fenwick tree (aka BIT), but we just need to maintain a variable storing this sum and update it as we move the zero point up and down, for an O(n) solution.

Approach for the 2nd question:

1. Replace each 0 with -1.
2. Calculate the prefix sum of the newly formed array (call this array pref).
3. Now we have to find the number of subarrays whose sum is > 0.
4. Let's say we have to find the subarrays ending at index j.
5. Then subarray [i, j] has positive sum iff pref[j] - pref[i-1] > 0, i.e. pref[j] > pref[i-1].
6. It means we have to find the number of elements pref[i] which are smaller than pref[j] such that i < j.
7. For this (in Python) we use SortedList, a special type of container which keeps the list sorted; by binary searching we can get the number of elements < pref[j].
8. As SortedList is a special type of container, it won't be allowed in the online rounds, hence here is the link for its source code: https://ideone.com/RyMNKu
9. We traverse from left to right, keep pushing elements into the SortedList, and add the answer for each index to our final answer.

Link for the code I solved using SortedList: https://mirror.codeforces.com/contest/1536/submission/122169458

I have no idea how to do this in C++: since set/multiset only gives you iterators, how do you convert one to an index?
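One C++ answer to this is the GNU policy tree from `__gnu_pbds`, whose `order_of_key(k)` returns the number of stored elements strictly smaller than `k`. A sketch (storing (value, index) pairs so duplicates behave like a multiset; `countSmallerPairsPbds` is an illustrative name):

```cpp
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// An order-statistics tree: order_of_key gives the rank (index) that
// std::set/multiset cannot provide directly.
typedef tree<pair<int, int>, null_type, less<pair<int, int>>,
             rb_tree_tag, tree_order_statistics_node_update> ordered_set;

// Count pairs (l, r), l < r, with a[l] < a[r] in O(n log n).
long long countSmallerPairsPbds(const vector<int>& a) {
    ordered_set os;
    long long ans = 0;
    for (int r = 0; r < (int)a.size(); r++) {
        // pairs smaller than (a[r], -1) are exactly those with value < a[r],
        // since stored indices are >= 0
        ans += os.order_of_key({a[r], -1});
        os.insert({a[r], r});
    }
    return ans;
}
```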

Problem 2: brute force (C++), PBDS (C++), Fenwick tree / BIT (C++).

Does anybody have an idea how to solve Problem 1? (Considering the trie solution is not very optimal in the worst case.)

The best I could think of is maintaining a set for each i from 1 to 32: if a number has a set bit at position i, we add it to that set. Now to answer a query X, we check the set-bit positions and take the intersection of all those sets. But again, taking intersections might give TLE. Can this be improved, or is there another way to solve this?

There is a concept known as SOS DP which is helpful for the first question.

But won't it be of exponential complexity? It is given that Q, X <= 7x10^3. Am I missing something obvious?

a_i <= 10^5, which is something like 20 bits, so the complexity is going to be O(20*2^20 + N) for precomputation and O(Q) for answering the queries.

I too got the same set.

I couldn't do the 2nd one, but I did the 1st. For the first one, I created an array of powers of 2 of length 20, and for every element of this array I checked whether it is in the keys array or not. Then for each query I traversed through the powers-of-two array and checked if that element exists in the keys array and its AND with X is 0: if so, yes, otherwise no.

PS: I had only done the 1st one partially... sorry for the mistake.

Wait, what? You're saying you passed all test cases using this completely wrong method?

I think there exists an O(n) solution for the 2nd question: we convert all 0s to -1 and want to find all l < r such that pref[r] > pref[l].

So the optimization boils down to: given an array arr, can you find all l < r such that arr[l] < arr[r] in O(n) if abs(arr[i + 1] - arr[i]) == 1? This seems possible if we store just two things: the current prefix and what we have not updated yet.

For the first, read about SOS DP

Here and here are the solutions for the first one.

First problem can be done with trie data structure

We can do the second problem by using lazy propagation on an array. We form an array that initially has a[i] = (count of prefixes having cnt(1) - cnt(0) >= i). At each index we find the value of a[(count(1) - count(0) + 1 up to i)]. After finding this value, we decrement a[min(count(1) - count(0))] up to a[(count(1) - count(0) up to i)], and since the difference between the counts changes by 1, we can apply lazy propagation. Note: count(1) - count(0) can be negative, and we can handle it with a sufficient positive offset. Time and space: O(n).

The first problem can be done with graphs. Take the values a[i] as nodes, and add a directed edge between two nodes whenever the number of zeroes in the binary representation of the parent is one more than in the child, and the positions of the 0 bits of the child are a subset of those of the parent. Then create a flag array for the occurrence of each node value, and using BFS set a node's flag to true if it is already true or if any of its parents' flags is true. Complexity O(logn).

Let me know if there is any doubt.

Since I found some coincidence with this blog, I thought of sharing it. Recently I qualified for Round 2 of the Google Girl Hackathon 2023, and to qualify for the further rounds I had to appear for a Google Online Challenge on 25th June '23. There I got two questions, and the first question of my GOC was exactly the second problem of this blog. I solved it using the merge sort (O(NlogN)) approach, and it passed all the test cases. Waiting for the results now :)