Thanks for participating!
In problem A, it is also necessary to add 4, since clicks are counted as separate moves.
1883E: can anyone tell me why my code gives negative output for the first test case? Submission: 229287818
Most likely overflow
Then it should give a different output in PyPy than in C++, but it gives the same output though.
I think in PyPy you may get accepted with that idea (because PyPy has big integers), but in C++, if you want to solve it with that idea, you must code a BigNum yourself to avoid overflow. Or you can see my idea and my code here: 229357134
thanks
It's impossible to solve Div3 D in Python without implementing a custom multiset :(
a list and sorting might work.
Create two lists, Left and Right. Sort both; if the upper bound of Right[0] exists in Left, print yes.
Right, but list deletion (and insertion) is O(n) per operation. Removals (and inserts) alone would bump you up to O(n^2), a recipe for TLE, especially in Python.
It is possible https://mirror.codeforces.com/contest/1883/submission/229319134
I skipped it during the contest, but did it just now using heaps. Let's be honest, it's clearly more difficult than just using sortedcontainers.
And it's just this problem. There may be problems where heaps do not suffice. Why would someone using Python be required to implement a SortedList or to copy/paste 200-1000 lines just to run their code? Smh.
Well I wrote a solution using 2 sets and a map.
Submission Link
In C++, a set by default stores elements in sorted order. This is not the case with Python set.
Oh, I think you can coordinate-compress the left and right values, then use a Fenwick tree to check whether there exists a segment whose right < the current left or whose left > the current right. This is my code: 229304991.
I LOVE YOUR TECHNIQUE!
thank you bro!!
Why impossible? I use two heaps to track the maximum left and the minimum right, and two dictionaries to count. Here is my code. https://mirror.codeforces.com/contest/1883/submission/229247926
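For reference, a minimal Python sketch of this lazy-deletion idea (two heaps plus counters). It assumes the "+ l r" / "- l r" query format of 1883D and hasn't been run against the judge, so treat it as a sketch rather than the commenter's actual code:

import sys, heapq
from collections import defaultdict

def main():
    data = sys.stdin.buffer.read().split()
    q = int(data[0])
    left_heap, right_heap = [], []        # max-heap of l (negated), min-heap of r
    cnt_l, cnt_r = defaultdict(int), defaultdict(int)
    out = []
    pos = 1
    for _ in range(q):
        op = data[pos].decode()
        l = int(data[pos + 1]); r = int(data[pos + 2])
        pos += 3
        if op == '+':
            cnt_l[l] += 1; cnt_r[r] += 1
            heapq.heappush(left_heap, -l)
            heapq.heappush(right_heap, r)
        else:
            cnt_l[l] -= 1; cnt_r[r] -= 1
        # lazily discard heap tops whose count has dropped to zero
        while left_heap and cnt_l[-left_heap[0]] == 0:
            heapq.heappop(left_heap)
        while right_heap and cnt_r[right_heap[0]] == 0:
            heapq.heappop(right_heap)
        # two disjoint segments exist iff (max l) > (min r)
        if left_heap and right_heap and -left_heap[0] > right_heap[0]:
            out.append("YES")
        else:
            out.append("NO")
    sys.stdout.write("\n".join(out) + "\n")

main()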
Wow, your idea is very interesting! I have never seen this technique before.
You can use map + set to replace the multiset. Pop from the set when map[thing] == 1.
Still, there is a way to solve this problem.
Actually, in G2, you don't need binary search. You can just sort the two arrays and pair them up greedily.
Can you explain in more detail please?
you can see my submission. link
Nice solution. Congratulations for CM
Thanks! ^_^
Can you explain your solution, please?
First, he paired up elements in $$$A$$$ with those of $$$B$$$ greedily after sorting them while keeping track of elements of $$$B$$$ that are not paired with anyone. Then we can simply pair the value of the first element of $$$A$$$ (the one which is varying) with the largest unpaired element of $$$B$$$. Then
To make the order of array A irrelevant, do it for the last n-1 elements with a multiset and then look at the first element from 1 to m.
https://mirror.codeforces.com/contest/1883/submission/272451827
Can you tell me why my D is getting a wrong answer?
Div 1. B can be solved in linear time. https://mirror.codeforces.com/contest/1888/submission/229257846
Div2 D2/G2 can be solved in O(nlogn). If we increase a number then we need to check if it satisfies at only 1 index. If currently the number is at a[i], then we can increase a[i] till min(a[i+1], b[removals+i]) without changing the answer. Do some casework. My AC code link
I have done something similar but from a different angle. At each i-th step from 1 to m, I calculated the position of i in the sorted vector a. If that position is among the number of elements removed in the previous step, then the number of elements to be removed is the same again; otherwise I checked whether the new i is still less than the current b's element at that position, and if not, I calculated how many more elements I have to remove.
I traversed from 1 to m using binary search, since the answer for many consecutive steps is the same.
Link — https://mirror.codeforces.com/contest/1888/submission/229293946
Div. 1 A can be solved with multiset in $$$\Theta(n\log n)$$$: submission
I am a beginner; I got stuck on A and cannot proceed further. Can anyone tell me how to improve?
no, google search and figure it out yourself.
1888E. I'm curious about the cases where my code fails. Code
Maybe because of this line:
lli ind = upper_bound(record_time[rec].begin(), record_time[rec].end(), dist[u]) - record_time[rec].begin();
When it's the source node, you can go to a neighbouring node at the same time, so it should be lower_bound.
My source node has distance 0.
I figured it out now. Thanks
Div3 D is really misleading. The question doesn't clearly describe the ability to pair l and r randomly.
Two codes below use very similar algorithms but with different data structures; only the first code is accepted:
implemented using two multisets, left and right:
implemented using a multiset<pair<int,int>>:
You are not free to pair them up randomly.
However, you can prove that if each pair of intervals has a nonempty intersection, then there is an index that is contained in all of the intervals (take the maximum left endpoint: by pairwise intersection it is at most every right endpoint). So it is enough to check for this.
Me — "I can solve it piece of cake" Div 3E ; 'Haha I am E Don't even think' Nice problems and I personally like the experiment :)
Div1 D can also be solved using divide and conquer: https://mirror.codeforces.com/contest/1887/submission/229298128
If you don't mind, can you explain your solution? That's the method I tried, but I couldn't come up with a solution with a good time complexity.
My solution's a bit more complicated than the intended one.
For a single query, you can initially add $$$a_l$$$ to the left split and keep expanding the left split while there is a smaller element in the right split. Similarly, you can add $$$a_r$$$ to the right split and keep expanding the right split while there is a larger element in the left split.
If you had only prefix queries, you could sweep on the right index of the query and maintain $$$prv[i]$$$ = left index of the right split for the interval $$$[1, i]$$$. For suffix queries you can maintain $$$nxt[i]$$$ = right index of the left split for the interval $$$[i, n]$$$.
In the divide and conquer, when on the interval $$$[l, r]$$$, run a sweep to compute $$$prv[i]$$$ for $$$[mid + 1, r]$$$ and $$$nxt[i]$$$ for $$$[l, mid]$$$. This can be done in $$$O(n \log n)$$$ time, so by the master theorem, a total complexity of $$$O(n \log^2 n)$$$.
When in the interval, process all queries that cover $$$mid$$$. The conditions for there being no split are that the interval $$$[l_i, nxt[l_i]]$$$ contains an element greater than one in $$$[mid + 1, r_i]$$$ and the interval $$$[prv[r_i], r_i]$$$ contains an element smaller than one in $$$[l_i, mid]$$$. I used some segtrees to maintain this, which is also $$$O(n \log n)$$$. This is also $$$O(n \log^2 n)$$$ by the master theorem.
I think my implementation has a bunch of redundant code :clown:
Thanks, I understand your solution! The structure of the divide and conquer is similar to a disjoint sparse table if I understood correctly, so I think it can be optimised to $$$O(n \log n)$$$ with the following observations:
1) To find $$$prv(i)$$$ for $$$[x, y]$$$ you can do it in $$$O(n)$$$ time: $$$prv(i)$$$ is either $$$prv(i - 1)$$$ or $$$i$$$, because if element $$$i$$$ is smaller than the maximum from $$$prv(i - 1)$$$, then the new $$$prv$$$ must be $$$i$$$; maintaining that maximum can be done with another pointer that only moves from $$$x$$$ to $$$y$$$, which is $$$O(n)$$$ total. The same goes for $$$nxt$$$.
2) Because a disjoint sparse table query is $$$O(1)$$$, you can answer every query in $$$O(\log n)$$$: you only need to check the conditions from your last paragraph, which are $$$O(\log n)$$$ (or $$$O(1)$$$ if you precompute them in $$$O(n \log n)$$$).
So you get $$$O(n \log n)$$$ plus $$$O(1)$$$ or $$$O(\log n)$$$ per query.
Feeling that 1B=1C=1D=1E. Small difficulty gap. Nice round.
I can't see the English solution to Div1C
Fixed
Oh my goodness, when I solved Dances (Easy Version), I accidentally misread 'reorder' in the statement as 'record', thinking that the task asked me to store it in a local array; I even thought the statement was elaborately detailed!!! Now I would like to know whether there are any algorithms capable of solving this problem without the 'reorder' condition. I tried dp but it seems too difficult; I would be grateful if someone could share some thoughts with me.
Problem 1887C is awesome. I really like the idea of 1887C.
I agree, the idea in the editorial is very clever.
Do you know what is the segtree solution that many other implementations used?
Prob E. AC 229306255 and WA 229307572.
My idea is to use log2. In AC, I use double and ceil, while in WA, I use long long and handle ceil in an alternative manner. Help me
Not sure if this is the case; I got WA on test 7 using double and ceil, and WA on test 13 using long double and ceil. Maybe the test data is made so that precision errors occur and affect your answer.
How did you finally avoid that?
I didn't :( This is how I solved it in the end. I don't think this problem intended for us to use double/long double though.
Can someone explain the solution to 1883F? Why does it work?
Never mind. I understood the difference between subarray and subsequence. Now this solution makes sense.
In problem E, why do we get TLE if we go the naive way, multiplying them by 2?
Not sure why tle, but I think it’s because you’re multiplying very large integers
Even if we are multiplying large numbers, i.e. a[0] >= 1e9 and the rest are 1, 32 times 1e5 operations shouldn't result in TLE, right?
Suppose the array size is 1e5 and a[0] = 1e9, a[1] = 1e9 - 1, a[2] = 1e9 - 2, and so on. In cases like these the multiplication won't stay within 32 bits; the values keep growing and the products become huge, and I think this is where the time limit gets exceeded.
I'm wondering, is there any C++ implementation without using multiset? I tried using a vector and sorting after every update, but it leads to TLE.
I don't see why you would use a vector and sort after every update when that's literally what a multiset does. There are ways to implement this without multiset, with some other data structure instead, though why bother when multiset works.
Yes, you are right. I'm just curious whether multiset is the only solution. It seems the answer is yes.
It's not; as I said, you can use some data structure like a segment tree, but it's pointless when multiset exists.
Bruh, pushing into a vector and sorting requires O(n log n) every time. Let x = the number of elements you push one by one; then the time complexity is O(x·n log n).
Whereas a multiset keeps itself sorted with O(log n) insertions, so the time complexity is O(x log n).
Yes exactly, thanks a lot!
Are you 邹必谦???
oh i thought it was personal talk, sry mb
https://mirror.codeforces.com/contest/1883/submission/229322842
can anyone tell me where I am going wrong. Any help would be appreciated . Thanks in advance
What if the segments are [1,5], [2,3], [4,7]? As 5 > 4 your code will give "NO", but there's still [2,3], which doesn't intersect [4,7].
In 1883B, even when we do not remove all k characters but just get the frequencies even, we still print "YES"... why?
Let o be the number of characters that appear an odd number of times in the string. You have to delete at least max(0, o - 1) characters, since a palindrome can have at most one character with an odd count, appearing at the center, while the others have to be symmetric and appear on both sides. If k >= max(0, o - 1), you can delete those max(0, o - 1) characters first, and then delete the rest two at a time; if there is 1 deletion remaining, you can remove the center or create a center by deleting one character. Either way you can palindromize it.
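A minimal Python sketch of this criterion (the helper name is mine; it relies on the problem's guarantee that k < n):

from collections import Counter

def can_palindromize(s: str, k: int) -> bool:
    # characters with an odd count; a palindrome allows at most one of them
    odd = sum(1 for c in Counter(s).values() if c % 2 == 1)
    # the remaining deletions can always be absorbed, as argued above
    return max(0, odd - 1) <= k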
In F:
Note that a subarray suits us if $$$a_l$$$ is the leftmost occurrence of the number $$$a_l$$$ in the array and $$$a_r$$$ is the rightmost occurrence of the number $$$a_r$$$ in the array.
How can we prove this?
If there's another $$$a_l$$$ appearing earlier, we can just start the subsequence there and we would have at least 2 such subsequences; the same goes for $$$a_r$$$.
Thanks! Very beautiful observation :)
In problem Div3, F, I got TLE (submission 229270873) using unordered_map. I replaced it with map (submission 229332985) and it got accepted. Isn't unordered_map supposed to be faster?
Unordered structures are linear in the worst case. They use hash functions internally, so they are O(1) on average, but in case of many collisions a lookup takes O(n) operations. A regular map, on the other hand, is O(log n) in the worst case. Use a custom hash.
There is a fun solution of 1887C using something that is sometimes called Radewoosh trick (https://mirror.codeforces.com/blog/entry/62602).
First, let's use the idea of transitioning to the difference array.
Suppose that we are given some array state after several operations. Let's call this STATE. Then, for every state that has ever occurred, we can find out whether it is lexicographically smaller than STATE or not: we process the change operations in the original order and maintain a Fenwick tree f, where f[i] = 0 if the value at index i in the current state is the same as in STATE and 1 otherwise. To check whether the current state is lexicographically greater than STATE or not, we just need to get the first 1 in the Fenwick tree and perform one comparison.
After that we can just change STATE to the random state that is lexicographically smaller than STATE. Expected number of such runs is $$$O(\log n)$$$, which gives $$$O(n \log ^ 2 n)$$$ solution, which is fast enough to pass the tests.
My code: https://mirror.codeforces.com/contest/1888/submission/229293148.
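As a side note, the "first 1 in the Fenwick tree" step can be done with binary lifting instead of an extra binary search; a small Python sketch (class and method names are mine, values assumed non-negative):

class FirstOneBIT:
    # Fenwick tree over 0/1 values: point updates and "index of the first 1" queries
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def update(self, i, delta):            # 1-based point update
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def first_one(self):                   # smallest i with prefix sum >= 1, else n + 1
        pos, need = 0, 1
        for k in range(self.n.bit_length(), -1, -1):
            nxt = pos + (1 << k)
            if nxt <= self.n and self.t[nxt] < need:
                pos = nxt
                need -= self.t[nxt]
        return pos + 1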
Dang this is cool, thanks for sharing
Div2 D2 can be solved in linear time with the following algorithm. First, you greedily find the minimum number of needed operations with your array a having only n - 1 elements (greedily as in: while b[j] <= a[i] we decrease i, and only then decrease j, over the previously sorted arrays). So we have a solution in O(n).
My idea was that when you try to insert your m at position i (so the array stays sorted), the following can happen:
1) Because you shifted all elements left of m one more place to the left, their conditions may now be ruined by their new pairing with a.
2) Your m can be >= b[i + numOfOperations] (we paired every a[i] with b[i + numOfOperations] with our greedy method; check for yourself).
In both of those cases we need to increase numOfOperations for this m by 1. But let's look at it in reverse: say our answer is (numOfOperations + 1) * m, so we 'predict' that one of these two conditions always holds, and we need to find when it won't. We can deduce that if we start increasing m from 1 and reach a point where the first condition disturbs the pairing, it will also disturb it for all m >= the current one. If the second condition happens, it just means we need to shift our search to between the next two elements of array a.
So the solution looks like this:
1) Run the greedy for the starting a and b.
2) In every interval between a[i] and a[i + 1], we can quickly check for which m our prediction was wrong (neither of the two conditions happened), which is for all m between a[i] and b[i + numOfOperations].
3) If at any moment while iterating i we reach a point where the first condition happens, we end the iteration, because to fix the pairing from then on we would need to add 1 to every solution, which we already did.
Time complexity: O(n).
thank you for the solution!
229293708 should give tle
How to solve for k = 4 in Div 3 problem C (Raspberries)?
The product can be made divisible by 4 in two ways: first, by having two or more even numbers (i.e. divisible by 2), and second, by adding the minimum x to make a single element divisible by 4. Take the cheaper of the two.
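A small Python sketch of those two cases (the function name is mine); the answer for k = 4 is the minimum of the two costs:

def min_ops_for_k4(a):
    # Case 1: turn a single element into a multiple of 4
    single = min((-x) % 4 for x in a)
    # Case 2: have at least two even elements (each odd element needs one increment)
    evens = sum(1 for x in a if x % 2 == 0)
    two_evens = max(0, 2 - evens) if len(a) >= 2 else single
    return min(single, two_evens)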
In problem 1883F, why am I getting TLE?? 229371473. It's O(n).
OK, I got it: I was using unordered_map.
The rounds were beautiful, they helped me get to Specialist for the first time! Really glad :D
About Div1 F: there are some details about the boundaries. There should be no $$$nxt_i$$$ equal to $$$r_1$$$, but the editorial didn't mention this; please update the editorial.
I am a Python user and am wondering how I can solve 1883D (In Love), since Python doesn't have a built-in multiset.
you can implement it using set and map
maybe i play genshin impact to much...
tooooooooooooooooo much
Could anybody explain why this is correct?
I mean, with a rigorous proof?
It's true because we have exactly one occurrence of the current subarray as a subsequence only when the boundaries of the subarray are fixed, which is the case only when their values don't have any other occurrences outside the current subarray!
Thanks very much
Suppose i, j where a[i] == a[j] && i < j and we claim that a[j] is a suitable left boundary.
Then there is a contradiction as we can build another subsequence that is the same by swapping a[i] with a[j]. In other words, if a[j]....a[R] is the subarray, there exist another subsequence in A where a[j] can be swapped with a[i]. Hope this helps :)
Thank you
In G2. Dances (Hard Version), how to prove the line "changing one element of array a cannot worsen the answer by more than 1"? :")
We can always just delete this element and the smallest element in b.
1883F: Anyone know why my solution TLEs when I use unordered_map but gets AC when I use map?
Submission: 229484801
That task got an anti-test for unordered_ structures. The hash table inside the structure gives $$$O(1)$$$ on average, but that test was built specifically to cause hash collisions. Those collisions raise the complexity up to $$$O(n)$$$, which is the worst case. A normal map doesn't use a hash table and is always $$$O(\log n)$$$. That nominally increases the complexity, but a lower constant and the impossibility of "unlucky cases" make it work much faster compared to unordered_map. You can find more info in the comments under the announcement post.
In problem 1883E, can anyone tell me why my code is getting runtime error? My submission
Help please:
https://mirror.codeforces.com/contest/1887/submission/229544478
Div 1 D problem. I don't know why this fails at test case 29.
Div3 E can be solved easily using logarithms and their properties. It's more optimal and even simpler to understand; the code for the same is as follows:
Could you tell me what is wrong in this?
Could someone explain to me why the answer for the test case 3 2 2 3 in problem 1883F (You Are So Beautiful) is 3 and not 4?
There are 4 different subsequences ([2,2], [2,3], [2,2,3], [3]).
The subsequence [2,3] isn't unique: we can choose indexes 2,4 or 3,4 (similarly with the subsequence [3]). In this case we have suitable arrays [3,2,2], [2,2], [2,2,3] and [3,2,2,3].
could anyone please tell me what is wrong with my submission? submission- 229277838
This part, I believe, can lead to wrong answers.
Why am I getting TLE? Can anyone please tell me the reason behind the TLE?
while (t--) {
// code of problem F (You Are So Beautiful)
Seems correct to me. Try not using "unordered" sets and maps; their internal hash implementation is not that great, which can cause collisions resulting in TLE. Either use an ordered map or an unordered map with a custom hash. See this for more info.
Can someone tell me why we can't just subtract the number of bad subarrays from the total in You Are So Beautiful?
pragma GCC optimize("Ofast")
pragma GCC target("avx,avx2,fma")
pragma GCC optimization("unroll-loops")
include
include
include
include
include
include
include
include
include
include
include
include
include
include
include
include
include
include<stdio.h>
include
include<math.h>
include
include
include
include "ext/pb_ds/assoc_container.hpp"
include "ext/pb_ds/tree_policy.hpp"
using namespace std; using namespace __gnu_pbds;
template using ordered_set = tree<T, null_type, less, rb_tree_tag, tree_order_statistics_node_update> ;
template<class key, class value, class cmp = std::less> using ordered_map = tree<key, value, cmp, rb_tree_tag, tree_order_statistics_node_update>;
// find_by_order(k) returns iterator to kth element starting from 0; // order_of_key(k) returns count of elements strictly smaller than k;
define inf 2e18
define ll long long
define FAIZAN ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL)
const ll mod = 1e9 + 7;
vector sieve(int n) {int*arr = new intn + 1; vector vect; for (int i = 2; i <= n; i++)if (arr[i] == 0) {vect.push_back(i); for (int j = 2 * i; j <= n; j += i)arr[j] = 1;} return vect;}
ll inv(ll i) {if (i == 1) return 1; return (mod — ((mod / i) * inv(mod % i)) % mod) % mod;}
ll mod_mul(ll a, ll b) {a = a % mod; b = b % mod; return (((a * b) % mod) + mod) % mod;}
ll mod_add(ll a, ll b) {a = a % mod; b = b % mod; return (((a + b) % mod) + mod) % mod;}
ll gcd(ll a, ll b) { if (b == 0) return a; return gcd(b, a % b);}
ll ceil_div(ll a, ll b) {return a % b == 0 ? a / b : a / b + 1;}
ll pwr(ll a, ll b) {a %= mod; ll res = 1; while (b > 0) {if (b & 1) res = res * a % mod; a = a * a % mod; b >>= 1;} return res;}
int main() { FAIZAN; int t; cin>>t; while(t--) { ll n; cin>>n; vector v(n); map<ll,vector> mp; for(int i=0; i<n; i++) { cin>>v[i]; mp[v[i]].push_back(i); }
}
Wonderful problems. Thank you!
Alternate brainless $$$O((n + q)\sqrt{n})$$$ solution for div1-D:
Divide the array into square root blocks. Now for queries spanning only one or two square root blocks we can simply bruteforce in $$$O(\sqrt{n})$$$ time.
For queries spanning > 2 blocks, lets think of how we can efficiently check whether a split point exists within a square root block $$$B$$$ which lies completely within the query.
Let $$$B_l$$$ and $$$B_r$$$ be the left and right bounds of the block $$$B$$$. Let $$$mx(l, r) = max(a_l, a_{l + 1} \dots , a_r)$$$ and $$$mn(l, r)$$$ is similarly defined for the minimum.
Now, for a query $$$(l, r)$$$, a split point $$$S$$$ exists within block $$$B$$$ if $$$max(mx(l, B_l - 1), mx(B_l, S))$$$ < $$$min(mn(S + 1, B_r), mn(B_r + 1, r))$$$.
Assume that $$$mx(l, B_l - 1) < mn(B_r + 1, r)$$$ and $$$mx(B_l, S) < mn(S + 1, B_r)$$$ from here onwards, as if that's not true, it's trivial to see that $$$S$$$ cannot be a split point.
Claim: For any query $$$(l, r)$$$, a split point is present in block $$$B$$$ if and only if there exists some $$$S$$$ such that the ranges $$$[mx(l, B_l - 1), mn(B_r + 1, r)]$$$ and $$$[mx(B_l, S), mn(S + 1, B_r)]$$$ have a non zero intersection.
If there are no such ranges, then each $$$S$$$ falls under one of two cases
Therefore, for every block, we need to create a structure which holds information about a set of ranges of size $$$p = O(\sqrt{n})$$$ and can efficiently answer queries of the form:
Given a range $$$[l, r]$$$ answer whether any range in the set intersects with $$$[l, r]$$$.
We can easily solve this subproblem online in $$$O(p + range)$$$ time with some precomputation.
Create a boolean array $$$a$$$ in which $$$a_i$$$ is true if some range in the given set covers $$$[i, i + 1]$$$. This array can be created very easily in $$$O(p + range)$$$ time, and it's a very standard task.
Now the answer to any query $$$[l, r]$$$ will be true if and only if some $$$a_i$$$ is true for some $$$i \in [l, r)$$$. This can be answered in $$$O(1)$$$ with prefix sums.
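A small Python sketch of this standard precomputation, using a difference array plus prefix sums (0-based indices and function names are my own convention, not part of the original write-up):

def build_cover_prefix(ranges, size):
    # a_i is true if some stored range [l, r] covers the unit piece [i, i + 1], i.e. l <= i < r
    diff = [0] * (size + 1)
    for l, r in ranges:
        if l < r:
            diff[l] += 1
            diff[r] -= 1
    pref = [0] * (size + 1)        # pref[i] = number of true a_j with j < i
    run = 0
    for i in range(size):
        run += diff[i]
        pref[i + 1] = pref[i] + (1 if run > 0 else 0)
    return pref

def intersects_some_range(pref, l, r):
    # true iff a_i is true for some i in [l, r)
    return pref[r] - pref[l] > 0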
So, we prebuild this structure for each block in $$$O(n\sqrt{n})$$$ time and $$$O(n\sqrt{n})$$$ memory.
Now each query is answered by firstly brute forcing over the blocks at the edge, and for each block $$$B$$$ which lies completely in the query, we query the structure corresponding to $$$B$$$ to check if the range $$$[mx(l, B_l - 1), mn(B_r + 1, r)]$$$ intersects with anything.
Finally, we achieve a total time complexity of $$$O((n + q)\sqrt{n})$$$ and consume $$$O(n\sqrt{n})$$$ memory.
Code: link
https://mirror.codeforces.com/contest/1883/submission/233002798
O(NlogN) solution for G2.
I solved 1883D - In Love by converting a[i] to log(a[i]) and binary searching for the minimum k such that a[i] + k >= a[i-1]. That's much simpler, but I love your technique!
In problem 1883E (Look Back), the naive way is faster than the optimized way if the input is like this: 1e9 1 1e9 1 1e9 1 1e9 1 1e9 1. So why does it get TLE with the naive approach?
In your test case, for the second element you need 30 operations (2^30 >= 1e9); then for the third element you need 1 more (1e9 * 2, which is approximately 2^31), and so on, so you reach a position where the number can be as large as 2^(1e5), which will lead to overflow.
Could someone please provide the Python code for 1883E? I'm having trouble implementing the idea into Python code. I was able to find a C++ version through Google, but I couldn't find any Python code.
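Not the C++ version mentioned above, but here is a rough Python sketch of the same problem that avoids floating point by tracking per-element doubling counts; the I/O assumes the usual "t test cases, each with n and the array" format, and it hasn't been submitted, so treat it as a sketch:

import sys

def min_operations(a):
    # minimum number of doublings to make a non-decreasing (1883E)
    total = 0
    prev_ops = 0                          # doublings applied to a[i - 1]
    for i in range(1, len(a)):
        x, y = a[i], a[i - 1]
        need = 0                          # signed: doublings a[i] needs relative to a[i - 1]
        if x >= y:
            while x >= 2 * y:             # a[i] could even afford to lag behind
                x //= 2
                need -= 1
        else:
            while x < y:
                x *= 2
                need += 1
        prev_ops = max(0, prev_ops + need)
        total += prev_ops
    return total

def main():
    data = sys.stdin.buffer.read().split()
    t = int(data[0]); pos = 1
    out = []
    for _ in range(t):
        n = int(data[pos]); pos += 1
        a = list(map(int, data[pos:pos + n])); pos += n
        out.append(str(min_operations(a)))
    print("\n".join(out))

main()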
Can someone explain problem E of Div 3 in more detail or give another solution? I didn't understand the editorial.
// Can someone please tell me what's wrong in this code for Problem D
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define mod 1000000007
// For the ordered_set data structure
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
// *s.find_by_order, s.order_of_key
typedef __gnu_pbds::tree<int, __gnu_pbds::null_type, less<int>, __gnu_pbds::rb_tree_tag, __gnu_pbds::tree_order_statistics_node_update> ordered_set;
ll power(ll a, ll n) // Binary Exponentiation
{ ll res = 1; while (n) { if (n & 1) /* n is odd */ { n--; res = ((res % mod) * (a % mod)) % mod; } else { n /= 2; a = ((a % mod) * (a % mod)) % mod; } } return res; }
// Sieve of Eratosthenes
const ll N = 100000;
vector<ll> sieve(N, 0);
// The sieve is for finding divisors of a number
void sieveoe() { for (ll i = 2; i * i <= N; i++) { for (ll j = i * i; j <= N; j += i) { sieve[j] = 1; } } }
template <typename T> inline std::string to_string(const T& t) { std::stringstream ss; ss << t; return ss.str(); }
int main() { ios_base::sync_with_stdio(false);
cin.tie(NULL);
cout.tie(NULL);
ll q;
cin >> q;
multiset<pair<ll, ll>> ms;
while (q--) {
// ... (rest of the code is cut off) ...
}
Problem G2, $$$O(n \log n)$$$ with the two pointers method. https://mirror.codeforces.com/contest/1883/submission/255096685
Idea: Sort both arrays. Iterating forward only, greedily match $$$i$$$ with the lowest $$$j$$$ such that $$$a[i] < b[j]$$$. Since we can match at most $$$n-1$$$ elements, we will always be left with at least one unmatched element in $$$b$$$; out of all the unmatched ones, save the largest, $$$LU$$$. Denote by $$$x$$$ the number of pairs removed from both $$$a$$$ and $$$b$$$. Then for each inserted value $$$\ge LU$$$ we use $$$x + 1$$$ operations, and for each inserted value $$$< LU$$$ we use $$$x$$$ operations.
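A direct (untested) Python transcription of this idea; the function signature and the final summation over all inserted values 1..m are my reading of the comment, not the linked submission:

def answer_for_test(n, m, a, b):
    # a has n - 1 fixed elements, b has n elements, the inserted value runs over 1..m
    a = sorted(a)
    b = sorted(b)
    used = [False] * n
    j = 0
    x = 0                                   # elements of a that stay unmatched
    for val in a:
        while j < n and b[j] <= val:        # skip b's that are too small for val
            j += 1
        if j < n:
            used[j] = True                  # match val with the smallest b > val
            j += 1
        else:
            x += 1
    lu = max(b[i] for i in range(n) if not used[i])   # largest unmatched element of b
    cheap = min(m, lu - 1)                  # inserted values v < lu cost x operations
    return cheap * x + (m - cheap) * (x + 1)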
Can anyone tell me if the solution of problem B will work for the string "daabdad" when k = 3? Shouldn't the answer be "YES"? But the solution code is printing "NO".
Hi, this is Reshwanth. 1883G1 (https://mirror.codeforces.com/problemset/problem/1883/G1) doesn't deserve the 1400; it's just that the LANGUAGE is so hard to understand. It took less time to think of the idea for the answer than to understand the question itself.
My Opinion tho :)
can anyone tell me on which testcase this code is failing: 278325131
In problem B (the chemistry one), if k >= 25 then the answer will always be YES because of the pigeonhole principle.
There is a linear solution for 1888E.
Let us call a node released if it is reachable from vertex $$$1$$$, and call a node unlocked if there exists an edge between it and a released vertex but it has not been released yet. First, vertex $$$1$$$ is released. At each moment, we release the unlocked vertices that are connected with a released vertex at that moment, and then we unlock the nodes which are not released but are connected with the newly released nodes. To do so, we can maintain all the unlocked vertices for all time moments using vectors. Since we only have to release a vertex once, and an edge produces no more than two unlock operations, the total time complexity is $$$O(n + t + k + \sum m_i)$$$. Code
Note that you cannot push back elements into a vector while iterating over it with for (auto ele : vec), or you may get a runtime error (push_back can invalidate the iterators the range-for uses).