Problem A1: Pentagon Orchard (Easy Version)
Author : Casual_W
First, calculate the total number of trees without worrying about visibility.
Layer 1 has 5 trees.
Layer 2 has 10 trees.
.......
Layer $$$k$$$ has $$$5k$$$ trees.
What is the sum of $$$5k$$$ for $$$k=1$$$ to $$$n$$$?
A tree is "visible" if no other tree blocks the line of sight from the center. Think of a tree's position as a fraction along one side of the pentagon. A tree at layer $$$k$$$ and position $$$m$$$ (where $$$0 \le m \le k$$$) corresponds to the fraction $$$m/k$$$. When is the fraction $$$m/k$$$ "blocked" by a previous layer? It is blocked if the fraction can be simplified (e.g., $$$2/4$$$ is blocked by $$$1/2$$$).
A tree is visible only if its position fraction $$$m/k$$$ cannot be simplified. This happens when $$$\gcd(m, k) = 1$$$. The number of integers $$$1 \le m \le k$$$ coprime to $$$k$$$ is given by Euler's Totient Function, $$$\phi(k)$$$. The vertices of the pentagon are special cases; handle them carefully. Total Hidden = Total Trees - Visible Trees.
We need to find the number of "hidden" trees in an orchard formed by concentric pentagons. Let's break the counting into two parts:
- Calculate the Total number of trees.
- Calculate the Visible number of trees.
1. Counting Total Trees
Layer $$$k$$$ is a regular pentagon. The problem states it has exactly 5 vertices and $$$5(k-1)$$$ additional trees on the edges.
Total trees in layer $$$k = 5 + 5(k-1) = 5k$$$.
Summing this from $$$1$$$ to $$$n$$$: $$$\sum_{k=1}^{n} 5k = \frac{5n(n+1)}{2}$$$.
2. Counting Visible Trees
Consider one side of the pentagon on layer $$$k$$$. The trees on this side divide it into $$$k$$$ equal segments. We can assign each tree an index $$$m$$$ ($$$0 \le m \le k$$$) representing its position along the side. The relative position of a tree can be described by the fraction $$$\frac{m}{k}$$$.
A tree at layer $$$k$$$ with index $$$m$$$ is visible from the center if and only if there is no tree at the same relative position in any previous layer $$$k' \lt k$$$. Mathematically, this means the fraction $$$\frac{m}{k}$$$ must be irreducible.
Let's analyze the counts for each layer $$$k$$$:
- Vertices ($$$m=0$$$ and $$$m=k$$$): For $$$k=1$$$, the vertices are visible. For any $$$k \gt 1$$$, the vertices are hidden by the vertices of layer 1 (since $$$\gcd(k,k) \neq 1$$$ and $$$\gcd(0,k) \neq 1$$$).
- Edge Trees ($$$1 \le m \lt k$$$): The number of visible trees strictly on the edge of one side is the count of integers $$$m \in [1, k-1]$$$ such that $$$\gcd(m, k) = 1$$$. This is exactly the definition of Euler's Totient Function, $$$\phi(k)$$$.
(Note: For $$$k=1$$$, $$$\phi(1)=1$$$, so $$$5\phi(1)=5$$$, which correctly counts the 5 initial vertices).
Conclusion
The number of visible trees up to layer $$$n$$$ is $$$\sum_{k=1}^n 5 \phi(k)$$$.
The number of hidden trees is: $$$\text{Hidden} = \frac{5n(n+1)}{2} - 5\sum_{k=1}^{n} \phi(k)$$$.
Since $$$n \le 10^5$$$, we can precompute $$$\phi(k)$$$ for all $$$k$$$ using a sieve (or simple iteration) and use prefix sums to answer each test case in $$$O(1)$$$.
#include <bits/stdc++.h>
using namespace std;
#define int long long
const int MAXN = 100005;
int phi[MAXN];
int prefix_phi[MAXN];
void precompute() {
for (int i = 0; i < MAXN; i++) phi[i] = i;
for (int i = 2; i < MAXN; i++) {
if (phi[i] == i) {
for (int j = i; j < MAXN; j += i)
phi[j] -= phi[j] / i;
}
}
prefix_phi[0] = 0;
for (int i = 1; i < MAXN; i++) {
prefix_phi[i] = prefix_phi[i-1] + phi[i];
}
}
void solve() {
int n;
cin >> n;
int total_trees = 5 * n * (n + 1) / 2;
int visible_trees = 5 * prefix_phi[n];
cout << total_trees - visible_trees << endl;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
precompute();
int t;
cin >> t;
while(t--) {
solve();
}
}
Problem A2: Pentagon Orchard (Hard Version)
Author : Casual_W
The core formula remains the same as the Easy Version: $$$\text{Hidden} = \frac{5n(n+1)}{2} - 5\sum_{i=1}^{n} \phi(i)$$$.
However, since $$$n \le 10^9$$$, a simple linear loop is too slow.
You need an algorithm that runs faster than $$$O(n)$$$.
Let $$$S(n) = \sum_{i=1}^n \phi(i)$$$. We need to calculate $$$S(n)$$$ efficiently.
Recall the property of the Euler Totient function: $$$\sum_{d|n} \phi(d) = n$$$.
If we sum this identity over all $$$i$$$ from $$$1$$$ to $$$n$$$, we get: $$$\sum_{i=1}^{n} \sum_{d \mid i} \phi(d) = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}$$$.
Can you rearrange the left side to express $$$S(n)$$$ in terms of smaller values of $$$S$$$?
By changing the order of summation, we derive the recurrence relation known as the Du Jiao Sieve: $$$S(n) = \frac{n(n+1)}{2} - \sum_{g=2}^{n} S\left(\left\lfloor \frac{n}{g} \right\rfloor\right)$$$.
You can compute this using recursion with memoization.
1. For small $$$n$$$ (up to $$$\approx 2 \cdot 10^6$$$), precompute $$$S(n)$$$ using a linear sieve.
2. For large $$$n$$$, use the recursive formula and a hash map (or std::map).
Difference from Easy Version
In the hard version, $$$N$$$ can be up to $$$10^9$$$. The formula derived in the easy version is: $$$\text{Hidden} = \frac{5N(N+1)}{2} - 5\sum_{i=1}^{N} \phi(i)$$$.
Calculating $$$\sum \phi(i)$$$ linearly takes $$$O(N)$$$ time, which is too slow for $$$10^9$$$. We need a faster approach, specifically the Du Jiao Sieve, which computes this prefix sum in roughly $$$O(N^{2/3})$$$ time.
Explanation: Du Jiao Sieve
Let $$$S(n) = \sum_{i=1}^n \phi(i)$$$.
We know the property of Euler's totient function involving Dirichlet convolution: $$$\sum_{d \mid n} \phi(d) = n$$$ (i.e., $$$\phi * 1 = \mathrm{id}$$$). Summing it over $$$1, \dots, n$$$ and regrouping by the cofactor gives $$$S(n) = \frac{n(n+1)}{2} - \sum_{g=2}^{n} S\left(\left\lfloor \frac{n}{g} \right\rfloor\right)$$$.
Algorithm:
- Precompute $$$\phi(i)$$$ and its prefix sums for small values (up to $$$K \approx N^{2/3}$$$, e.g., $$$2 \cdot 10^6$$$) using a linear sieve.
- Recursion with Memoization: For values larger than $$$K$$$, use the formula above.
- The summation $$$\sum_{g=2}^n S(\lfloor n/g \rfloor)$$$ can be computed efficiently using harmonic ranges (since $$$\lfloor n/g \rfloor$$$ takes limited distinct values).
- Store computed results in a hash map (map or unordered_map) to avoid re-calculation.
Time Complexity: $$$O(N^{2/3})$$$. For $$$N=10^9$$$, this is approximately $$$10^6$$$ operations, which fits within the time limit.
#include <bits/stdc++.h>
using namespace std;
#define int long long
const int MAXK = 2000005;
int phi[MAXK];
int prefix_phi[MAXK];
map<int, int> memo;
void precompute() {
phi[1] = 1;
vector<int> primes;
static bool is_prime[MAXK];
memset(is_prime, true, sizeof(is_prime));
for (int i = 2; i < MAXK; i++) {
if (is_prime[i]) {
primes.push_back(i);
phi[i] = i - 1;
}
for (int p : primes) {
if (i * p >= MAXK) break;
is_prime[i * p] = false;
if (i % p == 0) {
phi[i * p] = phi[i] * p;
break;
} else {
phi[i * p] = phi[i] * (p - 1);
}
}
}
prefix_phi[0] = 0;
for (int i = 1; i < MAXK; i++) {
prefix_phi[i] = prefix_phi[i-1] + phi[i];
}
}
int get_sum_phi(int n) {
if (n < MAXK) return prefix_phi[n];
if (memo.count(n)) return memo[n];
int total = n * (n + 1) / 2;            // sum_{i<=n} i, from the convolution identity
for (int l = 2, r; l <= n; l = r + 1) {
r = n / (n / l);                        // [l, r] share the same value of n / l
int count = (r - l + 1);
total -= count * get_sum_phi(n / l);
}
return memo[n] = total;
}
void solve() {
int n;
cin >> n;
int total_trees = 5 * n * (n + 1) / 2;
int visible_trees = 5 * get_sum_phi(n);
cout << total_trees - visible_trees << endl;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
precompute();
int t;
cin >> t;
while(t--) {
solve();
}
return 0;
}
Problem B : Helicopter Rescue
Author : Godbot
The answer is monotonic: if capacity $$$T$$$ works, any $$$T' \gt T$$$ works. Use Binary Search on the Answer.
Check Function: Greedily add people to the current trip. Start a new trip only when adding the next person would exceed the limit $$$T$$$.
Update Logic:
- If check(mid) is true, then mid is possible. Try to find a smaller answer: ans = mid, r = mid - 1.
- If check(mid) is false, we need more capacity: l = mid + 1.
The problem asks us to partition the array of people $$$m_1, m_2, \dots, m_N$$$ into at most $$$K$$$ contiguous subarrays (trips) such that the maximum tension required for any single trip is minimized. The tension for a group with total mass $$$M_{load}$$$ is given by $$$T = M_{load} \cdot G_{eff}$$$.
This is a classic "Minimize the Maximum" problem, which can often be solved using Binary Search on the Answer.
1. Monotonicity
Let's define a predicate function check(T) which returns true if it is possible to rescue all people in $$$\le K$$$ trips using a cable with tension rating $$$T$$$, and false otherwise.
If a tension rating $$$T$$$ is sufficient to rescue everyone, then any rating $$$T' \gt T$$$ is definitely sufficient as well. This monotonicity allows us to binary search for the minimum valid $$$T$$$.
2. The Check Function (Greedy Strategy)
For a fixed tension capacity $$$T_{limit}$$$, how do we determine the minimum number of trips required? Since the order of people is fixed, we can use a greedy approach:
- Start the first trip with the first person.
- Continue adding the next person in the queue to the current trip as long as the total tension does not exceed $$$T_{limit}$$$.
- If adding the next person would exceed $$$T_{limit}$$$, we must finalize the current trip and start a new one with that person.
- If any single individual requires a tension greater than $$$T_{limit}$$$ on their own ($$$m_i \cdot G_{eff} \gt T_{limit}$$$), then this $$$T_{limit}$$$ is immediately invalid.
After iterating through all people, if the number of trips used is less than or equal to $$$K$$$, then $$$T_{limit}$$$ is feasible.
We perform binary search within the range $$$[L, R]$$$. In each step, if check(mid) is true, we store mid as a potential answer and try smaller values ($$$R = mid - 1$$$). Otherwise, we look for larger values ($$$L = mid + 1$$$).
3. Time Complexity
The check function iterates through the array once, taking $$$O(N)$$$ time. The binary search runs for $$$O(\log(\sum m_i \cdot G_{eff}))$$$ iterations.
The total time complexity is $$$O(N \cdot \log(\text{Total Mass} \cdot G_{eff}))$$$, which fits comfortably within the time limit for $$$N \le 2 \cdot 10^5$$$.
#include<bits/stdc++.h>
#define int long long
#define pb push_back
using namespace std;
bool ok(int n,int k,int g,vector<int>&a,int m){
int c=1,cur=0;                 // trips used so far, tension of the current trip
for(int i=0;i<n;i++){
int w=a[i]*g;                  // tension contributed by person i
if(w>m) return 0;              // one person alone exceeds the limit
if(cur+w<=m){
cur+=w;
}else{
c++;                           // finalize current trip, start a new one
cur=w;
}
}
return c<=k;
}
int32_t main(){
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin>>t;
while(t--){
int n,k,g;
cin>>n>>k>>g;
vector<int>a(n);
int mx=0,sum=0;
for(int i=0;i<n;i++){
cin>>a[i];
mx=max(mx,a[i]);
sum+=a[i];
}
int l=mx*g,r=sum*g,ans=r;
while(l<=r){
int m=l+(r-l)/2;
if(ok(n,k,g,a,m)){
ans=m;
r=m-1;
}else{
l=m+1;
}
}
cout<<ans<<"\n";
}
}
Problem C : Bitwise Transitions
Author : Godbot
Analyze the bitwise difference for small numbers manually.
$$$1 (01) \to 2 (10)$$$: Difference is 2 bits. (Ends in 1 one)
$$$3 (011) \to 4 (100)$$$: Difference is 3 bits. (Ends in 2 ones)
$$$7 (0111) \to 8 (1000)$$$: Difference is 4 bits. (Ends in 3 ones)
Do you see the relationship between the number of trailing ones and the bitwise difference?
If a number ends in exactly $$$p$$$ ones, adding 1 will flip those $$$p$$$ ones to zeros and flip the next zero to a one. The total change is $$$p+1$$$ bits.
To get a difference of exactly $$$k$$$, the number must end in exactly $$$k-1$$$ ones.
We need to count numbers $$$i \lt n$$$ that end in the pattern ...011...1 (with $$$k-1$$$ ones).
This pattern repeats every $$$2^k$$$ numbers.
We need to count how many integers $$$i$$$ in the range $$$[0, n-1]$$$ change exactly $$$k$$$ bits when incremented to $$$i+1$$$.
Mathematical Derivation
1. The Bitwise Property
When we add $$$1$$$ to an integer $$$i$$$, the number of bits that flip is determined by the number of trailing ones in $$$i$$$.
Specifically, if $$$i$$$ ends with $$$z$$$ ones (e.g., $$$\dots 0\underbrace{11\dots1}_{z}$$$), adding $$$1$$$ will flip all those $$$z$$$ ones to zeros, and flip the zero immediately to their left to a one.
The total bits changed (the bitwise difference) is $$$z + 1$$$.
2. The Condition
We are given that the difference is $$$k$$$.
In binary, $$$i$$$ must look like: $$$\dots 0 \underbrace{11\dots1}_{k-1}$$$.
3. The Formula
This pattern repeats with a period of $$$2^k$$$. The first number satisfying this is $$$2^{k-1} - 1$$$.
To count how many such numbers exist strictly less than $$$n$$$ (they form the arithmetic progression $$$2^{k-1}-1,\ 2^{k-1}-1+2^k,\ \dots$$$), we can use the direct formula: $$$\left\lfloor \frac{n + 2^{k-1}}{2^k} \right\rfloor$$$.
#include <bits/stdc++.h>
using namespace std;
#define int long long
void solve() {
int n, k;
cin >> n >> k;
if (k == 0) {
cout << 0 << endl;
return;
}
int ans = (n + (1ll << (k - 1))) >> k;
cout << ans << endl;
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
Problem D : Score Normalization
Author : Godbot
The problem asks us to minimize $$$\sum a_i^2$$$.
First, notice that the sign of $$$a_i$$$ does not affect its square ($$$(-x)^2 = x^2$$$). Since we can increment or decrement freely, we can treat all scores as their absolute values $$$|a_i|$$$ and always choose to decrement them towards 0.
The problem effectively becomes: Reduce the sum of squares of $$$|a_i|$$$ using $$$k$$$ operations.
The function $$$f(x) = x^2$$$ is convex. Reducing a larger number gives a bigger drop in the sum of squares than reducing a smaller number.
This implies a Greedy Strategy: We should always apply our operation to the student with the currently largest absolute score. We are essentially "leveling down" the peaks of the distribution from top to bottom.
Sort the array of absolute values in descending order.
Instead of simulating one by one (which is too slow), process them in groups. If the top $$$i$$$ elements are equal, we can try to reduce all of them together to match the next largest value ($$$a_{i+1}$$$).
Let $$$S$$$ be the sum of all absolute values. If $$$k \ge S$$$, we can reduce everyone to 0. If $$$(k-S)$$$ is odd, the answer is 1; otherwise, it is 0.
If $$$k \lt S$$$, we need to perform the operations greedily on the largest elements.
Algorithm
- Convert all $$$a_i$$$ to $$$|a_i|$$$ and sort them in descending order. Append a $$$0$$$ at the end to handle the boundary.
- Iterate through the sorted array. Let's say we are at index $$$i$$$ (1-based), meaning the first $$$i$$$ elements are currently the "largest" and have been flattened to the same value $$$a_i$$$.
- Calculate the operations needed to bring these $$$i$$$ elements down to the level of the next element $$$a_{i+1}$$$: $$$\text{needed} = i \times (a_i - a_{i+1})$$$.
- If $$$k \ge \text{needed}$$$: we have enough operations. Perform them (subtract $$$\text{needed}$$$ from $$$k$$$) and continue to the next level. Effectively, the first $$$i$$$ elements are now equal to $$$a_{i+1}$$$.
- If $$$k \lt \text{needed}$$$: we cannot fully lower this group to $$$a_{i+1}$$$, so we stop here and distribute the remaining $$$k$$$ operations as evenly as possible among these $$$i$$$ elements: each is lowered by $$$\lfloor k / i \rfloor$$$, and $$$k \bmod i$$$ of them are lowered by an additional $$$1$$$.
Calculate the final sum of squares and print the answer.
Time Complexity
Sorting takes $$$O(N \log N)$$$. The linear pass takes $$$O(N)$$$.
Total: $$$O(N \log N)$$$.
Problem E : Equal Floors
Author : Its_Tarun
Use the property that $$$\lfloor a/i \rfloor$$$ takes at most $$$2\sqrt{a}$$$ distinct values.
This is often called the Harmonic Lemma.
You can find all ranges $$$[l, r]$$$ where $$$\lfloor a/i \rfloor$$$ is constant using the standard loop:
for(l=1; l<=n; l=r+1) r=n/(n/l);
The number of $$$i$$$'s yielding a specific floor value in this range is $$$r - l + 1$$$.
Store the frequency of each floor value for $$$a$$$ in a map (or a vector of pairs).
Do the same for $$$b$$$.
Finally, iterate through the stored values. If a value $$$k$$$ appears in both, add $$$(\text{freq}_a[k] \times \text{freq}_b[k])$$$ to the answer.
The total answer is the sum over all possible values of $$$k$$$: $$$\text{Answer} = \sum_{k} \text{freq}_a[k] \times \text{freq}_b[k]$$$.
1. The Harmonic Lemma
A key property of the floor function is that the value $$$\lfloor n/i \rfloor$$$ takes at most $$$2\sqrt{n}$$$ distinct values as $$$i$$$ ranges from $$$1$$$ to $$$n$$$.
Furthermore, these values remain constant over contiguous ranges of $$$i$$$. We can iterate through these ranges efficiently using the standard "Square Root Decomposition" loop (also called the Harmonic Lemma loop):
for (l = 1; l <= n; l = r + 1) r = n / (n / l);
In each iteration, the floor value $$$val = \lfloor n/l \rfloor$$$ is constant for all $$$i \in [l, r]$$$. The number of such $$$i$$$'s is simply the length of the range: $$$r - l + 1$$$.
2. The Strategy
We can compute the frequency of every floor value produced by $$$a$$$ and store it in a map (or a list). We do the same for $$$b$$$.
Then, we iterate through the values generated by $$$a$$$. If a value $$$k$$$ is also generated by $$$b$$$, we multiply their frequencies and add to the total.
Time Complexity: $$$O(\sqrt{a} + \sqrt{b})$$$ (or with log factor if using maps). Given the constraint on the sum of square roots, this fits comfortably within the time limit.
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define MOD 1000000007
void solve(){
int a , b ;
cin >> a >> b ;
map<int,int> mp1;
for (int l=1; l<=a; ){
int val = a/l;
if (val == 0) break;
int r = a/val;
if (r > a) r = a;
int count = (r - l + 1) % MOD;
mp1[val] = (mp1[val] + count) % MOD;
l = r + 1;
}
int ans = 0;
for (int l=1; l<=b; ){
int val = b/l;
if (val == 0) break;
int r = b/val;
int countB = (r - l + 1) % MOD;
if (mp1.count(val)) {
int countA = mp1[val];
int pairs = (countA * countB) % MOD;
ans = (ans + pairs) % MOD;
}
l = r + 1;
}
cout << ans << endl;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while(t--){
solve();
}
return 0;
}
Problem F : The Broken Staircase
Author : Casual_W
Extract the initial heights $$$H_1, H_2, \dots, H_M$$$ from the grid input.
Note that the row index $$$r$$$ corresponds to the height.
This problem can be solved using Dynamic Programming.
Define a state: $$$DP[i][h]$$$ = the minimum cost to arrange the first $$$i$$$ pillars such that the $$$i$$$-th pillar ends up exactly at height $$$h$$$.
To calculate $$$DP[i][h]$$$, look at the previous pillar $$$i-1$$$.
The constraint $$$|h_i - h_{i-1}| \le 1$$$ means the previous height could only have been $$$h-1$$$, $$$h$$$, or $$$h+1$$$.
Transition: $$$DP[i][h] = |H_i - h| + \min(DP[i-1][h-1], DP[i-1][h], DP[i-1][h+1])$$$.
Problem Analysis
We have $$$M$$$ pillars, and each pillar $$$i$$$ has an initial platform height $$$H_i$$$. We need to choose new heights $$$h_1, h_2, \dots, h_M$$$ for each pillar such that the staircase is "smooth". The condition for smoothness is $$$|h_i - h_{i+1}| \le 1$$$ for all adjacent pillars. The cost of moving a platform is $$$|H_i - h_i|$$$. We want to minimize the total cost $$$\sum |H_i - h_i|$$$.
Dynamic Programming Approach
Since the choice of height for the current pillar depends only on the height of the previous pillar, this problem can be solved using Dynamic Programming.
Let $$$DP[i][j]$$$ be the minimum cost to arrange the first $$$i$$$ pillars such that the $$$i$$$-th pillar ends up exactly at height $$$j$$$.
Transitions:
To place the $$$i$$$-th pillar at height $$$j$$$, the $$$(i-1)$$$-th pillar must have been at a height $$$k$$$ such that $$$|j - k| \le 1$$$. The valid values for $$$k$$$ are $$$j-1, j, j+1$$$.
Therefore, the recurrence relation is: $$$DP[i][j] = |H_i - j| + \min(DP[i-1][j-1],\ DP[i-1][j],\ DP[i-1][j+1])$$$.
Base Case:
For the first pillar ($$$i=1$$$): $$$DP[1][j] = |H_1 - j|$$$ for every height $$$j$$$.
Final Answer:
After filling the table up to column $$$M$$$, the answer is the minimum value in the last column of the DP table: $$$\min_{1 \le j \le N} DP[M][j]$$$.
Time Complexity: $$$O(N \cdot M)$$$ per test case. Since the sum of $$$N \cdot M \le 3 \cdot 10^6$$$, this fits well within the time limit.
#include <bits/stdc++.h>
using namespace std;
#define int long long
const int INF = 1e18;
void solve() {
int N, M;
cin >> N >> M;
vector<int> H(M + 1);
for (int r = 1; r <= N; r++) {
for (int c = 1; c <= M; c++) {
int val;
cin >> val;
if (val == 1) {
H[c] = r;
}
}
}
vector<vector<int>> dp(M + 1, vector<int>(N + 2, INF));
for (int h = 1; h <= N; h++) {
dp[1][h] = abs(H[1] - h);
}
for (int i = 2; i <= M; i++) {
for (int h = 1; h <= N; h++) {
int prev_min = dp[i-1][h];
if (h > 1) prev_min = min(prev_min, dp[i-1][h-1]);
if (h < N) prev_min = min(prev_min, dp[i-1][h+1]);
dp[i][h] = abs(H[i] - h) + prev_min;
}
}
int ans = INF;
for (int h = 1; h <= N; h++) {
ans = min(ans, dp[M][h]);
}
cout << ans << endl;
}
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
Problem G : Bilewater
Author : JAS1123
You can write $$$A_j \bmod k = A_j - k \cdot \lfloor \frac{A_j}{k} \rfloor$$$.
To calculate $$$\lfloor \frac{A_j}{k} \rfloor$$$ try to group elements of $$$A$$$ by their quotient when divided by $$$k$$$.
By substituting the identity $$$A_j \bmod k = A_j - k \cdot \lfloor \frac{A_j}{k} \rfloor$$$ into the function $$$f(k) = \sum_{j=1}^{N} (A_j \bmod k)$$$,
we get: $$$f(k) = \sum_{j=1}^{N} A_j - \sum_{j=1}^{N} \left( k \cdot \left\lfloor \frac{A_j}{k} \right\rfloor \right)$$$
$$$f(k) = \text{TotalSum} - k \cdot \sum_{j=1}^{N} \left\lfloor \frac{A_j}{k} \right\rfloor$$$.
To maximize $$$f(k)$$$, we must minimize the subtrahend: $$$k \cdot \sum_{j=1}^{N} \lfloor \frac{A_j}{k} \rfloor$$$.
Calculating $$$\sum \lfloor \frac{A_j}{k} \rfloor$$$ directly for every $$$k$$$ takes $$$O(N)$$$ per $$$k$$$. Instead, we group elements of $$$A$$$ by their quotient when divided by $$$k$$$ :
$$$\sum_{j=1}^{N} \lfloor \frac{A_j}{k} \rfloor = \sum_{p=1}^{\lfloor \max(A)/k \rfloor} p \cdot (\text{count of } A_j \text{ in range } [pk, (p+1)k - 1])$$$
To count the number of $$$A_j$$$ in the range $$$[pk, (p+1)k - 1]$$$, we can use a prefix-count array.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef vector<ll> vll;
#define endl '\n'
#define all(x) (x).begin(), (x).end()
template <class T> void input(vector<T> &v){for(auto &x:v)cin>>x;}
void cout_vll(const vll&arr){for(auto it:arr){cout<<it<<" ";}cout<<endl;}
void solve(){
ll n; cin >> n;
ll l, r; cin >> l >> r;
vll a(n);
input(a);
ll mx = *max_element(all(a));
vll cnt(mx + 1, 0);
ll sum = 0;
for(ll i = 0; i < n; i++){
cnt[a[i]]++;
sum += a[i];
}
vll p(mx + 2, 0);
for(ll i = 1; i <= mx; i++){
p[i] = p[i - 1] + cnt[i];
}
ll ans = -1;
for(ll i = l; i <= r; i++){
ll t = 0;
for(ll j = 1; j*i <= mx; j++){
ll left = j*i;
ll right = min(mx, (j + 1)*i - 1);
ll count_in_range = p[right] - p[left - 1];
t += j*count_in_range;
}
ll current_f = sum - i*t;
ans = max(ans, current_f);
}
cout << ans << endl;
}
int main(){int tt; cin >> tt; while(tt--){solve();}}
I was right.. it's Euler's function in A2. Every tree lies in some direction from the center, and if there is already a tree closer to the center in the same direction, the new tree will be hidden. Two trees lie in the same direction when their ratios are the same, which means their gcd > 1 (a common factor); a direction that appears for the first time corresponds to coprime numbers. Euler's function does exactly that: it counts how many numbers are coprime with a given number. So once I saw that visible trees = coprime directions, it became clear that Euler's function should be used here. BTW, very good contest!
For G, the testcase
21 1001 100000
repeated 1000 times would lead to TLE and is a valid testcase.
The solution in the editorial passed; it took around 700 ms. IIRC there's already a test case which includes the max value of A in almost all arrays.
I ran the editorial solution on that testcase in custom invocation. In C++17 it ran in 3000 ms, but in C++20 it ran in 800 ms; I don't really know why that is the case.