JasonMendoza2008's blog

By JasonMendoza2008, 4 months ago, In English

This problem: https://mirror.codeforces.com/gym/106160/problem/B has an editorial (an attached PDF), and they claim:

But to me the relationship should be:

$$$f(i, x) = cost(i, x) + \min_{y \leq x} f(i-1, y)$$$ where $$$cost(i, x)$$$ is either 0, 1, or 2.

I definitely have no idea where the max could come from.

Would love to have someone double check if what I'm claiming is true or if I'm just dumb and misreading the editorial.
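For concreteness, the recurrence I'm claiming can be evaluated row by row with a running prefix minimum, so each row costs O(X) instead of O(X²) (a generic sketch; the `cost` table and the all-zero base row `f(0, ·) = 0` are hypothetical placeholders, not taken from the problem):

```python
def solve_dp(cost):
    # Evaluate f(i, x) = cost(i, x) + min_{y <= x} f(i-1, y) row by row.
    # A running prefix minimum makes each row O(X) instead of O(X^2).
    width = len(cost[0])
    prev = [0] * width  # hypothetical base row f(0, .) = 0
    for row in cost:
        running = float("inf")
        cur = [0] * width
        for x in range(width):
            running = min(running, prev[x])  # running = min_{y <= x} prev[y]
            cur[x] = row[x] + running
        prev = cur
    return prev
```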

Full text and comments »

  • Vote: I like it
  • +10
  • Vote: I do not like it

By JasonMendoza2008, 9 months ago, In English

I found that problem pretty cool, so after solving it I wanted to formalise a proof, and since the editorial isn't out yet I figured I may as well give a formal proof of necessary and sufficient conditions for $$$a$$$ to exist. If there is an easier way to prove it, let me know; I'm curious. Once you have the necessary and sufficient conditions, you can just check them: if they hold, print YES (correct since they're sufficient); if they don't, print NO (correct since they're necessary).

Let $$$p$$$ and $$$s$$$ be two sequences of integers of size $$$n \gt 1$$$. I'm looking for necessary and sufficient conditions such that we can find a sequence $$$a$$$ of size $$$n$$$ such that we can write for all $$$1 \leq i \leq n$$$, $$$p_i$$$ as $$$\text{GCD}(a_1, a_2, ..., a_i)$$$ and $$$s_i$$$ as $$$\text{GCD}(a_i, a_{i+1}, ..., a_n)$$$.

Necessary conditions:

Suppose we do have a sequence $$$a$$$ of size $$$n$$$ such that we can write for all $$$1 \leq i \leq n$$$, $$$p_i$$$ as $$$\text{GCD}(a_1, a_2, ..., a_i)$$$ and $$$s_i$$$ as $$$\text{GCD}(a_i, a_{i+1}, ..., a_n)$$$.

  • By definition of the GCD, $$$\forall i \in \{1, ..., n-1\} : p_{i+1} | p_i$$$

  • Similarly, $$$\forall i \in \{1, ..., n-1\} : s_i | s_{i+1}$$$

  • $$$p_n = s_1$$$ (they're both the GCD of the whole array)

  • $$$p_i | a_i$$$ and $$$s_i | a_i$$$, so $$$\text{LCM}(p_i, s_i) | a_i$$$.

It is easy to show that $$$a_i = \text{LCM}(p_i, s_i)$$$ is not necessary, e.g. with $$$p=(12,12,6)$$$; $$$s=(6,6,6)$$$; $$$a=(12,24,6)$$$. We have $$$\text{LCM}(p_2, s_2) = 12 \neq a_2$$$ but $$$\text{GCD}(a_1) = 12 = p_1$$$, $$$\text{GCD}(a_1, a_2) = 12 = p_2, ...$$$.

How can we turn $$$\text{LCM}(p_i, s_i) | a_i$$$ into a useful necessary condition? Write $$$a_i = k_i \cdot \text{LCM}(p_i, s_i)$$$ for some integer $$$k_i$$$. Then $$$p_i = \text{GCD}(p_{i-1}, a_i) = \text{GCD}(p_{i-1}, k_i \cdot \text{LCM}(p_i, s_i))$$$. Since $$$p_i$$$ divides both terms, we can rewrite this as $$$1 = \text{GCD}(p_{i-1}/p_{i}, k_i \cdot \text{LCM}(p_i, s_i) / p_i)$$$. Since $$$p_{i-1}/p_{i}$$$ and $$$k_i \cdot \text{LCM}(p_i, s_i) / p_i$$$ are coprime, $$$p_{i-1}/p_{i}$$$ and $$$\text{LCM}(p_i, s_i) / p_i$$$ are too (the second term is an integer, so all good). Here is our useful necessary condition: $$$p_{i-1}/p_{i}$$$ and $$$\text{LCM}(p_i, s_i) / p_i$$$ must be coprime.
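Putting the four necessary conditions together, they can be checked directly in code (a quick sketch; `exists_a` is my name for the checker, and it assumes positive integers; the claim of this post is that they are also sufficient, as proved below):

```python
from math import gcd

def exists_a(p, s):
    # Check the four necessary conditions from the post (0-indexed here):
    # 1) p_{i+1} divides p_i,  2) s_i divides s_{i+1},
    # 3) p_n == s_1,           4) gcd(p_{i-1}/p_i, lcm(p_i, s_i)/p_i) == 1.
    n = len(p)
    lcm = lambda x, y: x * y // gcd(x, y)
    if p[-1] != s[0]:
        return False
    if any(p[i] % p[i + 1] != 0 for i in range(n - 1)):
        return False
    if any(s[i + 1] % s[i] != 0 for i in range(n - 1)):
        return False
    return all(gcd(p[i - 1] // p[i], lcm(p[i], s[i]) // p[i]) == 1
               for i in range(1, n))
```

On the example above, `exists_a([12, 12, 6], [6, 6, 6])` returns True.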

Proof that these conditions are sufficient:

Suppose we have $$$p$$$ and $$$s$$$ satisfying all the above conditions. Let's prove that there is a sequence $$$a$$$ of size $$$n$$$ such that we can write for all $$$1 \leq i \leq n$$$, $$$p_i$$$ as $$$\text{GCD}(a_1, a_2, ..., a_i)$$$ and $$$s_i$$$ as $$$\text{GCD}(a_i, a_{i+1}, ..., a_n)$$$.

Let $$$\forall i \in \{1, ..., n\} : a_i = \text{LCM}(p_i, s_i)$$$

  • Let's prove $$$p_1 = \text{GCD}(a_1)$$$. Since $$$p_n | p_1$$$, and $$$p_n = s_1$$$, $$$s_1 | p_1$$$, so $$$\text{LCM}(p_1, s_1) = p_1$$$. But $$$\text{LCM}(p_1, s_1) = a_1$$$. So $$$p_1 = a_1 = \text{GCD}(a_1)$$$
  • Let's prove that $$$\forall i \in \{2, ..., n\} : p_i = \text{GCD}(a_1, a_2, ..., a_i)$$$. This reduces to proving $$$p_i = \text{GCD}(p_{i-1}, a_i)$$$, i.e. $$$p_i = \text{GCD}(p_{i-1}, \text{LCM}(p_i, s_i))$$$, i.e. $$$1 = \text{GCD}(p_{i-1}/p_i, \text{LCM}(p_i, s_i)/p_i)$$$ (dividing through by $$$p_i$$$ is legitimate because $$$p_i | p_{i-1}$$$ and $$$p_i | \text{LCM}(p_i, s_i)$$$, so everything is an integer). But there is nothing left to prove: this is exactly one of our conditions above.
  • A similar proof can be conducted for $$$s_i$$$. It's worth noting that no extra "suffix" condition is missing: our 4 necessary conditions are enough (it would be easier for the proof to add one more necessary condition, but it's more fun to prove that it would be redundant). We want to prove $$$\forall i \in \{1, ..., n\} : s_i = \text{GCD}(a_i, a_{i+1}, ..., a_n)$$$. This reduces to proving $$$s_i = \text{GCD}(s_{i+1}, a_i)$$$, i.e. $$$1 = \text{GCD}(s_{i+1}/s_i, a_i/s_i) = \text{GCD}(s_{i+1}/s_i, \text{LCM}(p_i, s_i)/s_i)$$$. This coprimality can be restated as: $$$q \mid s_{i+1}/s_i \Rightarrow q \nmid a_i/s_i$$$ for every prime $$$q$$$. So let $$$q$$$ be a prime, and let $$$\pi_i$$$, $$$\sigma_i$$$ and $$$\alpha_i$$$ be the multiplicities of $$$q$$$ in the prime factorisations of $$$p_i$$$, $$$s_i$$$ and $$$a_i$$$ respectively. We want to prove $$$\forall i \in \{1, ..., n-1\} : \sigma_{i+1} \gt \sigma_i \Rightarrow \alpha_i = \sigma_i$$$.

If $$$\sigma_1 = \sigma_n$$$ there is nothing to prove: $$$\sigma_i$$$ is non-decreasing, so the left-hand side of the implication we want to prove is always false.

So suppose $$$\sigma_1 \lt \sigma_n$$$. Then $$$\pi_n = \sigma_1 \lt \sigma_n$$$ and $$$\pi_1 \geq \pi_n = \sigma_1$$$; since $$$\pi_i$$$ is non-increasing and $$$\sigma_i$$$ is non-decreasing, there is a $$$k \lt n$$$ such that $$$\sigma_i \leq \pi_i$$$ for $$$1 \leq i \leq k$$$ and $$$\sigma_i \gt \pi_i$$$ for $$$k \lt i \leq n$$$. Thus for $$$i \gt k$$$, $$$\pi_i \lt \alpha_i$$$. But $$$p_{i-1}/p_{i}$$$ and $$$\text{LCM}(p_i, s_i) / p_i$$$ are coprime, so $$$\pi_{i-1} \gt \pi_i \Rightarrow \alpha_{i} = \pi_{i}$$$. Hence $$$\pi_{i-1} = \pi_i$$$ for $$$i \gt k$$$. So $$$\sigma_1 = \pi_n = ... = \pi_k \geq \sigma_k \geq ... \geq \sigma_1$$$, and therefore equality holds everywhere. Hence $$$\sigma_i = \sigma_{i+1}$$$ for $$$1 \leq i \lt k$$$, and $$$\alpha_i = \sigma_i$$$ for $$$k \leq i \leq n$$$ (because $$$\sigma_i \gt \pi_i$$$ for $$$i \gt k$$$ and $$$\sigma_k = \pi_k$$$). Thus $$$\forall i \in \{1, ..., n-1\} : \sigma_{i+1} \gt \sigma_i \Rightarrow \alpha_i = \sigma_i$$$ ($$$A \Rightarrow B$$$ is the same as not $$$A$$$ or $$$B$$$).

EDIT: added the proof for $$$s_i$$$.

Full text and comments »

  • Vote: I like it
  • +48
  • Vote: I do not like it

By JasonMendoza2008, history, 9 months ago, In English

MRE:

using System;
using System.Collections.Generic;

namespace cp;

internal static class Codeforces
{
    public static void Main()
    {
        SortedSet<int> mySortedSet = new SortedSet<int>();
        for (int i = 0; i < 1000000; i++) {
            mySortedSet.Add(i);
        }
        Console.WriteLine(mySortedSet.Min);
        Console.WriteLine(mySortedSet.Max);
    }
}

On codeforces, 358 ms with C#10, and 1030 ms with C#13.

As a control, using https://dotnetfiddle.net/, I get 292 ms using C#10 and 357 ms using C#13, so yes, a bit of an increase, but by no means a factor of 3.

I think I've seen another blog talking about C#13 being weirdly slow but I have lost it, I thought a minimal reproducible example with no input parsing would help find the root cause.

Full text and comments »

Tags .net, c#
  • Vote: I like it
  • +9
  • Vote: I do not like it

By JasonMendoza2008, history, 9 months ago, In English

I keep a few problems I'd love to revisit a year later in my favourites tab, and I'd love to be able to sort them by difficulty so that I could try to solve them all in order of increasing difficulty (similarly to https://mirror.codeforces.com/problemset?order=BY_RATING_ASC). Is that possible? For now, I think they're sorted by the time they were added to the favourites list, and I can't find how to change it.

Full text and comments »

  • Vote: I like it
  • +1
  • Vote: I do not like it

By JasonMendoza2008, history, 9 months ago, In English

The problem goes like this:

You are given an integer array nums.

A nice permutation is defined as a permutation of any non-empty subset of nums such that the absolute difference between any two consecutive elements is exactly 1. A permutation of size 1 is considered nice by definition. A subset is any non-empty selection of elements from nums (regardless of order). A permutation of a subset is any reordering of the elements of a subset.

Your task is to compute the sum of all elements across all such nice permutations.

Since the total may be very large, output the result modulo $$$10^9 + 7$$$.


Input

  • An array nums of integers.
  • $$$1 \leq \text{nums.length} \leq 10^5$$$ (maybe no solution exists though ... I can't guarantee anything lol)
  • $$$0 \leq \text{nums}[i] \leq 10^5$$$ (maybe no solution exists though ... I can't guarantee anything lol)

Output

  • A single integer: the sum of all elements in all nice permutations of all non-empty subsets of nums, taken modulo $$$10^9 + 7$$$.

Examples

nums = [1, 2, 3]

  • [1], [2], [3], [1,2], [2,1], [2,3], [3,2], [1,2,3], [3,2,1] are all valid.
  • The sum is 34.
  • [1, 3], [3, 1], [2,3,1], [2,1,3], [3,1,2], and [1,3,2] are not valid.

nums = [2, 3, 2]

  • [2], [2], [3], [2,3], [3,2], [2,3], [3,2], [2,3,2], [2,3,2] are all valid.
  • The sum is 41.
  • [2,2], [2,2], [2,2,3], [2,2,3], [3,2,2], and [3,2,2] are not valid.

Apart from brute-forcing all $$$\sum_{k=1}^{n} \binom{n}{k} \cdot k!$$$ possibilities, I can't find any way to do it. I feel like we should start by sorting and making clusters (for example [1, 2, 9, 10, 11, 12] would have two clusters, [1, 2] and [9, 10, 11, 12]), but that kind of got me nowhere.
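That brute force is at least useful as a reference to test candidate solutions against on small inputs (a sketch; it enumerates subsets by index, so duplicate values count separately, matching the second example):

```python
from itertools import combinations, permutations

MOD = 10**9 + 7

def nice_sum_bruteforce(nums):
    # Enumerate every permutation of every non-empty subset (by index)
    # and sum the elements of the "nice" ones: consecutive elements
    # must differ by exactly 1.
    n = len(nums)
    total = 0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            for perm in permutations(subset):
                vals = [nums[i] for i in perm]
                if all(abs(vals[j] - vals[j + 1]) == 1 for j in range(k - 1)):
                    total += sum(vals)
    return total % MOD
```

It reproduces both examples: 34 for [1, 2, 3] and 41 for [2, 3, 2].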

Full text and comments »

  • Vote: I like it
  • +22
  • Vote: I do not like it

By JasonMendoza2008, history, 13 months ago, In English

Hi all

I'm pretty new to heuristic things and I tried to implement both Hill Climbing and Simulated Annealing for this beginner-friendly, considered-as-a-tutorial problem for heuristic contests on AtCoder: https://atcoder.jp/contests/intro-heuristics/tasks/intro_heuristics_a.

It looks like Hill Climbing and Simulated Annealing yield similar results ( SA: https://atcoder.jp/contests/intro-heuristics/submissions/64189473, HC: https://atcoder.jp/contests/intro-heuristics/submissions/64189544 ).

I'm trying to understand why, and the only reason I could come up with is that there are "loads of neighbours" for each state ((26-1)*365 neighbours per state), so it's basically impossible to get stuck in a local optimum? I suspect it's deeper than that: I'm sure there actually are local optima (it makes sense when you read the problem), so what am I missing? Why wouldn't SA yield better results here? Maybe I just implemented it wrong? Maybe Hill Climbing would only be bad if the algorithms were allowed to run for 15 hours? I checked the initial temperature and temperature decay locally: some "bad transitions" are accepted early on and then very few as we get close to 2 seconds. So I don't think the problem is that my temperature and decay are set up such that I never accept bad transitions and hence degenerate to HC.
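For context, the acceptance rule I'm describing is the standard one (a minimal sketch for a maximisation problem; `delta` is new score minus current score):

```python
import math
import random

def accept(delta, temperature):
    # Standard SA acceptance rule for maximisation: always take an
    # improvement; take a worsening move with probability exp(delta / T).
    if delta >= 0:
        return True
    return random.random() < math.exp(delta / temperature)
```

As the temperature tends to 0 the second branch almost never fires, so the search degenerates to Hill Climbing; the question is why the extra exploration at high temperature doesn't seem to buy anything on this problem.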

Any help is appreciated, keep it beginner friendly please :). I should add I'm not really looking for micro optimisation tips (although they're welcome*), but rather trying to grasp a better understanding of when SA >>>>> HC.

*If you just say "code everything in C, it'll be ever so slightly faster than C# AOT and will yield better results for both HC and SA", that won't really be helpful; I know that.

Full text and comments »

  • Vote: I like it
  • +18
  • Vote: I do not like it

By JasonMendoza2008, history, 13 months ago, In English

https://kenkoooo.com/atcoder/#/submissions/recent

Can't see any new submissions since the 15th of March 2025

Full text and comments »

  • Vote: I like it
  • +46
  • Vote: I do not like it

By JasonMendoza2008, history, 14 months ago, In English

Just wanted to share a way to solve https://atcoder.jp/contests/abc395/tasks/abc395_f. Tried 1000 random test cases to make sure it wasn't wrong against some top 10 solutions and they all matched so I think the claims I make and the proofs associated with them work but you're welcome to prove me wrong.

My idea was to:

  • First satisfy $$$ |U_i - U_{i+1}| \leq X \quad \text{for every integer } i \text{ with } 1 \leq i \lt N. $$$

  • Check all the sums $$$ U_i + D_i \text{ with } 1 \leq i \lt N. $$$ and take the minimum sum.

  • Iterate through all the indices $$$ 1 \leq i \lt N $$$ and remove $$$ U_i + D_i - min_{sum} $$$ from the bottom tooth first, then from the upper tooth if the bottom tooth's length reaches 0.

Two things need to be addressed:

  • How can I satisfy $$$ |U_i - U_{i+1}| \leq X \quad \text{for every integer } i \text{ with } 1 \leq i \lt N. $$$ in O(N)? If I do something like:
        for i in range(n-1):
            if u[i] - u[i+1] > x:
                cost += (u[i] - u[i+1] - x)
                u[i] -= (u[i] - u[i+1] - x)
            if u[i+1] - u[i] > x:
                cost += (u[i+1] - u[i] - x)
                u[i+1] -= (u[i+1] - u[i] - x)

Then this array $$$U = [10, 8, 6, 4, 2]$$$ with $$$X = 1$$$ is not correctly changed, since it would become $$$ U = [9, 7, 5, 3, 2]$$$. We can obviously do that $$$N$$$ times but then it becomes O(N^2).

The hack here is to sweep in both directions (for $$$i$$$ increasing and for $$$i$$$ decreasing). Indeed, those "blocks" such as $$$U = [10, 8, 6, 4, 2]$$$ that make our algorithm fail are completely handled when considered the other way around. It looks like this:

    for i in range(n-1):
        if u[i] - u[i+1] > x:
            cost += (u[i] - u[i+1] - x)
            u[i] -= (u[i] - u[i+1] - x)
        if u[i+1] - u[i] > x:
            cost += (u[i+1] - u[i] - x)
            u[i+1] -= (u[i+1] - u[i] - x)
    for i in range(n-1, 0, -1):
        if u[i] - u[i-1] > x:
            cost += (u[i] - u[i-1] - x)
            u[i] -= (u[i] - u[i-1] - x)
        if u[i-1] - u[i] > x:
            cost += (u[i-1] - u[i] - x)
            u[i-1] -= (u[i-1] - u[i] - x)

We have to consider both directions because only considering the reverse direction would encounter a similar problem on $$$U = [2, 4, 6, 8, 10]$$$
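Wrapped into a function, the two-pass sweep can be checked directly on the failing example (`smooth` is just my name for the helper; it returns the modified array and the cost):

```python
def smooth(u, x):
    # Two-pass sweep from the post: clamp each adjacent gap to at most x,
    # once left-to-right and once right-to-left.
    u = list(u)
    cost = 0
    n = len(u)
    for i in range(n - 1):
        if u[i] - u[i + 1] > x:
            cost += u[i] - u[i + 1] - x
            u[i] -= u[i] - u[i + 1] - x
        if u[i + 1] - u[i] > x:
            cost += u[i + 1] - u[i] - x
            u[i + 1] -= u[i + 1] - u[i] - x
    for i in range(n - 1, 0, -1):
        if u[i] - u[i - 1] > x:
            cost += u[i] - u[i - 1] - x
            u[i] -= u[i] - u[i - 1] - x
        if u[i - 1] - u[i] > x:
            cost += u[i - 1] - u[i] - x
            u[i - 1] -= u[i - 1] - u[i] - x
    return u, cost
```

On $$$U = [10, 8, 6, 4, 2]$$$ with $$$X = 1$$$, the forward pass produces the bad $$$[9, 7, 5, 3, 2]$$$ and the backward pass then fixes it to $$$[6, 5, 4, 3, 2]$$$, which satisfies the constraint everywhere.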

  • When I iterate through all the indices $$$ 1 \leq i \lt N $$$ and remove $$$ U_i + D_i - min_{sum} $$$, how do I know I won't break $$$ |U_i - U_{i+1}| \leq X \quad \text{for every integer } i \text{ with } 1 \leq i \lt N. $$$

If I need to remove some upper tooth $$$i$$$'s length, it means that $$$D_i = 0$$$ at this point and the minimum sum $$$min_{sum}$$$ is smaller than $$$U_i$$$, so if $$$U_{i+1}$$$ (or $$$U_{i-1}$$$) is bigger than $$$U_i$$$, it will have to decrease as well, and $$$ |U_i - U_{i+1}| \leq X \quad \text{for every integer } i \text{ with } 1 \leq i \lt N. $$$ won't break. If $$$U_{i+1}$$$ (or $$$U_{i-1}$$$) is smaller than $$$U_i$$$, well I'm bringing $$$U_i$$$ closer to them, so it shouldn't be a problem (and if I'm bringing $$$U_i$$$ below them, we get back to the first point where $$$U_{i+1}$$$ (or $$$U_{i-1}$$$) is bigger than $$$U_i$$$).

And that's it. It feels very hacky, but it seems to do the job, and there's no editorial yet so I don't know if it was the intended solution.

Python submission: https://atcoder.jp/contests/abc395/submissions/63338450

C++ submission: https://atcoder.jp/contests/abc395/submissions/63338490

Full text and comments »

  • Vote: I like it
  • +6
  • Vote: I do not like it

By JasonMendoza2008, history, 14 months ago, In English

I was solving 2063C - Remove Exactly Two and my idea (close to the editorial's) is to brute-force the first vertex to remove, and then to be clever about finding the subsequent max(degrees) after updating the degrees of all vertices adjacent to that first vertex. Using a max-heap, that's easily doable in O(n log n), and the editorial partially points in that direction.

The solution I wrote is similar but consists of having this list:

    degrees_list_counter: list[int] = [0] * (max(degrees) + 1)
    for degree in degrees:
        degrees_list_counter[degree] += 1

This allows me to brute force in that manner:

    best_ans: int = 0
    for node_first in range(n):  # BRUTE FORCE THE FIRST VERTEX
        ans: int = 1
        ans += degrees[node_first] - 1
        # UPDATE DEGREES_LIST_COUNTER
        for neighbour in graph[node_first]:  # THIS IS, LIKE IN A DFS, NOT MAKING ANYTHING QUADRATIC
            degrees_list_counter[degrees[neighbour]] -= 1
            degrees[neighbour] -= 1
            degrees_list_counter[degrees[neighbour]] += 1
        degrees_list_counter[degrees[node_first]] -= 1
        degrees[node_first] = 0
        degrees_list_counter[degrees[node_first]] += 1
        # NOW THAT DEGREES_LIST_COUNTER IS UPDATED
        # FIND THE VERTEX WITH THE HIGHEST DEGREE
        i: int = len(degrees_list_counter) - 1
        while i > 0 and degrees_list_counter[i] == 0:  
            # THIS LOOKS LIKE THIS MAKES THE SOLUTION QUADRATIC
            i -= 1
        ans += (i - 1) if i > 0 else -1

        best_ans = max(best_ans, ans)

Here, I claim that the solution is linear despite this while loop:

  • If there is a vertex with super high degree and all the others have a low degree, I will end up with a linear time in that while loop only once.

  • If there is (strictly) more than one vertex with super high degree, I will literally never end up with a linear time in that while loop.

  • If there is no vertex with super high degree, well, no need to worry about traversing that while loop.

Is this true? Or am I wrong and this is hackable?

Full solution: https://mirror.codeforces.com/contest/2063/submission/306341538

Full text and comments »

  • Vote: I like it
  • -7
  • Vote: I do not like it

By JasonMendoza2008, 17 months ago, In English

Context: Find the maximum flow through a directed graph from s to t. Capacities are all integers.

If I'm not mistaken, Edmonds-Karp runs in O(VE²); and if we don't use a BFS but a DFS instead, it runs in O(E*f) where f is the maximum flow (https://en.wikipedia.org/wiki/Maximum_flow_problem) (proofs: https://enos.itcollege.ee/~japoia/algorithms/GT/Introduction_to_algorithms-3rd%20Edition.pdf chapter 26).

I was a bit surprised that Edmonds-Karp wouldn't pass (but I guess it makes sense looking at the constraints) on a cses problem ( https://cses.fi/problemset/task/1694 ). What really surprised me is that the O(E*f) solution passed (https://cses.fi/paste/aba73738dacf15c9ac0337/) even though the capacities can go up to 10^9, so the max flow can definitely go up to 10^9. I'm now trying to hack my own DFS solution, so I tried to brute-force it hard by generating billions of inputs and checking whether my DFS would time out on any of them: https://pastebin.com/ijUZDn3G. But that ran so fast (within a minute..) without finding any bad cases...

The worst-case example on wikipedia for that O(E*f) (https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm) doesn't work for a fixed adjacency list. It just shows that if a bad actor could choose what order DFS goes through the graph, then the algorithm could become crazily slow.
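To see that concretely, here is a minimal DFS-based Ford-Fulkerson (a sketch, not my CSES submission) run on the classic four-node zig-zag graph with a fixed adjacency list; it also counts augmenting paths, and the count stays tiny instead of growing with the flow value:

```python
def max_flow_dfs(n, edges, s, t):
    # Plain Ford-Fulkerson: DFS over a fixed adjacency list, augment, repeat.
    graph = [[] for _ in range(n)]
    cap, to = [], []
    for u, v, c in edges:
        graph[u].append(len(cap))  # forward edge
        cap.append(c)
        to.append(v)
        graph[v].append(len(cap))  # residual edge
        cap.append(0)
        to.append(u)

    def dfs(u, pushed, visited):
        if u == t:
            return pushed
        visited[u] = True
        for e in graph[u]:
            v = to[e]
            if cap[e] > 0 and not visited[v]:
                got = dfs(v, min(pushed, cap[e]), visited)
                if got:
                    cap[e] -= got
                    cap[e ^ 1] += got  # paired edges differ only in the last bit
                    return got
        return 0

    flow = paths = 0
    while True:
        got = dfs(s, float("inf"), [False] * n)
        if not got:
            break
        flow += got
        paths += 1
    return flow, paths
```

With capacities of 10^6 on the outer edges and 1 on the middle edge, it finds the max flow of 2·10^6 after only a handful of augmentations: with a fixed adjacency list the DFS never keeps alternating across the middle edge the way the adversarial argument needs.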

I heard there is this test case generator to hack those O(E*f) solutions but I'll be honest, I don't speak Japanese and I'm not advanced enough to figure out what's happening: https://web.archive.org/web/20211009144446/https://min-25.hatenablog.com/entry/2018/03/19/235802

TL;DR: this code passes https://cses.fi/paste/aba73738dacf15c9ac0337/ even though it's O(E*f) and the capacities can go up to 10^9; please hack me.

(EDIT: my comment in the code saying "DFS to find shortest s-t path" is wrong, it's a remnant comment from when I was coding BFS)

Full text and comments »

  • Vote: I like it
  • +29
  • Vote: I do not like it

By JasonMendoza2008, 17 months ago, In English

This code gets MLE but only uses, if I'm correct, three arrays of size n <= 2*10^5 for that problem: https://mirror.codeforces.com/contest/2038/problem/B.

Am I missing something?

I thought one long long was 8 bytes, and therefore 3*8*2*10^5 = 48*10^5 bytes would be 4.8 MB?

Would love to be enlightened, thanks! Note that I'm not asking how to solve that problem, but rather why in the world I am getting MLE. Link to the MLE verdict: https://mirror.codeforces.com/contest/2038/submission/292217786

#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>

using namespace std;

void solve() {
    long long n;
    cin >> n;
    vector<long long> a(n), a_copy(n);

    for (long long i = 0; i < n; i++) {
        cin >> a[i];
    }

    auto is_feasible = [&](long long target) -> bool {
        for (long long i = 0; i < n; i++) {
            a_copy[i] = a[i];
        }
        bool first_time = true;
        while (a_copy[0] > target || first_time) {
            first_time = false;
            for (long long i = 0; i < n; i++) {
                long long a_i = a_copy[i];
                if (a_i > target) {
                    long long excess = ((a_i - target) / 2) + ((a_i - target) % 2);
                    a_copy[i] -= 2 * excess;
                    long long nxt_idx = (i + 1) % n;
                    a_copy[nxt_idx] += excess;
                }
            }
        }
        return all_of(a_copy.begin(), a_copy.end(), [&](long long x) { return x == target; });
    };

    long long total_sum = accumulate(a.begin(), a.end(), 0LL);
    vector<long long> targets;

    for (long long i = 0; i <= total_sum / n; i++) {
        targets.push_back(i * n);
    }

    long long l_ptr = -1, r_ptr = targets.size() - 1;
    while (l_ptr < r_ptr) {
        long long mid = (l_ptr + r_ptr + 1) / 2;
        if (is_feasible(targets[mid] / n)) {
            l_ptr = mid;
        } else {
            r_ptr = mid - 1;
        }
    }

    if (r_ptr == -1) {
        cout << -1 << endl;
        return;
    }
    cout << total_sum - targets[l_ptr] << endl;
}

int main() {
    long long n_tests;
    cin >> n_tests;

    for (long long test_nb = 0; test_nb < n_tests; test_nb++) {
        solve();
    }

    return 0;
}

Full text and comments »

  • Vote: I like it
  • 0
  • Vote: I do not like it

By JasonMendoza2008, history, 19 months ago, In English

I saw somewhere that if you were to pick randomly the next element to insert when building a binary search tree from an array, you would get 1/3 on the left and 2/3 on the right (or conversely) on average. I get that the best case is half-half and the worst case is everything in one of the branches, but how do we formally get to the 1/3-2/3?

Full text and comments »

  • Vote: I like it
  • +1
  • Vote: I do not like it

By JasonMendoza2008, history, 19 months ago, In English

I am looking for a problem that applies shortest paths in a DAG using topological sort to get linear time O(V + E). I couldn't find this kind of problem on the internet even though it's (I assume) a relatively standard technique. Am I not looking correctly? Do you have an idea where I could find such a problem on an online judge (HackerRank, LeetCode, Codeforces, CodeChef; I don't particularly care about the platform)?

Ideally, some edges are negative; otherwise Dijkstra also works.
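In case it helps anyone searching for the technique itself, the standard linear-time routine looks like this (a sketch with a made-up edge list; negative weights are fine because a DAG has no cycles):

```python
from collections import deque

def dag_shortest_paths(n, edges, src):
    # edges: list of (u, v, w); the graph must be acyclic.
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm gives a topological order in O(V + E).
    order = []
    queue = deque(v for v in range(n) if indeg[v] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Relax every edge exactly once, in topological order.
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for u in order:
        if dist[u] == INF:
            continue
        for v, w in adj[u]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist
```

Each vertex and edge is touched a constant number of times, hence O(V + E) even with negative weights.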

Thanks!

Full text and comments »

  • Vote: I like it
  • -4
  • Vote: I do not like it

By JasonMendoza2008, history, 20 months ago, In English

This problem https://mirror.codeforces.com/contest/285/problem/B says "Consider all glasses are moving simultaneously during one shuffling operation.".

I don't understand how "simultaneously" makes sense here. I solved the problem assuming one shuffling operation moves the glasses in sequence*, but I was wondering if there is something I misunderstood.

* "if glass at position 1 goes from 1 to 2 and glass at position 2 goes from 2 to 3 then if marble was in glass at position 1 it ends up in glass at position 3" was my assumption.

Thank you for your help.

Full text and comments »

  • Vote: I like it
  • +3
  • Vote: I do not like it

By JasonMendoza2008, history, 20 months ago, In English

Consider this problem (https://mirror.codeforces.com/contest/984/problem/D).

Solution one (https://mirror.codeforces.com/contest/984/submission/276446617) uses two recursive self-explanatory functions (compute_f, and compute_max_xor) that use unordered_map for memoization. It does not pass (2000 ms TLE).

Solution two (https://mirror.codeforces.com/contest/984/submission/276446798) is the same from a logic point of view (dp corresponds to compute_f and dp_max corresponds to compute_max_xor), except it uses Dynamic Programming. It passes in 546 ms.

I thought it could be that different because of hash collisions. Some people hack some other people by using clever inputs that blow up hashmaps ... but adding a random custom_hash did not help whatsoever.

Is the overhead of using an unordered_map that HIGH? Big enough to cause a 4x slowdown? Or am I missing something else?

Thank you!

EDIT Keeping memoization but using a 2D array instead of an unordered map did the trick. https://mirror.codeforces.com/contest/984/submission/276544726. Crazy. Thank you for your help!

Full text and comments »

  • Vote: I like it
  • 0
  • Vote: I do not like it

By JasonMendoza2008, history, 21 month(s) ago, In English

For this problem https://mirror.codeforces.com/gym/103708/problem/G, the "proper" way to solve it is to use Multi Dimensional Ternary Search.

If I were to solve it with random search instead (it passes quite easily: https://mirror.codeforces.com/gym/103708/submission/274048924), would this be considered a valid solution? I feel like saying no would invalidate the use of string/polynomial hashing, but saying yes feels very wrong.

What is the CP community opinion on this?

Full text and comments »

  • Vote: I like it
  • +21
  • Vote: I do not like it

By JasonMendoza2008, history, 21 month(s) ago, In English

Consider this code:

#include <stdio.h>
#include <float.h>

int main() {
    double a = 100.0;
    printf("a: %.6lf\n", a);
    return 0;
}

It should print 100.000000 right? It does on https://www.onlinegdb.com/online_c_compiler, it does on my local computer. But on codeforces: https://i.imgur.com/P2CCY7Q.png

I'm more of a Python user so maybe I'm missing something big and it's actually not a bug?

Full text and comments »

Tags gcc, bug
  • Vote: I like it
  • 0
  • Vote: I do not like it

By JasonMendoza2008, history, 2 years ago, In English

Coincidentally, I was wondering a few weeks ago why codeforces was using such an outdated version of Python (https://mirror.codeforces.com/blog/entry/122386) and now Python 3.12.1 is live. As an example of why that is useful: https://mirror.codeforces.com/contest/356/submission/240013817 is accepted whilst the exact same solution (https://mirror.codeforces.com/contest/356/submission/233016779) was not a few months ago. Probably not the best example since PyPy was able to go through for that particular case, but I thought it was worth sharing.

Full text and comments »

By JasonMendoza2008, history, 2 years ago, In English

The general consensus is that competitive programming is competitive, so if you choose a slower language (like Java or especially Python) you need to be ready to deal with the potential setbacks and advantages of that choice. It's fair to say that Codeforces isn't supposed to accommodate your language choice by increasing time limits per language, especially for one known to be slower than something like C++. HOWEVER, the Python version on codeforces is currently 3.8 (out for 4 years), and versions 3.11 and 3.12 both bring speed improvements (3.11 across the board (https://docs.python.org/3/whatsnew/3.11.html#:~:text=Python%203.11%20is%20between%2010,See%20Faster%20CPython%20for%20details.) and 3.12 for comprehensions (https://docs.python.org/3/whatsnew/3.12.html#:~:text=This%20speeds%20up%20execution%20of%20a%20comprehension%20by%20up%20to%20two%20times.)). I know loads of people use PyPy (and it does seem to work very well, cf. https://i.imgur.com/v9iOJnw.png), but it would seem relevant to allow newer versions of Python (**I would even argue it would help lower the CPU demand on codeforces servers, since both versions are known to be faster**, so realistically ... why not ...).

I've just started codeforces, so maybe this has been discussed before, but I haven't seen any topics about it, and I honestly don't think there'd be any drawbacks to getting a newer, faster version of Python on the website.

Full text and comments »

  • Vote: I like it
  • +19
  • Vote: I do not like it