purplesyringa's blog

By purplesyringa, history, 2 months ago, In English

When I was a child, I knew software development well (though not much of C++), and my parents motivated me to try out competitive programming.

The first thing I tried was dkirienko's section. In the first 20 minutes, GCD and the Euclidean algorithm were explained, the complexity of the Euclidean algorithm was proven, and that was pretty much it. We were then asked to solve a problem set on Codeforces individually. I didn't understand much of the complexity proof, couldn't solve a single problem, failed to figure out what I was supposed to do, and left crying. So yeah, stuff like that doesn't work.

I understood simpler topics, though. I knew basic math, like how to solve linear systems and quadratic equations, and could make simple observations, so the school stage of ROI was quite easy to get through. (The hardest part was explaining to my informatics teacher at the time that yes, I wanted the adult problems.) During the municipal stage, I had 3 hours to solve what I'd classify as a 5-task Div 3 contest. I solved the first three and stumbled on D2B and D2C. That was still a good result for my age, so I was invited to a Moscow training camp.

My experience at the camp was a bit haphazard. I didn't understand the relative complexity of the topics. Which group was I supposed to join? Was it linear and stack algorithms, segment trees, or HLD? I knew what a stack was; I had written a stack-based parser a month prior. With some help, I figured out I should join the first group anyway.

That day, I was taught how prefix sums work, two pointers, and some stack tricks in 1.5 hours. During this time, we discussed several problems, solved them together with other students (there were about two dozen of us), explained stuff to each other, and took notes. We then spent two or three hours solving problems individually. I didn't understand all the information, but I could apply some of it to the problems. That worked.

Full text and comments »

Rating: +178

By purplesyringa, history, 2 months ago, In English

I'm pretty sure I've spent more time writing blogs than contests on Codeforces. Over time, I've found plenty of rough edges. So this is a plea to the Codeforces team to fix them.

I'm effectively going to say "please redo everything" over dozens of paragraphs, but I'll hopefully make the steps to the solution tangible and clear. If this is deemed way too ambitious, a few improvements are still better than none.

Full text and comments »

Rating: +370

By purplesyringa, history, 2 months ago, In English

I haven't slept for 20 hours and I can't fall asleep. I've been reading others' topics on AI, and I had a sort of epiphany. These ideas have been brewing inside me for many years, and I thought it might be time to share them with the community.


There was a time when I was a "competitive programmer" to the core. I spent many years doing that exclusively. I've never been to IOI, but I like to think I've reached some good heights.

Two years ago, I stopped writing online competitions. I wrote a few offline competitions after that, but eventually, I decided to stop that too.

Yet somehow Codeforces still attracts me. I don't want to solve problems, but I keep typing codeforces.com into the address bar. Why? Just yesterday, I didn't have an answer.


Imagine a layperson asking you what competitive programming is. What are you going to say? My answer has always been "Oh, we solve interesting algorithmic problems with time constraints and other limits". Many of you likely do the same.

Yet we keep calling what we do "competitive programming". Why does the explanation lack the word "compete" then?

Barring Codeforces-specific hacks, all participants solve problems independently. It's not like chess, where one competes against someone else. It's asymmetrical. So why did we add PvP? Because it's fun.

But why is it fun to me specifically? I'm in the minority here, but I hate writing rounds. I know my limits; I know what I can and can't achieve in my limited time. Seeing my rating dance around 2200 is no fun and not a good motivation to keep going. Even if I keep improving, what do I get in return? Unless I make the top-rated list, it's only fake internet points. Hell, the only reason I use this account is that people trust an orange's posts more than a cyan's.

I'm not here for contests. I'm here for the community. I'm here for you all, and for you, the one reading my post right now, in particular.

I like reading about interesting tricks. I like to see people inventing new techniques or uncovering bugs. I like watching people reporting UB as if it was a compiler bug, because sometimes it really is a compiler bug, and I get to chuckle for a minute.

To me, interacting with other people is the best human experience of all. I think many agree with this. There are so many comments on round announcements, so many people helping each other understand the editorial, and so many memes. (Do we have a mascot yet?) We've built quite a subculture.

CP isn't "competitive programming", really. It's not about solving problems faster than others. No, it's about making bets with your friends, achieving something to boast about, and making silly jokes only geeks understand.


This is something AI will never be able to take from us. We'll always have each other.

This is the only thing I ever cared about. I found my first friends in the CP world. We solved problems on a napkin. I never gave any lectures, yet I always kept thinking about the easiest way to explain a certain technique. I vividly remember the moment I realized a bottom-up segment tree isn't really a tree and spent the next day telling that to everyone who cared to listen.

We never needed to compete against each other. Looking back, were we given a choice, we'd solve problems in teams more often than not.

So, to hell with the rating system, I'll happily read your article on improving FFT performance instead. Scratch the LGM title, I want to watch you speedrunning a round on Twitch. Hold April fool's contests every two months, no AI is going to help you solve those problems.

I think this is the way forward.


CP hasn't always been about complex algorithms or even difficult ideas. Early problems were quite simple, yet a community was built around them anyway. We are not united by contests; we're united by problem-solving and a love of programming, mathematics, or both.

The demoscene community is kinda similar to ours. It's about doing something efficiently with a computer, too, just wider, more practical in some ways, and less practical in others. They have "contests" too, called demo parties. The word choice is important: people come to show off, not to win prizes. It's common practice to talk to participants during breaks and ask them how the hell they did something cool.

What I mean to say is, this scheme works. It's new, it's radical, but it works. If the choice is between CP becoming even more fringe and detached from reality and this, I know what choice to make. And I hope I'm not alone.

Full text and comments »

Rating: +293

By purplesyringa, history, 2 months ago, In English

This is an experimental rewrite of a post by -is-this-fft-. I consider this post more intuitive and readable than the original, but -is-this-fft- was the one who researched this topic.

Minimum cuts in flow networks are quite fascinating. We often see flow-related problems where the solution is clear-cut (see what I did there?), but sometimes the idea of flow seems to appear unexpectedly. In some cases, thinking in terms of minimum cuts is more intuitive than flow itself.

Let’s explore two interesting problems about minimum cuts:

  • Proving that in a flow network where every pair of minimum cuts shares a common crossing edge, all minimum cuts must share an edge.
  • Finding the lexicographically smallest minimum cut in terms of the edges that cross it.

For simplicity, we'll assume all edges have unit capacities. This doesn't affect the generality of our solutions since edges with higher capacities can be split into multiple unit-capacity edges.

Key insight

It is well-known that the maximum flow and the minimum cut of a network have equal numerical values. But there’s more to this relationship:

Characteristic property. For any maximum flow $$$f$$$, an $$$s-t$$$ cut is minimum if and only if no edges in the residual graph $$$G_f$$$ cross it.

Proof. Every $$$s-t$$$ cut crosses $$$f$$$ by weight $$$F$$$ (the value of the flow). A cut is minimum when it crosses $$$G$$$ by weight $$$F$$$. Since $$$G = G_f + f$$$, this means the cut crosses $$$G_f$$$ by weight $$$0$$$. As $$$G_f$$$ has no negative edges, the cut must not cross any of its edges.

The original post provides an alternative proof based on the linear programming theory.

Now, let’s leverage this property to tackle our two problems.

Problem 1: Shared crossing edges in minimum cuts

Problem. In a flow network where any two minimum cuts have a common crossing edge, prove that all minimum cuts share a crossing edge.

First, take any maximum flow $$$f$$$ and examine its residual graph $$$G_f$$$. This gives us two (possibly identical) minimum cuts straight away:

  1. Put all vertices $$$v$$$ such that $$$s \leadsto v$$$ in $$$G_f$$$ to the $$$S$$$ side, and the rest to the $$$T$$$ side.
  2. Put all vertices $$$u$$$ such that $$$u \leadsto t$$$ in $$$G_f$$$ to the $$$T$$$ side, and the rest to the $$$S$$$ side.

By the problem's assumption, these two cuts are both crossed by some edge $$$v \to u$$$. Then:

  1. By the choice of cuts, $$$s \leadsto v$$$ in $$$G_f$$$. By the characteristic property, this path doesn't cross any minimum $$$s-t$$$ cut, so $$$v$$$ is on the $$$S$$$ side of every minimum cut.
  2. Likewise, $$$u$$$ is in the $$$T$$$ side of every minimum cut.

Thus, $$$v \to u$$$ crosses every minimum cut. Problem solved.

Problem 2: Edge-wise lexicographically smallest minimum cut

Problem. Given a flow network with labeled edges, find the lexicographically smallest minimum cut. The cuts are compared by the ordered lists of crossing edges.

The basic idea is to incrementally build the answer, tracking information known about the cut, and adding edges in lexicographical order as long as they don't violate any conditions. Here's how to do that in four steps:

  1. Find any maximum flow $$$f$$$ and construct the residual graph $$$G_f$$$. The time complexity of this step will dominate the overall algorithm. For Dinitz's algorithm, this takes $$$O(n^2 m)$$$.

  2. Prepare for enumeration by disabling edges $$$v \to u \in G$$$ such that a $$$v \leadsto u$$$ path exists in $$$G_f$$$. Such edges can never cross a minimum cut by the characteristic property. To identify such edges:

    • Check if a direct edge $$$v \to u$$$ exists in $$$G_f$$$.
    • If not, then naturally $$$u \to v$$$ must be in $$$G_f$$$. Under this condition, checking $$$v \leadsto u$$$ is equivalent to checking that $$$v$$$ and $$$u$$$ belong to one strongly connected component of $$$G_f$$$.
  3. An $$$s-t$$$ cut is any assignment of $$$S$$$ and $$$T$$$ to vertices such that $$$s$$$ is $$$S$$$ and $$$t$$$ is $$$T$$$. Due to the characteristic property, a minimum cut is also subject to these inference rules:

    • If $$$v$$$ is $$$S$$$ and $$$v \to u$$$ is in $$$G_f$$$, then $$$u$$$ is also $$$S$$$,
    • If $$$u$$$ is $$$T$$$ and $$$v \to u$$$ is in $$$G_f$$$, then $$$v$$$ is also $$$T$$$.

    Use DFS to mark each vertex as belonging to $$$S$$$, $$$T$$$, or unknown.

  4. It's easy to see that as long as there are no contradictions, any single unknown vertex can be forced to $$$S$$$ or $$$T$$$ (followed by inference) without introducing contradictions.

    • Before adding an edge $$$v \to u$$$, check that it's enabled, $$$v$$$ is $$$S$$$ or unknown, and $$$u$$$ is $$$T$$$ or unknown. If these conditions hold, the edge is safe to add.
    • Mark $$$v$$$ as $$$S$$$ and propagate this by marking everything reachable from $$$v$$$ as $$$S$$$ too.
    • Since $$$v \not\leadsto u$$$ holds for enabled edges, we can now safely mark $$$u$$$ as $$$T$$$ and propagate again.

    Continue this process until all edges have been processed. Once the edges are exhausted, any remaining unknown vertices can be marked $$$S$$$ to produce a final cut. As no vertex is marked more than once, this step takes $$$O(n + m)$$$ time.
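The four steps can be sketched in code. This is a toy illustration, not the post's implementation: the residual graph `gf` of some maximum flow is assumed to be given as adjacency lists, and step 2's reachability test uses a plain per-edge DFS instead of the faster SCC trick described above.

```python
def lex_smallest_min_cut(n, edges, gf, s, t):
    # edges: original unit-capacity edges (v, u) in label order
    # gf: adjacency lists of the residual graph G_f of some maximum flow
    S, T, UNK = "S", "T", "?"

    def reachable(a, b):
        # Is there an a ~> b path in G_f? (Step 2; the SCC trick from the
        # post answers this for all edges at once in O(n + m).)
        seen, stack = {a}, [a]
        while stack:
            v = stack.pop()
            if v == b:
                return True
            for u in gf[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        return False

    enabled = [not reachable(v, u) for v, u in edges]

    # Reverse adjacency, for propagating T backwards (step 3).
    rev = [[] for _ in range(n)]
    for v in range(n):
        for u in gf[v]:
            rev[u].append(v)

    side = [UNK] * n

    def mark(v, label):
        # Inference rules: S spreads forward along G_f, T spreads backward.
        adj = gf if label == S else rev
        side[v] = label
        stack = [v]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if side[y] == UNK:
                    side[y] = label
                    stack.append(y)

    mark(s, S)
    mark(t, T)

    # Step 4: greedily add edges in label order.
    cut = []
    for i, (v, u) in enumerate(edges):
        if enabled[i] and side[v] != T and side[u] != S:
            if side[v] == UNK:
                mark(v, S)
            if side[u] == UNK:
                mark(u, T)
            cut.append(i)
    return [x if x != UNK else S for x in side], cut
```

On the diamond network $$$s = 0 \to \{1, 2\} \to t = 3$$$ with both paths saturated (so $$$G_f$$$ contains only the reversed edges), this returns the cut crossed by the two lowest-labeled edges.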

Full text and comments »

Rating: +47

By purplesyringa, history, 3 months ago, In English

Does anyone know any way to solve this problem? My Google-Fu let me down.

$$$x_1, \dots, x_n$$$ are formal elements of an additive group. $$$S_1, \dots, S_m$$$ are subsets of $$$\lbrace 1, \dots, n \rbrace$$$. For a fixed family $$$S_1, \dots, S_m$$$, I want to devise an algorithm that computes each $$$s_i = \sum_{j \in S_i} x_j$$$, performing as few additions as possible in total.

For example, for $$$S_1 = \lbrace 1, 2, 3 \rbrace, S_2 = \lbrace 1, 3, 4 \rbrace$$$, the optimal algorithm is: $$$t = x_1 + x_3; s_1 = t + x_2; s_2 = t + x_4$$$. It performs 3 additions in total, as opposed to the simpler algorithm: $$$s_1 = x_1 + x_2 + x_3; s_2 = x_1 + x_3 + x_4$$$.

Heuristics like this one come to mind: find a pair of indices $$$i \ne j$$$ such that as many subsets as possible contain both $$$i$$$ and $$$j$$$, add $$$t = x_i + x_j$$$ to the algorithm and replace $$$x_i + x_j$$$ (up to commutativity/associativity) with $$$t$$$ in all subsets. Repeat until all subsets are singletons.
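This greedy heuristic can be sketched as follows. The function name `greedy_sums` and the representation of the addition program are mine, for illustration only; this is not a known-good algorithm.

```python
from collections import Counter
from itertools import combinations

def greedy_sums(n, subsets):
    # Inputs x_0..x_{n-1} are terms 0..n-1; temporaries get fresh labels
    # n, n+1, ... Returns (additions, outputs), where additions is a list
    # of (result, a, b) meaning "result = a + b", and outputs[i] is the
    # term that ends up holding the sum over subsets[i].
    subsets = [set(s) for s in subsets]
    additions = []
    fresh = n
    while any(len(s) > 1 for s in subsets):
        # Find the pair of terms co-occurring in the most subsets...
        cnt = Counter()
        for s in subsets:
            cnt.update(combinations(sorted(s), 2))
        (a, b), _ = cnt.most_common(1)[0]
        # ...fuse it into a fresh temporary and substitute it everywhere.
        additions.append((fresh, a, b))
        for s in subsets:
            if a in s and b in s:
                s -= {a, b}
                s.add(fresh)
        fresh += 1
    return additions, [next(iter(s)) for s in subsets]
```

On the example above (0-indexed), $$$S_1 = \lbrace 0, 1, 2 \rbrace$$$, $$$S_2 = \lbrace 0, 2, 3 \rbrace$$$, it fuses $$$t = x_0 + x_2$$$ first and finishes with 3 additions, matching the optimal program.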

But other than that, I'm totally stuck here. I don't have any theoretical bounds, I don't know what an exhaustive solution looks like (other than iterating among all possible algorithms, that is), and I don't know what papers cover anything like this. Does anyone have any idea on any of this?

Full text and comments »

Rating: +24

By purplesyringa, history, 5 months ago, In English

blazingio cuts corners by design. It keeps the constant factor small and uses long-forgotten algorithms people relied on before processors supported SIMD and integer division. But another limitation made this task much harder.

Size.

Professional libraries start exceeding the Codeforces limit of 64 KiB really fast. Code minification barely helps, and neither does resorting to ugly code. So I cut a corner I don't typically cut.

Undefined Behavior.

These two words make a seasoned programmer shudder. But sidestepping UB increases code size so much that the library could hardly be used on CF. So I took a gamble. I meticulously scanned every instance of UB I used intentionally and made sure the compiler had absolutely no reason to miscompile it. I wrote extensive tests and ran them in CI on every architecture and OS combination I could think of. I released the library without so much as a flaw surfacing. It worked like clockwork.

And then, 3 months later, I updated README, and all hell broke loose.

Full text and comments »

Rating: +76

By purplesyringa, history, 9 months ago, translation, In Russian

...just use plain old Soviet baking soda:

#include <iostream>

int main() {
    std::ios_base::sync_with_stdio(false);
    std::cin.tie(nullptr);

    // Your code here
    int n;
    std::cin >> n;
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        int x;
        std::cin >> x;
        sum += x;
    }
    std::cout << sum << std::endl;
}

This will speed up your code so much that you'll never have to think about I/O speed again in your life. Leave a like and subscribe to my channel!

Full text and comments »

Rating: +288

By purplesyringa, history, 10 months ago, In Russian

The first day of yet another regional stage has just ended. I've stepped away from all this myself, but there's no post, and people surely want a single place to discuss how it's going, so let there be a post.

The traditional combined scoreboard with problem statements: reg.algocode.ru. For now, the data comes only from the survey (fill it in!) and a few official tables; information from the rest will be added as they appear. A special request to organizers and others with access to their region's results: please forward them to @prog_cf_bot.

Full text and comments »

Rating: +54

By purplesyringa, history, 12 months ago, In English

TL;DR: the popular empirical bounds for FFT stability overestimate the precision by a few bits -- multiplication might thus produce a wrong answer even after thorough testing on random inputs.

Introduction

So here's how people typically advise you to use FFT after proving various theorems and providing a $$$\mathcal{O}(n \log n)$$$ algorithm.

Suppose you want to multiply two big integers, stored in binary. Split them into groups of $$$B$$$ bits, called $$$a_0, a_1, \dots$$$ and $$$b_0, b_1, \dots$$$ respectively, so that the integers are equal to $$$a_0 + a_1 x + \dots$$$, $$$b_0 + b_1 x + \dots$$$ for $$$x = 2^B$$$. Multiply the polynomials $$$(a_0 + a_1 x + \dots) (b_0 + b_1 x + \dots)$$$ via FFT and substitute $$$x = 2^B$$$ to obtain the $$$n$$$-bit product.

$$$B$$$ is a variable here -- the algorithm works for every $$$B$$$, but larger $$$B$$$s are faster. We're limited by the precision of 'double', though, because 'double' can only store integers from $$$0$$$ to $$$2^{53} \approx 9 \cdot 10^{15}$$$ exactly (e.g. $$$2^{53}+1$$$ cannot be represented by a double). So we certainly require $$$2B + \log_2 \frac{n}{B} \le 53$$$ to be able to represent the product, but the actual limit is not $$$53$$$ but a bit less because of precision loss at intermediate stages. So there's an incentive to use a $$$B$$$ as large as possible, but exactly how far we can push it is unpredictable because of the sheer complexity of floating-point error analysis.
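To make the representability constraint concrete, here's a small helper (the name `max_limb_bits` is mine, not from the post) that finds the largest $$$B$$$ satisfying $$$2B + \log_2 \frac{n}{B} \le 53$$$. As argued below, the practically safe $$$B$$$ is smaller because of accumulated rounding error.

```python
import math

def max_limb_bits(n):
    # Largest B with 2*B + log2(n / B) <= 53: splitting an n-bit product
    # into B-bit limbs, each coefficient of the product polynomial is a
    # sum of at most n/B terms below 2^(2B), and that sum must stay under
    # 2^53 to be exactly representable in a double. This ignores the
    # intermediate rounding error, which shrinks the limit further.
    B = 1
    while 2 * (B + 1) + math.log2(n / (B + 1)) <= 53:
        B += 1
    return B
```

For a million-bit product ($$$n = 2^{20}$$$), representability alone allows $$$B = 18$$$; the whole point of this post is that the value you can actually get away with is lower.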

So here's what we're gonna do: let's start with some large value of $$$B$$$, run a few polynomials through this algorithm and check if the product is correct. If it isn't, decrease $$$B$$$ and repeat. If it is, call it a day and use this value of $$$B$$$ for the rest of your life. That's what I've always done. That's what I did when writing my own bigint library, but I wanted to be more thorough, so I stress-tested it for 20 hours in 8 threads on random input. Zero errors.

Twist

The problem is -- this is bullshit. The fact that the algorithm works for random data says nothing about how it behaves in the worst case, for one simple reason. For random data, the errors accumulated during the intermediate stages are sort of independent and can be approximated as a one-dimensional random walk. The neat part about a $$$k$$$-step 1D random walk is that, on average, the distance from $$$0$$$ is around $$$\sqrt{k}$$$, as opposed to the maximal possible value of $$$k$$$. So the error is severely underestimated.

What is the worst case for FFT, then?

Full text and comments »

Rating: +217

By purplesyringa, history, 2 years ago, translation, In Russian

...or "how judging systems tilt at windmills".

Hello, Codeforces!

I'm sure those of you who have tried to host a programming competition or write your own contest tooling are familiar with the state of affairs. There are many incompatible formats, nobody knows exactly how everything is supposed to work, Polygon is full of undocumented and seemingly incompatible options, ej-polygon never seems to work entirely correctly, jury archives have to be modified for each platform, non-standard problem types require constant maintenance and hacks, and so on.

I ran into all these flaws when I went to write my own judging system and suddenly discovered that the trickiest part isn't judging complex non-standard problems, but handling the artificial complexity introduced by legacy software and standards. So here I am, solving these problems.


I'm introducing a new format, officially the problem.xml format, which, as you might guess, is based on the Polygon format. I've added a special case or two here and there to achieve 99% compatibility with the archives Polygon generates today. However, unlike the Polygon format, it is fully documented and leaves as little room for interpretation as possible without sacrificing efficiency.

This format supports almost arbitrary problem types in addition to the usual ones: standard input/output, interactive, run-twice, and grader problems. For example, the following are supported:

  • Problems with custom scorers (known to Ejudge users as valuer programs). This means the score for a submission doesn't have to equal the sum of per-test scores; any relationship is possible, including negative scores, scoring programs by efficiency, up to and including "write a program that prints your username".

  • What I call formulaic problems, where the user's submission outputs a formula or a universal algorithm that is then executed by a jury program.

  • Optional per-test compilation, which comes in handy in some practical development contests.

  • Output-only problems, where the user is asked to submit a ZIP archive containing answers for each test.

  • (Optionally supported) arbitrary strategies, which let problemsetters generalize all of the above however they see fit: run-thrice problems, compile-time judging, and even CTF-style competitions, in just a few lines of code.

  • Arbitration, which enables marathon-style problems (you can read about earlier ideas on this topic here), i.e. problems where a submission's score may depend on the results of other submissions.

Existing platforms that support the Polygon format or something similar will generally need few modifications to support everything except strategies and arbitration; I hope the major vendors can manage that. Full support for strategies may be somewhat difficult to implement, so that part is optional for now (which doesn't diminish the value of everything else). Technically, arbitration should be simple to implement, but it may require some changes to the software's architecture.

A draft of the specification is available here: https://github.com/imachug/problem-xml-specs. While I believe the specification is nearly finished and I'm publishing it here mostly for visibility, I'd be glad to hear your thoughts and change things if necessary.


Tagging a few people for visibility -- your input would be very helpful: geranazavr555, MikeMirzayanov, grphil, andrewzta, dkirienko.

Full text and comments »

Rating: +296

By purplesyringa, history, 3 years ago, In English

Unfortunately, most derivations of the chance of polynomial hashing collision are invalid, wrong, or misleading, and finding reliable public sources with proofs is incredibly difficult. This article is a formal analysis of the method. The goal of this article is to complement well-known empirical facts with theory, provide boundaries on the probability of collision, justify common choices, and advise against certain popular parameters.

There is no code in this article, and it's mostly theoretical. It's more of a summa of everything we know about polynomial hashing in integers than a guide for beginners. You'll probably find something of interest still. I do, however, describe some methods of cracking hashes (I can provide code and additional explanation if someone asks in the comments) and give some general guidelines in the last section of the present article.

Table of contents

  • The concept of polynomial hashing
  • Classical justification of the coprimality requirement
  • The pitfall
  • Probability of size-bounded collision
  • Probability of value-bounded collision
  • Back to basics
  • Conditions of surjectivity
  • Handling lengths
  • Side note on coprimality
  • The two scenarios
  • Reversal
  • Thue-Morse sequence
  • Attempt to reuse Thue-Morse sequence for other moduli
  • Birthday paradox
  • Lower bounds for anti-hash tests
  • Particularities
  • Key takeaways

Full text and comments »

Rating: +308

By purplesyringa, history, 3 years ago, In English

I'd like to share this one unpopular but awesome algorithm that I only heard mentioned once. Kapun's algorithm finds a hash collision for moduli as large as $$$10^{18}$$$. I know, we have lots of algorithms that do much better than that (and I'm writing an article on that at the moment, stay tuned), but Kapun's algorithm is really surprising in its simplicity. That its correctness is so difficult to prove is even more surprising.

Here we go.

A polynomial hash of a string $$$s$$$ is defined as

$$$ H(s) = s_0 b^0 + s_1 b^1 + s_2 b^2 + \dots + s_{n-1} b^{n-1} \pmod M. $$$

We want to find two strings with the same polynomial hash using only two characters: $$$0$$$ and $$$1$$$.

We can reformulate this problem in another way. Let $$$a_i = b^i \mod M$$$, then we want to find two distinct subsets of $$$a$$$ with the same sum modulo $$$M$$$. Now forget about the modulus: let's just find two subsets with the same sum.

Firstly, when is this possible? There are $$$2^n$$$ possible subsets and at most $$$n (M - 1) + 1$$$ distinct sums. If $$$2^n > n (M - 1) + 1$$$, that is, $$$\frac{2^n}{n} \gg M$$$, there are certainly two subsets with the same sum. Generating $$$M$$$ or even just $$$\sqrt{M}$$$ strings (if you use the birthday paradox) is impossible for large $$$M$$$, but there's still a way through.
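For moduli small enough that $$$\sqrt{M}$$$ strings fit in memory, the birthday-paradox approach dismissed above is straightforward. A sketch (the function name and parameters are mine, for illustration):

```python
import random

def birthday_collision(b, M, length=64, seed=0):
    # Birthday-paradox search for two distinct 0/1 strings with equal
    # polynomial hash H(s) = sum s_i * b^i mod M. Expected ~sqrt(M)
    # samples, so this only works for small M -- which is exactly why
    # something smarter is needed for M ~ 10^18.
    rng = random.Random(seed)
    pw = [pow(b, i, M) for i in range(length)]
    seen = {}
    while True:
        s = tuple(rng.randrange(2) for _ in range(length))
        h = sum(c * p for c, p in zip(s, pw)) % M
        if h in seen:
            if seen[h] != s:
                return seen[h], s
        else:
            seen[h] = s
```

With $$$M = 10007$$$ this finds a collision after roughly a hundred samples; with $$$M \approx 10^{18}$$$ the table would need around $$$10^9$$$ entries before a collision becomes likely.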

It turns out there is a deterministic algorithm that attempts to solve this problem that is very easy to implement!

Full text and comments »

Rating: +246

By purplesyringa, history, 3 years ago, In Russian

The traditional discussion (or condemnation, depending on your taste) of the regional stage of the 2022 All-Russian Olympiad (VsOSh). It seems it should have finished everywhere by 3 PM MSK.

The traditional combined scoreboard: https://reg.prog.cf/. For now, the data comes only from the survey (fill it in!) and a few official tables; information from the rest will be added as they appear.

A special request to organizers and others with access to their region's results: please forward them to @prog_cf_bot.

Full text and comments »

Rating: +137

By purplesyringa, history, 3 years ago, In English

The first day of IATI 2021 has finished about half an hour ago. Please feel free to discuss the competition here!

Problems

Results

Editorials

^ Please feel free to provide links when they are available

Full text and comments »

Rating: +130

By purplesyringa, history, 3 years ago, In Russian

AtCoder

Statement

You are given numbers A and B. Print A + B.

Editorial

The problem asks you to print the sum of two numbers. Note: in C and C++, you must use sufficiently large data types.

Solution:

print(int(input()) + int(input()))

Full text and comments »

Rating: +213

By purplesyringa, 3 years ago, In English

This is not an exit.

I am looking at the post called Is Mike Mirzayanov dictator? at the moment. I'm reading through the comments, I'm looking at people's reactions, I'm reading Mike's replies, and... I am utterly disappointed.

I demand progress.

For context, I do believe Codeforces is the best website for competitive programming contests at the moment. I sincerely give credit to Mike for creating Codeforces and providing participants and contest authors with a platform for interaction completely for free. I am confident that Codeforces is the most appropriate platform for sharing knowledge, exchanging tricks and methods, and collaboration between contest participants, authors, and coordinators.

Please consider this an open letter, for this is the will not of a single person, but of many individuals. Please consider signing it by stating so in a comment under the post, should you be so inclined.

Full text and comments »

Rating: +527

By purplesyringa, history, 4 years ago, In Russian

image

Polygon is not the world's greatest evil. Polygon doesn't even have one particular defect that makes it horrible. You can't say what exactly is wrong with Polygon. Polygon seems to work, it seems like every feature is supported, but if you touch it here, it falls apart, and if you touch it there, a problem (pun not intended) appears. Or not, depending on your luck. Polygon is like PHP. For those who haven't read the famous rant, I'll cite it for you:

I can't even say what's wrong with PHP, because-- okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it's one of those weird tri-headed things. Okay, well, that's not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don't have those serrated surfaces; it's flat and smooth. That's less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there's no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what's the problem with these tools? They're all I've ever used and they work fine!” And the carpenters show you the houses they've built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That's what's wrong with PHP.

Full text and comments »

Rating: +180

By purplesyringa, history, 4 years ago, In English

This is the tenth time I've stumbled upon a controversial blog, written a large comment that would be useful both to the OP and to other people, and, when I finally posted the comment, been told by Codeforces that the post was deleted, so my giant rant didn't get saved. I usually breathe in and out and get over it, but this time I figured I have enough spare contribution to post a complaint and share my thoughts with the community.

It'd be great if comments were saved to drafts just like posts. This would be useful in case of network problems or power failure too, and would just improve UX overall.

Full text and comments »

Rating: +45

By purplesyringa, history, 4 years ago, In English

Hello, Codeforces!

A few days ago MohammadParsaElahimanesh posted a blog titled Can we find each Required node in segment tree in O(1)? Apparently what they meant was finding each node in $$$\mathcal{O}(ans)$$$, according to ecnerwala's explanation. But I was too dumb to realize that and accidentally invented a parallel node resolution method instead, which speeds up the segment tree a lot.

A benchmark for you first: 30 million range-minimum queries on a 32-bit integer array of 17 million elements. It was run in a custom test on Codeforces on Apr 6, 2021.

  • Classic implementation from cp-algorithms: 7.765 seconds, or 260 ns per query
  • Optimized classic implementation (which I was taught): 4.452 seconds, or 150 ns per query (75% faster than classic)
  • Bottom-up implementation: 1.914 seconds, or 64 ns per query (133% faster than optimized)
  • Novel parallel implementation: 0.383 seconds, or 13 ns per query (400% faster than bottom-up, or 2000% faster than classic implementation)
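For reference, the "bottom-up implementation" in the benchmark is the well-known iterative segment tree. The sketch below is an illustration of that technique (not the novel parallel version from the full post), answering min queries on half-open ranges:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Iterative ("bottom-up") segment tree for range-minimum queries.
// Leaves live at t[n..2n); node i covers the union of its children 2i, 2i+1.
struct SegTree {
    int n;
    std::vector<int> t;

    explicit SegTree(const std::vector<int>& a)
        : n(a.size()), t(2 * a.size(), INT_MAX) {
        for (int i = 0; i < n; i++) t[n + i] = a[i];
        for (int i = n - 1; i >= 1; i--) t[i] = std::min(t[2 * i], t[2 * i + 1]);
    }

    // Minimum over the half-open range [l, r).
    int query(int l, int r) const {
        int res = INT_MAX;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = std::min(res, t[l++]); // l is a right child: take it, move right
            if (r & 1) res = std::min(res, t[--r]); // r is a right boundary: step left, take it
        }
        return res;
    }
};
```

For example, `SegTree st({5, 2, 7, 1, 9})` gives `st.query(0, 3) == 2` and `st.query(2, 5) == 1`. Each query walks one root-to-leaf path's worth of nodes sequentially; the parallel method from the full post is what removes that per-query serialization.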

Full text and comments »

By purplesyringa, history, 4 years ago, translation, In Russian

Hi, Codeforces!

The other day I wrote a script that publishes all new posts (and their updates) from Codeforces to a Telegram channel -- with formatting and all the trimmings.

Updates should arrive fairly quickly, in under 5 minutes.

Enjoy!

Full text and comments »

By purplesyringa, history, 4 years ago, In Russian

Hi everyone! You may have noticed that Codeforces switched to a new logo for a while, but it seems the admins can't decide which one is better. Let's hold an honest community-wide vote and choose the logo together. Vote for one of the three comments under this post.

Full text and comments »

By purplesyringa, history, 4 years ago, translation, In Russian

Hi, Codeforces!

Right now, running your own round on Codeforces is not an easy task. First, you can't propose just a single problem, so if you have few friends into competitive programming and can't come up with 6 problems on your own, you're out of luck. Second, proposals often contain unpleasant and uninteresting problems, and coordinators have to spend a long time sifting through them and explaining to the authors what exactly is wrong with them. I believe this status quo is a problem both for participants and for authors.

Here's my idea. Authors propose subpar problems because, when one problem of a contest gets rejected, the round itself is put at risk, so authors 'brute-force' problems to plug the hole. What if we let authors publish problems one at a time, and then assembled problems from different authors into a single contest? This would reduce the share of bad problems, since contest authors would no longer have to patch holes in a hurry.

The question is how to work around the bottleneck of the whole process: the coordinators. Quite a few red (or 2300+) participants would be glad to help with coordinating and moderating problems. Div 1 has many people who do competitive programming but rarely set Codeforces rounds, perhaps because of the style or the timing. I think such people are ideal moderators: they have the experience, they won't cheat, they are willing to help, and they are curious to look behind the scenes.

In short, the idea is to help testers and problem setters find each other, to reduce the load on coordinators and run high-quality rounds more often.

What do you think?

Full text and comments »

By purplesyringa, history, 4 years ago, In English

I'm interested in non-classic problems here on Codeforces, so I've looked through the problems with the special tag. The ones with a non-standard format are:

  1. Problems that can only be solved in a single special language, such as Q# or a secret language. Example: the single problem 409B - Mysterious Language, or the whole Kotlin Heroes 5: ICPC Round contest.
  2. Problems with a stateful checker, which detects how many submissions one has made. Example: 409H - A + B Strikes Back.

I also vaguely remember a problem where one had to output their username under some limitations on the source code, but that may have just been a comment on a blog.

Anyway, I can't figure out how to create any similar problem on Polygon. Obviously there's a whitelist of supported languages, but what about letting the setter add an interpreter for the language, written in C? Or letting the checker read the original source code rather than its output? I'm interested in how this is implemented for official contests and whether I can do the same in a gym.

As for the second type, it'd be useful if the checker could get meta-information such as the user's handle or ID, and access a local DB or use the network as a state store. I couldn't find any sane documentation, so I'm asking here.
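For background, an ordinary stateless checker (normally written with testlib.h) already follows a simple contract: it is invoked as `checker <input> <participant-output> <jury-answer>` and reports the verdict through its exit code (0 = OK, 1 = wrong answer, 2 = presentation error). The stateful variants above would need extra inputs beyond this contract. A minimal sketch of just the core logic, for a hypothetical A+B problem and operating on a string for brevity:

```cpp
#include <sstream>
#include <string>

// Verdict codes matching the usual checker exit-code convention.
enum Verdict { OK = 0, WA = 1, PE = 2, FAIL = 3 };

// Hypothetical checker core for an A+B problem: parse a single integer
// from the participant's output and compare it against the jury's answer.
// A real checker would read these from the files passed in argv and
// return the Verdict from main().
Verdict check_sum(const std::string& participant_output, long long jury_answer) {
    std::istringstream ouf(participant_output);
    long long got;
    if (!(ouf >> got)) return PE; // no integer printed at all
    return got == jury_answer ? OK : WA;
}
```

A stateful checker would additionally need something like the submission count or user ID injected into this call, which is exactly the meta-information the platform doesn't seem to expose.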

Full text and comments »