As far as I've noticed, Codeforces has had a guide for interactive problems since 2016.
I joined Codeforces in mid-2017 and met my first interactive problem in August of that same year (843B - Interactive LowerBound). Honestly, it was a fun experience, though the solution was a bit outside the usual conventions for interactives.
Ever since, interactive problems have grown in their own way, even appearing in some more local contests as well. Yet it seems to me that a considerable portion of people still have a problem with them, rejecting them before even engaging with them.
Even at my first round in early 2019, someone already asked this:
first congrats on your first round second why interactive?
Umm...
Yep, and along with that came a lengthy discussion/prediction about the interactive problem (let's not talk about what happened after that).
Still, in those days, I simply gave them the benefit of the doubt. After all, the problem type was still novel, and there are a quadzillion ways to make an interactive problem function (other than the classic binsearch simulation).
But that should have stayed a 2019 thing, right?
Well...
Then I saw this just a few days ago on the announcement of Codeforces Round 988 (Div. 3).
...Really?
Okay, okay, let's put aside my personal hatred of that comment and discuss seriously. If it's just a "new kind of fun", isn't that the same as how the more advanced topics look in lower-rated participants' eyes (e.g. flow, 2-SAT; though I honestly wouldn't encourage raw implementations of those in a Div. 3, of course)? What is it about interactives that makes many people frown immediately, even before seeing the problem itself?
The year is already 2024 (no, it's almost 2025 even, what the heck), so why are interactives still uniquely discriminated against?
(insert some funny propagating hashtags for interactive rights here, I'm a lame guy I can't make up one on the spot XD)
(the discussion I want to raise is still serious though)
People seem not to like problems that are different, and interactive problems typically need to be solved differently than usual. They require a different mindset.
I think interactive problems are kind of fun. They can be frustrating, but they are unique.
People should just build habits for interactive problems, then carry on as with a normal problem. For me in C++, I just make sure to have cout << flush; and to comment out my fast IO. I don't know if that is even necessary, but I make it a habit for interactive problems, and I never need to worry about the interactive part of the problem.
Thank you for reading this. If you struggle with what to do on the interactive portion of a problem, try a few interactive problems out. Try different ways of flushing the output, using the different types of input/output, etc. If you are comfortable with it outside of a contest, you can be comfortable with it inside the contest.
Same here. I (almost) always define all query syntaxes into functions accordingly, then forget about them and do the problems proper.
The main reason why people hate interactive is because it is not easy to test the validity of the code locally in most cases.
I think people have different opinions on interactive problems.
Personally, I like these problems because they're really interesting and they broaden my horizons.
Some of them have really helped me learn algorithms I had never seen before.
Although they sometimes make my rating drop, I still enjoy their interest and unique charm.
PS: I'm bad at English, but I try my best. :p
TLDR: From my own experience with interactive problems, the pattern of thinking is... far different from the problems I'd solved before. So it can be very frustrating for the first 1-3 encounters; after that it's kinda normal.
After a few failed attempts at interactive problems in live contests, I read the editorials and learned a few things. My lesson: draft out more sample cases and queries and make observations on them. Actually, a LOT of queries...
Then I actually solved my first interactive problem in a live contest (Div. 3 988), with the same idea as the editorial. A very worthy and fun experience, tbh.
They are not that discriminated against. Most people are fine with them. The complaints come from those who care a lot about their rating and are especially bad at interactives.
Yeah, it's indeed only a few. Yet it still amazes me that of all the scapegoats naggers tend to use as excuses, interactives always make up a considerable portion.
It's just about preference and about how used people are to interactive problems. It's still not common to see many interactive problems outside of Codeforces. Even on CF, interactive problems still make up only a small portion of the problemsets, and they are avoided in D2B and easier slots due to the complexity of the problem format, the lengthy description of the interaction process, and the difficulty of testing. It's not something that's fundamentally hard (unlike the "advanced topics" you mentioned), but it is something you need to learn and spend some time getting used to.
It's no wonder some people think it's a thing only for high-rated participants: even if they very occasionally see interactive problems in their range, they can just skip those, and that's usually not a huge obstacle even to reaching Expert, which covers the majority of people. For this reason, I'd actually welcome introducing more interactive problems somewhere mid-early in Div. 3, requiring only a basic thought process, so that people can easily get used to them and be prepared to confront them in Div. 2 legitimately.
Interesting take, thank you. So I guess intro-interactive at Div3C/Div3D is the way to go if we can't do that in Div2?
Or maybe a Div4F/Div4G like a contest a while ago would also do.
Yes, I liked 2037E - Kachina's Favorite Binary String in the last Div. 3, but this one wasn't so easy. I think something easier than this one that is not just straight "implement something" can be used for D3C/D3D.
Interactive problems are bad in normal rounds for several reasons:
A proper interactive setup is a long (at least a week) PvP contest where there is no optimal deterministic solution and you have to compete against other players' strategies. For example, an arena with obstacles and power-ups where two robots navigate, fight, and collect power-ups.
On point 2, I believe this is a matter of problem preparation. Some problems still allow an RTE upon assert failure (though of course, leaning on verdicts to debug is by itself a risky move and should not be fully endorsed). Check this submission: 244924082.
Though I do agree that we need better guidelines for interactive problem preparation, to set a common standard for such behavior.
Unfortunately it's not an issue of preparation but a matter of luck, at least in the current judging systems. The following two cases are indistinguishable:
Even though the causes are different, it's not possible to find out which happened first. That's why we ask submissions to handle the -1 printed by the interactor as an exit request.
I believe most judging systems would report RE in both cases, though. Is there a case where an assertion failure leads to WA?
I can't say with full confidence, but on 2036G - Library of Magic it was reportedly impossible to force an RTE on assertion failure.
Some submissions: 289885238, 289598032. The first one was mine; the second was from a report I found on the announcement blog (see this comment).
I think the following would at least help: whenever there's an RE or WA (and ideally any other verdict, including TLE and MLE, for some other unfortunate soul like me), the "end state" of the interactor should be provided, for the purpose of looking at test 1 (and the main tests during practice). I am thinking of the following:
With this information, a lot of the frustration from the RE/WA issue could be mitigated.
I haven't noticed your point 3 to be an issue before. Can you share some problems that you think suffer from this?
Of the recent ones, https://atcoder.jp/contests/arc184/tasks/arc184_a is the worst offender:
The entire problem is one giant artificial corner case, with the solution given as-is out of thin air. Compared to the rest of the problems from the same ARC (which were great), it looks especially bad.
This one also: https://contest.yandex.com/contest/69390/problems/F
Again, fixed constraints; the entire problem is basically a Hamming-code knowledge check. Good luck trying to derive something like that organically on your own during the contest, especially when there's not much time left after the first 5 tasks.
Now, compare these two problems to this one: https://mirror.codeforces.com/problemset/problem/2022/D2
This one is much better, because the author made an effort to design an actually good problem, not just slap on some "interactive" garbage just because interactive is "unusual" or whatever.
At least don't give interactive in Div 3
Would've been an expert if not for the interactive problem last div 3 :')
My bad (but also, git gud)
I think the best thing to do here is ignore the opinion of low rated people. They aren’t very objective and their opinions are inextricably tied to their rating delta.
Interactive problems are just hard to debug. You cannot simply give an input and get a "Correct" or "Incorrect" verdict within a second like with normal problems. Frequently it's worth it to just submit after confirming the basic input/output format. It essentially demands perfect code on the first try, and the penalty for failing is much higher. It is frustrating when you need to go fast and you just happen to skill-issue.
It isn't bad (as a problem itself), but I definitely find it surprisingly frustrating.
This only covers the case where it appears in an early position. If it occurs in later positions, then coding a testing program is basically expected, in which case it is a guaranteed cost, so there are fewer issues with it.
True. They are easy to think through but a real pain to debug. I was attempting 1207E - XOR Guessing and got the logic in about 10 mins, as if it were a 1300, but I've been struggling to debug it since yesterday because I can't test it. This also means I need to submit every attempt, since I can't test locally. I figured out a few issues which were making some numbers repeat, and came to my final version, 292759751. This one says "incorrect query format", which is hella frustrating because I can't see what error I made this time; the logic seems fine and I've tried everything from adjusting spaces to newlines. This makes them hellishly frustrating. Btw, please feel free to point out whatever stupidity I'm doing this time; it would be a real help, since this problem has frustrated me enough now.
Have you tried debugging it in your head?
Yes, by dry-running it, since I can't print things locally.
One thing that can help a lot with debugging is to have a global boolean variable which you make false when you submit and true when you're debugging, then if the variable is true then you'd first read the correct solution before each test case and make the query function return the answer to the query based on the correct solution you read at the beginning instead of printing the query and reading the answer.
All those precious in-contest minutes lost to computing answers to queries by hand each time could have been saved with this simple trick. Why am I learning this only today? :crying
I am today years old under Dori's genius T.T
I like solving interactive problems! They seem cool.
Could you help me figure out why my solution 292759751 to 1207E - XOR Guessing says "incorrect query format"? Some interactives like this are really frustrating to debug.
The query format issue is fixed: you weren't putting a space between ! and ans.
https://mirror.codeforces.com/contest/1207/submission/292772723
How can someone not see this as a fun problem
Boring