If you are a problem setter: reverse-test your questions to see whether they can be solved by any LLM.
Especially GPT-3.5 Turbo (the default freely available model as of Oct 2023).
It's unfortunate but true: over the past year, standard competitive programming has taken a blow from malpractice, namely heavy cheating and plagiarism.
If you have a high-rated profile, it might not have affected you (not yet), but for entry-level participants (like myself) the ranking system is becoming more and more unjust.
I propose:
A guideline for problem setters enforcing that every question be thoroughly reverse-checked: if ChatGPT is able to solve it, rephrase the question until it no longer can.
Systems must also be developed to detect whether submitted code was generated by an LLM.
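A minimal sketch of what such a reverse-test harness could look like. The names here are assumptions, not an existing tool: `query_llm` stands in for whatever chat-completions API the setter would call, and `run_solution` for a sandbox that executes the generated code on one input. Only the verdict check is implemented concretely.

```python
def query_llm(problem_statement: str) -> str:
    """Ask the model for a solution. Stub: a real harness would call
    the model's API and extract the code block from the reply."""
    raise NotImplementedError


def passes_tests(solution_output: str, expected: str) -> bool:
    """Line-by-line comparison, ignoring trailing whitespace,
    the way most judges check exact-answer problems."""
    got = [line.rstrip() for line in solution_output.strip().splitlines()]
    want = [line.rstrip() for line in expected.strip().splitlines()]
    return got == want


def reverse_test(problem_statement, test_cases, run_solution) -> bool:
    """Return True if the LLM cracks the problem (all tests pass),
    i.e. the statement should be rephrased and re-tested."""
    code = query_llm(problem_statement)
    return all(passes_tests(run_solution(code, inp), out)
               for inp, out in test_cases)
```

The loop would then be: generate, judge, and if `reverse_test` returns True, rephrase the statement and run it again until the model fails.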
Bold of you to assume ChatGPT can actually solve anything. I fed it 800-1300 rated problems and it only solved them after I basically held its hand through the whole process.
Imagine caring about a few newbies solving problems with AI.
Why check against something that can barely solve Div3 B?
Please give evidence of problems being solved en masse with ChatGPT.
Sounds easy, but if an AI can independently solve a significant number of problems, then it is reasonable to assume it understands most problem statements. In that case, rephrasing a problem to such a degree that it confuses the AI would likely make the problem harder for humans to understand, too.