For the past few weeks, most of the CodeForces blogs gaining traction have been complaints about, or attempted solutions to, the ever-increasing ability of LLMs to solve CodeForces problems.
Here is a quick narrative of my current state, because why not:
As a consistent Pupil, I would find it demotivating to keep going were it not for olympiads (NOI.ph, and USACO thanks to its cut-off scores) that do not seem to be affected much by the plague of LLMs. Besides, the point of CodeForces (at least for me) is to strengthen problem-solving intuition for the real world, so I hope to keep building a solid foundation for whatever CS endeavours I pursue in the future. #Sidetracked!
Anyway, back to my main point. I feel there might be a way, within AI (in particular NLP and LLMs), to detect levels of AI usage during contests based on signals such as participants' ratings, the performance of highly trusted participants, and other factors. Much like my CF tier right now, I am still a noob in this field, but I would love to work on a project like this if anyone is willing to join. I feel it could become a really cool side project if it proves feasible.
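To make the idea a bit more concrete, here is a minimal sketch of one of the signals mentioned above: flagging contests where a participant's performance jumps far beyond their recent history. Everything here is hypothetical and illustrative (the function names, the sample numbers, the z-score threshold); a real detector would combine many more signals and a lot of careful statistics.

```python
# Illustrative sketch (all data and thresholds are hypothetical):
# flag contests where a participant's performance deviates far above
# their recent history, one simple signal among many possible ones.

from statistics import mean, stdev

def performance_z_score(history, current):
    """Z-score of `current` performance against a participant's history."""
    if len(history) < 2:
        return 0.0  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (current - mu) / sigma

def flag_suspicious(history, current, threshold=3.0):
    """Flag a contest as anomalous if performance jumps past the threshold."""
    return performance_z_score(history, current) > threshold

# Example: a steady ~1200-rated participant suddenly performs at 2400.
history = [1150, 1230, 1180, 1250, 1190]
print(flag_suspicious(history, 2400))  # a jump this large gets flagged
```

Of course, a single z-score would produce plenty of false positives (people genuinely improve!), which is exactly why combining it with other signals, and with NLP on submissions themselves, seems like the interesting part of the project.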
Comments and discussion on this CF post would be greatly appreciated, and if you're interested in a project like this, feel free to reach out by messaging TroySer on CodeForces.
