NaimSS's blog

By NaimSS, 8 months ago, In English

On 15th August, we had the 2025 Selection Contest at UNICAMP, an internal individual competition we hold every year. We would like to invite you to take a look! Statements are available in Portuguese and English (translated by Polygon). There is no official editorial, but I would be happy to discuss the problems' solutions in the comments!

I would like to thank the organization team from UNICAMP, Fábio Usberti (bituser); our testers racsosabe, Binario, and Bashca; and Henrique (HenriqueCP) for creating and setting problem F.

Congratulations to our winners:

  1. defnotmee
  2. Lalic
  3. MarcosPauloEvers

Also, congrats to Yvens for winning the "Online" category.

The problemset consists of 12 problems, telling some true stories that happened at ICPC over the past few years. I hope you enjoy!

You can find the contest on the Codeforces Gym: UNICAMP Selection Contest 2025

Previous editions: 2023, 2024

Updated: Now there is also an editorial in Portuguese!


By NaimSS, history, 13 months ago, In English

I just came across this post about the results of the CCC, the Canadian Computing Competition. The tl;dr is that, due to AI cheating, they are not releasing the standings, which affects all participants.

In my experience, I have also seen widespread AI cheating in the Brazilian OI (OBI). The way our olympiad works is:

  • Three phases. You need a minimum score to advance to the next phase; the cutoff depends on the number of competitors.

  • The top X users get a medal and are invited to the IOI selection contest.

  • The contest is held online, lasting 2-3 hours for each of the first two phases and 5 hours for the last one. We ask each school to have at least one supervisor in the classroom, but unfortunately that is not always enforced, or the supervisors simply don't care.

  • You can submit many times and see your score in real time.

Since there are thousands of competitors in the first and second phases, cheating detection was really hard and could not be done by the small team of volunteers running the contest.

In the third and final phase, we did a thorough analysis of the submissions and detected some AI-generated ones. One competitor who could have gotten a medal was disqualified for that, as were others who had stolen final-phase spots from honest participants. We also went through the first- and second-phase submissions after it all ended, and there were even more accepted AI solutions than in the final phase (those problems are also much simpler, meant for people who have just started programming).

For me, that is very sad to see, and it will only grow if we don't take serious measures. Manual checking is very limited, and plagiarism checks only go so far as well.

One thought I have in mind is to limit the number of submissions per problem to $$$2$$$, the second of which would be blind (no verdict shown). From what I saw, the AI couldn't solve most of the problems on the first try (which also helped us detect whether someone was cheating). When I participated in OBI 5 years ago, there was no submission system, so everything was blind. Though it sucked, it could be one way to do it...

Another thing that helps is to wrap the problem in a story instead of stating it as a direct application. For example, one of the problems was essentially "mergesort", but it was told as a story about lines going from one side to the other and crossing. For some reason, the AI really struggled, taking multiple tries until getting it right (sometimes).

A third idea is to extend the plagiarism checkers with a check for non-conventional characters, like ≤. There is no way someone typed that by hand, so I think we could just ban such a user. And yes, many cheaters were too lazy to even delete these characters from their submissions.

Finally, the statements were distributed as a PDF from which you can't copy the text. This is useless against anyone who knows how to take a screenshot and feed it to the AI.

The ideal scenario would be to have some proctoring, but that is very hard to do at scale. It is hard to ensure that all users would have a suitable system or the technical skills to set it up. For example, in ICPC-style contests we use a Linux-based system that blocks all internet access, but it would be very hard to impose that on every school.

What are your suggestions to avoid AI cheating? How are other countries managing it?

Tags: oi, obi, ccc, ai

By NaimSS, history, 15 months ago, In English

This week we are holding the Brazilian ICPC Summer School, a camp for people from Latin America who are training for ICPC.

On the second day, I made an educational contest with a combinatorics theme. More specifically, the topics are combinatorics, probability, and expected value.

It is now available in the Gym section here. Statements are in both Portuguese and English, and there is also a sketch of the solutions in Portuguese. You can also find the solution source code here.

I invite you to take a look at the problems. Hopefully you can have fun and also learn something new :)


By NaimSS, history, 22 months ago, In English

I am getting the following error when trying to build my contest: Contest "statements.ftl" doesn't contain '\usepackage {xparse}', but problem X does.

Has anyone encountered this issue before? I have no idea why it is happening, because it is a very standard statement, written in Portuguese.

My other problems seem to work just fine. I even tried deleting the statement and rewriting it, but the issue persisted even for a blank statement!

I would appreciate any help, thanks!


By NaimSS, history, 3 years ago, In English

Yesterday was the UNICAMP Selection Contest, and it is now available to everyone! Statements are available in Portuguese and English, and the editorial is available in Portuguese.

I would like to thank the organization team from UNICAMP, Fábio Usberti (bituser) and Tiago Domingos (tdas); our testers Dranoel321, emaneru, Leonardo_Paes, LoboLobo, Luca, MvKaio, nathan_luiz and perchuts; and LoboLobo for coming up with the idea for problem H, "Team Division".

Congratulations to our top 3 winners:

  1. defnotmee
  2. enzopsm
  3. luiz_oda

The contest was made in memory of Technoblade, the legendary Minecraft YouTuber, so he appears in a lot of the problems. I hope you enjoy, and remember, TechnobladeNeverDies!


Link to the contest: UNICAMP Selection Contest 2023.

Editorial is available in the Contest material.


By NaimSS, history, 3 years ago, In English

Last week we had a local contest, available here (statements only in Portuguese), and I was surprised that the easiest problem was getting a lot of Wrong Answers on test 1. All the solutions seemed correct, produced the correct output locally, and were submitted under GNU C11. For some reason, re-submitting the same code as GNU C++ gave AC. Why?

Problem Statement

Given the formula of gravitational attraction $$$F = \frac{G\cdot M_1 \cdot M_2}{d^2}$$$, find the value of $$$G$$$ given the masses of the two objects, the force, and their positions in space (which is just a line).

The code

WA on 1

AC

After seeing this, I made an announcement telling people to re-submit their solutions using GNU C++, and a ton of ACs came in... As a setter, I was frustrated by such a bug and want to know why it happened :(
