somilm's blog

By somilm, 11 months ago, In English

Hello everyone! We are creating a DSA contest for our college apprenticeship program and are facing a few challenges in setting up problems and their test cases. In the previous contest, we tried creating our own problems and generated test cases by inserting a few known edge cases and filling the rest with a simple random generator, but unfortunately those test cases ended up being weak. So could you guys guide us on setting up the contest, and on where we could get both good-quality problems and test cases? Thanks!


»
11 months ago, # |

So you want problem writers or something? If your test cases are weak, try to think of more edge cases, write deliberately wrong solutions and stress-test against them, and maybe get some testers for your contest and check that their solutions are indeed what you intended.
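For the stress-testing part specifically, the loop can be very small. A minimal sketch in Python, where gen.py (a seeded random test generator reading its seed from stdin), correct.py (the model solution) and wrong.py (a deliberately wrong solution) are hypothetical placeholders for your own scripts:

```python
# Minimal stress-test loop: generate a random test, run both solutions,
# stop at the first input on which their outputs differ.
import subprocess
import sys

def run(script, stdin=""):
    # Runs a Python script, feeding `stdin` and returning its stdout.
    return subprocess.run([sys.executable, script], input=stdin,
                          capture_output=True, text=True).stdout

for seed in range(1, 10001):
    test = run("gen.py", str(seed))  # gen.py reads a seed from stdin
    if run("correct.py", test).split() != run("wrong.py", test).split():
        print(f"Breaking test found at seed {seed}:")
        print(test)
        break
else:
    print("No mismatch in 10000 runs; the wrong solution survived these tests.")
```

If you end up hosting on Codeforces-style infrastructure, Polygon and testlib automate much of this workflow (generators, validators, checkers).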

  • »
    »
    11 months ago, # ^ |

    Thank you for your reply! I am very interested in knowing the procedure for making a contest and the preferred website to host it on.

»
11 months ago, # |

Hey!

At Algoritmi Academy, we have experienced people for this. Reach out to me if you still need help.

»
11 months ago, # |

Which college?

»
11 months ago, # |

If you want quality original problems, you need good problemsetters; that's unavoidable. Making good original problems is difficult and time-consuming.

As for test quality, I think there are two "approaches" and ideally you want to apply them both.

First would be a more "theoretical" view of your test data, and that's fully problem-dependent. You want to cover as many different types of tests as you can, since you don't know what kinds of weird solutions competitors might come up with. Generating random tests is a necessary base, but you have to think carefully about what random tests for your problem look like from a statistical point of view, and potentially write many different random generators.

For example, consider a problem whose input is a tree. If you generate uniformly random trees, they will be quite balanced and shallow, which is weak for most problems. On the other hand, for some mathy combinatorics problem on a permutation, it might turn out that random permutations are completely sufficient for strong test data.
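To make the tree example concrete, here is a rough sketch of a few generators producing very differently shaped trees (the function names are just illustrative); each returns a list of (child, parent) edges over nodes 0..n-1:

```python
# Sketch: a single "random" generator covers only one statistical profile.
import random

def random_attach(n):
    # Parent chosen uniformly among earlier nodes: depth grows roughly
    # logarithmically, so these trees come out shallow and bushy.
    return [(i, random.randrange(i)) for i in range(1, n)]

def chain(n):
    # A path: maximal depth; breaks naive recursion and depth-sensitive code.
    return [(i, i - 1) for i in range(1, n)]

def broom(n):
    # Long path with a star glued on the end: deep AND has a high-degree node.
    half = max(1, n // 2)
    edges = [(i, i - 1) for i in range(1, half)]
    edges += [(i, half - 1) for i in range(half, n)]
    return edges

def caterpillar(n):
    # A spine with random leaves hanging off it: stresses degree-heavy logic.
    spine = max(1, n // 2)
    edges = [(i, i - 1) for i in range(1, spine)]
    edges += [(i, random.randrange(spine)) for i in range(spine, n)]
    return edges
```

A strong test set would draw from all of these shapes (and more), with randomized parameters, rather than from one generator.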

Your test data should cover as broad a spectrum of test types as possible, and you should ideally assume that the competitors are smart and skilled. Adding a single hand-crafted chain-tree test to otherwise random inputs is not good enough, as a skilled competitor might recognize the situation (especially if given sufficient system-test feedback) and circumvent it.


The second approach is to get some skilled testers who will actively try to get AC with a wrong or slow solution. Unfortunately, this is where the efficiency of the process scales with the number of people, so if you're setting problems alone, this part is very error-prone. You want as many different fresh ideas for wrong greedies or faulty optimizations as possible, so as to get a realistic sample of what competitors may try during the contest.

This is especially critical if your problem is trying to disambiguate close theoretical complexities, or has seemingly reasonable greedy-like approaches. For example, if you want a problem to be solved in $$$O(N \log N \sqrt{N})$$$ but there is a simple $$$O(N^2)$$$ approach, you had better spend a good number of hours trying to pass with the $$$O(N^2)$$$ solution before you give out this problem (and often you'll end up scrapping it, because there is no way to keep the "wrong" solution from passing on reasonably sized inputs).
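One low-tech way to sanity-check this is to simply time the wrong-complexity solution on your largest tests against the intended limit. A minimal sketch, assuming a compiled candidate binary ./slow_solution and a max-size input file maxtest.txt (both hypothetical names):

```python
# Sketch: does a "wrong" O(N^2) solution sneak under the time limit on a
# max-size test? ./slow_solution and maxtest.txt are placeholder names.
import subprocess
import time

TIME_LIMIT = 2.0  # seconds; use whatever the problem statement says

with open("maxtest.txt") as f:
    data = f.read()

start = time.perf_counter()
subprocess.run(["./slow_solution"], input=data, capture_output=True, text=True)
elapsed = time.perf_counter() - start

print(f"{elapsed:.2f}s (limit {TIME_LIMIT}s)")
if elapsed < TIME_LIMIT:
    print("The wrong-complexity solution passes; tighten constraints or rethink.")
```

In practice you would run this over every candidate test, and ideally on the judge's actual hardware, since timings on your own machine only approximate the real environment.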


TL;DR on test quality: get multiple skilled problemsetters involved; think carefully about the space of valid inputs from a statistical point of view and identify the critical parts that most random generators will miss; and get skilled people to actively try to pass with wrong solutions, then iterate on test data and constraints.

  • »
    »
    11 months ago, # ^ |

    Hmmm, makes sense. Thank you very much, loved the insights! I'll keep your advice in mind and follow through.

»
11 months ago, # |

Hey, what platform do you guys use for organizing college contests for team and individual participation?

»
11 months ago, # |

Maybe you can start a hacking phase after the contest to make the tests strong.

»
11 months ago, # |

What standard are you pitching the contest at? If the standard is not that high (I think CF Cyan or even Blue or below), you can consider just using existing problems from not-so-well-known sources (non-English/non-local olympiads, random contests that come with test cases), preferably old ones. It is not likely that even a cyan participant would have had much exposure to these problems.