CounterGen: Auto Counter-Example Generator for Debugging
Tired of getting "Wrong Answer" on gym problems with no clue why? Or being handed a giant unreadable testcase that doesn’t help at all?
Introducing CounterGen — an LLM-assisted tool that automatically generates minimal, human-readable counter-examples for your buggy programs.
- Uses LLM APIs (Gemini, Claude, or OpenAI)
- Each generated program is automatically tested to make sure it works correctly before being used
- Verified programs are combined into a stress-testing pipeline that runs your buggy solution against many testcases
- Produces minimal failing testcases automatically
Workflow at a glance:

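The core stress-testing loop can be pictured with a minimal sketch like the one below. The toy problem, function names, and the bug are all illustrative placeholders, not CounterGen's actual code: a "buggy" solution is compared against a verified reference on many random inputs, and the smallest failing input found is kept as the counter-example.

```python
import random

def correct(a):
    # Reference solution for a toy problem: maximum of a list.
    return max(a)

def buggy(a):
    # Hypothetical bug for illustration: returns the first element.
    return a[0]

def random_case(rng, max_n=5, max_v=10):
    # Generate a small random testcase; small bounds keep failures readable.
    n = rng.randint(1, max_n)
    return [rng.randint(1, max_v) for _ in range(n)]

def find_counter_example(buggy, correct, trials=1000, seed=0):
    # Run many random testcases; remember the smallest input where
    # the two solutions disagree.
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        case = random_case(rng)
        if buggy(case) != correct(case):
            if best is None or len(case) < len(best):
                best = case
    return best

counter = find_counter_example(buggy, correct)
print(counter)
```

In practice the two solutions would be separate compiled programs run as subprocesses, but the idea is the same: generate, compare, and keep the smallest disagreement.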
Try It Out!
The full source code is available on GitHub.
Just follow the instructions in the README to get started!
Important Notes
Beta disclaimer: CounterGen is still in development. Sometimes the counter-examples may not be successfully generated (especially in harder problems). We’re actively improving it, and your feedback would be valuable.
Contest rules: Please DO NOT use CounterGen during official contests. As per this Codeforces blog post, using AI assistance during contests may violate the rules. CounterGen is meant for practice, learning, and debugging outside of contests.
Any feedback or suggestions are very welcome!