canine's blog

By canine, history, 3 months ago, In English

hello all -- if you use VSCode and happen to be in the market for a new competitive programming extension, may i humbly suggest mine? i think it's relatively feature-complete, but it's still in alpha (especially for languages other than C++) and may have some kinks that need working out. it theoretically supports C++, Java, Python, and Rust (and it's very easy to add more).

i created this because i was dissatisfied that most existing solutions can't import a bunch of test cases from a directory and then run them in parallel. i also believe it has a cooler UI than other extensions. as usual, i didn't think it would take this long when i started...

if you encounter any difficulties, please let me know on github. thanks for taking a look!


Realtime Input/Output

unlike other test runners for VSCode, we let you use prewritten inputs and still interact with your program on the fly.
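on the program side, the pattern an interactive runner has to support is just "print a query, flush, read the response". here's a toy sketch of that loop (the guess-a-number problem and all names are invented for illustration -- a local stand-in judge replaces the real one so it runs offline):

```python
# toy interactive problem (made up for illustration): guess a hidden
# number in [1, n] by asking whether it is <, >, or = some value x.
# in a real contest you would `print(x, flush=True)` and read the
# verdict with `input()` -- flushing after every query is the part
# that usually trips people up.

def solve(n, ask):
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        verdict = ask(mid)      # stands in for print(...)/input()
        if verdict == "=":
            return mid
        if verdict == "<":      # hidden number is below mid
            hi = mid - 1
        else:                   # hidden number is above mid
            lo = mid + 1
    return lo

def make_judge(secret):
    """local stand-in for the judge, so the sketch runs offline."""
    def ask(x):
        if secret == x:
            return "="
        return "<" if secret < x else ">"
    return ask

print(solve(100, make_judge(42)))   # prints 42
```

with prewritten inputs plus live interaction, the same loop works whether the responses come from a file or from you typing them.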

[screenshot: Input and Output Handling]

[screenshot: Test Editor]

extensive configuration and support for interactors and custom checkers (floating-point answers are easy -- just use the rcmp checker! thanks, testlib).
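for context, a floating-point checker accepts an answer that agrees with the expected one up to a small absolute-or-relative error rather than demanding an exact string match. a plain-python sketch of that comparison rule (`doubles_close` is a made-up name; it only mimics the kind of check testlib-style checkers apply):

```python
def doubles_close(expected, actual, eps=1e-6):
    # accept if the answers agree to within eps, measured absolutely
    # for small values and relatively for large ones
    return abs(expected - actual) <= eps * max(1.0, abs(expected))

# relative error ~5e-7 on a large value: accepted
assert doubles_close(1000000.0, 1000000.5)
# absolute error 1e-3 on a small value: rejected
assert not doubles_close(1.0, 1.001)
```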

[screenshot: Interactive Test]

Stress Testing

run stress tests using a generator and brute force solution against your efficient solution. testlib is automatically included in generators/checkers/interactors.
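the workflow is: a seeded random generator produces small inputs, a slow-but-obviously-correct brute force and your fast solution both run on them, and the first disagreement is your failing test. a plain-python sketch of that loop (the max-subarray-sum problem and all function names are invented for illustration):

```python
import random

def gen(seed):
    """seeded generator: a small random array."""
    rnd = random.Random(seed)
    n = rnd.randint(1, 8)
    return [rnd.randint(-10, 10) for _ in range(n)]

def brute(a):
    """O(n^2) brute force: try every subarray."""
    best = a[0]
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]
            best = max(best, s)
    return best

def fast(a):
    """O(n) Kadane's algorithm -- the solution under test."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# the stress loop: stop at the first seed where the answers differ
for seed in range(1000):
    a = gen(seed)
    assert brute(a) == fast(a), (seed, a)
```

small inputs are the point: a counterexample of length 6 is far easier to debug than one of length 10^5.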

[screenshot: Stress Testing]

Debugging

debug with CodeLLDB / debugpy / the Java extension for VSCode. integrates with clangd to provide linting based on your compiler arguments.

Competitive Companion Integration

integrates with the Competitive Companion browser extension for one-click test imports.

note: you can't use Hightail's Competitive Companion integration while this extension is active (they bind to the same port).

File I/O and Directory Import

perfect for USACO!
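for USACO-style problems the runner just has to wire up `<problem>.in` and `<problem>.out` instead of stdin/stdout. the classic pattern in python (the problem name `add` and the tiny a+b task are made up for the demo; the sample input is written out first so the sketch runs standalone):

```python
# hypothetical problem named "add": USACO expects add.in / add.out.
# write a sample input so this demo is self-contained.
with open("add.in", "w") as f:
    f.write("2 3\n")

def solve(inp):
    a, b = map(int, inp.split())
    return a + b

# read from add.in, write to add.out -- the file I/O convention
with open("add.in") as fin, open("add.out", "w") as fout:
    print(solve(fin.read()), file=fout)
```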

website / visual studio marketplace / github


By canine, history, 4 months ago, In English

tldr: GPT-powered hints on CF problems at https://hint.purduecpu.com.


By canine, history, 6 months ago, In English

there have been quite a few attempts to create language models that can solve competitive programming problems. i'm nowhere near that ambitious, but i think providing quality problem analyses -- more detailed tags than Codeforces currently offers, showing which approaches people took to a problem, and problem embeddings that enhance search (e.g. finding a problem similar to the intersection of two others) -- could have a more positive direct impact on competitive programmers than behemoths like AlphaCode. don't you wish you knew what proportion of submissions to a problem used push-relabel vs ford-fulkerson, or whether a problem tagged data structures needed a lazy segment tree or just a fenwick tree?

if you think this is a good idea -- i might be going insane with tunnel vision because i've spent a lot of time on this concept -- you can help me by annotating your own solutions on my site. if you find anything missing there (tags or features), or have other concerns, please let me know. i've trained an AST-based ~30M-parameter BERT-like transformer with masked language modeling and just need some data to fine-tune it to predict specific tags. if just 50 people each annotate 20 submissions, that would definitely be enough! this will also improve the embedding, helping it encode the salient aspects of each approach to a problem. i've tried using ChatGPT for this task, and the results were pretty bad; i think better, or at least human, data could fix that.

tldr: please consider donating data to my little project here

thank you! :>)
