I usually don't like writing negative blogs, but after seeing some of the comments on recent posts, I feel we need to talk about the direction our community is heading.
It started when I noticed user monkey_is_back replying to me with an offensive picture. Curious, I checked his other comments to see whether it was a one-off mistake. Unfortunately, I found a pattern of toxicity and racism that crosses every line a competitive programmer should respect:
- Targeting a newcomer with racism: "Hello, bloody indian. I am from your father country."
- Abusing someone who posted a solution to a problem to help others: "Stop posting this shit you bloody sun of a beach"
- Insulting another user directly in a joke thread: "Because alice is a hoe like GHOUS1425."
- Using racial slurs while mocking someone's achievement: "Yo N1GGA, how you managed to get -ve rating bruh? Big congo btw"
We are all here to learn, compete, and grow. Comments like these don't just break the rules; they actively discourage beginners and create a hostile environment for everyone. It’s the opposite of sportsmanship.
I urge monkey_is_back to reflect on this and stop. I also want to ask the community and the admins (MikeMirzayanov, Vladosiya): What can we do to prevent this?
Should there be stricter penalties for first-time hate speech? Is there a better way to report such users quickly? I believe we can all agree this is unacceptable, but I’d love to hear your thoughts on how to fix it.
UPD: The same user has returned under a new handle, doantunglamdatrang, and is posting even more offensive comments. The pattern of toxicity continues.
Solutions (from comments)
- IP-based bans: Instead of just banning accounts, consider IP bans after a certain number of reports (e.g., $$$n$$$ reports) to prevent users from simply creating new accounts.
- Keyword-based harassment detection: Implement a simple filter that flags comments containing common offensive words or racial slurs.
- NLP-based harassment detection: Use a more advanced natural language processing model that can understand context and detect harassment even without explicit keywords.
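To make the keyword-based idea concrete, here is a minimal sketch of such a filter. The flagged-word list and the leetspeak mapping are purely illustrative placeholders (a real moderation system would maintain a much larger, curated dictionary and handle far more obfuscation tricks), but it shows how simple substitutions like "1" for "i" can be normalized before matching:

```python
import re

# Hypothetical flagged-word list -- placeholder words only.
# A real deployment would use a curated, regularly updated dictionary.
FLAGGED_WORDS = {"idiot", "moron"}

# Undo common leetspeak substitutions so obfuscated spellings
# like "1d10t" still match "idiot" after normalization.
LEET_MAP = str.maketrans("013457@$", "oieastas")

def normalize(text: str) -> str:
    """Lowercase the comment and map leetspeak digits back to letters."""
    return text.lower().translate(LEET_MAP)

def flag_comment(comment: str) -> bool:
    """Return True if any word in the comment is on the flagged list."""
    words = re.findall(r"[a-z]+", normalize(comment))
    return any(word in FLAGGED_WORDS for word in words)

print(flag_comment("You are a 1d10t"))            # flagged
print(flag_comment("Great solution, thanks!"))    # not flagged
```

A filter like this is cheap to run on every new comment and could auto-hide a flagged comment pending moderator review; the NLP-based approach above would then catch the harassment that slips past simple word matching.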



