armandillo's blog

By armandillo, 2 years ago, In English

There have been several recent developments in AI (e.g. DALL-E, AlphaCode, GPT-3, and Gato), and at least to me it seems that we are starting to approach the knee of the exponential curve. I'm curious what the community thinks about future developments. This is not because I want to find the most accurate answer (other, more AI-oriented sites are better suited for that), but because in general people don't seem to be aware of the recent developments and their implications. I don't know whether this also holds for the CF community, where people might have a more accurate intuition, and it would be interesting to find out. I also think it might be fun to check back after a few years and see how right or wrong the Codeforces predictions were.

Therefore, inspired by Reddit's singularity predictions, what are your views on when (or whether) we'll develop: 1) Artificial General Intelligence, 2) Artificial Superintelligence, and 3) ultimately, when the Singularity will take place?

Bonus points if you explain your reasoning, but it's also fine to just give your gut feeling and write only the numbers if you aren't sure or don't feel like an expert on this, so that we can check back later and see how close the CF predictions were (CF should allow making polls!).

Edit: It turns out that Codeforces actually allows making polls; it's just not a visible feature of the editor, and I missed MikeMirzayanov's blog where he announced it. Anyway, let's try it again:

What is your best guess for when/whether we'll develop:

  1. Artificial General Intelligence
  • 2022-2025
  • 2026-2029
  • 2030-2034
  • 2035-2039
  • 2040-2049
  • 2050-2075
  • 2076-2099
  • 2100+
  • Never
  2. Artificial Superintelligence
  • 2022-2025
  • 2026-2029
  • 2030-2034
  • 2035-2039
  • 2040-2049
  • 2050-2075
  • 2076-2099
  • 2100+
  • Never
  3. Singularity
  • 2022-2025
  • 2026-2029
  • 2030-2034
  • 2035-2039
  • 2040-2049
  • 2050-2075
  • 2076-2099
  • 2100+
  • Never

»
2 years ago, # |
Rev. 5

I'll start. I don't think we would get an AGI that matches human intelligence exactly; in some areas it would likely be much smarter than us and in others not. But I think we can have something based on Gato that would look very close to an AGI quite soon. For example, I think it could help us significantly in using computers through some natural-language interface, assist with coding, and perform other tasks. My guess is that something that could speed up a very large fraction of the tasks we do on a computer by 10x, and in many respects look close to an AGI, will be invented in late 2023 and become more widely used in 2024. It wouldn't pass an adversarial Turing test, but it would be close.

By 2027, AI could do 99% of what humans can do, and mostly vastly better; the difference would just be that it might be hard to align it with us, so some emotional/human side of it may be lacking. I would also assign more than 50% probability to it passing an adversarial Turing test. It's possible that such an AI still wouldn't be widely used due to high compute requirements and Moore's law being stuck, but I wouldn't assign a high probability to that (<10%).

From then on, since it could do, or at least significantly assist with, research after 2027, we would be on a very steep part of the exponential, and by 2030 it should be obvious that the singularity is here. I also think that if the alignment problem turns out to be as difficult as some people say, then there is a non-negligible chance that things could instead go really badly and the AI could cause some irreversible catastrophe.

TL;DR: 1) Weak AGI: 2023-2024, 2) mostly ASI: 2027, 3) Singularity: 2030.

What do you think?

»
2 years ago, # |

The first really important large language model, GPT-3, was announced only in May 2020.

I don't think this is the right place to ask, even though there are many smart people here. Language models are complex pieces of magic that defy intuition. The fact that someone is smart, even an expert in another field such as competitive programming, does not magically make them a superforecaster of AI progress. Any prediction that ignores at least the scaling laws might as well be noise.
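To make "scaling laws" concrete: the empirical fits in Kaplan et al. (2020) model language-model loss as a power law in training compute. Below is a minimal sketch of that functional form only; the constants are rough illustrative placeholders, not values claimed anywhere in this thread.

```python
# Toy illustration of a compute scaling law, L(C) = (C_c / C) ** alpha,
# in the spirit of Kaplan et al. (2020). Constants are rough placeholders.
C_c = 3.1e8      # reference compute, PF-days (illustrative)
alpha = 0.050    # power-law exponent (illustrative)

def predicted_loss(compute_pf_days: float) -> float:
    """Pretraining loss predicted by the power-law fit for a given compute budget."""
    return (C_c / compute_pf_days) ** alpha

# Sweep a few compute budgets to see how slowly loss falls with compute.
for c in (1e1, 1e3, 1e5, 1e7):
    print(f"{c:10.0e} PF-days -> predicted loss ~ {predicted_loss(c):.2f}")
```

The point of the sketch is just that extrapolating progress means extrapolating curves like this one, not polling intuitions.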

This is like asking a bunch of smart people in 1935 about nuclear bombs. Yes, in hindsight the path was obvious from what Rutherford and Chadwick did in 1932, but in practice no one but nuclear physics researchers could have made any confident prediction about what would happen next.

P.S. Note that even within nuclear physics, there were statements like: "In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction."

  • »
    »
    2 years ago, # ^ |

    I see your point that this isn't the best place to get the most accurate predictions and that some AI-oriented forum would be better suited for that. But getting the most accurate answer wasn't my reason for asking. My reason for posting this was that I thought it would be interesting to see what the views of the CF community are, to find out how precise the guesses of the CP community turn out to be, and whether they end up closer to reality than what other groups of people (e.g. the Reddit community) predict.

    To me personally (an ML engineer and former competitive programmer), it seems that in general people aren't aware of the impact of the most recent developments, but I don't know whether that also holds for communities like Codeforces, where people might have a better intuition.

    I think it might at least be fun to see how right or wrong the Codeforces predictions turn out to be after a few years.