armchairhacker 2 days ago

RL doesn't completely "work" yet; it still has a scalability problem. Claude can write a small project, but as it becomes larger, Claude gets confused and starts making mistakes.

I used to think the problem was that models can't learn over time like humans, but maybe that can be worked around. Today's models have large enough context windows to fit a medium-sized project's complete code and documentation, and tomorrow's may be larger; good-enough world knowledge can be maintained by re-training every few months. The real problem is that even models with large context windows struggle with complexity more than humans do: they miss crucial details, then become very confused when trying to correct their mistakes and/or miss other crucial details (whereas humans sometimes miss crucial details, but are usually able to spot them and fix them without breaking something else).

Reliability is another issue, but I think it's related to scalability: an LLM that cannot make reliable inferences from a small input cannot grow it into a larger output without introducing cascading hallucinations.

EDIT: creative control is also subsumed by reliability and scalability. You could generate any image imaginable with a reliable diffusion model, by first generating something vague, then repeatedly refining it (specifying which details to change and which to keep), each refinement getting closer to what you're imagining. Except even GPT-4o isn't nearly reliable enough for this technique: while it can handle a couple of refinements, it too starts losing details (changing unrelated things).

  • dceddia 2 days ago

    I wonder how much of this is that code is less explicit than written language in some ways.

    With English, the meaning of a sentence is mostly self-contained. The words have inherent meaning, and if they’re not enough on their own, usually the surrounding sentences give enough context to infer the meaning.

    Usually you don’t have to go looking back 4 chapters or look in another book to figure out the implications of the words you’re reading. When you DO need to do that (maybe reading a research paper for instance), the connected knowledge is all at the same level of abstraction.

    But with code, despite it being very explicit at the token level, the “meaning” is all over the map, and depends a lot on the unwritten mental models the person was envisioning when they wrote it. Function names might be incorrect in subtle or not-so-subtle ways, and side effects and order of execution in one area could affect something in a whole other part of the system (not to mention across the network, but that seems like a separate case to worry about). There are implicit assumptions about timing and such. I don’t know how we’d represent all this other than having extensive and accurate comments everywhere, or maybe some kind of execution graph, but it seems like an important challenge to tackle if we want LLMs to get better at reasoning about larger code bases.

    • fullstackchris 2 days ago

      This is super insightful, and I think at least part of what you're describing already exists: the abstract syntax tree! Or at the very least one could include metadata about the token under scrutiny (similar to how most editors can show you git blame / number of references / number of tests passing in the code you are looking at...)

      It makes me think about things like... "what if we also provided not just the source code, but the abstract syntax tree or dependency graph", or at least the related nodes relevant to what code the LLM wants to change. In this way, you potentially have the true "full" context of the code, across all files / packages / whatever.
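
      One could sketch the idea with Python's built-in `ast` module (a toy illustration, not how any current tool works -- the `SRC` snippet and function names are made up, and a real tool would also need to handle methods, imports, and cross-file references):

```python
import ast

# Hypothetical example source; in practice this would be the project's files.
SRC = """
def parse(data):
    return data.split(",")

def validate(fields):
    return all(f for f in fields)

def handle(data):
    fields = parse(data)
    return validate(fields)
"""

def call_graph(source):
    """Map each function name to the (direct, by-name) calls it makes."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
            graph[node.name] = sorted(calls)
    return graph

print(call_graph(SRC))
# {'parse': [], 'validate': ['all'], 'handle': ['parse', 'validate']}
```

      Handing a graph like this to the model alongside the raw source -- "here is the code you want to change, plus everything it calls and everything that calls it" -- is one plausible way to supply the "full" context without pasting the entire repository.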

      • dceddia 2 days ago

        Yeah! I think an AST is sort of what I'm envisioning here, but with much broader metadata, including requirements and implicit assumptions and stuff.

        As a concrete example, a random bit of code from the minih264 encoder:

            /**
            *   Quantized/dequantized representation for 4x4 block
            */
            typedef struct
            {
                int16_t qv[16];     // quantized coefficient
                int16_t dq[16];     // dequantized
            } quant_t;
        
        Someone who's built an encoder or studied h264 probably knows what this is for (I have a very fuzzy idea). But even with the comment there are lots of questions. Are these arrays restricted to certain values? Can they span the full int16, or are there limits, or are the bits packed in an interesting way? Can they be negative? Why would you want to store these two arrays together in a struct, why not separately? Do they get populated at the same time, or at different phases of the pipeline, or are they built up over multiple passes? Are all of these questions ridiculous because I don't really understand enough about how h264 works (probably)?

        LLMs already have a lot of this knowledge, and could probably answer if prompted, but my point is more that the code doesn't explicitly lay out all of these things unless you carefully trace the execution, and even then, some of the requirements might not be evident. Maybe negative numbers aren't valid here (I don't actually know) but the reason that invariant gets upheld is an abs() call 6 levels up the call stack, or the data read from the file is always positive so we just don't have to worry about it. I dunno.
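
        A toy illustration of that last point (hypothetical names, nothing to do with minih264's actual code): the non-negativity that `quantize` silently relies on is only established by an `abs()` one level up the call stack, and nothing at the use site records it.

```python
def read_samples(raw):
    # The invariant "samples are non-negative" is established here...
    return [abs(x) for x in raw]

def quantize(samples, step=4):
    # ...and silently relied on here: for s >= 0, s // step matches the
    # C-style truncating division an encoder might assume. A negative value
    # slipping through would floor toward -infinity instead (-7 // 4 == -2).
    return [s // step for s in samples]

def encode(raw):
    return quantize(read_samples(raw))

print(encode([-7, 3, 12]))  # [1, 0, 3]
```

        Nothing in `quantize` itself tells a reader (or an LLM editing it in isolation) that the invariant exists, let alone where it's upheld.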

        Anyway I imagine LLMs could be even more useful if they knew more about all this implicit context somehow, and I think this is the kind of stuff that just piles up as a codebase gets larger.

    • debone 2 days ago

      Not really true.

      You can have a book where in the last chapter you have a phrase "She was not his kid."

      Knowing nothing else, you can only infer the self-contained details. But in the book context this could be the phrase which turns everything upside down, and it could refer to a lot of context.

      • dceddia 2 days ago

        The whole book could be the surrounding context, not just a sentence or two, and I think that still fits with the point I wanted to make - that written words are more linear or in the same plane compared to code which is more "multidimensional" in a sense, when you start to consider the reasons behind the code, the order of execution, things being executed multiple times, etc.

  • bionhoward a day ago

    Claude and 4o aren’t RL trained IIRC? Also, who’s using these for code? You’re cool not being able to train on your chat logs used to develop your own codebase? Sounds pretty sus

thetrustworthy 2 days ago

For those who are knowledgeable about the field but not yet familiar with the author of this post, it is worth mentioning that Shunyu Yao has played a huge role in the development of LLM-based AI agents, including being an author of / contributor to:

- ReAct

- Reflexion

- SWE-bench

- OpenAI Deep Research

- OpenAI Operator

wavemode 2 days ago

> AI has beat world champions at chess and Go, surpassed most humans on SAT and bar exams, and reached gold medal level on IOI and IMO. But the world hasn’t changed much, at least judged by economics and GDP.

> I call this the utility problem, and deem it the most important problem for AI.

> Perhaps we will solve the utility problem pretty soon, perhaps not. Either way, the root cause of this problem might be deceptively simple: our evaluation setups are different from real-world setups in many basic ways.

LLMs are reaching the same stage that most exciting technologies reach. They have quickly attracted lots of investor money, but that is going to have to start turning into actual money. Many research papers are being written, but people are going to start wanting to see actual improvements, not just theoretical improvements on benchmarks.

  • PaulHoule 2 days ago

    I can think of some ways LLMs perform better in real life than they do in evals.

    For instance, I ask AI assistants a lot about what some code is trying to do in applications software, where it is a matter of React, CSS, and how APIs get used. Frequently this is a matter of pattern matching that doesn't require deep thought, and I find LLMs often nail it.

    When it comes to "what does some systems-oriented code do", now you are looking at halting-problem kinds of problems, or cases where a person will be hypnotized by an almost-bubble-sort into thinking it's a bubble sort, and the LLM is too. You can certainly make code-understanding benchmarks aimed at "whiteboard interview" kinds of code that are arbitrarily complex, but that doesn't reflect the ability or inability to deal with "what is up with this API?"

    • animuchan 2 days ago

      I think what you're describing is that easy tasks are easy to perform.

      Which is, of course, true. Anecdotally, a lot of value I get from Copilot is in simple, mundane tasks.

      • PaulHoule 2 days ago

        I think easy tasks are basically "linear" in that you don't have interactions between components. If you do have interactions between components, complexity gets out of control very quickly. Many practical problems, for instance, are NP-complete or undecidable. Many of them could be attacked by SMT or SAT solvers, but often you can solve them using tactics from math.
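
        A quick back-of-the-envelope sketch of that blow-up: a "linear" system needs roughly one check per component, while just the pairwise interactions already grow quadratically (and k-way interactions combinatorially):

```python
from math import comb

# n components: n "linear" checks vs comb(n, 2) potential pairwise interactions
for n in (5, 10, 50, 100):
    print(f"{n} components: {n} linear checks, {comb(n, 2)} pairwise interactions")
```

        For 100 components that's already 4950 pairs to reason about, before considering ordering, timing, or higher-order effects.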

  • stapedium 2 days ago

    Current AI is like search. You still have to know the vocabulary and the right questions to ask. You also need the ability to differentiate a novel answer from a hallucination. It's not going to replace lawyers or doctors any time soon.

mplanchard 2 days ago

Meta request to authors: please define your acronyms at least once!

Even in scientific domains where a high level of background knowledge is expected, it is standard practice to define each acronym prior to its use in the rest of the paper, for example “using three-letter acronyms (TLAs) without first defining them is a hindrance to readability.”

  • a1ff00 2 days ago

    Couldn’t agree more. Had a hell of a time figuring out what they meant by RL after its first use, but gave up in frustration when the remainder of the text was more of the same undefined symbols/acronyms.

conartist6 a day ago

"solving" Dota is a huge huge HUGE overstatement of the kind you are pointing out.

The players it played against had never played against something that behaved so weirdly. It had lightning reflexes and it clearly wasn't human. It was playing a toy game mode requiring about 5% of the skills needed for a full match. In other words, they engineered it to look good at the toy task, and it did. But they didn't give the pros any time at all to learn their opponent -- after all, they might have figured out how to play against it!

  • jebarker a day ago

    Not arguing that they solved DOTA, but "The players it played against had never played against something that behaved so weirdly." seems like a feature, not a bug. We want AI to find unexpected new ways of accomplishing tasks.

GiorgioG 2 days ago

More AI hype from an AI "expert". AI in software development is still a junior developer that memorized "everything" and can learn nothing beyond that; it will happily lie to you, and it will never say the most important thing a developer can be comfortable saying: "I don't know".

jarbus 2 days ago

I largely agree, and this is actually something I've been thinking for a while. The problem was never the algorithm; it's the game the algorithm is trying to solve. It's not clear to me to what extent we can push this beyond math and coding. Robotics should be ripe for this, though.

  • daveguy 2 days ago

    Unfortunately the feedback loop for robotics is many, many orders of magnitude slower than for math/coding problems. And when you move to artificial environments, you are learning artificial dynamics -- the same limitations as the benchmarks.

cadamsdotcom 2 days ago

Benchmark saturation will keep happening.

Which is great! There's room in the world for new benchmarks that test for more diverse things!

It's highly likely at least one of the new benchmarks will eventually test for all the criteria being mentioned.

nottorp 2 days ago

Is it me or are they proposing making LLMs play text adventures?

yapyap 2 days ago

> Instead of just asking, “Can we train a model to solve X?”, we’re asking, “What should we be training AI to do, and how do we measure real progress?”

To say we are at a point where AI can do anything reliably is laughable; it can do a lot, but it will give you any answer, right or wrong, with full confidence. To trust such a technology with the big no-human decisions, the way we seem to want to, is fool's work.

m0llusk 2 days ago

Um, what is RL?

  • animuchan 2 days ago

    Rocket Launcher?

    Please, let it be Rocket Launcher for once.

  • zomglings 2 days ago

    Reinforcement Learning.

    I hate acronyms with a fierce passion.

    • coolThingsFirst 2 days ago

      While i do agree that acronyms can be PITA, AFAIK RL seems to truly lead to AGI. ICBA to provide more detail.

      • zomglings 2 days ago

        My blood pressure just tripled.