(autofix) Explore ideas so that it doesn't start from a blank slate of context every time #1175

roaga opened this issue Sep 18, 2024 · 0 comments

roaga commented Sep 18, 2024

The AI models are probably smart enough. The problem is that every time an Autofix run starts, it builds its understanding from scratch. It's like handing a senior engineer who started yesterday a complex issue in your codebase as their first task.

The problem is that AI has no context, because it's starting from scratch every time.
Can we make it...not start from scratch?

i.e. for a codebase or Sentry project that we have access to, can we build some sort of knowledge representation of how it works behind the scenes, and keep it updated, so that whenever an issue occurs and someone runs Autofix, the run starts with the context a long-standing team member would have?
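
To make this a bit more concrete, here's a rough sketch of what that could look like (hypothetical names throughout, not actual Autofix/Seer code): module summaries maintained in the background, refreshed only when a file's contents change, and pulled in at the start of a run based on the issue's stack trace.

```python
# Hypothetical sketch only -- none of these names exist in the codebase today.
# Idea: keep per-module summaries up to date in the background, and seed an
# Autofix run with the summaries relevant to the incoming issue.
from dataclasses import dataclass, field
from typing import Callable
import hashlib
import time


@dataclass
class ModuleKnowledge:
    """Roughly what a long-standing team member knows about one module."""
    path: str
    summary: str          # e.g. an LLM-written description of the module's role
    content_hash: str     # lets us skip re-summarizing unchanged files
    updated_at: float = field(default_factory=time.time)


class CodebaseKnowledgeStore:
    """Background-maintained knowledge representation of a codebase (assumed design)."""

    def __init__(self) -> None:
        self._modules: dict[str, ModuleKnowledge] = {}

    def refresh(self, path: str, source: str, summarize: Callable[[str, str], str]) -> None:
        """Re-summarize a module only if its contents changed since last time."""
        digest = hashlib.sha256(source.encode()).hexdigest()
        existing = self._modules.get(path)
        if existing and existing.content_hash == digest:
            return  # still fresh, nothing to do
        self._modules[path] = ModuleKnowledge(
            path=path,
            summary=summarize(path, source),
            content_hash=digest,
        )

    def context_for_issue(self, issue_frames: list[str], limit: int = 5) -> str:
        """Collect summaries for modules that show up in the issue's stack trace."""
        relevant = [
            m for m in self._modules.values()
            if any(frame.endswith(m.path) for frame in issue_frames)
        ]
        return "\n\n".join(m.summary for m in relevant[:limit])


if __name__ == "__main__":
    store = CodebaseKnowledgeStore()
    # In reality this would run behind the scenes on pushes/merges, with an LLM
    # doing the summarizing; a trivial stand-in keeps the sketch self-contained.
    store.refresh(
        "billing/invoices.py",
        "def charge(customer): ...",
        summarize=lambda path, src: f"{path}: handles invoice charging; called by the checkout flow.",
    )
    # When an issue comes in, seed the Autofix run with pre-built context
    # instead of a blank slate.
    print(store.context_for_issue(["app/billing/invoices.py"]))
```

The interesting design questions are what the representation actually is (plain summaries vs. a graph vs. embeddings), what triggers the background refresh (merges, releases, a schedule), and how much of it to pull into a run without blowing up the prompt.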


Pasting in the Slack thread that inspired this:

jenn mueng
for the past couple months I've had the belief that autofix is not that useful on the majority of issues that aren't just straightforward bug fixes
we can probably use the issue summary to determine how effective an autofix can be on an issue

Rohan Agarwal
it would be useful on those issues, if it were smarter (betting on o1 or other architecture changes)
and/or if it could still get you part way there (betting on Q&A flow) (edited)

jenn mueng
i don't think smarter is the solution
it's just missing context that you have
my hunch is
on the majority of root cause failure cases
if we just throw the exact same context we have to an engineer, even a senior engineer that has never seen the codebase before
and started yesterday
they'd probably say something similar to autofix
and fail at it

Rohan Agarwal
you might be right
but the solution to missing context is Q&A flow

jenn mueng
but strong agree on partway there with q&a flow

Rohan Agarwal
because then the model can ask the human for that context
:bufo+1:

jenn mueng
and more context

Rohan Agarwal
a lot's riding on this Q&A flow lol

jenn mueng
that's been the plan all along
:bufo-eyes:

Rohan Agarwal
i'm excited
did you see the copilot 2 announcement? Nadella's saying exactly what I've always thought, which is that AI models are becoming a commodity, and the real value is in the unique business data (sentry data) + the human-AI interaction (Q&A flow)

jenn mueng
not yet
but i do agree

Rohan Agarwal
they are also going with a "canvas/artifact" approach

jenn mueng
people expect ai to be an oracle but he's right, the true value is in the unique data/insight + the HCI part
💯

i just always think back
for most of the failure cases I see in almost all llm tools
if you had the smartest version of a person
say senior/staff engineer
physics phd
whatever
and give them that exact prompt
can they really do what you're asking it to do?
useful thought exercise

Rohan Agarwal
you're 100% right, the model is usually pretty similar in capability, it just doesn't know exactly what you want, and it's a pain to interact with it compared to those smart humans
Humans hallucinate too!

jenn mueng
next frontier in AI isn't raw intelligence imo, it's the context/understanding gathering

Rohan Agarwal
Yes!
And interaction!

jenn mueng
and that's what is missing from current LLMs for AI like literally
if we had sonnet/o1 as is, but with the ability to research & learn on its own
that's AGI is it not?

Rohan Agarwal
well idk about that but yes that's very valuable

jenn mueng
and by research/learn I mean continuously improving
update its weights to remember the new knowledge

Rohan Agarwal
I wish AI wasn't so overhyped; that way there wouldn't be so much anti-hype either. I think ppl forget that language models have been around forever, they just finally became good enough to use for more things
:bufo+1:

Rohan Agarwal
Ok random idea that probably won't work. The problem is that AI has no context, because it's starting from scratch every time.
Can we make it...not start from scratch?
i.e. for a codebase or Sentry project that we have access to, can we behind the scenes build some sort of knowledge representation of how it works, and update it behind the scenes, so that whenever an issue occurs and someone runs Autofix, the run starts with the context a long-standing team member would have?
