-
Hello all, basically the same question: I'm building an internal tool that lets us check for bugs and write internal technical documentation for our C#/Angular projects. I think it's really important that the conversation understands the full context of what it's working with, so it can see the big picture. With ChatGPT you could paste in all the different .cs files and build up one big conversation with the full context. How would you work around this limitation? I'm looking at Semantic Kernel, but I'm not sure it will do the trick; if you start summarizing code, it doesn't really make sense, or am I missing something here? Does anyone have an idea how to solve this core issue?
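
For illustration, here is a minimal sketch of one possible retrieval-based workaround (rather than summarization): index every .cs file as embedded chunks and pull only the chunks relevant to a given question into the prompt. The `IEmbeddingClient` interface below is a hypothetical placeholder, not Semantic Kernel's actual memory API, and the chunk size is an arbitrary assumption.

```csharp
// Sketch of retrieval-augmented context: index each .cs file as chunks,
// then pull only the chunks relevant to the current question into the prompt.
// IEmbeddingClient is a hypothetical placeholder, not a Semantic Kernel API.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public interface IEmbeddingClient
{
    Task<float[]> EmbedAsync(string text);
}

public class CodeIndex
{
    private readonly IEmbeddingClient _embeddings;
    private readonly List<(string Path, string Chunk, float[] Vector)> _entries = new();

    public CodeIndex(IEmbeddingClient embeddings) => _embeddings = embeddings;

    // Split each file into ~2000-character chunks and embed them.
    public async Task IndexAsync(string rootDir)
    {
        foreach (var path in Directory.EnumerateFiles(rootDir, "*.cs", SearchOption.AllDirectories))
        {
            var text = await File.ReadAllTextAsync(path);
            for (int i = 0; i < text.Length; i += 2000)
            {
                var chunk = text.Substring(i, Math.Min(2000, text.Length - i));
                _entries.Add((path, chunk, await _embeddings.EmbedAsync(chunk)));
            }
        }
    }

    // Return the top-k chunks most similar to the question, small enough to fit the prompt.
    public async Task<IReadOnlyList<string>> SearchAsync(string question, int k = 10)
    {
        var q = await _embeddings.EmbedAsync(question);
        return _entries
            .OrderByDescending(e => Cosine(q, e.Vector))
            .Take(k)
            .Select(e => $"// {e.Path}\n{e.Chunk}")
            .ToList();
    }

    private static float Cosine(float[] a, float[] b)
    {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
        return dot / (MathF.Sqrt(na) * MathF.Sqrt(nb) + 1e-8f);
    }
}
```

The idea is that each question only needs the handful of files it actually touches, not the whole repository, so the prompt stays under the token limit without summarizing anything.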
-
I'm starting to think it's all about creating a loop in which the assistant rapid-fires compile attempts and code edits via function calls. For larger codebases and checking for semantic bugs, I struggle with this one; it's almost like we need a new modeling layer that plays nicely with GPT-4.
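
A rough sketch of that loop idea, under assumptions: build the project, feed the compiler output back to the model, write its suggested fix, and repeat. The `requestFixAsync` callback is a hypothetical stand-in for whatever chat client you use; nothing here is an SK or OpenAI API.

```csharp
// Compile-and-fix loop sketch: run `dotnet build`, pass the errors to the model,
// overwrite the file with its suggested fix, and try again up to maxAttempts.
// requestFixAsync(currentSource, buildOutput) is a hypothetical model call.
using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;

public static class FixLoop
{
    public static async Task RunAsync(string projectDir, string filePath,
        Func<string, string, Task<string>> requestFixAsync, int maxAttempts = 5)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            var (ok, output) = Build(projectDir);
            if (ok) { Console.WriteLine("Build succeeded."); return; }

            // Ask the model for a corrected version of the file, given the errors.
            var current = await File.ReadAllTextAsync(filePath);
            var fixedSource = await requestFixAsync(current, output);
            await File.WriteAllTextAsync(filePath, fixedSource);
        }
        Console.WriteLine("Gave up after max attempts.");
    }

    private static (bool Ok, string Output) Build(string projectDir)
    {
        var psi = new ProcessStartInfo("dotnet", "build")
        {
            WorkingDirectory = projectDir,
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        using var p = Process.Start(psi)!;
        var output = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
        return (p.ExitCode == 0, output);
    }
}
```

That catches compile errors cheaply; semantic bugs are the harder half, since a clean build tells the loop nothing about whether the behavior is still correct.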
-
@stephentoub if I want to use GPT-4 to refactor code that is larger than 32k tokens, how would I go about it with SK? Is there a sample I should be looking at? Or is this a bad idea to begin with?
I refactor code all the time these days by cut-and-pasting into GPT-4, but I'm starting to hit the wall on the amount of code I can reckon with, due to the 8k, 16k, and 32k limits in the GPT-x models.
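
One possible workaround, sketched below under assumptions: split the source into pieces that each fit under a token budget, refactor each piece with a separate call, and stitch the results back together. The ~4-characters-per-token estimate and the `refactorAsync` callback are assumptions, not SK APIs; a naive line split can also cut through a class, so in practice a Roslyn-based split on type/member boundaries would be better.

```csharp
// Piecewise refactoring sketch: chunk a large source file under a token budget,
// refactor each chunk with a separate model call, and concatenate the results.
// The ~4 chars/token estimate and the refactorAsync callback are assumptions.
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;

public static class PiecewiseRefactor
{
    public static async Task<string> RunAsync(
        string source,
        Func<string, Task<string>> refactorAsync,
        int maxTokensPerCall = 6000)
    {
        var budgetChars = maxTokensPerCall * 4;      // crude token -> character estimate
        var pieces = new List<string>();
        var current = new StringBuilder();

        // Accumulate lines until the next line would exceed the budget, then start a new piece.
        foreach (var line in source.Split('\n'))
        {
            if (current.Length + line.Length + 1 > budgetChars && current.Length > 0)
            {
                pieces.Add(current.ToString());
                current.Clear();
            }
            current.AppendLine(line);
        }
        if (current.Length > 0) pieces.Add(current.ToString());

        var result = new StringBuilder();
        foreach (var piece in pieces)
            result.Append(await refactorAsync(piece));   // one model call per piece
        return result.ToString();
    }
}
```

The obvious cost is that each call only sees its own piece, which is exactly the loss of big-picture context this thread is about, so cross-cutting refactorings still need something smarter.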