Automating code optimization with LLMs
You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimizations using AI "translation" of slow to fast code. We learn about their process for accomplishing this task along with impressive results when automated code optimization is run on existing open source projects.
Daniel Whitenack: Something that I'm kind of getting in what you're saying as well - and it's a misconception I often run into when I'm either doing workshops or working hands-on with people with generative models - is this idea that you need to package everything into a single prompt, and then output your final result as a sort of one-step thing. I'm getting the sense that your workflow, for one, probably involves multiple calls throughout the codebase, partly because of the context size, I would assume. But then also, you mentioned this kind of iterative element, where - hey, there are big rocks that you can move, the sort of worst-offending areas, so there's a hierarchy in that respect… But also, it seems like - let's just assume; I know it's not a good assumption, but if we assume that a person's codebase is fully tested, integration tests, unit tests, it seems like this is something you could just loop over and over and over again to get increasing optimizations, probably with diminishing returns. Could you speak a little bit to how you as a team think about that chaining element, I guess would be the way to say it? And then also, maybe that iterative element.
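The loop Daniel is describing - propose a rewrite for a hot spot, accept it only if the test suite still passes, and repeat until improvements taper off - could be sketched roughly like this. This is a toy illustration, not TurinTech's actual pipeline: `propose_optimization` is a hypothetical stand-in for the LLM call, and the slow/fast implementations are contrived examples.

```python
def run_tests(fn):
    """Unit tests gate every candidate: a rewrite is only
    accepted if observable behavior is unchanged."""
    return all(fn(n) == sum(range(n)) for n in (0, 1, 10, 100))

def slow_impl(n):
    # Baseline: O(n) accumulation loop.
    total = 0
    for i in range(n):
        total += i
    return total

def fast_impl(n):
    # The kind of rewrite an LLM might propose: closed form, O(1).
    return n * (n - 1) // 2

def propose_optimization(fn):
    """Hypothetical stand-in for the LLM call. Returns a candidate
    rewrite, or None once it has nothing better to offer."""
    return fast_impl if fn is slow_impl else None

def optimize_loop(fn, max_rounds=5):
    """Iterate until diminishing returns: stop when no new candidate
    appears, and discard any candidate that fails the tests."""
    for _ in range(max_rounds):
        candidate = propose_optimization(fn)
        if candidate is None:
            break  # no further rewrites worth trying
        if run_tests(candidate):
            fn = candidate  # accept only test-passing rewrites
    return fn

best = optimize_loop(slow_impl)
print(best(100))  # 4950, same result as slow_impl(100)
```

In a real system the proposal step would chunk the codebase to fit the model's context window and prioritize the profiled "big rocks" first, which is the chaining element the question is getting at.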