My previous blog post in this series tested the ability of a range of large language models to analyze a piece of C code and determine what a mystery function did. That was interesting and entertaining, but possibly not a particularly “fair” test of the models’ capabilities. Most of the time, I think people use “AI” to help write code, not to understand some tricky piece of algorithmic code. Thus, in this post I turn the problem around and ask the models to write code for the algorithm I previously asked them to analyze.
Continue reading “(Local) AI, Please Write some Code”

Month: December 2024
(Local) AI, Please Explain This Code
Continuing my exploration of what local AI models can do, I decided to test them on the task of code analysis. It would be so nice to have an AI model that is tuned and trained on a particular tool or programming system, and that can be distributed for users to run on their own local machine, server, or cloud VM. That would avoid the need to run and charge for a custom cloud service, while ensuring confidentiality and availability.
Updated 2024-12-12 with Llama-3.3-70B
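As a concrete (if hypothetical) illustration of the kind of local workflow described above, here is a minimal sketch that asks a locally hosted model to explain a C function. It assumes the ollama Python client and a locally pulled llama3.3 model, which is not necessarily the setup used in these posts, and the C snippet is a stand-in rather than the mystery function from the original post.

    # Minimal sketch: ask a locally running model to explain a C function.
    # Assumes the `ollama` Python client is installed and the `llama3.3`
    # model has already been pulled; illustrative only, not the exact
    # setup used in the blog posts.
    import ollama

    # Stand-in snippet; the real mystery function is not reproduced here.
    C_SNIPPET = """
    unsigned mystery(unsigned x) {
        unsigned r = 0;
        while (x) { r += x & 1u; x >>= 1; }
        return r;
    }
    """

    prompt = "Explain, step by step, what this C function does:\n" + C_SNIPPET

    # The request goes to the local ollama server, so no code or prompts
    # leave the machine, which is the confidentiality point made above.
    response = ollama.chat(
        model="llama3.3",  # assumed model name; any locally pulled model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response["message"]["content"])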
Continue reading “(Local) AI, Please Explain This Code”

More Exploration of (Local) AI Models
In my previous blog post about the Intel AI Playground, I tested it by asking it to draw cars. In this post, I share some more exploration of these local AI models and their limitations. It turns out that cars are easy; other things, not so much…
Continue reading “More Exploration of (Local) AI Models”