How To Analyze and Code with AI and Large Language Models

2025-01-10 Antonio Canzanella

Disclaimer

Look, this whole field is moving ridiculously fast. What I'm sharing today might be totally outdated by tomorrow!

Overview

The workflow is very simple:

  1. The Context Preparation. Find a way to enrich the LLM's context with useful information (e.g. the source code, API documentation, and so on).
  2. The Prompt Preparation. Invest time in writing a good prompt.
  3. The Review. Once the conversation with the LLM is over and we have results, it's time to review them and, if all is good, integrate them into our codebase.
  4. Repeat!

The Context Preparation

You know how it goes – garbage in, garbage out. We need that engineer's balance: enough context to be useful without overwhelming the model. Manual selection works fine for smaller projects, but for bigger codebases it's worth asking the LLM to write a little script that simplifies the process.

Here's the exact prompt I used:
    

This is the structure of a codebase

D:\GitHub\antoniocnz.com> ls
    Directory: D:\GitHub\antoniocnz.com
Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          22/03/2025    11:32                src
-a---          22/03/2025    11:30            338 .dockerignore
-a---          22/03/2025    11:27           7287 .gitignore
-a---          22/03/2025    11:27           7169 LICENSE
-a---          22/03/2025    11:27            199 README.md
antoniocnz.com is the root folder. Inside src/ we will put all projects sources.

PS D:\GitHub\antoniocnz.com> tree .
D:\GITHUB\ANTONIOCNZ.COM
└───src
    ├───.idea
    │   └───.idea.antoniocnz.com
    │       └───.idea
    └───services
        └───AntonioCnz.Com.Web
            ├───bin
            │   └───Debug
            │       └───net8.0
            ├───Middlewares
            ├───Models
            ├───obj
            │   └───Debug
            │       └───net8.0
            │           ├───ref
            │           ├───refint
            │           ├───scopedcss
            │           │   ├───bundle
            │           │   ├───Pages
            │           │   │   └───Shared
            │           │   └───projectbundle
            │           └───staticwebassets
            ├───Pages
            │   ├───Articles
            │   └───Shared
            ├───Properties
            ├───Services
            └───wwwroot
                ├───css
                ├───js
                └───lib
                    ├───bootstrap
                    │   └───dist
                    │       ├───css
                    │       └───js
                    ├───jquery
                    │   └───dist
                    ├───jquery-validation
                    │   └───dist
                    └───jquery-validation-unobtrusive

I want to create a script in javascript that is located on the root folder, close to README.md, called ai-context-gen.js.
The script should pick up .dockerignore and README.md and create a file called A-root-content.txt containing a json object of key-value pairs, 
the key is the file path and the value is the file content without irrelevant spaces and new lines and all irrelevant chars for LLMs.
After that it should take in input an array of relative path, for now we have only one element ["src/services/AntonioCnz.Com.Web"]
and create a file called A-AntonioCnz.Com.Web.txt it should contain the same json object as before. but it should exclude always obj, bin and wwwroot/lib folders from scanning.
then all non excluded files must be loaded into json object.
    
    

I've tweaked this script a dozen times – you'll probably want to customize it for your own projects too.

The Prompt Preparation

Once you've got your context sorted, it's time for the actual prompting. This is where most people (myself included, originally) get it wrong. I tell the LLM exactly what I'm trying to achieve, what constraints I'm working with, and what style I prefer. Other LLMs might work for you, but I've found Claude gives me cleaner, more practical results. GPT can be hit or miss – great for explaining concepts, but sometimes it gets a bit... creative with implementation details. I've tried Gemini too, but I'm not consistently impressed yet.
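To make that concrete, here's the shape I aim for: goal, constraints, style, expected output. The specific task and constraints below are made up purely for illustration, not taken from a real session:

```text
Context: attached is A-AntonioCnz.Com.Web.txt, a JSON map of file paths to file contents.
Goal: add a request-logging middleware and wire it into the pipeline.
Constraints: .NET 8, no new NuGet packages, follow the existing Middlewares/ folder conventions.
Style: small focused classes, async/await throughout, match the naming in the attached code.
Output: full file contents for every file you create or change, nothing else.
```

The "Output" line matters more than it looks – without it you tend to get fragments you then have to stitch together by hand.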

Pro tip: the paid Claude.ai subscription lets you set up custom projects with specific guidelines. It's like having a virtual team member who remembers every coding standard and architectural decision you've ever made.

My Personal Takeaways

After about 6 months of working this way, here's what I've learned:

  1. Don't expect perfection on the first try. It's always a conversation.
  2. The more you work with these models, the better you get at "speaking their language" – I can now usually get what I need in 2-3 iterations instead of 10+.
  3. Be extremely specific about edge cases and error handling – that's where AI-generated code tends to cut corners.
  4. ALWAYS review.
  5. Treat AI like a junior dev with encyclopedic knowledge but questionable judgment – guide it firmly and verify everything.

This approach has legitimately transformed how I work. PoCs and prototyping are ridiculously fast. Is it perfect? Fuck no. But it's another tool in the toolbox.

Would love to hear how others are using these tools – shoot me an email if you've got cool workflows to share!