

# LlamaDiff

Private AI code review for local Git repositories. Analyze diffs between branches using local or cloud LLMs, with no GitHub account, pull requests, or code uploads required.

LlamaDiff runs directly on your machine, works with local repositories, and supports offline-first workflows via Ollama.


## Why LlamaDiff?

Most AI code review tools require:

- pushing code to GitHub/GitLab
- sending diffs to cloud APIs
- exposing proprietary code

LlamaDiff is built for:

- local Git workflows
- privacy-sensitive projects
- offline environments
- self-hosted development setups
- developers working with code under NDA

Run reviews on:

- local repositories
- uncommitted changes
- internal projects
- air-gapped systems (with local models)

## Features

- Compare any two Git branches and get an AI-powered code review
- Works directly with local Git repositories (no PR required)
- Review uncommitted changes
- Supports multiple LLM providers via LiteLLM:
  - Ollama (local, default)
  - OpenAI
  - Anthropic
  - OpenRouter
- Filter the review to specific files or patterns
- Export reviews as Markdown
- Interactive HTML review output
- CLI-first workflow


## Installation

### Via pipx (recommended)

```sh
# Run directly without installing
pipx run llama-diff -s feature -t main

# Or install globally
pipx install llama-diff
```

### Via pip

```sh
pip install llama-diff
```

### From source

```sh
git clone https://github.com/ernestp/LlamaDiff.git
cd LlamaDiff
pip install -e .
```

## Configuration

Copy `.env.example` to `.env` and configure your preferred provider:

```sh
cp .env.example .env
```

By default, LlamaDiff uses Ollama for local model inference.
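What goes into `.env` depends on the provider you pick; the authoritative variable names are the ones in the repository's `.env.example`. As a rough sketch, assuming the standard LiteLLM environment variables, it might look like this:

```shell
# Illustrative .env values — check .env.example for the exact variable names.

# Ollama (default, local): only needed if your Ollama server is not on the
# standard localhost port.
OLLAMA_API_BASE=http://localhost:11434

# Cloud providers — set only the key for the provider you actually use.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENROUTER_API_KEY=sk-or-...
```

Keys for unused providers can be left unset; with the default Ollama setup no API key is required at all.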


## Usage

### Basic usage

```sh
# Compare a feature branch to main
llama-diff --source feature-branch --target main
```

### Use a specific model

```sh
llama-diff -s feature -t main --model ollama/codellama
```

### Use OpenAI

```sh
llama-diff -s feature -t main --model gpt-4o
```

### Review specific files only

```sh
llama-diff -s feature -t main --files "*.py"
```

### Output the review to a file

```sh
llama-diff -s feature -t main --output review.md
```

### Interactive HTML review

```sh
llama-diff -s feature -t main --html
```

### Review uncommitted changes

```sh
llama-diff --uncommitted
```

## Model Examples

| Provider   | Example models |
|------------|----------------|
| Ollama     | `ollama/llama3.2`, `ollama/codellama` |
| OpenAI     | `gpt-5.2`, `gpt-5.1`, `gpt-4o`, `gpt-4o-mini` |
| Anthropic  | `claude-sonnet-4.5`, `claude-opus-4.5` |
| OpenRouter | `openrouter/anthropic/claude-sonnet-4.5`, `openrouter/anthropic/claude-opus-4.5` |

## Example Workflow

1. Make changes locally
2. Run LlamaDiff
3. Get a structured review before pushing

```
git diff → llama-diff → AI review → fix → commit/push
```

No PR required. No cloud dependency (when using local models).
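Until the roadmap's built-in Git hook support lands, the workflow above can be automated with an ordinary hook that wraps the CLI. This is a minimal sketch, not a LlamaDiff feature: the `LLAMA_DIFF_TARGET` variable and the `.llama-diff-review.md` filename are conventions invented for this example, and only the `llama-diff` flags documented above are used.

```shell
#!/bin/sh
# Sketch of a pre-push hook: save as .git/hooks/pre-push and chmod +x.
# If llama-diff is not on PATH, the hook steps aside and lets the push through.

if command -v llama-diff >/dev/null 2>&1; then
    status="run"
    branch=$(git rev-parse --abbrev-ref HEAD)
    # Review the outgoing branch against the target before the push proceeds;
    # a failing review (non-zero exit) will abort the push.
    llama-diff -s "$branch" -t "${LLAMA_DIFF_TARGET:-main}" --output .llama-diff-review.md
else
    status="skipped"
    echo "llama-diff not found on PATH; skipping AI review"
fi
```

Because hooks live in `.git/hooks/`, they stay local to your clone, which fits the no-cloud workflow.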


## Who is this for?

- Developers working with code under NDA
- Teams with proprietary or confidential repositories
- Self-hosted / privacy-first environments
- Offline or restricted networks
- OSS maintainers reviewing patches locally

## Roadmap

- Git hooks (pre-commit / pre-push)
- CI integrations
- Better diff summarization
- Model-specific review tuning
- IDE integrations

## License

Licensed under the Apache License, Version 2.0.
