If you’ve ever needed to give an LLM context from multiple files while you code, you know how tedious it is to copy and paste each file into the chat window. This Visual Studio Code extension compiles all of your files into a single .txt file. Using glob patterns, you can include or exclude files that wouldn’t be helpful as LLM context (such as build output and libraries).
# flatten-repo

Flatten your entire codebase into clean, readable `.txt` files, optimized for LLMs like ChatGPT, Claude, and Gemini.
## Features (v0.0.12)

- Auto-flattens your workspace into plain `.txt` chunks
- Built for LLM parsing, prompt engineering, and static code analysis
- Each chunk includes a directory tree overview of the included files
- Auto-chunks content using a configurable token limit (~4 characters per token; see the sketch after this list)
- Powerful glob-based ignore, whitelist, and blacklist support
- All configs live in a single `.flatten_ignore` file (generated automatically)
- Customize file extensions, folders to ignore, and token size limits
- Output saved in timestamped files under `/flattened`
- Auto-adds `/flattened` to your `.gitignore`
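The token limit is a heuristic rather than a real tokenizer count. As a rough, hypothetical sketch of that idea (the `estimateTokens` and `chunkFiles` helpers below are illustrative names, not the extension's actual code), chunking at ~4 characters per token could look like this:

```typescript
// Hypothetical sketch of token-limited chunking; not the extension's real implementation.
interface FlatFile {
  path: string;
  content: string;
}

// ~4 characters per token, matching the heuristic described above.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Group files into chunks so each chunk stays under the configured token limit.
function chunkFiles(files: FlatFile[], maxTokensPerChunk: number): FlatFile[][] {
  const chunks: FlatFile[][] = [];
  let current: FlatFile[] = [];
  let currentTokens = 0;

  for (const file of files) {
    const fileTokens = estimateTokens(file.content);
    if (current.length > 0 && currentTokens + fileTokens > maxTokensPerChunk) {
      chunks.push(current); // close the current chunk before it overflows
      current = [];
      currentTokens = 0;
    }
    current.push(file);
    currentTokens += fileTokens;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```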
Each chunk starts with a tree-like outline of the included files, followed by each file's contents under a `=== FILE: path/to/file.ext ===` header.
## How to Use

- Open a folder in VS Code
- Open the Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`)
- Run: `Flatten Project to TXT`
- View the flattened files inside the `/flattened` folder
## Configuration

You can configure behavior in your `.vscode/settings.json`:

```json
{
  "flattenRepo.includeExtensions": [".ts", ".tsx", ".js", ".jsx", ".py", ".html", ".css"],
  "flattenRepo.ignoreDirs": ["node_modules", ".git", "dist"],
  "flattenRepo.maxChunkSize": 200000
}
```

Or configure per-project settings via `.flatten_ignore`.
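For reference, a VS Code extension reads workspace settings like these through `vscode.workspace.getConfiguration`. The sketch below assumes the keys shown above, with fallback defaults chosen here purely for illustration:

```typescript
import * as vscode from 'vscode';

// Illustrative only: reads the settings listed above, with assumed defaults.
function readFlattenSettings() {
  const config = vscode.workspace.getConfiguration('flattenRepo');
  return {
    includeExtensions: config.get<string[]>('includeExtensions', ['.ts', '.js', '.py']),
    ignoreDirs: config.get<string[]>('ignoreDirs', ['node_modules', '.git', 'dist']),
    maxChunkSize: config.get<number>('maxChunkSize', 200000),
  };
}
```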
## .flatten_ignore

This single file controls:

- Glob-based `global` ignore rules
- Optional `whitelist` or `blacklist` sections
- Token limits via a `settings:` section

It is auto-generated in `/flattened` if missing.
## Sample .flatten_ignore

```
# Ignore rules
global:
node_modules
.git
dist

# Whitelist (optional)
whitelist:
src/**/*.js

# Blacklist (optional)
blacklist:
**/*.test.js
.env

# Settings (optional)
settings:
maxTokenLimit: 50000
maxTokensPerFile: 25000

# Suggestions:
# Claude 3.7: 128k tokens
# ChatGPT 4o: 128k tokens
# ChatGPT o3-mini-high: 200k tokens
# Claude 2: 100k tokens
# Anthropic Claude 3 Opus: 200k tokens
# Cohere Command: 32k tokens
# Google PaLM 2: 8k tokens
# Meta LLaMA 2: 4k tokens
```
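Because the file is plain text with labeled sections, a simple line-based parser is enough to read it back. The sketch below is a hypothetical illustration of that idea (the `parseFlattenIgnore` helper is invented here), not the extension's actual parser:

```typescript
// Hypothetical parser for the .flatten_ignore layout shown above.
interface FlattenIgnoreConfig {
  global: string[];
  whitelist: string[];
  blacklist: string[];
  settings: Record<string, number>;
}

function parseFlattenIgnore(text: string): FlattenIgnoreConfig {
  const result: FlattenIgnoreConfig = { global: [], whitelist: [], blacklist: [], settings: {} };
  let section: keyof FlattenIgnoreConfig | null = null;

  for (const rawLine of text.split(/\r?\n/)) {
    const line = rawLine.trim();
    if (!line || line.startsWith('#')) continue; // skip blank lines and comments
    const header = line.match(/^(global|whitelist|blacklist|settings):$/);
    if (header) {
      section = header[1] as keyof FlattenIgnoreConfig; // switch to the named section
      continue;
    }
    if (section === 'settings') {
      const [key, value] = line.split(':').map(part => part.trim());
      if (key && value) result.settings[key] = Number(value); // e.g. maxTokenLimit: 50000
    } else if (section) {
      result[section].push(line); // glob pattern or path for this section
    }
  }
  return result;
}
```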
## Output Format

Each `.txt` output file looks like this:

```
=== Directory Tree ===
├── App.tsx
├── index.js
└── components
    ├── Header.tsx
    └── Footer.tsx

=== FILE: App.tsx ===
import React from 'react';
...
```
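To make that layout concrete, here is a minimal, hypothetical sketch of assembling one chunk in this format (the `renderChunk` helper and its flat tree rendering are assumptions for illustration, not the extension's code):

```typescript
// Illustrative rendering of one chunk: a directory tree header followed by
// "=== FILE: path ===" blocks. Paths are assumed to be workspace-relative.
function renderChunk(files: { path: string; content: string }[]): string {
  const tree = files.map(f => `├── ${f.path}`).join('\n');
  const body = files
    .map(f => `=== FILE: ${f.path} ===\n${f.content}`)
    .join('\n\n');
  return `=== Directory Tree ===\n${tree}\n\n${body}\n`;
}
```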
## Use Cases

- Preparing source code for LLM input
- Clean context formatting for ChatGPT, Claude, Gemini, etc.
- Snapshotting your repo for AI audits or static reviews
- Prompt engineering pipelines
- Flattening code to give agents full-project memory
## Known Limitations

- No graphical UI (yet); command-only
- Does not flatten binary or image files
- Some advanced glob edge cases may need refinement
## Contributing

Want to help improve this tool?

- Star the repo
- Submit a feature request
- Open a pull request

Ideas to explore:

- File token counts
- Markdown formatting
- Multi-model export formats
- A UI for selecting flatten options
## Resources

Made with ❤️ to help devs and LLMs speak the same language.