DoCoreAI has crossed 16,000 PyPI installs, but less than 2% of users are viewing the charts or reports. This post explains what you’re missing — and how to unlock performance insights from your OpenAI or Groq prompts.
👋 First, What’s DoCoreAI?
A CLI tool that tracks and optimizes your LLM prompts — and shows you real-time charts for cost, token usage, success rates, bloat, and more.
No vendor lock-in. No raw prompt logging. Just actionable analytics.
📉 What We Noticed
- ✅ Devs install the CLI tool
- ✅ They run 1–2 prompts
- ❌ But they never check the dashboard reports
That's like installing Google Analytics but never logging in. You're doing the work without seeing what's working.
🧪 What You’re Missing
Run this:
> docoreai start
(run your LLM prompts as usual)
> docoreai dash
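The middle step is just your normal workload. As a rough illustration only (the model, prompt, and environment variable here are placeholders, and how DoCoreAI hooks into your calls depends on your setup), the kind of OpenAI traffic the dashboard reports on looks like this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; use whatever your app already sends
    messages=[{"role": "user", "content": "Summarize this changelog in three bullets."}],
)
print(response.choices[0].message.content)

Run whatever prompts your app already sends; the point is that reporting happens around your usual calls, not instead of them.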
Then get charts showing:
- ✅ Developer Time Saved
- ✅ Token Usage and Cost
- ✅ Temperature Trends
- ✅ Prompt Success Rate
Your prompts are never stored. Reports are generated anonymously.

[Screenshot: one-line CLI run]

[Screenshot: charts showing Developer Time Saved, Cost, and Time Distribution]
⚙️ Getting Started
Install and run in minutes:
> pip install docoreai
(add your API token to the .env file)
> docoreai start
> docoreai dash
The dashboard opens automatically in your browser.
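For reference, a minimal .env might look like the sketch below. The variable names are assumptions based on the standard OpenAI and Groq SDKs, not something confirmed here; check the DoCoreAI README for the exact keys it expects.

# .env (example only; key names are assumptions)
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk_...

Keep this file out of version control, since it holds live credentials.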
🔐 About Privacy
We respect your data. DoCoreAI never logs your raw prompts. Only anonymized usage metrics are collected, and only if you opt in.
The CLI asks you directly:
Do you want to help improve DoCoreAI by enabling anonymous analytics? (y/N)
That’s it. Full transparency. Toggle off anytime.
🙏 Like it? Help us out
If you found this useful, please share this post on Reddit, Twitter, Hacker News, or with a friend building AI tools. Let’s make prompt optimization measurable — not magic.