DoCoreAI Docs – AI Prompt Optimization for Developers
Welcome to DoCoreAI – your #1 tool for AI prompt optimization. In this guide, you’ll learn how to install, configure, and leverage DoCoreAI to reduce LLM costs and gain prompt analytics as a developer.
Architecture: Server + Client for AI Prompt Optimization
Server: Account Registration & Token Generation
Sign up at docoreai.com → verify your email → receive your API token. This token authenticates your client and enables analytics.
Client (Python SDK)
The client connects to the Server using your token; all prompt events are sent securely for analytics and optimization.
Installing the DoCoreAI Client for Prompt Analytics
Install DoCoreAI Python SDK
pip install docoreai
Set Up Environment Variables
OPENAI_API_KEY=your_openai_key
DOCOREAI_TOKEN=your_docoreai_token
MODEL_PROVIDER=openai
MODEL_NAME=gpt-4
DOCOREAI_API_URL=https://docoreai.com
EXTERNAL_NETWORK_ACCESS=False
DOC_DEBUG_PRINT=true
ALLOW_SYSTEM_MESSAGE_INJECTION=true
DOCOREAI_LOG_ONLY=true
DOCOREAI_ENABLE=true
DOC_SYSTEM_MESSAGE=You are a helpful assistant.
DOCOREAI_LOG_HOST=127.0.0.1
DOCOREAI_LOG_PORT=5678
Supports Python 3.7+. Works with LangChain, FastAPI, Django, and others.
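Before moving on, a quick sanity check can confirm the variables are visible to Python; the examples below load them the same way via python-dotenv. This snippet reports which keys are set without printing secret values:
from dotenv import load_dotenv
import os

# Load variables from the .env file in the current directory
load_dotenv()

# Report presence only, never the secret values themselves
for key in ("OPENAI_API_KEY", "DOCOREAI_TOKEN", "MODEL_PROVIDER", "MODEL_NAME"):
    print(key, "is set" if os.getenv(key) else "is MISSING")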
Usage Modes: Run DoCoreAI via CLI, Test, or Python Import
Ensure your .env file is configured before proceeding.
1. CLI Mode
- Plug-and-play method (fully automatic): launch the local engine with docoreai start, then open a new terminal window and run your existing app that prompts the LLM (e.g., OpenAI). That's it; logging starts automatically.
docoreai start # Launch the local engine
- For testing purposes, run docoreai test and then, from Postman, Hoppscotch.io, or curl, send a prompt in the JSON format below to http://127.0.0.1:8001/intelligence_profiler:
docoreai test
{
"user_content": "Invent a brand-new type of sport and describe its rules.",
"role": "Creative Thinker"
}
Note: The "role" field sets the LLM’s persona (similar to an agentic AI role) to guide the response style.
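If you prefer scripting the test call over using an API client, here is a minimal Python sketch with the requests library, assuming the engine started by docoreai test is listening on 127.0.0.1:8001 as above:
import requests  # pip install requests

payload = {
    "user_content": "Invent a brand-new type of sport and describe its rules.",
    "role": "Creative Thinker",
}

# POST the prompt to the locally running intelligence profiler endpoint
resp = requests.post("http://127.0.0.1:8001/intelligence_profiler", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())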
2. Library / Import Mode for Developers
Example 1: With a specific AI role
from dotenv import load_dotenv
load_dotenv()

# Import from your client
from docore_ai.model import intelligence_profiler

def main():
    prompt = "Why is DoCoreAI the best optimizer?"
    ai_role = "AI Researcher"

    print("Running intelligence_profiler()...")
    try:
        result = intelligence_profiler(user_content=prompt, role=ai_role)
        print("\nResult from DoCoreAI intelligence_profiler:\n")
        print(result)  # The profiler's response
    except Exception as e:
        print("❌ Error while running profiler:", str(e))

if __name__ == "__main__":
    main()
Example 2: Normal prompt call with the default role
import os
import time

from dotenv import load_dotenv

# 🔄 Load environment variables from .env
load_dotenv()

# Get provider and model info
provider = os.getenv("MODEL_PROVIDER", "openai").lower()
model = os.getenv("MODEL_NAME", "gpt-4")
count = 4  # or: int(os.getenv("TEST_COUNT", "3"))

# Initialize the client based on the provider
if provider == "openai":
    from openai import OpenAI
    client = OpenAI()
elif provider == "groq":
    from groq import Groq
    client = Groq(api_key=os.getenv("GROQ_API_KEY", ""))
else:
    raise ValueError(f"Unsupported MODEL_PROVIDER: {provider}")

# 🔁 Run test loop
for i in range(count):
    print(f"\n🔁 Request #{i + 1}")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "does chatgpt have agentic ai agents that can sing?"}],
    )
    print("Response:", response.choices[0].message.content.strip())
    time.sleep(1)
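Because the client works with FastAPI (noted above), here is a minimal sketch of exposing intelligence_profiler as a web endpoint. The route name and request schema are illustrative, not part of the DoCoreAI SDK; the profiler call itself mirrors Example 1:
from dotenv import load_dotenv
load_dotenv()

from fastapi import FastAPI
from pydantic import BaseModel

from docore_ai.model import intelligence_profiler

app = FastAPI()

class ProfileRequest(BaseModel):
    # Illustrative request schema, not part of the DoCoreAI SDK
    user_content: str
    role: str = "Helpful Assistant"

@app.post("/profile")  # illustrative route name
def profile(req: ProfileRequest):
    # Same call as Example 1, delegated to DoCoreAI's profiler
    return intelligence_profiler(user_content=req.user_content, role=req.role)
Run it with an ASGI server, e.g. uvicorn app:app --reload, assuming the file is saved as app.py.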
View Local Analytics
Use `docoreai show` to analyze local prompt sessions, identify waste, and profile prompt behavior.
docoreai show
Open the Dashboard
Run `docoreai dash` or visit your cloud dashboard to see overall insights.
docoreai dash
Local Analytics: Optimize Prompts & Reduce LLM Cost
Run docoreai show for immediate insights:
- Token usage breakdown
- Prompt bloat detection
- Intelligence profiling (creativity, precision, reasoning)
These insights help developers optimize prompt design and reduce API spend.
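For intuition on what prompt bloat detection flags, here is an illustrative sketch (a rough heuristic, not DoCoreAI's internal logic) that compares a verbose prompt against a trimmed equivalent using an approximate 4-characters-per-token estimate:
def estimate_tokens(text: str) -> int:
    # Crude approximation: ~4 characters per English token
    return max(1, len(text) // 4)

verbose = (
    "Please, if you would be so kind, could you possibly provide me with "
    "a detailed and thorough summary of the following article text?"
)
trimmed = "Summarize the following article:"

saved = estimate_tokens(verbose) - estimate_tokens(trimmed)
print(f"Approximate tokens saved per call: {saved}")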
📈 Dashboard Reports: Analyze GPT Prompt Efficiency
Access your reports via docoreai dash or at docoreai.com/dashboard.
- Usage charts & trends
- Token cost savings
- Intelligence metric timelines
Gain prompt analytics for developers and see how DoCoreAI helps with LLM cost reduction.
💬 Feedback & Community Support
Your feedback matters! Share your test results, feature requests, and bug reports via GitHub Discussions, Reddit, or the HuggingFace forum.
Common Issues
- My token isn’t working: Ensure your email is verified and the token is copied exactly into DOCOREAI_TOKEN.
- Telemetry is disabled in analytics: Check that TELEMETRY_ENABLED=true is set in your `.env` file.
Upgrading Versions
Latest major release: 1.0.1 (Aug 2025). Upgrade with pip install --upgrade docoreai.
📖 Glossary of Prompt Optimization Terms
- AI prompt optimization: Techniques to reduce prompt length and improve efficiency.
- Prompt analytics: Insights into how prompts behave and consume tokens.
- LLM cost reduction SaaS: DoCoreAI’s cloud-based approach to minimizing API spend.
❓ DoCoreAI FAQ – Prompt Optimization & Troubleshooting
What is DoCoreAI?
DoCoreAI is an AI prompt optimization tool that provides analytics, token profiling, and dashboards to help developers reduce LLM costs and improve prompt efficiency.
How do I install the DoCoreAI client?
You can install the DoCoreAI client using pip: pip install docoreai. After installation, set your token and OpenAI key in a .env file.
How do I generate my DoCoreAI token?
Register at docoreai.com, verify your email, and you will receive your personal API token to use with the client.
What are the usage modes of DoCoreAI?
You can use DoCoreAI via CLI commands like docoreai start or docoreai test, or import it as a Python library in your own code.
Where can I view analytics and reports?
You can use docoreai show for local analytics or docoreai dash to open the cloud dashboard on docoreai.com.