DoCoreAI Documentation
DoCoreAI is a privacy-first observability toolkit for developers building and operating GenAI applications. It helps you understand how prompts, models, and execution patterns impact cost, latency, and reliability — without storing prompt content or outputs.
This documentation walks you through installing DoCoreAI locally, configuring access, and interpreting the telemetry metrics that surface inefficiencies and behavioral drift in production AI systems.
⚡ Simple setup to run DoCoreAI locally
Get started in under a minute. DoCoreAI runs alongside your application and begins collecting cost and reliability telemetry without inspecting prompts or outputs.
1. Install the client: pip install docoreai
2. Create a free account: https://docoreai.com/register
3. Generate an access token: https://docoreai.com/generate-token
4. Add the token to your settings: docoreai config
5. Start telemetry collection: docoreai start
6. Run your application as usual. Metrics appear automatically in the dashboard: https://docoreai.com/dashboard
Architecture: Local Client + Cloud Control Plane
Cloud Server
The DoCoreAI server handles account registration, access control, and aggregated reporting. It receives structured telemetry metrics only — never raw prompts or model outputs.
Tokens generated via the dashboard are used to authenticate client instances and associate metrics with your account.
Local Client (Python SDK)
The client runs alongside your GenAI application and observes execution signals such as token usage, latency, retries, and model selection.
All signal extraction happens locally, ensuring sensitive prompt data never leaves your environment.
Installing the DoCoreAI Client for Prompt Analytics:
Install DoCoreAI Python SDK
pip install docoreai
Set Up Environment Variables (deprecated) - use the new settings via 'docoreai config'
OPENAI_API_KEY=your_openai_key
DOCOREAI_TOKEN=your_docoreai_token
MODEL_PROVIDER=openai
MODEL_NAME=gpt-4
DOCOREAI_API_URL=https://docoreai.com
EXTERNAL_NETWORK_ACCESS=False
DOC_DEBUG_PRINT=true
ALLOW_SYSTEM_MESSAGE_INJECTION=true
DOCOREAI_LOG_ONLY=true
DOCOREAI_ENABLE=true
DOC_SYSTEM_MESSAGE=You are a helpful assistant.
DOCOREAI_LOG_HOST=127.0.0.1
DOCOREAI_LOG_PORT=5678
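Since these variables are plain strings, application code that reads them directly needs to coerce the true/false flags. A minimal sketch, assuming only the variable names listed above; the `env_flag` helper is our own illustration, not part of the DoCoreAI SDK:

```python
import os

def env_flag(name, default=False):
    """Interpret common truthy strings ('true', '1', 'yes') as True."""
    return os.getenv(name, str(default)).strip().lower() in ("true", "1", "yes")

# Collect a few of the settings documented above into one dict
settings = {
    "provider": os.getenv("MODEL_PROVIDER", "openai"),
    "model": os.getenv("MODEL_NAME", "gpt-4"),
    "enabled": env_flag("DOCOREAI_ENABLE", True),
    "log_only": env_flag("DOCOREAI_LOG_ONLY", True),
    "external_network": env_flag("EXTERNAL_NETWORK_ACCESS", False),
}
```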
Supports Python 3.7+. Works with LangChain, FastAPI, Django, and others.
Usage Modes: Run DoCoreAI via CLI, Test, or Python Import
Ensure your settings are configured before proceeding:
1. CLI Mode
- Plug-and-play method (fully automatic): launch the local engine, then open a new terminal window and run your existing app that prompts the LLM (e.g., OpenAI). That's it: logging starts automatically.

  docoreai start  # Launch the local engine

- For testing purposes, run docoreai test, then from Postman, Hoppscotch.io, or curl, send a prompt in the JSON format below to http://127.0.0.1:8001/intelligence_profiler:
{
"user_content": "Invent a brand-new type of sport and describe its rules.",
"role": "Creative Thinker"
}
Note: The "role" sets the LLM’s persona (similar to agentic AI) to guide the response style.
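The same request can be sent from Python. This is a minimal sketch using only the standard library; the endpoint URL and JSON shape are taken from the example above, while `build_payload` and `profile_prompt` are hypothetical helpers of our own, not part of the SDK:

```python
import json
from urllib import request

def build_payload(user_content, role):
    # JSON shape expected by the local test endpoint (see example above)
    return {"user_content": user_content, "role": role}

def profile_prompt(user_content, role,
                   url="http://127.0.0.1:8001/intelligence_profiler"):
    # POST the payload to the engine started with `docoreai test`
    data = json.dumps(build_payload(user_content, role)).encode("utf-8")
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `profile_prompt` requires the local engine started by `docoreai test` to be running on port 8001.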
2. Library / Import Mode for Developers
Example 1: With a specific AI role
from dotenv import load_dotenv

load_dotenv()

# Import the profiler from the DoCoreAI client
from docore_ai.model import intelligence_profiler

def main():
    prompt = "Why is DoCoreAI the best optimizer?"
    ai_role = "AI Researcher"
    print("Running intelligence_profiler()...")
    try:
        result = intelligence_profiler(user_content=prompt, role=ai_role)
        print("\nResult from DoCoreAI intelligence_profiler:\n")
        print(result)
    except Exception as e:
        print("❌ Error while running profiler:", str(e))

if __name__ == "__main__":
    main()
Example 2: Normal prompt call with the default role
import os
import time

from dotenv import load_dotenv

# Load environment variables from .env
load_dotenv()

# Get provider and model info
provider = os.getenv("MODEL_PROVIDER", "openai").lower()
model = os.getenv("MODEL_NAME", "gpt-4")
count = int(os.getenv("TEST_COUNT", 4))

# Initialize the client based on the configured provider
if provider == "openai":
    from openai import OpenAI
    client = OpenAI()
elif provider == "groq":
    from groq import Groq
    client = Groq(api_key=os.getenv("GROQ_API_KEY", ""))
else:
    raise ValueError(f"Unsupported MODEL_PROVIDER: {provider}")

# Run the test loop
for i in range(count):
    print(f"\n🔁 Request #{i + 1}")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Does ChatGPT have agentic AI agents that can sing?"}],
    )
    print("Response:", response.choices[0].message.content.strip())
    time.sleep(1)
View Local Analytics
Use `docoreai show` to analyze local prompt sessions, identify waste, and profile prompt behavior.
docoreai show
Open the Dashboard
Run `docoreai dash` or visit your cloud dashboard to see overall insights.
docoreai dash
Local Analytics: Optimize Prompts & Reduce LLM Cost
Run docoreai show for immediate insights:
- Token usage breakdown
- Prompt bloat detection
- Intelligence profiling (creativity, precision, reasoning)
These insights help developers optimize prompt design and reduce API spend.
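To make the cost angle concrete, here is a back-of-the-envelope estimate of how trimming prompt bloat reduces spend. The per-1k-token prices below are made-up placeholders for illustration, not real provider pricing or DoCoreAI output:

```python
# Illustrative arithmetic only: prices are placeholder values.
def estimate_cost(prompt_tokens, completion_tokens,
                  in_price_per_1k=0.01, out_price_per_1k=0.03):
    # Cost = input tokens at the input rate plus output tokens at the output rate
    return (prompt_tokens / 1000) * in_price_per_1k \
         + (completion_tokens / 1000) * out_price_per_1k

before = estimate_cost(2000, 500)   # bloated prompt
after = estimate_cost(1600, 500)    # 20% fewer prompt tokens
savings = before - after
```

Because input tokens are billed on every call, even a modest reduction in prompt length compounds across high-volume workloads.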
📈 Dashboard Reports: Analyze GPT Prompt Efficiency
Access your reports via docoreai dash or at docoreai.com/dashboard.
- Usage charts & trends
- Token cost savings
- Intelligence metric timelines
Gain prompt analytics for developers and see how DoCoreAI helps with LLM cost reduction.
💬 Feedback & Community Support
Your feedback matters! Share your test results, feature requests, and bug reports via GitHub Discussions, Reddit, or the HuggingFace forum.
❓ FAQ – Troubleshooting & Updates
Common Issues
- My access token isn't working
  Ensure your email address is verified and that the token is copied exactly into the DOCOREAI_TOKEN environment variable.
- Telemetry appears disabled in analytics
  Confirm that telemetry is enabled in your docoreai config settings (TELEMETRY_ENABLED=true).
Upgrading Versions
To upgrade to the latest release, run:
pip install --upgrade docoreai
Latest major release: v1.0.1 (August 2025)
📖 Glossary of Prompt Optimization Terms
- AI prompt optimization: Techniques to reduce prompt length and improve efficiency.
- Prompt analytics: Insights into how prompts behave and consume tokens.
- LLM cost reduction SaaS: DoCoreAI’s cloud‑based approach to minimizing API spend.
❓ Frequently Asked Questions
What is DoCoreAI?
DoCoreAI is a privacy-first observability toolkit for GenAI applications. It helps developers understand how prompts, models, and execution patterns affect cost, latency, and reliability — without storing prompt content or model outputs.
How do I install the DoCoreAI client?
Install the DoCoreAI Python client using pip:
pip install docoreai
After installation, add your access token to your settings via docoreai config.
No changes to your application code are required to begin collecting telemetry.
How do I generate an access token?
Create a free account at docoreai.com, verify your email, and generate an access token from the dashboard. This token authenticates your local client and associates telemetry with your account.
How can I run DoCoreAI?
DoCoreAI can be run using the CLI or embedded into your Python workflow:
- docoreai start: start local telemetry collection
- docoreai test: run test executions and inspect metrics
Where can I view analytics and reports?
Telemetry can be explored locally or via the cloud dashboard:
- Local — Inspect metrics directly in your environment
- Dashboard — https://docoreai.com/dashboard