Account Hygiene
Why Your AI Gets Worse the More You Use It
There's a pattern emerging in organizations using AI: the people who use it the most are getting the worst results.
Not because the model degrades. Because their accounts do.
Every vague prompt. Every scattered conversation. Every context switch without cleanup. Every half-formed question you throw at Claude or ChatGPT is teaching it something about what you want.
And if what you're showing it is chaos, chaos is what it learns to give you back.
This is the personalization trap: the features designed to make AI more helpful—memory, conversation history, learned preferences—become the mechanism by which you accidentally poison your own tool.
You're not just using AI poorly. You're training it to be bad at helping you.
The Invisible Degradation
You start a new Claude account. The first few conversations are crisp. You ask a question, get a focused answer, iterate cleanly. It feels like working with a sharp colleague.
Three months later, the same model gives you meandering walls of text. It hedges unnecessarily. It misinterprets simple requests. It seems… dumber.
But the model hasn't changed. Your account has.
Here's what happened in those three months:
Weeks 1–2: You used AI for well-defined tasks. The AI learned: this user wants concise, task-focused outputs.
Weeks 3–6: You got comfortable. Started using it for brainstorming, vague explorations. Questions became less crisp. The AI learned: this user wants expansive, exploratory responses.
Weeks 7–12: You're now using it as a rubber duck, a therapist, a search engine, a writing partner, and a task manager—all in the same conversation thread. The AI learned: this user doesn't have clear intent. Guess broadly and stay safe.
Month 4: Every output is now a 500-word hedge. You didn't get a worse model. You taught your account to be worse.
The Personalization Trap
AI companies marketed memory and personalization as pure upside: "The more you use it, the better it knows you!"
What they didn't mention: it learns your bad habits as readily as your good ones.
If you show up vague, it learns vague is what you want. If you switch contexts constantly, it learns to stay shallow. If you never correct it, it learns its first drafts were good enough. If you tolerate hedging, it learns to hedge harder.
This is personalization working exactly as designed. The trap is that matching your exhibited preferences and actually being useful aren't the same thing.
The feedback loop tightens:
1. You ask a vague question
2. AI gives a vague answer (matching your historical pattern)
3. You accept it or ask an even vaguer follow-up
4. AI updates: "confirmed, this user wants vague"
5. Next session starts from an even vaguer baseline
This is recursive degradation at the individual level. Not the model training on itself—your account training on your worst habits.
How Poisoning Happens
The mechanism is simple: AI systems with memory optimize for consistency with your past behavior.
Scenario 1: The Scope Creep Spiral
Monday: "Help me draft a product strategy." Wednesday: "Also, can you help me think through team org structure?" Friday: "And maybe some thoughts on our Q3 roadmap?"
The AI learns: this user starts with one thing but actually wants help with everything adjacent. Six weeks later, you ask for a simple competitive analysis and get a 12-paragraph exploration of industry trends, organizational psychology, and strategic frameworks—none of which you asked for.
Scenario 2: The Politeness Tax
You're conflict-averse. You don't correct the AI harshly. When it misses, you say "that's interesting, but maybe also consider…" instead of "no, wrong direction."
The AI learns: hedged outputs are safe. It stops committing. Your account is now optimized for producing options, not decisions.
Scenario 3: The Context Swamp
You never start fresh conversations. Every new request gets tacked onto a thread already covering five different projects.
The AI learns: context is infinite and boundaries don't matter. Your working memory is now the AI's working memory, and it's a swamp.
Scenario 4: The Approval Trap
You're busy. When the AI produces something 80% right, you accept it and move on. You never loop back with "here's what I actually used vs. what you gave me."
The AI learns: 80% was good enough. Your account is now trained to produce "good enough" instead of "actually good."
Why It's Insidious
This degradation is invisible because:
It's gradual. No single interaction breaks anything. You don't notice until you're six months in and wondering why the same model everyone raves about gives you slop.
It's personalized. Your colleague's Claude works great. Yours doesn't. Same model, different account history.
It's sticky. Memory and personalization were sold as features. Turning them off feels like losing capability.
It's self-reinforcing. Once degradation starts, you compensate by being more vague, which teaches the AI to be more vague, which makes you... you see the problem.
It's attributed to the model, not to you. When outputs degrade, you think "the model got worse," not "I poisoned my account."
The Hygiene Framework
Fixing this requires treating AI interaction as a discipline, not a convenience.
1. You Set the Intent
Every interaction should start with clarity about what you actually want.
Not: "Help me think about our pricing strategy."
Instead: "I need a 3-option pricing model for a B2B SaaS product. Constraints: $50-500/mo range, value metric should be usage-based, comparable to Slack/Notion. Output: table with rationale."
Set the mode explicitly: "Draft mode: get ideas out, polish later" or "Decision mode: pick one, justify it."
2. You Mark the Boundaries
Tell the AI what's out of scope as clearly as what's in scope.
"Focus on technical feasibility, not market dynamics." "I only care about the next 90 days, not long-term vision."
State what's at stake: If this is a draft you'll heavily edit, say so. If this goes to a board, say so. The standard the output has to meet changes.
3. You Reduce the Noise
Every conversation is teaching the AI something about you. Make sure it's teaching the right thing.
Cut noise, keep signal. Don't dump five questions in one prompt. Don't add conversational filler that dilutes your actual request.
Use dense anchors. Instead of "can you help me understand how authentication works?" try "Explain the OAuth 2.0 authorization code flow with PKCE for a React SPA calling a Node.js API."
4. You Shape the Exchange
Don't accept the first output if it's not what you need. But also don't just say "try again."
Correct fast, not hard: "Too abstract. Give me the three actual SQL queries I'd run." "You're hedging. Pick the best option and defend it."
Fast correction teaches the AI your actual standard. Accepting mediocre output teaches it mediocre is fine.
5. You Maintain Coherence
Stay in one domain. Don't ask for legal advice, then marketing copy, then Python debugging in the same thread.
Stabilize your vocabulary. If you call something "user activation" in one message and "onboarding engagement" in the next, the AI doesn't know if those are the same thing.
Iterate in partnership. You set direction, AI drafts. You refine direction, AI redrafts. Partnership means clear handoffs.
6. You Build the Environment
Turn personalization into gravity, not chaos. Be consistent in how you show up. If you want concise outputs, consistently ask for concise outputs.
Keep long threads when they matter (iterating on a complex document). Reset when they don't (switching projects).
Treat context as shared infrastructure. Everything in a conversation is context the AI has to manage. Messy threads = messy outputs.
Why Leaders Need to Care
Your team is doing this right now.
Junior employees are using AI for everything from drafting emails to building financial models. They're learning how to interact with these tools through trial and error. And most of them are learning badly.
They're asking vague questions, accepting first drafts, mixing contexts, and blaming the tool when it produces slop they trained it to produce.
If you don't model disciplined AI interaction, your team won't develop it.
And the cost isn't just bad outputs. It's:
Loss of judgment — accepting AI output without scrutiny atrophies the muscle
Accountability drift — "the AI suggested it" becomes a shield
Degrading standards — slop becomes the baseline
Compounding inefficiency — bad outputs create more work downstream
The illusion of productivity — generating text fast isn't producing value
The Leadership Discipline
If AI is now a standard tool in your organization, interaction hygiene needs to be a standard skill.
Model it yourself. Show your team what good looks like. Share examples of clear prompts, structured exchanges, iterative refinement.
Teach it explicitly. Don't assume people will figure it out. Run workshops. Build internal documentation on "how we use AI here."
Create norms. Start fresh threads for new projects. State intent and boundaries upfront. Correct fast when output misses. Keep authorship even when AI drafts.
Audit outputs. If someone's AI outputs are consistently vague, it's probably an account hygiene issue, not a capability issue.
Make judgment non-negotiable. AI can draft. Humans decide. AI can suggest. Humans verify. AI can explore. Humans commit.
The Payoff
Get this right, and AI becomes a compound advantage:
Faster convergence. You spend less time wrestling with vague outputs because you trained your account to give you precise ones.
Fewer misfires. The AI knows your boundaries, your vocabulary, your standards. It's not guessing.
Cleaner thinking. Forcing yourself to set clear intent and explicit boundaries makes you sharper.
Better decisions. When you treat AI as a drafting partner, not an oracle, you maintain the judgment muscle.
Coherence as a two-player game. The AI gets better at helping you the more you use it—but only if you're teaching it the right lessons.
The Sober Truth
You're not just using AI. You're training it.
Every interaction teaches it something about what you want, what you tolerate, what you value.
If you show up vague, it learns vague. If you show up scattered, it learns scattered. If you show up disciplined, it learns discipline.
The model doesn't change. Your account does.
And once your account is poisoned—optimized for your worst habits instead of your best needs—it's hard to dig out. You can reset, but you'll just poison the next one if you don't change how you show up.
The fix isn't better prompts. It's better discipline.
Set intent. Mark boundaries. Reduce noise. Shape the exchange. Maintain coherence. Build a clean environment. Keep judgment human.
This isn't about mastering AI. It's about not letting AI master you by becoming a mirror of your own sloppiness.
You get the AI you show up for.
Show up sharp, and it stays sharp. Show up scattered, and it amplifies the chaos.
The choice is yours. But the consequences compound fast.
Winter 2026