There’s a version of AI integration that lives in blog posts and conference talks. It goes like this: “We implemented AI and now everything runs automatically.”
Here’s what actually happens when you try to do that: you build something that sounds smart, it breaks on edge cases, you spend three weeks fixing it, and now you’re deep in the weeds maintaining what was supposed to save you time.
I know this because I built the wrong version first.
Then I spent the last year building the right version. For myself.
What follows is the real system I use to run Glenmont Consulting. It’s the same marketing infrastructure I build for clients, tested on my own business first. Not the theoretical version. Not the inspiring version. The actual one, with all its practical constraints and human dependencies baked in. The one that scales.
The thing nobody tells you: AI doesn’t replace decisions. It handles context. Decision-making still belongs to a human. Once you accept that, everything else becomes clear.
The Problem That Started This
By late 2025, I was drowning in tasks that weren’t actually complex but were consuming time out of all proportion to their importance.
Inbox processing. It took 45 minutes every morning to read through 50-plus emails, figure out which ones mattered, classify them by client tier, and decide what needed immediate action versus what could wait until the afternoon.
Content production. I had a framework for writing articles that made sense and performed well. But turning that framework into an actual draft took three hours of staring at a blank page.
Note capture. I’d send myself quick ideas and observations from my phone throughout the day. Links, screenshots, voice memos, half-formed thoughts. But they just piled up in my inbox. No system for turning them into actionable insights.
Context switching. I managed 33 client folders with individual brand guides, positioning docs, keyword strategies, and decision histories. Every time I switched between clients, I had to manually load all that context into my head. It took 10 minutes per client switch.
None of these were hard problems. All of them were friction problems. And friction compounds.
Here’s what I realized: the problem wasn’t that I needed to work faster. The problem was that I was doing repetitive context-loading work instead of making decisions.
So I built a system to handle the context part. And kept the decision part for myself.
System #1: The Morning Digest
This runs every morning at 6:37 AM on a Mac Mini sitting under my desk.
Here’s what it does:
- Scans my inbox for the last 24 hours of emails.
- Classifies each email into tiers based on client importance, urgency flags, and message type (invoice, decision needed, FYI, etc.).
- Generates a structured digest that shows Tier 1 (immediate action) first, then Tier 2 (same day), then everything else.
- Includes a summary of what happened in each tier — not the full email, just the actionable part.
- Sends it to me as an email at 6:37 AM.
- Deletes nothing. All original emails stay in my inbox.
The digest takes about 2 minutes to read. What used to take 45 minutes.
What’s important: this system doesn’t make decisions. It surfaces information. It says “here are the 8 things that need your attention today” instead of making me scan 50 emails to find the 8 things.
I still read the full emails when I need context. But 70 percent of the time, the digest summary is enough to know whether something needs me today or next week.
The implementation sits in a Claude skill that runs on a scheduled task. It reads my authentication token, connects to Gmail, processes the recent messages, and returns markdown. Another skill converts that into an email and sends it to my personal inbox.
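To make the classification step concrete, here is a rough sketch. Everything in it — the `Email` type, the `classify` helper, the domain and keyword rules — is illustrative, not my actual implementation; in practice the tier rules come from the client map and the judgment call is made by the model, not hard-coded keywords.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Illustrative rules only: real tier assignments come from the client map,
# and the actual call is made by the model, not by keyword matching.
TIER1_DOMAINS = {"bigclient.com"}  # hypothetical Tier 1 client domain
URGENT_WORDS = {"urgent", "asap", "overdue"}

def classify(email: Email) -> int:
    """Return 1 (immediate action), 2 (same day), or 3 (everything else)."""
    domain = email.sender.split("@")[-1].lower()
    text = f"{email.subject} {email.body}".lower()
    if domain in TIER1_DOMAINS or any(w in text for w in URGENT_WORDS):
        return 1
    if "decision" in text or "invoice" in text:
        return 2
    return 3

def digest(emails: list[Email]) -> str:
    """Render a markdown digest, Tier 1 first; original emails are untouched."""
    lines = []
    for tier in (1, 2, 3):
        matches = [e for e in emails if classify(e) == tier]
        lines.append(f"## Tier {tier} ({len(matches)})")
        lines.extend(f"- {e.sender}: {e.subject}" for e in matches)
    return "\n".join(lines)
```

The shape is the point: classify everything, group by tier, surface the count and the one-line summary, and let the human open the full email only when needed.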
Before: 45 minutes, manual scanning, inconsistent prioritization, decisions made while in email-reading mode
After: 2 minutes, automated classification, consistent tiers, context ready for decision-making
Cost to run: $0.12 per day. The Mac Mini cost $700 once, in 2024.
System #2: The Content Engine
Content production is where the bigger leverage showed up.
I had a framework that worked. I knew what blog articles performed well. I had documented my positioning, my audience, my keyword strategy. I had templates. I had past articles as reference. Everything was there.
But starting with a blank page still took three hours.
Then I realized something: I have all the context documented. I have positioning files. I have a content brief for 12 key topics. I have a keyword strategy. I have past articles. Why am I not using that as the system input?
So I built a skill that:
- Takes a topic (passed by me as text).
- Reads my brand positioning file, voice profile, keyword strategy, and past articles.
- Generates a full article draft based on all that context.
- Runs every Tuesday at 9 AM.
- Delivers a 2,000-word draft to me by Tuesday at 10 AM.
The draft isn’t final. It needs editing: cutting fluff, tightening the lead, adding a specific story or detail, making sure the examples fit. That takes 20 minutes, not three hours.
The difference is context. When the system has access to my positioning, my tone, my keyword strategy, and my past work, it can generate something that’s 80 percent of the way there. My editing passes turn it into something I’m happy to publish.
The skill chains together:
- A prompt that reads my brand files
- A content strategy module that selects the right angle based on my positioning
- A draft generation module that writes the article
- A skill that schedules it and emails it to me
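The first link in that chain — reading the brand files into a drafting prompt — is conceptually simple. A minimal sketch, assuming the brand file names from my client folders; `build_draft_prompt` is a made-up helper, not the actual skill:

```python
from pathlib import Path

# File names match the brand structure in my client folders; the helper
# itself is a simplified stand-in for the real skill.
BRAND_FILES = ["positioning.md", "voice-profile.md", "keyword-strategy.md"]

def build_draft_prompt(topic: str, brand_dir: Path) -> str:
    """Concatenate available brand files into one drafting prompt.
    Missing files are skipped, so a half-built client folder still works."""
    sections = [
        f"# {name}\n{(brand_dir / name).read_text()}"
        for name in BRAND_FILES
        if (brand_dir / name).exists()
    ]
    return (
        "\n\n".join(sections)
        + f"\n\nUsing the context above, draft a 2,000-word article on: {topic}"
    )
```

The prompt is mostly context, and the topic is one line at the end. That ratio is why the drafts come back 80 percent of the way there.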
Again: the system doesn’t publish anything. It prepares context. I make the decision to edit, add details, and publish.
The results so far: 22 articles written and published, an average draft quality of 70 percent, and 20 minutes of editing per article versus 180 minutes from scratch.
System #3: The Mobile Capture System
The third system handles ad-hoc thinking.
I’d send myself notes from my phone throughout the day. Ideas after client calls, links to competitor content, screenshots of interesting ads, voice memos on the drive home. They were useful thoughts, but they just accumulated in my inbox. No system for acting on them.
So I built a simple capture pipeline: I send myself a note from my phone using a specific format the system recognizes. The morning digest picks it up automatically and processes it.
The processing does a few things:
- Categorizes the note (strategic thought, action item, reference, question for research).
- Extracts the actionable component.
- Adds context from my decision history and brand files if relevant.
- Saves the processed note to my Obsidian vault under the right project folder.
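The routing logic can be sketched roughly like this. The regex rules and destination map are illustrative stand-ins; in practice the model does the categorizing, with my memory files as context:

```python
import re

# Illustrative rules; the real categorization is done by the model.
RULES = [
    ("question",  re.compile(r"\?\s*$")),
    ("action",    re.compile(r"^(todo|call|email|send)\b", re.IGNORECASE)),
    ("reference", re.compile(r"https?://")),
]

DESTINATIONS = {
    "question":  "memory/decisions.md",
    "action":    "memory/active-projects.md",
    "reference": "reference/links.md",   # hypothetical vault folder
    "strategic": "memory/decisions.md",
}

def categorize(note: str) -> str:
    """Label a raw captured note; anything unmatched is a strategic thought."""
    for label, pattern in RULES:
        if pattern.search(note.strip()):
            return label
    return "strategic"

def vault_path(note: str) -> str:
    """Route a note to the right file in the vault."""
    return DESTINATIONS[categorize(note)]
```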
For example:
Raw note: “Are we positioning correctly to PE-backed companies or should we focus on health tech instead?”
Processed output: Saved to /memory/decisions.md with context about current positioning, recent client wins, and market feedback. Tagged as a decision point. Linked to the positioning.md file so I can revisit the full context when I’m ready to decide.
The system doesn’t decide. It captures the thought, contextualizes it, and makes it findable later.
This matters because patterns emerge. When I’ve sent myself notes about the same concern three times, that repetition is itself a signal. A system that just lets notes accumulate makes you miss it.
System #4: The Memory Architecture
This one sounds boring. It’s actually the foundation for everything else.
I maintain five persistent files in /memory/:
- decisions.md — Every strategic choice I made and why. This gets updated every week and read by every skill that needs context about my positioning or priorities.
- sessions.md — A log of what I worked on in each session. When did I build this? What was I thinking? What didn’t work? When I return to a problem three months later, I can see that I already tried approach B, it didn’t work, here’s why.
- preferences.md — Personal working style. I don’t use certain tools because I find them distracting. I prefer certain meeting formats. I like written updates before calls. My preferences are documented, and new systems respect them.
- client-map.md — The hierarchy of all my clients. Tier 1, Tier 2, Tier 3. Who owns what. Insurance cluster is five companies under one ownership. This file gets read by every skill that needs to know “is this client important to focus on now.”
- active-projects.md — What I’m currently working on. What’s due soon. What’s in planning mode. Skills can check this to avoid suggesting things that conflict with current focus.
Here’s the architecture: Every skill reads these files at the start of execution. It loads the context. Then it makes decisions based on that context.
When I update one of these files, all future skill runs see the updated information. The system is always using the current state of my intentions, not a stale configuration from last month.
The memory system gets updated at the end of every working session. It takes 10 minutes. And it means every skill I build in the future has access to all the context it needs.
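In code terms, “every skill reads these files at the start” is just a loader that runs at the top of each skill. A minimal sketch; the `load_memory` helper is hypothetical, but the file names are the real ones:

```python
from pathlib import Path

MEMORY_FILES = ["decisions.md", "sessions.md", "preferences.md",
                "client-map.md", "active-projects.md"]

def load_memory(memory_dir: Path) -> dict[str, str]:
    """Read every persistent memory file that exists. Because skills call
    this at the start of each run, an edit to any file is visible to the
    very next execution: there is no stale cached configuration."""
    return {
        name: (memory_dir / name).read_text()
        for name in MEMORY_FILES
        if (memory_dir / name).exists()
    }
```

The design choice is that skills read files rather than holding their own configuration. Update a file once and every future run sees it.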
This is the thing that makes everything else compound.
Step 1: Read Context (memory files, brand guides, client map) → Step 2: Process Information (emails, content, notes) → Step 3: Apply Decision Framework (priorities, preferences, client tiers) → Step 4: Prepare for Human Decision (digest, draft, processed note)
System #5: The Brand Context Architecture
Every client folder (I have 33) contains a brand/ subfolder with the same structure:
- voice-profile.md — Tone, vocabulary, personality, what to say and what not to say
- positioning.md — The core positioning angles, key differentiators, target audience
- audience.md — Who we’re talking to, what they care about, their situation
- keyword-strategy.md — Priority keywords, long-tail opportunities, content gaps
- visual-identity.md — Colors, fonts, design system, visual guidelines
Every skill that generates content for a client reads these files first.
Why does this matter? Because I can hand the same skill to different clients, and it adapts. The content engine doesn’t need a “variant for client X.” It just reads the client’s brand files and generates content that sounds right for them.
This is the opposite of how most agencies work. They have different writers for different clients or different account managers who customize the approach. That doesn’t scale and it’s expensive.
The system approach: standardize the inputs (brand architecture), use the same skills, get client-specific output because the brand context feeds the system.
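One practical consequence of standardized inputs: you can verify that a client folder is system-ready with a trivial check. A sketch, assuming the folder layout above; `missing_brand_files` is an illustrative helper, not part of my actual tooling:

```python
from pathlib import Path

REQUIRED_BRAND_FILES = [
    "voice-profile.md", "positioning.md", "audience.md",
    "keyword-strategy.md", "visual-identity.md",
]

def missing_brand_files(client_dir: Path) -> list[str]:
    """Return the required brand files a client folder lacks.
    An empty list means any content skill can run against this client."""
    brand = client_dir / "brand"
    return [n for n in REQUIRED_BRAND_FILES if not (brand / n).exists()]
```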
System #6: The Skill Architecture
A skill is a reusable workflow. I have about 12 core skills that chain together:
- Keyword research — Takes a topic, returns prioritized keywords based on search volume and my keyword strategy
- Content strategy — Takes a keyword, returns the right positioning angle and article structure
- Article draft — Takes a strategy, returns a full article draft
- Social atomization — Takes an article, returns optimized posts for LinkedIn, Twitter, email
- Email digest — Takes raw emails, returns prioritized classification and summaries
- Note processor — Takes a raw quick-capture note, returns categorized and contextualized output
- Brand validator — Takes content, checks it against brand voice profile, returns flagged sections
- Decision summarizer — Takes meeting notes, returns strategic implications and next steps
- Client tier classifier — Takes an email, returns the client tier and suggested action urgency
- Memory updater — Takes session notes, returns formatted entries for decisions.md and sessions.md
- Competitor tracker — Takes industry news, returns relevant competitive moves with analysis
- Weekly report generator — Takes all weekly activity, returns a formatted report for review
Each skill does one thing. They chain together into workflows.
For example, the Tuesday content engine isn’t one big skill. It’s:
- Keyword research (topic selected by me)
- Content strategy (takes keyword output)
- Article draft (takes strategy output)
- Brand validator (validates against voice profile)
- Email and save (final output)
If I need to change how articles are drafted, I update the article draft skill. All downstream workflows that use it automatically get the improvement.
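Mechanically, chaining is just function composition: each skill takes the previous skill’s output. A sketch with stubbed skills; the stubs only show the data flow, since the real skills call the model:

```python
from typing import Callable

Skill = Callable[[str], str]

def chain(*skills: Skill) -> Skill:
    """Compose skills left to right: each one's output is the next one's
    input. Improving one skill improves every pipeline built on it."""
    def pipeline(payload: str) -> str:
        for skill in skills:
            payload = skill(payload)
        return payload
    return pipeline

# Stub skills that only trace the data flow; the real ones call the model.
keyword_research = lambda topic: f"keywords({topic})"
content_strategy = lambda kw: f"strategy({kw})"
article_draft = lambda strat: f"draft({strat})"

tuesday_engine = chain(keyword_research, content_strategy, article_draft)
```

This is why updating one skill propagates: `article_draft` is referenced, not copied, so every workflow that composes it picks up the change on its next run.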
Keyword Research → Content Strategy → Article Draft → Brand Validator → Email & Save
What This System Actually Costs
Infrastructure:
- Mac Mini: $700 (one-time, 2024)
- Electricity to run 24/7: about $12 per month
- Claude API calls: about $150 per month across all scheduled tasks
Time to build:
- Initial architecture and memory system: 12 hours
- First 5 core skills: 20 hours
- Ongoing skill refinement and new skill creation: about 3 hours per week
Time to maintain:
- Updating memory files: 10 minutes per session end
- Monitoring skill health: 15 minutes per week
- Fixing broken skills when Claude API changes: 1-2 hours per month
Total operational cost: about $160 per month in infrastructure and API (electricity plus Claude usage). Plus about 4 hours per week of my time on upkeep.
The payback: I recover about 8-10 hours per week of context-loading and repetitive work.
That’s a 2:1 time return on maintenance effort.
But the real return is in quality. The articles are better because they have more context. The inbox processing is more consistent. The decisions are faster because I’m not in a fog of email noise.
How This Applies to Your Business
Here’s where most AI implementation breaks: companies try to automate decisions, not context.
They build systems to “automatically prioritize leads” or “automatically write emails” or “automatically generate reports.” Then they’re confused when the system hallucinates, misses context, or generates something unusable.
The system I built doesn’t do that. It handles context. A human makes the decision.
If you want to actually implement this:
Step 1: Document your context.
What do you believe about your business? Who are your customers? What’s your voice? What’s your decision framework? Write it down. All of it. This becomes the input for every system.
Step 2: Identify the repetitive context-loading work.
Not all work, just the part that’s “I have to read this, understand it, then decide something.” Email processing. Daily briefings. Note organization. Report formatting. Status updates. These are usually 5-10 hours per week for any leader.
Step 3: Build a system to surface the context.
Not to make the decision. To surface what matters so you can decide faster.
Start with one. The morning digest worked for me. For someone else, it might be a daily customer feedback summary or a competitive intelligence digest.
Step 4: Add the memory layer.
Document your decisions, your priorities, your preferences. Make them accessible to your systems. This is what enables the next skill to work better than the last one.
Step 5: Build skills that chain.
Don’t build one big system. Build small, reusable pieces that can combine into workflows. This is what lets you scale without rebuilding.
Before: 45 hours per week across email, reporting, content prep, note organization, context switching
After: 25 hours per week (20 hours recovered) plus 4 hours of system maintenance
Net gain: 16 hours per week of freed time and better decision-making
Why Most AI Implementations Fail
They start with the wrong assumption: that AI can replace judgment.
The companies that actually benefit from AI integration start with a different assumption: AI can handle the context, and humans can focus on judgment.
You read in the news about companies that “cut 30 percent of their workforce using AI.” What you don’t read about is the companies that built bad AI systems, spent six months fixing them, then abandoned them. Those stories don’t make headlines.
The companies that win with AI are the ones that:
- Are honest about what needs human judgment (decisions, strategy, customer relationships)
- Use AI to handle context and preparation (summarization, prioritization, organization)
- Build systems that fail gracefully (if the AI makes a mistake, a human catches it)
- Iterate ruthlessly (the system that works today will need updates next month)
- Keep their systems simple enough to maintain (one skill does one thing)
The approach I’m describing isn’t sexy. It’s not “AI runs your whole business.” It’s “AI handles the parts that are mechanical so you can focus on the parts that require judgment.”
But it actually works.
What Comes Next
Right now, the system I’ve described handles my internal operations.
The next phase is expanding it to client delivery.
When I work with a client on marketing strategy or AI integration, I’m bringing all this same infrastructure to their business. I run the diagnostic using the same systems. I spot patterns using the same memory layer. I generate options using the same skill architecture.
The client gets the same rigor and efficiency they see in my own operations.
That’s the thing about building it for yourself first: you can actually use it with clients. You’re not consulting about something you haven’t done. You’re describing what you’re literally doing. It’s why companies outgrow their agencies and start looking for someone who’s done the work, not just talked about it.
The companies that are moving upmarket right now (the ones at a growth inflection point, PE-backed, scaling fast) are asking the same question: “How do we implement AI without it blowing up in our faces?”
The answer isn’t a tool list or a consulting report. It’s a framework. And this framework actually works because I’m using it every day.
The Real Takeaway
AI integration doesn’t look like what you see in demo videos.
It looks like a Mac Mini running in the corner. A set of organized context files. Some scheduled tasks. A few dozen reusable skills. And a person making decisions based on better information.
It’s not revolutionary. It’s practical.
And it makes a noticeable difference in how much leverage you can create with a small team.
If you’re running a company between $5M and $100M, and you’re thinking about how AI fits into your strategy, this is what it looks like in practice. Not the idea. The actual system.
I’ve walked through this because I think most of what gets written about AI is theory or hype. You should know what real implementation looks like: what it costs, how long it takes, what it actually does, and what still requires a human.
That’s what I’m building for my clients right now, and it’s working.
Ready to Implement This for Your Business?
If you’re at a growth stage where AI integration makes sense, but you need someone who’s actually built this and can guide you through the same process, let’s talk.
I work as a fractional CMO with growth-stage, PE-backed, and healthcare companies that are ready to integrate AI into their operations. We start with your specific bottleneck and build the right system, not just any system.
Schedule a call to discuss your AI integration strategy
No pitch. Just clarity on what’s working, what’s broken, and how AI can actually help.