
Last week, I responded to Rohit Kumar Thakur’s compelling article “The Future of AI Has Already Been Decided” with a counterpoint about why deterministic orchestration beats autonomous agents for production business processes. The response has sparked some great conversations, and several people asked me to share more about how I personally use AI to stay ahead in this rapidly evolving space.
So here’s the truth: I use AI every single day – but not in the way the hype cycle suggests.
It turns out that the path from AI tool to AI agent isn’t a binary leap. It’s an iterative refinement process that starts with manual work, identifies patterns, builds reliable workflows, validates value, and only then scales to automation. And honestly? That journey has been the most productive learning experience of my career.
Rohit’s original article argues that autonomous AI agents will inevitably replace human workers entirely. My previous response argued that deterministic orchestration with human validation is what actually works in production today. But there’s a middle ground I didn’t fully explore: personal productivity augmentation.
This is where AI is delivering immediate, measurable value – not as a replacement for humans, but as a force multiplier for knowledge work. And the key insight is this: you can’t automate what you haven’t first done manually and refined iteratively.
Think of it like building any other skill or process. You wouldn’t delegate a task to someone else if you couldn’t articulate exactly what success looks like. The same principle applies to AI.
Let me share a concrete example. At Proactive Technology Management, identifying and qualifying potential clients is mission-critical. We need to understand a prospect’s technology stack, business challenges, competitive landscape, and growth trajectory before we ever reach out.
Initially, this meant hours of manual work: scanning LinkedIn, reviewing websites, reading investor updates, checking tech stack databases, and synthesizing everything into a coherent picture. For a single prospect, this could easily consume 2-3 hours.
I started experimenting with ChatGPT and Claude to accelerate specific parts of this process. I’d ask them to summarize a company’s latest press releases, analyze their job postings for technology signals, or draft competitor comparison tables.
But here’s the thing: the results were inconsistent. Sometimes the output was excellent. Sometimes it was generic fluff. Sometimes it hallucinated facts. I had to manually verify everything, adjust my prompts, and refine my approach.
This iterative refinement is critical. I wasn’t just “using AI” – I was training myself to collaborate with AI effectively. I learned which questions produced useful results, which sources to prioritize, and how to structure prompts for maximum signal-to-noise ratio.
I kept notes on what worked. I saved my best prompts. I built a personal library of “research recipes” that consistently produced valuable outputs.
Once I had a reliable manual workflow using AI as a tool, I worked with our team to systematize it. We created structured templates for research outputs, defined what information was required versus nice-to-have, and established validation checkpoints.
The key was making it repeatable without making it rigid. The process had clear steps, but I could still exercise judgment about which areas to dig deeper based on what the AI surfaced.
Once the workflow consistently produced high-quality results, our team built it into an automated system that could process prospect lists overnight. Now, when I wake up, I have a digest of 10-15 fully researched prospects with actionable insights waiting in my inbox.
This is what we call “AI while you sleep” in our client engagements – scheduled, automated workflows that deliver results without requiring real-time supervision.
But notice what didn’t happen: The system doesn’t autonomously decide which prospects to research or whether to reach out. It executes a well-defined research workflow I’ve already validated manually.
The orchestration is deterministic. I control the inputs. I review the outputs. The AI handles the tedious information gathering and synthesis in between.
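To make "deterministic orchestration" concrete, here is a minimal sketch of what that overnight pipeline could look like. The step functions are stubs standing in for real LLM or API calls, and the prospect fields and step names are my own illustration, not our actual system:

```python
from typing import Callable

# Hypothetical research steps; in practice each would call an LLM or a data API.
def summarize_press_releases(prospect: dict) -> str:
    return f"Press-release summary for {prospect['name']}"

def analyze_job_postings(prospect: dict) -> str:
    return f"Technology signals from {prospect['name']}'s job postings"

# The pipeline is a fixed, ordered list: a human decides the steps, not the AI.
PIPELINE: list[tuple[str, Callable[[dict], str]]] = [
    ("press", summarize_press_releases),
    ("jobs", analyze_job_postings),
]

def research_prospect(prospect: dict) -> dict:
    """Run every step in order and collect outputs for human review."""
    report = {"prospect": prospect["name"]}
    for key, step in PIPELINE:
        report[key] = step(prospect)  # deterministic: same inputs, same steps
    return report

def overnight_digest(prospects: list[dict]) -> list[dict]:
    # Scheduled batch run; results land in a digest for morning review.
    return [research_prospect(p) for p in prospects]
```

The point of the structure is that the AI only fills in the middle of each step; which steps run, in what order, on which prospects, is fixed ahead of time and reviewed afterward.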
This same iterative tool-to-agent progression plays out across everything I do:
When evaluating whether to integrate a new SaaS product into our offerings (e.g., a workflow automation platform or a vector database), I use AI to accelerate due diligence.
I start by asking ChatGPT or Claude to pull together everything publicly available: documentation summaries, pricing comparisons, security posture assessments, customer review analysis, and competitive positioning.
Early on, I’d just paste URLs and ask broad questions. The results were hit-or-miss. Now, I have refined prompts that extract exactly what I need: “Compare the enterprise security features of [Product A] vs [Product B], focusing on SSO, audit logging, and data residency options. Present as a table with sources.”
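A saved "research recipe" can be as simple as a template with named slots. This is a toy sketch of that idea, using the comparison prompt above; the library name and helper are hypothetical:

```python
# A tiny "research recipe" library: refined prompts saved as reusable templates.
RECIPES = {
    "security_compare": (
        "Compare the enterprise security features of {product_a} vs "
        "{product_b}, focusing on SSO, audit logging, and data residency "
        "options. Present as a table with sources."
    ),
}

def render_recipe(name: str, **fields: str) -> str:
    """Fill a saved recipe with the specifics of today's evaluation."""
    return RECIPES[name].format(**fields)
```

The win is consistency: the hard-earned wording that reliably produces a sourced comparison table is written once and reused, with only the product names changing per evaluation.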
For products I evaluate regularly (like new Azure services or AI orchestration platforms), I’ve set up automated alerts that monitor for significant updates – new features, pricing changes, security incidents – and deliver weekly digests.
The AI landscape evolves daily. New orchestration patterns, deployment best practices, and security frameworks emerge constantly. Manually tracking this would be impossible.
I use AI to monitor technical discussions across GitHub, Stack Overflow, blogs, and research papers. I feed it specific topics I care about – like “event-driven architecture patterns for business process automation” or “cost optimization strategies for cloud-hosted AI workflows” – and it surfaces relevant updates.
I review these digests weekly and decide what’s worth incorporating into our practice. The AI doesn’t make architectural decisions – it dramatically expands my awareness of what’s possible.
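The monitoring setup boils down to a watchlist plus a filter. A deliberately crude sketch, assuming simple keyword overlap (a real monitor would use embeddings or an LLM to judge relevance):

```python
# Hypothetical watchlist-and-filter sketch for the weekly digest.
TOPICS = [
    "event-driven architecture patterns for business process automation",
    "cost optimization strategies for cloud-hosted AI workflows",
]

def matches_topic(item_title: str, topic: str) -> bool:
    # Crude keyword overlap; short filler words are ignored.
    keywords = {w for w in topic.lower().split() if len(w) > 4}
    return any(w in item_title.lower() for w in keywords)

def weekly_digest(items: list[str]) -> dict[str, list[str]]:
    """Bucket incoming items (posts, papers, releases) by watched topic."""
    return {t: [i for i in items if matches_topic(i, t)] for t in TOPICS}
```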
When we kick off a new client engagement, we conduct extensive discovery to understand their current workflows and pain points. These sessions generate hours of conversation, whiteboards full of diagrams, and pages of notes.
I use AI transcription tools to capture everything, then ask Claude or ChatGPT to generate structured summaries: key themes, pain points ranked by frequency, technology gaps, and potential automation opportunities.
What used to take me a full day of post-meeting synthesis now takes 30 minutes of reviewing and refining AI-generated summaries. I catch things I would have missed. I spot patterns across multiple client conversations that inform our broader service offerings.
But the actual solution design? That’s still human-led, because every business process has nuance, risk tolerance, and organizational dynamics that AI can’t fully grasp.
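The synthesis step itself is just a well-structured prompt wrapped around the transcript. A minimal sketch, where the `llm` callable is a stand-in for whatever chat model you use and the exact summary schema is my assumption:

```python
from typing import Callable

# Prompt asking for the structured summary described above.
SUMMARY_PROMPT = (
    "From the discovery-session transcript below, produce:\n"
    "1. Key themes\n"
    "2. Pain points ranked by how often they came up\n"
    "3. Technology gaps\n"
    "4. Potential automation opportunities\n\n"
    "Transcript:\n{transcript}"
)

def summarize_discovery(transcript: str, llm: Callable[[str], str]) -> str:
    """Build the structured-summary prompt and hand it to the model.
    A human still reviews and refines the result before it's used."""
    return llm(SUMMARY_PROMPT.format(transcript=transcript))
```

Because the model is passed in as a plain callable, the same function works whether the backend is Claude, ChatGPT, or a local model.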
Here’s the framework I recommend for anyone serious about leveraging AI:
1. Identify the tool you need. What repetitive cognitive task consumes your time? Where do you need faster access to information?
2. Use AI manually to build the tool. Experiment with prompts. Refine outputs. Learn what works and what doesn’t. Save what succeeds.
3. Validate that the tool provides value. Does it actually save time? Improve quality? Generate insights you wouldn’t have found otherwise?
4. Systematize the workflow. Turn your refined prompts into a repeatable process with clear inputs, outputs, and validation checkpoints.
5. Automate selectively. Only once you trust the workflow’s consistency, consider automating it to run on a schedule or in the background.
Notice what’s missing from this framework: there’s no step where the AI autonomously decides what to do next. You remain in control of the orchestration. The AI operates within guardrails you’ve defined.
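The five steps above can even be enforced in code: a workflow object that refuses to reach the "automated" stage until it has a track record. This is an illustrative sketch, and the stage names and the ten-run threshold are arbitrary choices of mine, not a formal methodology:

```python
from dataclasses import dataclass

# Stages mirror the framework: identify, use manually, validate,
# systematize, and only then automate.
STAGES = ["identified", "manual", "validated", "systematized", "automated"]

@dataclass
class Workflow:
    name: str
    stage: str = "identified"
    validated_runs: int = 0

    def record_successful_run(self) -> None:
        self.validated_runs += 1

    def promote(self) -> str:
        """Advance one stage; automation requires a track record first."""
        nxt = STAGES[min(STAGES.index(self.stage) + 1, len(STAGES) - 1)]
        if nxt == "automated" and self.validated_runs < 10:  # arbitrary bar
            raise ValueError("not enough validated runs to automate")
        self.stage = nxt
        return self.stage
```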
If you’re a business leader evaluating AI investments, here’s my advice: Don’t start with agents. Start with tools.
The companies seeing real ROI from AI aren’t the ones deploying fully autonomous agents to “transform operations overnight.” They’re the ones systematically identifying high-value tasks, building AI-assisted workflows, validating results with human oversight, and then scaling through selective automation.
This approach delivers three critical advantages:
1. Faster time-to-value. You see productivity gains immediately, not after months of complex development.
2. Lower risk. Human validation prevents catastrophic errors from cascading through your operations.
3. Better learning. Your team develops AI fluency by collaborating with tools, not by watching autonomous systems work in black boxes.
And when you do eventually automate? The workflows are grounded in real processes you’ve already validated, not speculative promises about what AI “should” be able to do.
Here’s the bottom line: AI has made me dramatically more effective at my job. I research faster. I stay current with less effort. I make better-informed decisions. I punch well above my weight class.
But I haven’t been replaced by an autonomous agent. If anything, the value of my judgment, experience, and strategic thinking has increased because I spend less time on information gathering and more time on high-value decision-making.
That’s the future I see for knowledge workers: augmented, not automated. More capable, not obsolete.
Rohit’s article argues that the future of AI is inevitable, predetermined by the laws of technological progress. I still disagree with that framing—but I’ll add a nuance.
The future of AI isn’t written by the technology itself. It’s being written every day by practitioners who iteratively refine tools, validate value, and scale what works. That’s a fundamentally human process, not a predetermined outcome.
AI agents will continue to get better. Autonomous capabilities will expand. But the path forward isn’t a binary choice between “tools” and “agents.” It’s a continuous spectrum of increasing automation, where humans remain in control of the strategy and AI handles the execution.
And honestly? That hybrid model is more powerful, more reliable, and more valuable than full autonomy ever could be.
If you want to punch above your weight class with AI, don’t wait for perfect autonomous agents. Start building imperfect tools today. Refine them tomorrow. Automate them selectively when they’re ready.
That’s how you actually deliver ROI in 2025.
Michael Weinberger, Co-Founder, Proactive Technology Management | Building AI tools that work today, not waiting for agents that might work tomorrow.
#AI #ProductivityTools #AIAgents #DigitalTransformation #KnowledgeWork #PragmaticAI #AIAdoption #BusinessLeadership #PersonalProductivity #ExecutiveEffectiveness