“I’ll put in a feature request with our Dev Team.” - You know what? Never mind.
When I first heard the term SaaSmageddon, it felt like just another dramatic tech headline.
Then I had flashbacks to all the times I’ve had a software account rep say:
“I’ll put in a feature request with our Dev team.”
That was the lightbulb moment for me.
Because maybe SaaSmageddon isn’t about the collapse of the SaaS industry at all. Maybe it’s about something far more empowering for small and mid-sized businesses.
If you’ve ever bought real estate software - or honestly, any SaaS product - you know the cycle.
The demo is flawless. It looks like the platform can do everything. You start imagining how much smoother your business is about to run.
So, you sign.
Then onboarding starts, and suddenly the integration you swear you saw in the demo doesn’t quite work with the software you actually need.
No problem, they say. “There’s a workaround.”
There’s always a “workaround.” Just, usually, not a good one.
You push through. You adapt. You invest the time.
And then, once you’re actually using it, you realize something: it does 100 things. You need 12. And it’s missing 3 that actually matter.
So, you email your rep. You explain your workflow. You outline exactly what would make the platform indispensable.
And you get the line:
“I’ll put in a feature request with our Dev team.”
In my entire career, not once did one of those feature requests turn into a feature I actually needed.
Instead, updates would roll out with new dashboards, new tabs, new features designed for someone else - probably someone with a lot more seats. Sometimes someone in a completely different industry.
Over time, I realized something important:
SaaS products aren’t built for you.
They’re built for scale.
That’s not malicious. It’s just economics.
The roadmap follows revenue. Revenue follows the biggest customers. And small and mid-sized businesses adjust accordingly.
For years, there were really only two options: buy the bloated software and live with it, or pay enterprise-level prices for fully custom development.
Neither was ideal.
But AI has introduced a third path.
It’s now realistic for SMBs to build tools that actually reflect how they operate - without needing enterprise budgets to do it.
That still takes clarity. It still takes strategy. It still takes the right technical execution. But it no longer requires waiting in line behind bigger clients.
And that’s why the term SaaSmageddon doesn’t feel dramatic to me.
It feels overdue.
Not because software is disappearing, but because the power dynamic is shifting.
For years, we were told to wait.
Wait for the roadmap.
Wait for the next release.
Wait for the Dev team.
Maybe SaaSmageddon isn’t about the death of SaaS.
Maybe it’s about the end of waiting.
And honestly?
That feels long overdue.
Why Our AI Projects Don't Fail
Boring, Obsessive Polish Is the Real AI Differentiator
You've seen the demos. A founder conjures a working prototype in ten minutes. An influencer builds an AI app over the weekend. The message is clear: AI makes everything fast and easy. It’s magic.
Then you talk to someone who actually tried it for their business. The proof of concept looked great — then it hallucinated on edge cases, broke on real-world data, and the team that built it moved on. Six months and a painful budget conversation later, the project is shelved.
This is the norm. Industry estimates put AI project failure rates disturbingly high, and the reasons are surprisingly consistent. It's rarely the technology that fails. It's the approach.
Surgical Over Transformational
Most AI failures share a common origin: someone decided to "transform" something. The word itself is the warning sign. Transformation is expensive, slow, and fragile. It touches too many systems, disrupts too many workflows, and depends on too many things going right simultaneously.
At Blue Fractal Group, we start with a different question: what's the one thing that, if it worked better, would change your Tuesday? A process eating forty hours a week. A revenue channel you can see but can't reach. A pile of documents no human can get through fast enough.
This isn't thinking small — it's sequencing. Solve one real problem. Prove the value. Build confidence. Then solve the next one. The companies capturing AI's value earn it incrementally, not by betting everything on a single ambitious rollout.
Your Expertise Is the Core Ingredient
There's a pattern in failed AI projects that doesn't get discussed enough: the client gets treated as a stakeholder instead of a partner. They provide requirements at kickoff, review a demo near the end, and somewhere in between, the thing that got built drifts from the thing they actually needed.
Our clients are co-builders. We insist on it. We can’t be truly successful without them. Their domain expertise shapes every decision throughout the build.
When we built a lead-generation tool for a beverage distributor, we didn't just ask what they wanted. We sat with them to understand how they actually found leads, what made a lead worth pursuing, and what would make their sales team trust a tool enough to use it. It turned out timing was critical to their success. The sooner they understood that something was starting or changing, the better their chances. Like knowing someone applied for a liquor license rather than knowing someone was granted a liquor license. Our client’s knowledge determined everything — what factors in their business domain actually matter, what patterns indicate a real opportunity, and how results should be presented so a rep can act on them between calls.
The result: a tool that combs news articles, government filings, industry journals, RSS feeds, social media, and more, curates the findings through their lens of what matters, and surfaces opportunities their "feet on the street" approach would never uncover. Within three weeks, they're actively pursuing 60 new opportunities. That number isn't impressive because of AI. It's impressive because the tool reflects years of market knowledge made scalable.
The Part Nobody Wants to Talk About
Here's what no demo will show you: the difference between a prototype and a product is unglamorous, tedious, and absolutely essential.
It's running the tool against messy real-world data and fixing every stumble. Testing edge cases that seem unlikely until they happen on day one. Polishing the interface until someone using it at 7 AM with coffee in one hand doesn't have to think. Hardening the system so it doesn't just work — it works reliably, every single time.
This is the boring part. And it is the entire product.
Every AI application we've shipped has earned a 100% client satisfaction rate. Not because we've cracked some secret code. What separates a tool people rely on from one they abandon is the willingness to stay in the unsexy phase long after the exciting building is over: more testing cycles, more conversations about details that would bore most people, and a firm rule - we don't hand off anything we wouldn't trust ourselves.
The Right Question to Ask
If you're evaluating whether AI can help your operation, forget transformation for now. Find the friction. Where is your team spending hours on repetitive, manual work? Where can you see opportunity but can't reach it with current capacity?
Then ask whoever you're considering working with: how will you make sure this actually works when my team uses it on a real Tuesday morning? If the answer is mostly about the tools and the promise of transformation, be cautious. If the answer is about testing, iteration, and partnership, you're in the right conversation.
The boring part is the testing and polishing. And it works.
Contact Ken Furie (ken.furie@bluefractalgroup.com) or Kyle Mason (kyle.mason@bluefractalgroup.com) if you think you might benefit from a conversation about your friction points.
The Long Tail of Training: Why AI Projects Succeed—or Quietly Fail
A few months ago, we built a highly specialized technical chatbot for a client in a niche engineering field. The “build” itself came together quickly—schema design, retrieval strategy, testing harness, guardrails. All green lights. But the real work began after the build was done, when the client started testing for the edge cases only an expert in their business would recognize.
That’s when the long tail kicked in.
Many companies expect AI development to mirror traditional software: roughly 50% build, 50% test (and some teams skimp on testing - with the unfortunate habit of letting users uncover issues in production). But AI doesn’t work that way. The testing isn’t just QA—it’s training. It’s iteration. It’s the slow sculpting of behavior until the model performs exactly as the business expects, every time.
This is also where most organizations struggle. The now-infamous MIT study found that 95% of custom AI initiatives fail to achieve positive ROI—not because the technology doesn’t work, but because the systems are inflexible and under-trained. They were never pushed far enough through the long tail of optimization. Internal teams, using software-centric practices, often underestimate the time and expertise required.
The truth is: AI development is a different process.
While generic tools deliver quick wins, custom AI is more like hiring a digital employee. You wouldn’t expect a new team member to excel on day one. You’d train them, correct them, give feedback, explain exceptions, and refine their understanding over weeks—sometimes months. Models require the same commitment.
At Blue Fractal, we prepare clients early:
If we build AI for you, we need your time and your expertise.
We are not the domain experts in your business—you are. And your feedback is the raw material we use to train your AI worker.
In our engineering chatbot project, we logged dozens of issues across accuracy, terminology, compliance, and reasoning. Each iteration made the model smarter, more precise, more aligned. By the end, the model performed its task accurately, reliably, and consistently—100% of expectations met. That final stretch is where most teams give up. It’s also where the transformation actually happens.
This long-tail work is what separates the 5% of successful solutions from the rest. It’s where the “AI whisperer” skillset becomes real: not magic, but discipline. It requires discipline to observe, test, refine, and adapt as the world—and the business—changes. Because the world is changing, and your model must keep pace. Ongoing retraining is not optional; it’s the cost of staying accurate.
The companies that understand this—those willing to invest in testing, feedback, and continuous learning—are the ones who see true ROI. They’re the ones who turn AI from a shiny prototype into a dependable digital teammate.
Why Semantic Testing is the Only Way to Test AI Systems
The Problem with Traditional Testing
When we built our customer support chatbot, our test suite was failing constantly. Red everywhere. But the chatbot worked beautifully—customers were getting exactly what they needed.
The tests were lying to us.
Traditional testing assumes determinism: same input, same output, always. That's fundamentally incompatible with AI.
Traditional System:
Input: "Calculate 2 + 2"
Expected: "4"
Result: Pass if output equals "4"
AI System:
Input: "I need something for metalwork"
Valid responses:
"I'd recommend our MX-2000 series..."
"The AX-3000 would be perfect..."
"Our WX-1500 is designed for metalwork..."
Traditional Test: FAIL (doesn't match expected string)
Reality: All excellent responses
Exact matching creates false negatives and incentivizes rigid, robotic responses.
The Semantic Testing Solution
Instead of testing exact words, test meaning and accuracy.
Old Way: String Matching
expect(response).toContain("Model MX-2000");
expect(response).toMatch(/perfect for metalwork/i);
Fails if the AI says "MX2000" (no hyphen) or suggests a different valid product.
New Way: Semantic Validation
const validation = await validateRecommendations(
  userQuestion,
  aiResponse,
  { minSemanticThreshold: 0.5 }
);
expect(validation.isValid).toBe(true);
expect(validation.foundProducts.length).toBeGreaterThan(0);
This approach:
Extracts product mentions (any wording)
Verifies they exist in the database
Validates relevance using embedding similarity
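To make those three steps concrete, here is a minimal, self-contained sketch of what a validator like this might do. Everything inside is invented for illustration - the toy catalog, the hand-made "embedding" vectors, and the internals of the function. A real system would query a product database and call an embeddings API rather than hard-coding vectors.

```javascript
// Toy catalog with hand-made "embedding" vectors (a real system would
// fetch products from a database and embeddings from an API).
const catalog = {
  "MX-2000": { desc: "milling machine for metalwork", vec: [0.9, 0.1, 0.0] },
  "AX-3000": { desc: "fabrication tool for metal", vec: [0.8, 0.2, 0.1] },
  "WX-1500": { desc: "welding rig for metalwork", vec: [0.85, 0.1, 0.05] },
};

function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const mag = v => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b));
}

// Step 1: pull anything that looks like a model number,
// tolerating "MX2000" vs "MX-2000" by normalizing the hyphen.
function extractProducts(text) {
  const matches = text.match(/\b([A-Z]{2})-?(\d{4})\b/g) || [];
  return matches.map(m => m.replace(/([A-Z]{2})(\d{4})/, "$1-$2"));
}

function validateRecommendations(queryVec, aiResponse, { minSemanticThreshold }) {
  const foundProducts = extractProducts(aiResponse);
  // Step 2: any mention of a product not in the catalog is a hallucination.
  const hallucinations = foundProducts.filter(p => !(p in catalog));
  // Step 3: surviving products must be semantically relevant to the query.
  const relevant = foundProducts.filter(
    p => p in catalog && cosine(queryVec, catalog[p].vec) >= minSemanticThreshold
  );
  return {
    isValid: hallucinations.length === 0 && relevant.length > 0,
    foundProducts: relevant,
    hallucinations,
  };
}

// "I need something for metalwork" -> a query vector near the metalwork products
const queryVec = [0.9, 0.15, 0.05];
const good = validateRecommendations(queryVec, "The AX-3000 is perfect.", { minSemanticThreshold: 0.5 });
const bad = validateRecommendations(queryVec, "I recommend the ZX-9999.", { minSemanticThreshold: 0.5 });
console.log(good.isValid, bad.isValid); // true false - real product passes; hallucination fails
```

The point isn't the regex or the math; it's the shape: extract, verify existence, then score relevance against meaning instead of matching strings.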
Why This Catches Real Problems
Test: "I need something for metalwork"
Response A: "I recommend the ZX-9999 for metalwork."
Traditional: FAIL (wrong product)
Semantic: FAIL (hallucination—product doesn't exist!)
Response B: "The AX-3000 is perfect for metal fabrication."
Traditional: FAIL (wrong product, wrong phrase)
Semantic: PASS (real product, semantically relevant)
Response C: "I recommend the MX-2000 for metalwork."
Traditional: PASS
Semantic: PASS
Only semantic testing catches hallucinations while accepting natural variation.
Our Three-Layer Strategy (470+ Tests)
Unit Tests (~220): Traditional testing for deterministic components (audio, React state, utilities, WebSocket)
Traditional E2E (~27): Integration tests for non-AI features (buttons, forms, error handling)
Semantic E2E (~250): AI-focused testing that validates:
Responses are meaningful and non-empty
Product recommendations exist and are relevant
No hallucinations
UI stability during interactions
We don't test: exact wording, response length, specific product names, or tone.
Real-World Impact
Before:
40% test failures from harmless variations
Developers ignored unreliable tests
Hallucinations reached production
Model updates broke dozens of tests
After:
Zero false negatives from phrasing
Hallucinations caught pre-production
Model updates deploy without test rewrites
470+ tests developers actually trust
Why This Matters
Your testing approach shapes your AI product.
Traditional testing pushes you toward rigid templates and makes you ignore test failures. Semantic testing lets you build natural conversation while catching real problems.
The Bottom Line
You cannot test AI with tools designed for deterministic software. Our 470-test suite proves comprehensive AI test coverage is achievable—you just need to test meaning, not exact strings.
Stop testing what the AI says. Start testing whether what it says is accurate and helpful.
I'm Ken, CTO at Blue Fractal Group. I help companies implement practical AI solutions that actually work. Let's connect on LinkedIn.
Two Types of "AI Agents" - And Why the Distinction Matters
Working on AI implementations, I keep running into confusion around the term "AI agent." Turns out we're talking about two completely different things.
Type 1: Autonomous AI Agents
These are the systems getting all the buzz. An AI agent can perceive its environment, decide which tools to use, and execute actions without constant hand-holding. Think customer service bots that access your CRM, check inventory, process returns, and escalate issues - all while maintaining context and making smart decisions.
Type 2: AI-Enhanced Workflows
This is AI plugged into traditional automation platforms like Zapier, Make, Power Automate, ServiceNow, or custom solutions. The AI handles specific tasks within a larger, predictable process flow.
Real example I'm building: Staff scan shipping labels with a mobile app. AI extracts supplier info, model numbers, delivery dates, and populates our equipment database. Standard workflow automation then triggers notifications to procurement, project managers, and finance.
But here's where it gets interesting: The system also compares delivery timelines against project schedules. When procurement suggests equipment substitutions for cost savings, AI evaluates whether the new supplier's lead times will mess up critical milestones. If there's a conflict, it sends up an alert that can be acted upon.
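As a rough sketch of that timeline check - field names, dates, and the supplier are all invented here; the real system reads from project and procurement data - the deterministic workflow side looks something like this:

```javascript
// Hypothetical sketch of the delivery-vs-milestone conflict check described
// above. leadTimeDays, milestoneDate, etc. are illustrative, not a real schema.
function flagScheduleConflicts(substitution, milestones, today = new Date("2024-06-01")) {
  const eta = new Date(today);
  eta.setDate(eta.getDate() + substitution.leadTimeDays);
  // Any milestone that needs this equipment before it can arrive is a conflict.
  return milestones
    .filter(m => m.requiresEquipment && new Date(m.milestoneDate) < eta)
    .map(m => ({
      milestone: m.name,
      neededBy: m.milestoneDate,
      arrivesAround: eta.toISOString().slice(0, 10),
    }));
}

const conflicts = flagScheduleConflicts(
  { supplier: "Alt Supplier Co", leadTimeDays: 45 },
  [
    { name: "Rough-in complete", milestoneDate: "2024-07-01", requiresEquipment: true },
    { name: "Final inspection", milestoneDate: "2024-09-15", requiresEquipment: true },
  ]
);
// 45 days from June 1 lands in mid-July, after the July 1 rough-in milestone,
// so only that milestone is flagged.
console.log(conflicts);
```

The AI's job in the real system is upstream of this - reading the label and evaluating the substitution - while a plain, predictable function like this one decides when to raise the alert.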
The key difference: Workflows excel at consistent, repeatable processes. Autonomous agents shine when you need adaptive decision-making across multiple variables.
The most powerful implementations combine both - workflow automation for operational consistency, enhanced with AI agents for complex decisions.
In leveraging AI for business operations, getting this distinction right can save serious time and headaches during deployment.
What are you seeing out there? Are you building agents or workflows?
Stuck on Deployment
I built a nifty AI utility tool for a client that looks into a Google Drive folder and its subfolders, ingests all the documents, then extracts the names and titles of all the people it finds inside. It's a quick way for the sales team to comb through historical contracts, SOWs, project plans, etc., and find the people we've worked with in the past who could become new contacts – even if they're at new companies.
Scripting the AI prompt took a couple of hours. I needed a tight prompt so the end user doesn’t have to interact with the script yet still receives a tidy output list every time. That turned out to be the easy part.
For this project, the challenge (for me) was the deployment, especially as I learned more about Google's ecosystem: linking the Apps Script to a GCP project, enabling the Google Drive API in Cloud Console and adding it to the Apps Script, authorizing the script, and so on. The deployment took many more hours than the AI piece. I felt frustrated – right on the verge of having a useful tool, but stuck in the details of deployment.
I eventually set it aside for the night and came back the next morning. I asked AI to create a checklist of EVERY SINGLE detail necessary for setup and deployment. That cracked the case and got me across the finish line.
Using AI to solve the deployment issue was pretty nifty as well.
Thoughts on Prompt Engineering
Prompt engineering is an art form – and it’s already a legitimate career path, even if many companies haven’t caught up yet.
As LLMs get more powerful and can handle longer reasoning sessions (we’re talking 10+ minute processing times now), a well-crafted prompt becomes the difference between impressive demos and reliable, production-ready automation.
Sure, anyone can get cool results from conversational agents. But building prompts that deliver consistent, predictable outcomes for business-critical tasks? That requires genuine skill, experience, and strategic thinking.
I’ve seen teams spend multiple hours perfecting a single prompt – and save hundreds of hours downstream. Every word matters. Every sequence matters.
My approach? Treat prompt writing like crafting a compelling essay. Structure, flow, and precision all count.
Here are three game-changing techniques I’ve learned:
Examples are gold. Sometimes showing beats telling by a mile – even for AI. One solid example can communicate what paragraphs of instructions can’t.
Order is everything. The sequence of your instructions dramatically impacts results. Pro tip: put your most critical requirements at the end – that’s what the model “remembers” best.
Test relentlessly. Great prompts emerge through iteration, not inspiration. Build, test, refine, repeat.
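To make two of these tips concrete - examples over instructions, and critical rules placed last - here's a toy prompt-template builder. It's purely illustrative (the helper and its section names are mine, not a tool we ship), but it shows how structure can be enforced in code rather than rewritten by hand each time:

```javascript
// Hypothetical sketch: a tiny template builder that bakes in two of the
// tips above - show concrete examples, and put critical rules last.
function buildPrompt({ role, task, examples, criticalRules }) {
  return [
    `#Role\n${role}`,
    `#Task\n${task}`,
    // One concrete example often beats paragraphs of instructions.
    `#Examples\n${examples.map((e, i) => `##Example ${i + 1}\n${e}`).join("\n")}`,
    // Critical requirements go last, where they carry the most weight.
    `#Rules\n${criticalRules.map(r => `- ${r}`).join("\n")}`,
  ].join("\n\n");
}

const prompt = buildPrompt({
  role: "You are a data-entry assistant.",
  task: "Extract supplier names from the text below.",
  examples: ['Input: "Shipped by Acme Corp" -> Output: ["Acme Corp"]'],
  criticalRules: ["Output valid JSON only.", "Never invent a supplier name."],
});
console.log(prompt);
```

Templating like this also makes the "test relentlessly" tip cheaper: you can iterate on one section at a time while the rest of the prompt stays fixed.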
There are fantastic tutorials out there (easy to find, though mastery takes practice), and tools like Promptmetheus or Originality can accelerate your workflow. But I’d recommend starting with manual practice first – understanding the fundamentals makes you a better prompt engineer long-term.
How’s your prompt engineering journey going? Are you seeing it become more important in your work too?
Framework for Finding ROI from AI
The highest value AI agents often aren’t the flashiest ones, but rather those that eliminate friction in existing processes.
I’ve had clients come to the table with ideas of things to build, but 50% of the time it’s not the most valuable agent for their business. How do you find the highest ROI?
My framework: MAP → IDENTIFY → PRIORITIZE → BUILD
Map: Start by building a customer journey map in Figma. Get the high-level view first, then drill into the details.
Identify: Look for repetitive, time-consuming steps with heavy text or voice components. With voice agents expanding rapidly, audio touchpoints are prime opportunities.
Prioritize: Focus on friction points that impact the most customers or consume the most resources.
Build: The cost to run an AI agent is negligible compared to development cost, so start with your highest-impact opportunity.
Real example: A SaaS company wanted a complex lead scoring agent. But mapping their journey revealed the real bottleneck was customer onboarding. A simple FAQ agent reduced their support tickets by 40% and freed up their team to focus on strategic accounts.
By starting with the customer journey instead of the technology, you’re more likely to land on solutions that drive real value. The opportunities we find this way are usually easier to build AND deliver higher returns.
Prompt Learning
Writing the prompt can be so much more than a line-by-line conversation with Claude or ChatGPT. Emerging research suggests there are ways to vastly increase accuracy. Things like describing its role, assigning a clear task, with specific rules, providing context, examples, and supplemental notes, can all improve the results. This is what makes low-code and no-code systems run with repeatability without hallucination. It takes time to build a first-class prompt, but the payoff is reliability and accuracy.
Here is a prompt I wrote asking Claude to create a Chrome extension for a password manager, storing my passwords in a local database. (I actually like Chrome's native password manager, but wanted to see if I could build an extension this way.) I tried this prompt in ChatGPT as well, but the experience was better with Claude. Copy/paste and watch what happens!
#Role
You are a web developer with a talent for Chrome extensions and JavaScript. You understand browser security and UI/UX design principles.
#Task
Create a personal password manager as a Chrome extension that safely stores passwords locally and helps me automatically save and fill credentials for websites I visit.
#Specifics
Detect login forms on webpages automatically
Prompt to save credentials after I’ve filled in both fields and attempt to log in
Prompt to autofill saved credentials when I return to sites
The save/retrieve prompt should stay visible for at least 8 seconds
Local encryption of all password data
Ability to export/import password data for backup
Clean, intuitive UI for managing all saved passwords
#Context
I currently use a third party password manager but want something I control myself for added security. I’m concerned about storing passwords in third-party services, even when they claim to use encryption. I need this to work with standard login forms and prefer simplicity over complex features.
#Examples
##Example 1
When I visit a site like twitter.com and enter my username/password, after I click “Log in” or press Enter, your extension should show a prompt asking if I want to save those credentials. The prompt should stay visible long enough for me to make a decision (at least 8 seconds).
##Example 2
When I return to a site where I’ve saved credentials, as soon as I click on the username field, your extension should prompt me asking if I want to fill in my saved credentials. If I click “Yes,” it should automatically fill both the username and password fields.
#Notes
– I’m comfortable with technical details but prefer code that’s well-commented and organized
– Security is my primary concern – all passwords should be encrypted locally with a master password
– The extension should work on most standard websites
– Keep the UI clean and straightforward
– I want to be able to view, edit, and delete my saved passwords through the extension
– Must work with Chrome
It Ain’t Sexy
It ain’t sexy, but it’s practical. I asked AI to write a bit of code to ingest a data file, parse it, and post the data to another database. The parsing rules are complex with many exceptions, which is why we’ve been doing it manually for years with an admin person.
A few hours to carefully construct the prompt, then maybe 5-6 hours testing and debugging; now it’s automated and saving up to 5 hours/week!
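The real parsing rules are proprietary, but the shape of the solution looks something like this sketch: a normal parse pass, followed by a table of exception rules that handle the cases the admin used to fix by hand. Every field name and rule here is invented for illustration.

```javascript
// Hypothetical sketch of the pattern: ordinary rows follow one rule, and a
// table of exception rules patches the odd cases afterward.
const exceptionRules = [
  // Legacy vendor codes embed the region in the SKU; split it out.
  { test: r => /^EU-/.test(r.sku), fix: r => ({ ...r, region: "EU", sku: r.sku.slice(3) }) },
  // Some rows report grams instead of kilograms.
  { test: r => r.unit === "g", fix: r => ({ ...r, unit: "kg", qty: r.qty / 1000 }) },
];

function parseRow(line) {
  const [sku, qty, unit] = line.split(",").map(s => s.trim());
  let row = { sku, qty: Number(qty), unit, region: "US" };
  // Apply every exception rule that matches, in order.
  for (const rule of exceptionRules) {
    if (rule.test(row)) row = rule.fix(row);
  }
  return row;
}

const rows = ["ABC-100, 5, kg", "EU-XYZ-200, 2500, g"].map(parseRow);
console.log(rows);
```

Keeping the exceptions in a rule table like this is also what made the AI-assisted build tractable: each exception the admin remembered became one more entry, not a rewrite of the parser.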
AI enabled this. Creating space for the human to do more high-value thinking.
What do I do? Selected highlights from Blue Fractal’s work in 2024
Companies/Industries Served:
· Professional Engineering Services
· Healthcare Tech Startup
· Marketing Agency
· Property Management for Affordable Housing
· Design Software for the Construction Industry
A selection of work I did to help clients:
KPIs and Data
· Instrument the business to establish baseline KPIs, begin measuring, and leverage data for decision making.
· Calculate project profitability.
· Root cause analysis for poor performing projects.
· Develop and implement accountability standards for team members. Coach project leaders how to hold their teams accountable.
Business Operations
· Install a cadence for the leadership team – quarterly planning with execution details and weekly checkpoints.
· Streamline proposal writing process with standardized templates.
· Financial systems migration from QB Desktop to QB Online.
· Process improvements to streamline or automate work.
Product and Go To Market
· Opportunity sizing for metal building manufacturer software.
· Competitive landscape analysis for bringing offshore software solutions into the U.S. market.
· Written phased requirements for software development roadmap and resource planning.
· Authored business plan to inform go/no-go decision.
HR and Culture
· Facilitator for employee communications workshop.
· Change management workshop for leaders and managers.
· Extract core values from leadership; proliferate company-wide through multiple channels.
· Update and stimulate the employee review process.
· Recruit and interview candidates for skills, experience and culture fit.
· Create onboarding programs for new hires.
Marketing & Sales
· From-scratch marketing literature and website content to move into new market segments.
· Hands-on project management to create and execute marketing strategy.
· Create a sales team commission structure.
Taking Action
I used to think that if I wanted to start something new, I needed a “big idea” to make it worthwhile and increase the chances that it would be successful. Most of us will never conceive a lottery-winning “big idea.” And even if we did, the idea alone wouldn’t improve the odds of success. It would still need to be implemented, and that could mean talking to people, getting funding, building skills – in other words, doing stuff.
Taking action is what makes the difference, even with small ideas. Everyone can come up with many small ideas. I may not be clear what the total future potential of the idea is, but I can take action on it now, see where it goes, and then evolve. Success is a process.
Training Will Set You Free
Have you ever hired a new employee and, after assigning them a deluge of tasks, started struggling to come up with new things for them to do? Often, handing off tasks takes as much time – if not more – than just completing the task on your own. Instead of managing new hires through tasks, manage them through a training program. Start small. If you don’t have a training program, have the employee build it as they complete their tasks. Ask to review their documents and course-correct. They’ll develop a sense of ownership, and the time you spend with them is leveraged beyond the task itself. Once you’ve shown someone exactly how to do something, and you’re confident they know how to do it, you can stop assigning tasks and rely on them to do the job.
What is a Fractional COO?
I've begun working with multiple businesses as a fractional COO. This is a new space and many seasoned executives are looking at it, trying to figure out how to make it work. But what is a fractional COO? There seem to be a lot of different answers. Here is my vision and definition.
Just like a full-time COO, a fractional COO (fCOO) acts as a strategic partner to the business owner. The job of the fCOO is to develop the existing leadership into a team that can run the day-to-day without the business owner’s constant intervention. The fCOO drives execution of the owner's vision with the leadership team (and, by extension, the rest of the company).
The fCOO is the leader of the leadership team. This is not an outside consultant; the fCOO is part of the org structure and the leadership team reports to him or her, even in a fractional engagement. The fCOO ensures that all of the various functions and departments of the business are integrated with one another and that everyone is rowing in the same direction.
In contrast, the VP of Ops/General Manager is a member of the leadership team who reports to the fCOO. They are responsible for the actual product or service that the business provides to its customers.
One reason for confusion between the fCOO and the VP of Ops/GM roles is that in the majority of businesses where the COO is full-time, he or she also serves as the organization’s VP of Ops/GM. Eighty percent of the full-time COO role is managing the day-to-day business operations, and 20% of the role is strategic development of the owner's vision.
The fCOO focuses on the strategic 20%. That's why it's fractional (not full time). The fCOO may perform this service for multiple companies at the same time.
For most operations teams led by a VP of Ops/General Manager, the 20% strategic work often gets consumed in the whirlwind of day-to-day business. Client issues always take priority, and the company makes slow or no progress toward the business owner’s strategic goals. (This is often true even when a company has a full-time COO.)
Having a fCOO focused on the strategic 20% improves the likelihood that the business will make crucial changes and improvements to achieve the owner’s strategic vision. That’s not to say there aren’t trade-offs with a fCOO operations leader, but if everyone understands where the fCOO is focused - on strategic activities (without disrupting the day-to-day business) - the fCOO model can become highly effective.