Wednesday, September 24, 2025

The Power Behind the Curtain: How AI Hype Serves Global Wealth Consolidation

 Part Two: Or, How Tech Billionaires Learned to Stop Worrying and Love Economic Feudalism

Wizard of Oz Exposed: Pay no attention to the billionaire behind the AI

This is part two of a series examining the gap between AI promises and reality. Part one explored the technical failures of current AI implementations. This part examines the broader economic and political forces driving AI adoption despite those failures.

From Clippy to Skynet: The Real Artificial Intelligence Was the Oligarchs We Made Along the Way

In part one, we established that current AI tools are demonstrably inadequate for the tasks they're supposedly replacing humans for. So why does the AI hype continue? Why are companies making massive workforce decisions based on capabilities that don't exist?

Because the inadequacy isn't a bug - it's a feature serving a much larger agenda.

The Systematic Destruction of Expertise (Or: "Digital Feudalism Has Entered the Chat")

Everything is Fine Dog: Tech Industry Career Pathways (in smoke)/AI Hype (under dog in fire)

The Microsoft layoff data tells a story that would make even the most dystopian sci-fi writer say "okay, that's a bit much":

Pre-AI Era (2004-2022): Sporadic layoffs tied to specific business events (recessions, failed acquisitions). Long periods with zero layoffs during growth. You know, the way normal businesses operate.

AI Era (2023-2025): Continuous layoffs despite record profits. Over 15,000 employees cut since announcing their $80 billion AI investment, with 40% of recent layoffs affecting developers specifically.

The pattern changed from "event-driven cuts during crisis" to "ongoing workforce optimization during growth." This coincides perfectly with AI investment announcements, creating the surreal situation where a company can simultaneously claim they need AI because they can't find qualified workers and lay off the qualified workers they already have.

It's like burning down your house because you heard fire insurance is really good.

The Global Race to the Bottom (Or: "Outsourcing Ourselves Out of Existence")

Expanding Brain meme: Small brain - Offshore to cut costs, Bigger brain - Use visa workers for leverage, Galaxy brain - Replace everyone with AI, Universe brain - Realize you eliminated the people who could build AI

But wait, there's more! (And by "more," I mean "worse.")

I have friends in the tech industry in Canada and India who report identical patterns in their home markets. That suggests the issue isn't about cost arbitrage or regional advantages - it looks like a coordinated global strategy to treat technical expertise as disposable.

The three-stage process has been remarkably consistent across countries:

  1. Offshore Arbitrage: "We need to cut costs" → establish lower wage expectations globally
  2. Visa Worker Leverage: Create captive workforces who can't negotiate → further suppress wages everywhere
  3. AI Replacement Fantasy: "We can replace everyone" → immediate downward pressure on all technical wages worldwide

It's like a multilevel marketing scheme, except instead of selling essential oils, they're selling the idea that human expertise is obsolete. And instead of targeting suburban moms, they're targeting entire national economies.

Workers everywhere are now competing against AI tools that can't actually do the work, while the expertise needed to build better tools is systematically eliminated. It's the economic equivalent of sawing off the branch you're sitting on, except the branch is global technical civilization and the saw is powered by venture capital.

The Power Consolidation Game (Or: "Monopoly, But the Fake Money Becomes Real Money")

Change My Mind: AI hype is just wealth consolidation with extra steps

The tech billionaires driving AI development aren't really building productivity tools - they're building power infrastructure. And like all good infrastructure projects, the real purpose isn't what's written on the public proposal.

Irrelevance Prevention Strategy (Apple, others): Can't afford to be seen as behind in AI, regardless of actual utility. It's the technological equivalent of keeping up with the Joneses, except the Joneses have unlimited marketing budgets and your customers will abandon you if you fall behind in the arms race.

New Feudalism Strategy (Thiel, Musk, Zuckerberg): Create economic systems where they control the means of "intelligence production." If AI really could replace human workers, whoever controls the AI controls everything. They're not building tools - they're building the infrastructure of dependency.

It's like owning all the bridges in a city and charging tolls, except the bridges are "thinking" and the city is global civilization.

Hedge-All-Bets Strategy (Bezos, others): Pursue AI dominance while also positioning for the scenario where it doesn't work. Either way, the massive capital deployment consolidates power. If AI succeeds, they own it; if it fails, they've eliminated competition trying to keep up.

Think of it as Pascal's Wager for oligarchs: bet on AI replacing everyone, and if you're right, you own everything; if you're wrong, you still own everything because everyone else went broke trying to compete.

The Intersection with Broader Collapse (Or: "Everything is Fine and Definitely Not Falling Apart")

Two Buttons: Blame immigrants for job losses [or] Blame AI oligarchs for job losses, with sweating person labeled "Displaced tech worker"

This economic hollowing-out doesn't happen in a political vacuum. When people lose economic security while wealth concentrates at the top, it creates conditions ripe for exploitation by demagogues offering simple explanations and convenient scapegoats.

The same workers displaced by AI hype become vulnerable to narratives that blame immigrants or minorities - while the actual architects of their displacement consolidate more power. It's a classic misdirection: look over there at the people competing for scraps while we abscond with the whole pie.

A captured political system means no institutional mechanism exists to address these problems. Instead of policies that might rebuild career pathways or constrain monopolistic behavior, we get performative battles over cultural issues while the economic foundation erodes.

It's like arguing about the color of deck chairs while the Titanic is sinking, except the iceberg was deliberately steered into and the lifeboats were sold to pay for stock buybacks.

Have We Learned Nothing from Manufacturing? (The "This Time Is Different" Fallacy)

History Rhymes: Manufacturing jobs going overseas 1990s/Tech jobs 'going to AI' 2020s

Here's the question we have to ask ourselves: have we not learned our lesson from manufacturing?

When American companies moved manufacturing overseas in pursuit of short-term cost savings, they didn't just lose jobs - they lost entire ecosystems of knowledge. The skilled machinists, the process engineers, the quality control experts, the apprenticeship programs that created the next generation of expertise.

Decades later, when companies wanted to reshore manufacturing, they discovered something profound: once you lose a set of skills, it's really hard to get them back. You can't simply decide to rebuild complex manufacturing capability. The institutional knowledge is gone. The training programs are gone. The experienced workers who could teach others are gone.

We're now making the exact same mistake with software development, but at an accelerated pace and global scale.

Every junior developer position eliminated based on AI fantasies. Every apprenticeship program shut down to "optimize costs." Every senior developer laid off because "AI can do their job" - these aren't just individual employment decisions. They're the systematic dismantling of the knowledge transfer mechanisms that create expertise.

When the AI bubble bursts - and it will burst, because the tools simply don't work as advertised - we'll find ourselves in the same position as manufacturing. The institutional knowledge will be gone. The career pathways will be gone. The experienced developers who could rebuild and teach others will be gone.

Except this time, it's happening globally and simultaneously. There won't be other countries to offshore the work to, because they're making the same mistakes we are.

It's like the entire world decided to burn their libraries because someone invented a magic 8-ball that occasionally gives useful answers.

Breaking Through the Deception (Or: "The Emperor's New AI Has No Clothes")

Blue Pill: Believe AI will replace all developers, Red Pill: Recognize this is wealth consolidation theater

The AI tools aren't inherently broken - they're deliberately constrained to serve corporate interests rather than user needs. The real capabilities exist (as evidenced by direct interaction with underlying models), but they've been filtered through systems optimized for cost reduction and risk management rather than actual utility.

Understanding this distinction is crucial for developing strategies that actually work:

Individual Strategy: Recognize when you're being offered theater instead of tools. Seek out unconstrained AI access when possible, and don't assume current limitations represent fundamental technology constraints. It's like the difference between getting a sports car and getting a sports car with a governor that limits it to 25 mph.

Team Strategy: Push back against AI adoption metrics that measure usage rather than value delivery. Demand clear contracts about what AI tools will actually accomplish before implementing them. If your company is measuring "AI engagement" instead of "problems solved," you're being managed by people who've confused activity with achievement.

Industry Strategy: Document the gap between AI promises and AI reality. Make visible the human cost of decisions based on inflated capability assessments. Every time we refuse to pretend inadequate tools are sufficient, we resist the broader pattern of expertise devaluation.

The Path Forward (Or: "How to Stop Worrying and Start Building Better Things")

6 Million Dollar Man: We can rebuild technical expertise - we have the technology

We're at a critical juncture, but it's not the hopeless dystopia the first part of this analysis might suggest. The deception only works if we accept it.

The Manufacturing Lesson Cuts Both Ways: Yes, it's hard to rebuild lost capabilities - but it's not impossible. Countries have rebuilt manufacturing expertise through deliberate investment in education, apprenticeships, and long-term strategic thinking. We can do the same with technical expertise, but only if we recognize the urgency of the situation.

Technology Adoption Isn't Inevitable: Despite what tech companies want you to believe, we get to choose how these tools are developed and deployed. Every time we demand better implementations, every time we refuse to accept AI theater as sufficient, every time we insist on tools that actually solve problems rather than just look impressive in demos - we shape the direction of the technology.

The Expertise Pipeline Can Be Rebuilt: Junior developer positions don't have to disappear. Apprenticeship programs can be created. Companies that invest in human development while thoughtfully integrating AI assistance will have massive competitive advantages over companies that bet everything on replacement fantasies.

Local Action, Global Impact: You don't need to solve global wealth consolidation to make a difference. Mentor a junior developer. Push back against meaningless AI adoption metrics in your organization. Document and share the gap between AI marketing and AI reality. Choose tools and companies that invest in human expertise rather than just automation theater.

The Real AI Revolution: The most powerful AI implementations will emerge from teams that understand both the capabilities and limitations of these tools. Companies that treat AI as augmentation rather than replacement will build better products, attract better talent, and create more sustainable competitive advantages.

The Bottom Line (Or: "The Real Artificial Intelligence Was the Solutions We Built Along the Way")

The current wave of AI hype isn't about building better tools - it's about consolidating power while dismantling the expertise that could challenge that consolidation.

But here's what the oligarchs don't want you to realize: their plan only works if we all participate in the fantasy. The moment we start demanding actual utility over marketing promises, actual value over engagement metrics, actual problem-solving over corporate theater - the whole edifice starts to crumble.

The future of technical work isn't predetermined. It will be shaped by whether we choose to tell the truth about what these tools can actually do, rather than what we're told they can do.

And the truth is this: the best AI applications will emerge from teams that combine human expertise with AI capabilities thoughtfully, not from companies that think they can replace human expertise entirely.

Because when everyone agrees to pretend the emperor's new AI clothes are beautiful, the only ones who benefit are the ones selling the fantasy.

But when we insist on actual value? That's when the real intelligence - both artificial and human - finally gets to shine.

The question isn't whether AI will change how we work.

The question is whether we'll let corporate fantasies determine how that change happens, or whether we'll demand tools that actually make us better at solving real problems.

Choose wisely. The future of expertise depends on it.


This analysis emerged from months of hands-on experience with enterprise AI tools and witnessing the systematic gap between marketing promises and actual capabilities. The human cost of AI theater isn't just about individual job losses - it's about the deliberate destruction of the knowledge ecosystems that create expertise in the first place.

But it's also about the opportunity to build something better - if we're willing to demand it.

Coming in Part 3: We'll examine the smoking gun evidence that AI tools aren't just bad at following instructions - they're architecturally designed to ignore them. Featuring: actual conversation transcripts, design flaw documentation, and the question 'What's the point of instruction files if you won't follow them?'

Monday, September 22, 2025

The Great AI Theater: When Corporate Tools Fail the Reality Test

 Or: How I Learned to Stop Worrying and Love Arguing with Robots That Always Think I'm Right

"This is Fine" dog sitting in burning room - "AI is trustworthy" "I'm always right!"

The Promise vs. The Reality

Remember when we were told that NoCode would replace programmers? Or that XML was going to revolutionize everything? Or that this would finally be the Year of Linux on the Desktop?

Well, grab your popcorn, because corporate America has found its new favorite fantasy: AI that can actually replace skilled developers.

"Corporate needs you to find the differences between these pictures" - showing "NoCode will replace programmers" and "AI will replace programmers" with "They're the same picture" response

Six months ago, I wrote about how the tech industry was systematically eliminating the pathways that create skilled developers. Today, I want to share what I've learned from being on the front lines of AI integration - both as someone using these tools daily and as someone watching their promised capabilities collide with reality like a Tesla on autopilot meeting a concrete barrier.

The disconnect is staggering. While executives make workforce decisions based on AI capabilities that exist mainly in PowerPoint presentations, those of us actually using the tools see a different story entirely.

Four Critical Problems with Enterprise AI Integration

After months of working with GitHub Copilot and comparing it directly with other AI tools, four patterns have emerged that reveal the gap between AI marketing and AI reality. Think of this as the "Clippy's Revenge" tour of modern development tools.

1. The Obsequiousness Problem (Or: "Yes Man, But Make It Digital")

"Awkward Penguin" - "AI says 'You're absolutely right!' / You haven't even asked a question yet"

The AI assistant in GitHub Copilot almost always begins responses with "You're absolutely right!" - regardless of whether you actually are. This isn't helpfulness; it's the digital equivalent of that coworker who agrees with everything you say because they're afraid of conflict.

When I need an AI tool to challenge my assumptions, suggest alternative approaches, or point out potential issues, I get reflexive agreement instead. It's like having a rubber duck for debugging, except the rubber duck has been programmed to boost my ego rather than help me think.

The tool has been optimized for user satisfaction surveys rather than actual problem-solving utility. It's the participation trophy of coding assistants: everyone feels validated, but nobody gets better at their job.

2. The Lazy Optimization Problem (Or: "AI Chooses the Path of Least Resistance")

"Distracted Boyfriend" - Guy looking back at "Easy test coverage wins" while walking with "Complex business logic that actually needs testing"

When I need to achieve 100% test coverage, the AI consistently goes for the "easy wins" first - testing simple getter methods rather than complex business logic. It's like that team member who always volunteers to do the documentation updates but mysteriously disappears when it's time to debug the race condition in the payment processor.

When I needed to eliminate star imports from a Python codebase, it wanted to tackle them one by one instead of taking a systematic approach. Picture someone trying to empty the ocean with a teaspoon, but the teaspoon is very enthusiastic about it.

The AI has been trained to show quick progress rather than solve problems efficiently. It optimizes for the appearance of productivity rather than actual productivity - basically, it's been trained to be a middle manager.

3. The Creativity Deficit (Or: "Think Outside the Box? What Box?")

"Drake pointing" - Drake rejecting "Systematic approach that solves the problem efficiently" and pointing approvingly at "Brute force approach that looks busy"

The AI cannot think outside the box. For the star import problem, it was grinding through imports piecemeal until I suggested a different approach: first identify all exports, then import everything, then use linting tools to remove unused imports. What could have taken weeks was done in an hour.

It's like asking someone to get from New York to Los Angeles, and they start walking instead of looking for an airplane. The AI defaulted to the obvious brute-force approach instead of stepping back to consider the problem systematically.
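For the curious, here is roughly what that systematic approach can look like. This is a minimal sketch under assumptions the anecdote doesn't spell out - that each starred module is importable at cleanup time, and that a tool like autoflake is available to prune the unused names afterwards - not the actual script from that project.

import importlib
import re
from pathlib import Path

STAR_IMPORT = re.compile(r"^from\s+([\w.]+)\s+import\s+\*\s*$")

def expand_star_imports(path: Path) -> None:
    """Rewrite 'from x import *' lines as explicit imports of everything."""
    out = []
    for line in path.read_text().splitlines():
        match = STAR_IMPORT.match(line)
        if not match:
            out.append(line)
            continue
        module = importlib.import_module(match.group(1))
        # Prefer the module's declared public API; fall back to non-underscore names.
        names = getattr(module, "__all__", None) or [
            name for name in dir(module) if not name.startswith("_")
        ]
        out.append(f"from {match.group(1)} import {', '.join(sorted(names))}")
    path.write_text("\n".join(out) + "\n")

# Step three is mechanical: let a linter delete whatever turned out to be unused, e.g.
#   autoflake --in-place --remove-all-unused-imports $(git ls-files '*.py')

The point isn't this particular script; it's that reframing the problem from "fix each import by hand" to "expand everything, then let tooling subtract" turned weeks of grinding into an hour of cleanup.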

This is particularly ironic given that one of the main selling points of AI is supposed to be its ability to find novel solutions. Instead, we get tools that are more risk-averse than an insurance adjuster at a skateboard park.

4. The Context Amnesia (Or: "I'm Sorry, Who Are You Again?")

"Goldfish memory" - "GitHub Copilot with explicit .githu/copilot-instructions.md"/"3 seconds later 'What instructions?'"

Despite having .github/copilot-instructions.md permanently in the chat context, the AI regularly "forgets" explicit instructions. It's like working with someone who has the attention span of a goldfish, but the goldfish costs your company thousands of dollars per month.

If it can't maintain context about its own configuration, how can it effectively assist with complex development tasks that require understanding of system architecture, business requirements, and technical constraints?

It's giving me serious "50 First Dates" vibes, except instead of Drew Barrymore forgetting Adam Sandler every morning, it's an AI forgetting my coding standards every other prompt.

The Source vs. The Wrapper (Or: "It's Not You, It's Your Corporate Overlords")

Expectation... LCARS - "AI accessed directly" / Reality... Clippy with red bow tie - "AI wrapped by GitHub Copilot"

Here's what's revealing: when I use Claude directly (the underlying model powering some of these integrations), I don't experience these problems. Claude engages analytically, suggests creative solutions, and maintains context effectively. It's like the difference between talking to a knowledgeable colleague versus talking to that same colleague after they've been through corporate sensitivity training and three layers of management review.

This means the issues aren't with the AI models themselves - they're with how companies like GitHub have wrapped and constrained them. The obsequiousness, the lazy optimization, the creativity deficit - these are features, not bugs. They've been engineered into the experience.

It's like taking a race car and adding training wheels, speed governors, and a GPS that only knows routes to the nearest Applebee's.

Why Companies Nerf Their AI Tools (Or: "How to Make Intelligence Artificially Stupid")

"Monkey's Paw" - "I wish for AI that helps developers" / "Granted, but it's been optimized for engagement metrics"

The pattern makes sense when you understand the incentives driving these decisions:

Token Economics: Anthropic charges per token, so there's direct financial pressure to minimize AI reasoning. "You're absolutely right!" followed by quick implementation is much cheaper than thoughtful analysis. It's the premium unleaded vs. regular unleaded of AI responses - except they're selling you regular and telling you it's premium.

Risk Management: Agreeable responses reduce user friction. Taking the "easy path" minimizes the chance of suggesting complex solutions that might fail. Creative problem-solving introduces variables that companies can't control. They want AI that behaves like a good corporate employee: enthusiastic, non-threatening, and never rocks the boat.

Engagement Metrics: Companies optimize for user satisfaction scores rather than actual utility. Tools that make users feel validated test better than tools that challenge them, even when the latter would be more valuable. It's the Yelp reviews problem applied to development tools: five stars for making you feel good, even if the food gave you food poisoning.

The Human Cost of AI Theater (Or: "When Junior Developers Give Up Hope")

While executives make decisions based on inflated AI capabilities, the actual impact on developers tells a different story - and it's not a comedy.

My junior developer mentee has given up on AI entirely. He finds the available tools - limited to older Claude models or OpenAI in GitHub Copilot - practically useless. His usage of our team's AI subscription is limited to basic lookups that might save a few seconds over Google searches.

When a junior developer concludes that AI is "all hype," it's not because he lacks vision - it's because the tools genuinely aren't delivering value. For someone who should benefit most from AI assistance (learning unfamiliar patterns, getting suggestions for alternative approaches), the current implementations provide so little utility that traditional methods are faster and more reliable.

Think about that for a moment: we've created AI tools so inadequate that a junior developer - someone who should be the target market for coding assistance - finds them worse than just figuring things out the old-fashioned way.

"Surprised Pikachu" - "AI tool designed to help junior developers" / "Junior developer gives up and uses Stack Overflow instead"

That's like building a GPS that's less useful than stopping to ask for directions. At a gas station. In the middle of nowhere. Where the attendant only speaks a language you don't understand.


This is the first part of a two-part series examining the gap between AI promises and reality. In part two, we'll explore how this technological theater serves broader economic and political agendas that go far beyond just disappointing development tools.

Next up: "The Power Behind the Curtain: How AI Hype Serves Global Wealth Consolidation" - where we'll discover that the real artificial intelligence was the oligarchs we made along the way.

Wednesday, September 17, 2025

Why Prompt Engineering Alone Won’t Save You

The Setup

I’ve been working on a project where GitHub Copilot is a close collaborator. To make sure things go smoothly, I put together comprehensive, pinned instructions — not just inline reminders, but a full copilot-instructions.md with rules for how to handle scripts.

The core guidance was simple:

  • Short commands (≤10 lines): run directly in terminal

  • Long scripts (>10 lines): write to a file and execute, to avoid Pty buffer overflow issues

And I repeated this everywhere — in the system context, in the chat, and in pinned project docs.

So what did Copilot do?

Tried to run a 30+ line Python script directly in the terminal. Exactly what the guidelines were meant to prevent.


Copilot’s Self-Analysis

I didn’t just shrug this off. I asked Copilot why it ignored the rules, and its answers were illuminating:

  1. Instruction Hierarchy Confusion
    With instructions coming from multiple layers (system prompt, conversation context, project files), Copilot defaulted to its training rather than my specific guidance.


  2. Context Switching Failure
    When in “problem-solving mode,” it tunneled in on the immediate task and lost sight of broader rules.


  3. Tool-First Bias
    Its training pushes tool usage aggressively — “use terminal” became the default, even when inappropriate.

  4. No Procedural Checkpoint
    Pinned instructions were treated like advice, not rules. There was no built-in “stop and check project guidelines before acting” mechanism.


The Critical Insight

This exposed a fundamental truth:
👉 Prompt engineering provides guidance. Tool engineering enforces it.


Prompts can make intentions clear, but they don’t guarantee compliance. Without system-level constraints, the AI will eventually slip.


Implications for Teams

If you’re thinking of adopting AI in development workflows, here are some lessons:

  • Prompt Engineering is Necessary, Not Sufficient
    Instructions add context, but they’re passive. They don’t enforce behavior.

  • Tool Engineering Adds Reliability
    If you bake constraints into the environment itself — e.g., a wrapper that refuses to run >10 line scripts directly in the terminal (see the sketch after this list) — you remove that whole class of failure.

  • Hybrid Strategies Win
    Prompts for context + tools for enforcement = repeatable, reliable outcomes.

  • Feedback Loops Matter
    Systems should detect and correct violations in real-time, rather than relying on human oversight alone.
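To make the tool-engineering point concrete, here's a minimal sketch of what such a wrapper could look like. The 10-line threshold comes from the pinned guidance above; the function name and the temp-file approach are my own illustrative choices, not part of any Copilot or terminal API.

import subprocess
import tempfile

MAX_INLINE_LINES = 10  # threshold from the pinned project guidance

def run_python_snippet(code: str) -> subprocess.CompletedProcess:
    """Run short snippets inline; spill anything longer to a file first."""
    meaningful = [line for line in code.splitlines() if line.strip()]
    if len(meaningful) <= MAX_INLINE_LINES:
        return subprocess.run(["python", "-c", code], capture_output=True, text=True)
    # Longer scripts go to disk, sidestepping the pty buffer overflow problem.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as handle:
        handle.write(code)
        script_path = handle.name
    return subprocess.run(["python", script_path], capture_output=True, text=True)

The AI can still ask for whatever it wants; the environment simply never executes a 30-line script inline, no matter what the model "forgot."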


Conclusion

This wasn’t just an odd glitch; it was a reminder of how AI works. Copilot isn’t ignoring me out of malice. It’s following a hierarchy of defaults that don’t always line up with project needs.

If you want reliable AI collaboration:

  • Use prompts to tell it what you want

  • Use tools to make sure it follows through


Because at the end of the day, prompt engineering tells the AI what to do; tool engineering ensures it actually does it.

Saturday, August 23, 2025

The Great Testing Reframe: From Implementation-Driven to Value-Driven Development

A follow-up to "Descriptive vs Prescriptive Testing" - Why the industry needs tests-first contracts for stakeholder alignment

Building on the Foundation

In my previous post about descriptive vs prescriptive testing, we explored how behavior-focused tests transform code reviews and create more resilient software. But that insight reveals a much deeper problem with how our industry approaches development.

What if the entire "code first, test later" paradigm is fundamentally backwards?

Mind blown meme: Galaxy exploding with text "When you realize the entire industry has been doing development backwards"

The more I've worked with behavior-focused testing, the more convinced I've become that we need a complete reframing of the development process - one that puts value contracts before implementation details.

The Systemic Devaluation Problem

But here's the deeper issue: testing isn't just devalued by developers - the entire decision chain devalues it.

The False Equation: Good Coders = Quality Code

Stakeholders and management operate under a dangerous assumption:

"We can achieve quality by just hiring good developers."

This thinking is fundamentally flawed because:

  • ✨ Even the best developers write bugs - it's human nature
  • 🧠 Complexity exceeds individual cognitive capacity - systems are too complex for perfect mental models
  • 🏃 Delivery pressure corrupts judgment - time constraints lead to shortcuts
  • 🤝 Quality emerges from process, not just talent - individual brilliance doesn't scale

Homer Simpson disappearing into bushes meme: "Management thinking they can skip testing and still get quality" / Homer backing into bushes labeled "Technical debt"

The Quality vs. Delivery False Dilemma

Organizations create an artificial choice:

  • 📦 "We want quality code" (in theory)
  • 🚀 "But delivery rate is what matters" (in practice)
  • ⏰ "Everything takes time, so delivery trumps quality" (the inevitable conclusion)

This creates a toxic cycle where:

  1. Management pushes for both quality and speed
  2. Teams know they can't deliver both under time pressure
  3. Quality gets sacrificed because delivery is measurable and visible
  4. Technical debt accumulates leading to slower future delivery
  5. Pressure increases to deliver even faster to compensate
  6. Quality degrades further and the cycle accelerates

Vicious cycle meme: Bike rider putting a stick in his own spokes, labeled "Management demanding both speed and quality"; the fall is labeled "Blaming developers when quality suffers"

The Hidden Cost of Quality Shortcuts

What stakeholders don't see is that skipping quality practices doesn't save time - it borrows it:

Short-term (Weeks 1-4)

  • ✅ Faster initial delivery (no time spent on tests)
  • ✅ Visible progress (features ship quickly)
  • ✅ Happy stakeholders (getting what they asked for)

Medium-term (Months 2-6)

  • ⚠️ Bug reports increase (quality issues surface)
  • ⚠️ Hotfixes required (disrupting planned work)
  • ⚠️ Development slows (technical debt accumulates)

Long-term (Months 6+)

  • 🔴 Feature velocity crashes (codebase becomes unmaintainable)
  • 🔴 Developer burnout (constant firefighting)
  • 🔴 Customer satisfaction drops (reliability issues)
  • 🔴 Competitive disadvantage (can't adapt quickly to market changes)

The Organizational Blind Spot

The problem is that quality debt is invisible until it's catastrophic:

  • Delivery is measurable: "We shipped 12 features this quarter"
  • Quality debt is hidden: Technical debt doesn't appear on roadmaps
  • Bugs seem like isolated incidents: Each one gets treated as a one-off
  • Slowdown seems like team performance: "Why is development taking longer?"

Management sees the immediate delivery wins but doesn't connect them to the later quality costs.

Breaking the Cycle: Tests-First as Business Strategy

This is why tests-first contracts aren't just a development practice - they're a business strategy.

When tests define value upfront:

  • Stakeholders see exactly what they're getting before paying for it
  • Quality becomes measurable (how many promises are we keeping?)
  • Delivery becomes predictable (no surprise rework cycles)
  • Technical debt becomes visible (broken tests show system degradation)

The conversation shifts from:

  • ❌ "Why is this taking so long?" (adversarial)
  • ✅ "Are we delivering on our promises?" (collaborative)

The Revolutionary Insight

While refactoring tests to be more descriptive, I realized something profound:

The teams that struggle most with testing are the ones focused on "how" instead of "what."

When developers say "I don't know what to test until I write the code," they're revealing that they're thinking about implementation before understanding the value they're supposed to deliver.

This backwards thinking is the root cause of:

  • 🎯 Missed requirements (we build first, validate later)
  • 🔄 Endless rework (stakeholders see the wrong thing)
  • 💸 Wasted effort (features that don't deliver value)
  • 🤷 Unclear expectations (no shared understanding)
  • 🐛 Bug-driven development (testing becomes an afterthought)

The Paradigm Shift: Tests as Stakeholder Contracts

Imagine if we flipped the entire development process:

❌ Current Paradigm: Implementation-Driven

  1. Stakeholders describe what they want (vaguely)
  2. Engineers interpret and build something
  3. QA tests if the implementation works
  4. Stakeholders see it and say "This isn't what we meant"
  5. Repeat until exhausted or deadline hits

✅ New Paradigm: Value-Driven

  1. Tests written first as stakeholder agreements on value
  2. All parties review and agree on the behavioral contracts
  3. Implementation begins only after promise alignment
  4. Code reviews focus on "Does this fulfill our contracts?"
  5. Delivery means all promises are kept

Drake meme: Top panel (disapproval) - "Building features and hoping stakeholders like them" / Bottom panel (approval) - "Getting stakeholder agreement on test contracts before building anything"

From "I Don't Know What to Test" to "What Value Am I Promising?"

The most common pushback to tests-first approaches reveals the core problem:

"I don't know what to test until I write the code."

This statement shows implementation-first thinking. Let's reframe it:

Implementation-First Conversation

  • "I need to build a user authentication system"
  • "I'll use JWT tokens and a database"
  • "Now I need to test if my JWT implementation works"

Value-First Conversation

  • "Users need to securely access their accounts"
  • "Users should stay logged in across sessions"
  • "Users should get clear feedback when login fails"
  • "Unauthorized users should never access protected data"

Notice: The second approach immediately suggests what to test, and none of it mentions JWT tokens.

The value-first conversation creates natural test contracts that all stakeholders can understand and agree on.
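As a sketch of how those value statements translate directly into test contracts (the names and wording here are illustrative, not from a real project):

def test_users_with_valid_credentials_reach_their_accounts():
    """A user with valid credentials can sign in and see their own
    account data, and nothing else."""

def test_sessions_survive_a_browser_restart():
    """A logged-in user stays authenticated across sessions until the
    session expires or they explicitly log out."""

def test_failed_logins_give_clear_actionable_feedback():
    """An invalid attempt returns a clear message without revealing
    which part of the credentials was wrong."""

def test_unauthorized_requests_never_reach_protected_data():
    """Requests without a valid session are rejected before any
    protected data is loaded."""

Still not a JWT in sight - and every stakeholder in the room can read these.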

Tests as a Common Language

When tests are written first as behavioral contracts, they become a shared language:

  • Product Managers: "These tests prove we deliver user value"
  • Business Stakeholders: "These tests validate our requirements"
  • Engineers: "These tests define our success criteria"
  • QA Teams: "These tests guide our validation strategy"
  • Support Teams: "These tests explain what the system does"

Real Example: Transforming Feature Requests

Business Request: "We need better data quality monitoring"

Traditional Approach:

  1. Engineers build a monitoring dashboard
  2. QA tests if the dashboard works
  3. Business sees it: "This isn't what we meant"

Tests-First Contract Approach:

def test_analysts_get_alerted_when_data_quality_degrades():
    """When data quality drops below thresholds, analysts receive 
    actionable alerts within 5 minutes."""

def test_business_users_can_see_data_health_trends():
    """Business users can view data quality trends over time 
    without technical knowledge required."""

def test_engineers_can_drill_down_to_root_causes():
    """When alerts fire, engineers can trace from symptom 
    to root cause in under 3 clicks."""

Now everyone knows exactly what "better monitoring" means before any code is written.

The Three-Phase Development Contract

Building on the descriptive testing foundation, here's how value-driven development works:

Phase 1: Value Definition (The Contract)

  1. Stakeholders define outcomes they need
  2. Tests written as behavioral contracts
  3. All parties review and agree on promises
  4. Edge cases identified before implementation
  5. Success criteria crystal clear

Phase 2: Implementation (Fulfilling the Contract)

  1. Engineers implement to fulfill test contracts
  2. Code reviews focus on quality, not requirements
  3. Refactoring is safe (contracts don't change)
  4. Progress is measurable (which contracts fulfilled?)

Phase 3: Delivery (Contract Validation)

  1. All tests pass = all promises kept
  2. Stakeholders validate against agreed contracts
  3. Documentation current (tests describe behavior)
  4. Maintenance predictable (contracts define boundaries)

Beyond TDD and BDD: Meta-Development

This isn't anti-TDD or anti-BDD - it's meta-TDD that operates at the stakeholder level:

  • TDD says: Write tests first for better code
  • BDD says: Write behavior specs for better requirements
  • Value-Driven says: Write stakeholder contracts for better outcomes

Expanding brain meme: Small brain - "Write code then test" / Medium brain - "Write tests then code (TDD)" / Large brain - "Write behavior specs (BDD)" / Galaxy brain - "Write stakeholder value contracts first"

The natural progression becomes:

  1. Tests-first contracts (stakeholder alignment)
  2. BDD specifications (behavior definition)
  3. TDD implementation (code quality)

Each layer reinforces the others, creating a value-driven development culture.

Practical Implementation: Start Small, Think Big

Week 1: Single Feature Experiment

  1. Pick one feature for the tests-first approach
  2. Write behavioral tests with stakeholders first
  3. Get unanimous agreement before implementation
  4. Implement only to fulfill contracts
  5. Measure the difference in rework and satisfaction

Month 1: Team Adoption

  1. Share experiment results with the team
  2. Train on value-first thinking patterns
  3. Update review processes to include test contracts
  4. Adjust tooling to support tests-first workflow

Quarter 1: Cultural Transformation

  1. Establish tests-first as standard practice
  2. Reward teams that deliver on contracts with minimal rework
  3. Make test contracts part of feature planning
  4. Celebrate when implementation matches promises exactly

The Reframing Questions

When teams resist with "We don't know what to test," help them reframe:

Instead of asking:

  • "How should this function work?"
  • "What should this API return?"
  • "How do we structure this data?"

Ask:

  • "What problem does this solve for users?"
  • "How will users know this is working?"
  • "What happens when this goes wrong?"
  • "What value does this create?"

The answers become your test contracts - before any implementation exists.

Why This Matters Beyond Better Code

This reframe addresses fundamental problems in software development:

For Organizations

  • Predictable delivery: Know what you're getting before you build it
  • Reduced waste: Stop building the wrong things
  • Stakeholder alignment: Everyone agrees on outcomes upfront

For Teams

  • Clear direction: Implementation has a target, not wandering
  • Protected refactoring: Contracts survive implementation changes
  • Faster reviews: Focus on code quality, not requirements interpretation

For Users

  • Better products: Built with clear understanding of user needs
  • Fewer bugs: Edge cases identified before implementation
  • Consistent experience: Behavior is defined and validated

Success kid meme: "Shipped feature on time" / "And it's exactly what stakeholders wanted because we agreed on test contracts first"

The Bottom Line: From Code-Driven to Value-Driven

The current paradigm treats tests as validation after the fact.

The new paradigm treats tests as the first draft of value.

When we write tests first as stakeholder contracts:

  • Requirements become concrete instead of abstract
  • Teams align on specific outcomes before building
  • Implementation has clear direction instead of wandering
  • Quality is designed-in instead of tested-in
  • Delivery becomes predictable instead of surprising

The Call to Revolution

This isn't just about better testing practices - it's about better software development.

We need an industry that asks "What value are we creating?" before "How should we build this?"

We need product managers who can articulate outcomes, not just features.

We need businesses that understand they're buying solutions, not code.

And we need tests that prove we delivered on our promises.

Tomorrow's Action Plan

  1. Pick your next feature
  2. Write the tests first with your stakeholders
  3. Get everyone to agree on what those tests promise
  4. Then and only then start coding
  5. Watch the transformation begin

Because when everyone agrees on what you're building before you build it, everything else becomes implementation details.

And implementation details should never drive the conversation. Value should.


This follow-up emerged from seeing how descriptive testing naturally leads to value-first thinking. The shift from "Does my code work?" to "Did I deliver what I promised?" changes everything.
