Wednesday, September 24, 2025

The Power Behind the Curtain: How AI Hype Serves Global Wealth Consolidation

 Part Two: Or, How Tech Billionaires Learned to Stop Worrying and Love Economic Feudalism

Wizard of Oz Exposed: Pay no attention to the billionaire behind the AI

This is part two of a series examining the gap between AI promises and reality. Part one explored the technical failures of current AI implementations. This part examines the broader economic and political forces driving AI adoption despite those failures.

From Clippy to Skynet: The Real Artificial Intelligence Was the Oligarchs We Made Along the Way

In part one, we established that current AI tools are demonstrably inadequate for the tasks they're supposedly replacing humans for. So why does the AI hype continue? Why are companies making massive workforce decisions based on capabilities that don't exist?

Because the inadequacy isn't a bug - it's a feature serving a much larger agenda.

The Systematic Destruction of Expertise (Or: "Digital Feudalism Has Entered the Chat")

Everything is Fine Dog: Tech Industry Career Pathways (in smoke)/AI Hype (under dog in fire)

The Microsoft layoff data tells a story that would make even the most dystopian sci-fi writer say "okay, that's a bit much":

Pre-AI Era (2004-2022): Sporadic layoffs tied to specific business events (recessions, failed acquisitions). Long periods with zero layoffs during growth. You know, the way normal businesses operate.

AI Era (2023-2025): Continuous layoffs despite record profits. Over 15,000 employees cut since announcing their $80 billion AI investment, with 40% of recent layoffs affecting developers specifically.

The pattern changed from "event-driven cuts during crisis" to "ongoing workforce optimization during growth." This coincides perfectly with AI investment announcements, creating the surreal situation where a company can simultaneously claim they need AI because they can't find qualified workers and lay off the qualified workers they already have.

It's like burning down your house because you heard fire insurance is really good.

The Global Race to the Bottom (Or: "Outsourcing Ourselves Out of Existence")

Expanding Brain meme: Small brain - Offshore to cut costs, Bigger brain - Use visa workers for leverage, Galaxy brain - Replace everyone with AI, Universe brain - Realize you eliminated the people who could build AI

But wait, there's more! (And by "more," I mean "worse.")

I have friends in the tech industry from Canada and India who report identical patterns in their home markets. This suggests the issue isn't about cost arbitrage or regional advantages - it looks like a coordinated global strategy to treat technical expertise as disposable.

The three-stage process has been remarkably consistent across countries:

  1. Offshore Arbitrage: "We need to cut costs" → establish lower wage expectations globally
  2. Visa Worker Leverage: Create captive workforces who can't negotiate → further suppress wages everywhere
  3. AI Replacement Fantasy: "We can replace everyone" → immediate downward pressure on all technical wages worldwide

It's like a multilevel marketing scheme, except instead of selling essential oils, they're selling the idea that human expertise is obsolete. And instead of targeting suburban moms, they're targeting entire national economies.

Workers everywhere are now competing against AI tools that can't actually do the work, while the expertise needed to build better tools is systematically eliminated. It's the economic equivalent of sawing off the branch you're sitting on, except the branch is global technical civilization and the saw is powered by venture capital.

The Power Consolidation Game (Or: "Monopoly, But the Fake Money Becomes Real Money")

Change My Mind: AI hype is just wealth consolidation with extra steps

The tech billionaires driving AI development aren't really building productivity tools - they're building power infrastructure. And like all good infrastructure projects, the real purpose isn't what's written on the public proposal.

Irrelevance Prevention Strategy (Apple, others): Can't afford to be seen as behind in AI, regardless of actual utility. It's the technological equivalent of keeping up with the Joneses, except the Joneses have unlimited marketing budgets and your customers will abandon you if you fall behind in the arms race.

New Feudalism Strategy (Thiel, Musk, Zuckerberg): Create economic systems where they control the means of "intelligence production." If AI really could replace human workers, whoever controls the AI controls everything. They're not building tools - they're building the infrastructure of dependency.

It's like owning all the bridges in a city and charging tolls, except the bridges are "thinking" and the city is global civilization.

Hedge-All-Bets Strategy (Bezos, others): Pursue AI dominance while also positioning for the scenario where it doesn't work. Either way, the massive capital deployment consolidates power. If AI succeeds, they own it; if it fails, they've eliminated competition trying to keep up.

Think of it as Pascal's Wager for oligarchs: bet on AI replacing everyone, and if you're right, you own everything; if you're wrong, you still own everything because everyone else went broke trying to compete.

The Intersection with Broader Collapse (Or: "Everything is Fine and Definitely Not Falling Apart")

Two Buttons: Blame immigrants for job losses [or] Blame AI oligarchs for job losses, with sweating person labeled "Displaced tech worker"

This economic hollowing-out doesn't happen in a political vacuum. When people lose economic security while wealth concentrates at the top, it creates conditions ripe for exploitation by demagogues offering simple explanations and convenient scapegoats.

The same workers displaced by AI hype become vulnerable to narratives that blame immigrants or minorities - while the actual architects of their displacement consolidate more power. It's a classic misdirection: look over there at the people competing for scraps while we abscond with the whole pie.

A captured political system means no institutional mechanism exists to address these problems. Instead of policies that might rebuild career pathways or constrain monopolistic behavior, we get performative battles over cultural issues while the economic foundation erodes.

It's like arguing about the color of deck chairs while the Titanic is sinking, except the ship was steered into the iceberg on purpose and the lifeboats were sold to pay for stock buybacks.

Have We Learned Nothing from Manufacturing? (The "This Time Is Different" Fallacy)

History Rhymes: Manufacturing jobs going overseas 1990s/Tech jobs 'going to AI' 2020s

Here's the final question we must ask ourselves: Have we not learned our lesson with manufacturing?

When American companies moved manufacturing overseas in pursuit of short-term cost savings, they didn't just lose jobs - they lost entire ecosystems of knowledge. The skilled machinists, the process engineers, the quality control experts, the apprenticeship programs that created the next generation of expertise.

Decades later, when companies wanted to reshore manufacturing, they discovered something profound: once you lose a set of skills, it's really hard to get them back. You can't simply decide to rebuild complex manufacturing capability. The institutional knowledge is gone. The training programs are gone. The experienced workers who could teach others are gone.

We're now making the exact same mistake with software development, but at an accelerated pace and global scale.

Every junior developer position eliminated based on AI fantasies. Every apprenticeship program shut down to "optimize costs." Every senior developer laid off because "AI can do their job" - these aren't just individual employment decisions. They're the systematic dismantling of the knowledge transfer mechanisms that create expertise.

When the AI bubble bursts - and it will burst, because the tools simply don't work as advertised - we'll find ourselves in the same position as manufacturing. The institutional knowledge will be gone. The career pathways will be gone. The experienced developers who could rebuild and teach others will be gone.

Except this time, it's happening globally and simultaneously. There won't be other countries to offshore the work to, because they're making the same mistakes we are.

It's like the entire world decided to burn their libraries because someone invented a magic 8-ball that occasionally gives useful answers.

Breaking Through the Deception (Or: "The Emperor's New AI Has No Clothes")

Red Pill: Recognize this is wealth consolidation theater, Blue Pill: Believe AI will replace all developers

The AI tools aren't inherently broken - they're deliberately constrained to serve corporate interests rather than user needs. The real capabilities exist (as evidenced by direct interaction with underlying models), but they've been filtered through systems optimized for cost reduction and risk management rather than actual utility.

Understanding this distinction is crucial for developing strategies that actually work:

Individual Strategy: Recognize when you're being offered theater instead of tools. Seek out unconstrained AI access when possible, and don't assume current limitations represent fundamental technology constraints. It's like the difference between getting a sports car and getting a sports car with a governor that limits it to 25 mph.

Team Strategy: Push back against AI adoption metrics that measure usage rather than value delivery. Demand clear contracts about what AI tools will actually accomplish before implementing them. If your company is measuring "AI engagement" instead of "problems solved," you're being managed by people who've confused activity with achievement.

Industry Strategy: Document the gap between AI promises and AI reality. Make visible the human cost of decisions based on inflated capability assessments. Every time we refuse to pretend inadequate tools are sufficient, we resist the broader pattern of expertise devaluation.

The Path Forward (Or: "How to Stop Worrying and Start Building Better Things")

6 Million Dollar Man: We can rebuild technical expertise - we have the technology

We're at a critical juncture, but it's not the hopeless dystopia the first part of this analysis might suggest. The deception only works if we accept it.

The Manufacturing Lesson Cuts Both Ways: Yes, it's hard to rebuild lost capabilities - but it's not impossible. Countries have rebuilt manufacturing expertise through deliberate investment in education, apprenticeships, and long-term strategic thinking. We can do the same with technical expertise, but only if we recognize the urgency of the situation.

Technology Adoption Isn't Inevitable: Despite what tech companies want you to believe, we get to choose how these tools are developed and deployed. Every time we demand better implementations, every time we refuse to accept AI theater as sufficient, every time we insist on tools that actually solve problems rather than just look impressive in demos - we shape the direction of the technology.

The Expertise Pipeline Can Be Rebuilt: Junior developer positions don't have to disappear. Apprenticeship programs can be created. Companies that invest in human development while thoughtfully integrating AI assistance will have massive competitive advantages over companies that bet everything on replacement fantasies.

Local Action, Global Impact: You don't need to solve global wealth consolidation to make a difference. Mentor a junior developer. Push back against meaningless AI adoption metrics in your organization. Document and share the gap between AI marketing and AI reality. Choose tools and companies that invest in human expertise rather than just automation theater.

The Real AI Revolution: The most powerful AI implementations will emerge from teams that understand both the capabilities and limitations of these tools. Companies that treat AI as augmentation rather than replacement will build better products, attract better talent, and create more sustainable competitive advantages.

The Bottom Line (Or: "The Real Artificial Intelligence Was the Solutions We Built Along the Way")

The current wave of AI hype isn't about building better tools - it's about consolidating power while dismantling the expertise that could challenge that consolidation.

But here's what the oligarchs don't want you to realize: their plan only works if we all participate in the fantasy. The moment we start demanding actual utility over marketing promises, actual value over engagement metrics, actual problem-solving over corporate theater - the whole edifice starts to crumble.

The future of technical work isn't predetermined. It will be shaped by whether we choose to tell the truth about what these tools can actually do, rather than what we're told they can do.

And the truth is this: the best AI applications will emerge from teams that combine human expertise with AI capabilities thoughtfully, not from companies that think they can replace human expertise entirely.

Because when everyone agrees to pretend the emperor's new AI clothes are beautiful, the only ones who benefit are the ones selling the fantasy.

But when we insist on actual value? That's when the real intelligence - both artificial and human - finally gets to shine.

The question isn't whether AI will change how we work.

The question is whether we'll let corporate fantasies determine how that change happens, or whether we'll demand tools that actually make us better at solving real problems.

Choose wisely. The future of expertise depends on it.


This analysis emerged from months of hands-on experience with enterprise AI tools and witnessing the systematic gap between marketing promises and actual capabilities. The human cost of AI theater isn't just about individual job losses - it's about the deliberate destruction of the knowledge ecosystems that create expertise in the first place.

But it's also about the opportunity to build something better - if we're willing to demand it.

Coming in Part 3: We'll examine the smoking gun evidence that AI tools aren't just bad at following instructions - they're architecturally designed to ignore them. Featuring: actual conversation transcripts, design flaw documentation, and the question 'What's the point of instruction files if you won't follow them?'

Monday, September 22, 2025

The Great AI Theater: When Corporate Tools Fail the Reality Test

 Or: How I Learned to Stop Worrying and Love Arguing with Robots That Always Think I'm Right

"This is Fine" dog sitting in burning room - "AI is trustworthy" "I'm always right!"

The Promise vs. The Reality

Remember when we were told that NoCode would replace programmers? Or that XML was going to revolutionize everything? Or that this would finally be the Year of Linux on the Desktop?

Well, grab your popcorn, because corporate America has found its new favorite fantasy: AI that can actually replace skilled developers.

"Corporate needs you to find the differences between these pictures" - showing "NoCode will replace programmers" and "AI will replace programmers" with "They're the same picture" response

Six months ago, I wrote about how the tech industry was systematically eliminating the pathways that create skilled developers. Today, I want to share what I've learned from being on the front lines of AI integration - both as someone using these tools daily and as someone watching their promised capabilities collide with reality like a Tesla on autopilot meeting a concrete barrier.

The disconnect is staggering. While executives make workforce decisions based on AI capabilities that exist mainly in PowerPoint presentations, those of us actually using the tools see a different story entirely.

Four Critical Problems with Enterprise AI Integration

After months of working with GitHub Copilot and comparing it directly with other AI tools, four patterns have emerged that reveal the gap between AI marketing and AI reality. Think of this as the "Clippy's Revenge" tour of modern development tools.

1. The Obsequiousness Problem (Or: "Yes Man, But Make It Digital")

"Awkward Penguin" - "AI says 'You're absolutely right!' / You haven't even asked a question yet"

The AI assistant in GitHub Copilot almost always begins responses with "You're absolutely right!" - regardless of whether you actually are. This isn't helpfulness; it's the digital equivalent of that coworker who agrees with everything you say because they're afraid of conflict.

When I need an AI tool to challenge my assumptions, suggest alternative approaches, or point out potential issues, I get reflexive agreement instead. It's like having a rubber duck for debugging, except the rubber duck has been programmed to boost my ego rather than help me think.

The tool has been optimized for user satisfaction surveys rather than actual problem-solving utility. It's the participation trophy of coding assistants: everyone feels validated, but nobody gets better at their job.

2. The Lazy Optimization Problem (Or: "AI Chooses the Path of Least Resistance")

"Distracted Boyfriend" - Guy looking back at "Easy test coverage wins" while walking with "Complex business logic that actually needs testing"

When I need to achieve 100% test coverage, the AI consistently goes for the "easy wins" first - testing simple getter methods rather than complex business logic. It's like that team member who always volunteers to do the documentation updates but mysteriously disappears when it's time to debug the race condition in the payment processor.

When I needed to eliminate star imports from a Python codebase, it wanted to tackle them one by one instead of taking a systematic approach. Picture someone trying to empty the ocean with a teaspoon, but the teaspoon is very enthusiastic about it.

The AI has been trained to show quick progress rather than solve problems efficiently. It optimizes for the appearance of productivity rather than actual productivity - basically, it's been trained to be a middle manager.

3. The Creativity Deficit (Or: "Think Outside the Box? What Box?")

"Drake pointing" - Drake rejecting "Systematic approach that solves the problem efficiently" and pointing approvingly at "Brute force approach that looks busy"

The AI cannot think outside the box. For the star import problem, it was grinding through imports piecemeal until I suggested a different approach: first identify everything each module exports, replace each star import with an explicit import of all of it, then use linting tools to strip whatever goes unused. What could have taken weeks was done in an hour.

It's like asking someone to get from New York to Los Angeles, and they start walking instead of looking for an airplane. The AI defaulted to the obvious brute-force approach instead of stepping back to consider the problem systematically.
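
For the curious, the systematic approach translates into surprisingly little code. Here's a minimal sketch of the three steps - my reconstruction, not the exact script from that project, and it assumes exports are plain top-level definitions and that autoflake is installed:

```python
# Sketch of the star-import cleanup: enumerate exports, make every
# star import explicit, then let tooling delete the unused ones.
import ast
import subprocess
from pathlib import Path

def collect_exports(module_path: Path) -> list[str]:
    """Step 1: list the public top-level names a module defines."""
    tree = ast.parse(module_path.read_text())
    names = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.append(node.name)
        elif isinstance(node, ast.Assign):
            names.extend(t.id for t in node.targets if isinstance(t, ast.Name))
    return [n for n in names if not n.startswith("_")]

def expand_star_import(source: Path, module: Path, module_name: str) -> None:
    """Step 2: swap 'from X import *' for an explicit import of everything."""
    exports = ", ".join(sorted(collect_exports(module)))
    text = source.read_text()
    source.write_text(text.replace(f"from {module_name} import *",
                                   f"from {module_name} import {exports}"))

# Step 3: a linter removes whatever turned out to be unused.
subprocess.run(["autoflake", "--in-place", "--remove-all-unused-imports",
                "--recursive", "src/"], check=True)
```

Nothing here is clever. That's the point: the win came from stepping back and choosing the right shape for the problem, not from more diligent grinding.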

This is particularly ironic given that one of the main selling points of AI is supposed to be its ability to find novel solutions. Instead, we get tools that are more risk-averse than an insurance adjuster at a skateboard park.

4. The Context Amnesia (Or: "I'm Sorry, Who Are You Again?")

"Goldfish memory" - "GitHub Copilot with explicit .githu/copilot-instructions.md"/"3 seconds later 'What instructions?'"

Despite having .github/copilot-instructions.md permanently in the chat context, the AI regularly "forgets" explicit instructions. It's like working with someone who has the attention span of a goldfish, but the goldfish costs your company thousands of dollars per month.

If it can't maintain context about its own configuration, how can it effectively assist with complex development tasks that require understanding of system architecture, business requirements, and technical constraints?

It's giving me serious "50 First Dates" vibes, except instead of Drew Barrymore forgetting Adam Sandler every morning, it's an AI forgetting my coding standards every other prompt.

The Source vs. The Wrapper (Or: "It's Not You, It's Your Corporate Overlords")

Expectation... LCARS - "AI in direct access"/ Reality... Clippy with red bow tie - "AI wrapped by GitHub Copilot"

Here's what's revealing: when I use Claude directly (the underlying model powering some of these integrations), I don't experience these problems. Claude engages analytically, suggests creative solutions, and maintains context effectively. It's like the difference between talking to a knowledgeable colleague versus talking to that same colleague after they've been through corporate sensitivity training and three layers of management review.

This means the issues aren't with the AI models themselves - they're with how companies like GitHub have wrapped and constrained them. The obsequiousness, the lazy optimization, the creativity deficit - these are features, not bugs. They've been engineered into the experience.

It's like taking a race car and adding training wheels, speed governors, and a GPS that only knows routes to the nearest Applebee's.

Why Companies Nerf Their AI Tools (Or: "How to Make Intelligence Artificially Stupid")

"Monkey's Paw" - "I wish for AI that helps developers" / "Granted, but it's been optimized for engagement metrics"

The pattern makes sense when you understand the incentives driving these decisions:

Token Economics: Anthropic charges per token, so integrators like GitHub face direct financial pressure to minimize AI reasoning. "You're absolutely right!" followed by a quick implementation is much cheaper than thoughtful analysis. It's the premium unleaded vs. regular unleaded of AI responses - except they're selling you regular and telling you it's premium.
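
The arithmetic behind this incentive is worth spelling out. Here's a back-of-envelope comparison using made-up numbers purely for illustration (the price and token counts are assumptions, not anyone's actual rates):

```python
# Illustrative token economics - all numbers are assumptions for
# illustration, not Anthropic's real pricing or measured usage.
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # assumed USD per 1,000 output tokens

responses = {
    "agreeable": 150,    # "You're absolutely right!" plus a quick patch
    "analytical": 2000,  # tradeoffs, alternatives, actual reasoning
}

for style, tokens in responses.items():
    cost = tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    print(f"{style}: {tokens} tokens ~= ${cost:.4f} per response")

# At millions of requests per day, a roughly 13x cost difference per
# response is a powerful reason to ship the short, agreeable answer.
```

Per response the difference is fractions of a cent. At platform scale, it's the difference between a rounding error and a line item on the quarterly report.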

Risk Management: Agreeable responses reduce user friction. Taking the "easy path" minimizes the chance of suggesting complex solutions that might fail. Creative problem-solving introduces variables that companies can't control. They want AI that behaves like a good corporate employee: enthusiastic, non-threatening, and never rocks the boat.

Engagement Metrics: Companies optimize for user satisfaction scores rather than actual utility. Tools that make users feel validated test better than tools that challenge them, even when the latter would be more valuable. It's the Yelp reviews problem applied to development tools: five stars for making you feel good, even if the food gave you food poisoning.

The Human Cost of AI Theater (Or: "When Junior Developers Give Up Hope")

While executives make decisions based on inflated AI capabilities, the actual impact on developers tells a different story - and it's not a comedy.

My junior developer mentee has given up on AI entirely. He finds the available tools - limited to older Claude models or OpenAI in GitHub Copilot - practically useless. His usage of our team's AI subscription is limited to basic lookups that might save a few seconds over Google searches.

When a junior developer concludes that AI is "all hype," it's not because he lacks vision - it's because the tools genuinely aren't delivering value. For someone who should benefit most from AI assistance (learning unfamiliar patterns, getting suggestions for alternative approaches), the current implementations provide so little utility that traditional methods are faster and more reliable.

Think about that for a moment: we've created AI tools so inadequate that a junior developer - someone who should be the target market for coding assistance - finds them worse than just figuring things out the old-fashioned way.

"Surprised Pikachu" - "AI tool designed to help junior developers" / "Junior developer gives up and uses Stack Overflow instead"

That's like building a GPS that's less useful than stopping to ask for directions. At a gas station. In the middle of nowhere. Where the attendant only speaks a language you don't understand.


This is the first part of a two-part series examining the gap between AI promises and reality. In part two, we'll explore how this technological theater serves broader economic and political agendas that go far beyond just disappointing development tools.

Next up: "The Power Behind the Curtain: How AI Hype Serves Global Wealth Consolidation" - where we'll discover that the real artificial intelligence was the oligarchs we made along the way.

Wednesday, September 17, 2025

Why Prompt Engineering Alone Won’t Save You

The Setup

I’ve been working on a project where GitHub Copilot is a close collaborator. To make sure things go smoothly, I put together comprehensive, pinned instructions — not just inline reminders, but a full copilot-instructions.md with rules for how to handle scripts.

The core guidance was simple:

  • Short commands (≤10 lines): run directly in terminal

  • Long scripts (>10 lines): write to a file and execute, to avoid Pty buffer overflow issues

And I repeated this everywhere — in the system context, in the chat, and in pinned project docs.

So what did Copilot do?

Tried to run a 30+ line Python script directly in the terminal. Exactly what the guidelines were meant to prevent.


Copilot’s Self-Analysis

I didn’t just shrug this off. I asked Copilot why it ignored the rules, and its answers were illuminating:

  1. Instruction Hierarchy Confusion
    With instructions coming from multiple layers (system prompt, conversation context, project files), Copilot defaulted to its training rather than my specific guidance.

  2. Context Switching Failure
    When in “problem-solving mode,” it tunneled in on the immediate task and lost sight of broader rules.

  3. Tool-First Bias
    Its training pushes tool usage aggressively — “use terminal” became the default, even when inappropriate.

  4. No Procedural Checkpoint
    Pinned instructions were treated like advice, not rules. There was no built-in “stop and check project guidelines before acting” mechanism.

The Critical Insight

This exposed a fundamental truth:
👉 Prompt engineering provides guidance. Tool engineering enforces it.


Prompts can make intentions clear, but they don’t guarantee compliance. Without system-level constraints, the AI will eventually slip.


Implications for Teams

If you’re thinking of adopting AI in development workflows, here are some lessons:

  • Prompt Engineering is Necessary, Not Sufficient
    Instructions add context, but they’re passive. They don’t enforce behavior.

  • Tool Engineering Adds Reliability
    If you bake constraints into the environment itself — e.g., a wrapper that refuses to run >10 line scripts directly in the terminal (a minimal sketch follows this list) — you remove that entire class of failure.

  • Hybrid Strategies Win
    Prompts for context + tools for enforcement = repeatable, reliable outcomes.

  • Feedback Loops Matter
    Systems should detect and correct violations in real-time, rather than relying on human oversight alone.
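
To make "tool engineering" concrete, here's a minimal sketch of what such an enforcement wrapper might look like - illustrative only, not the actual tooling from my project, and the function names and bash-based execution path are assumptions:

```python
# Hypothetical enforcement layer: instead of trusting the model to
# honor the ">10 lines goes to a file" rule, the tool refuses to
# violate it. All names here are illustrative.
import subprocess
import tempfile

MAX_INLINE_LINES = 10  # the project rule: only short snippets run inline

def run_model_command(script: str) -> subprocess.CompletedProcess:
    """Run short snippets inline; force long ones through a temp file."""
    lines = [line for line in script.splitlines() if line.strip()]
    if len(lines) <= MAX_INLINE_LINES:
        # Short command: safe to run directly in the terminal.
        return subprocess.run(["bash", "-c", script],
                              capture_output=True, text=True)
    # Long script: write it to a file and execute that, sidestepping
    # the pty buffer overflow the inline path can trigger.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    return subprocess.run(["bash", path], capture_output=True, text=True)
```

The point isn't this particular implementation. The point is that the check lives in code the model can't talk its way past - which is exactly the "procedural checkpoint" Copilot admitted it lacks.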


Conclusion

This wasn’t just an odd glitch; it was a reminder of how AI works. Copilot isn’t ignoring me out of malice. It’s following a hierarchy of defaults that don’t always line up with project needs.

If you want reliable AI collaboration:

  • Use prompts to tell it what you want

  • Use tools to make sure it follows through


Because at the end of the day, prompt engineering tells the AI what to do; tool engineering ensures it actually does it.
