AI Today: Brilliant, But Needs Babysitting

By Joey Boothe

Joey (Frasier) Boothe is a gay dad, entrepreneur, and writer in transition after selling his latest company, Worksuite. He focuses on making technology understandable and practical for busy professionals, blending tech insights with relatable life experiences, including raising 8-year-old twins. Joey brings a globally experienced perspective shaped by years of leading diverse teams and navigating complex business challenges across multiple industries. Keep up with Joey on LinkedIn.


Artificial Intelligence (AI) is everywhere—drafting emails, analyzing data, generating art, and even providing companionship. We've been promised it will transform our lives and solve our toughest problems. But here's the truth: much of this is hype. 

Imagine trusting an incredibly sharp 8-year-old to handle your daily tasks—impressive in some situations, but lacking the maturity to fully take charge. Like this prodigy, AI impresses in specific areas but struggles with broader complexity. This article explores the current state of AI, its capabilities, limitations, and why we need to adjust our expectations.

As Charlie Warzel noted in The Atlantic, much of the conversation around AI’s greatest capabilities is built on a vision of a theoretical future—a sales pitch that ignores today’s real limitations and problems. That gap between the hype and the actual state of the technology is exactly why we need to set realistic expectations: we are a long way from AI being the fully autonomous solution many imagine it to be.

The 8-year-old child analogy

The true state of AI today becomes clear when we compare it to a young child—capable, but still in need of supervision.

In its current form, it resembles an incredibly sharp 8-year-old more than the fully autonomous system we often imagine. It's impressive and capable of amazing feats when focused on the right task but still requires significant oversight and guidance. Like my twins, Katie and Frasier, AI struggles with tasks that are too vague or complex. Parents can relate to this with a simple example: my definition of "making your bed" is vastly different from my kids' definition. While I'm pleased they've started taking on this task, their efforts don't meet my expectations. I can either accept their level of output, remake the beds after they leave for school, or continue to guide them to improve. This is much like how we need to work with AI technology today.

The issue isn’t that AI isn’t “good enough”; it’s that our understanding of what it is—and what it can realistically do—is currently misguided.

AI struggles to meet the complexity of our world

There's no denying that AI can blow you away, especially when it's working within a clearly defined problem. As Santiago (@svpino) pointed out on X, “99% of AI applications are cool-looking demos. Don't get fooled by the hype. It takes a lot to build enterprise-grade products that deliver real value.” Many of the AI demos you see on social media look impressive, but when you try to build something real and scalable, things often become much more complicated and less exciting. Just like the twins, who can rattle off facts about dinosaurs and planets or build intricate LEGO structures, AI shines when it's operating within a well-understood, narrow context.

For instance, AI can perform incredible feats in fields like medical imaging, where it analyzes patterns in X-rays and Magnetic Resonance Imaging (MRI) scans faster than a team of radiologists. It can handle repetitive, rules-based tasks like document processing with ease—just like how my kids can follow a recipe to bake cookies (as long as the instructions are clear!). These are impressive, concrete achievements.

Building on the analogy of AI as a young child, it's clear that while it excels at well-defined tasks, it struggles when the complexity of a given task increases.

Things get tricky when you ask for more abstract results or when the task broadens in scope. It's like when I ask Katie and Frasier to work together to clean up their playroom: They can get it done, but there are always misunderstandings, and it often takes far longer than I expected. One of my favorite examples is when they started to figure out the concept of money and the need to earn an allowance to buy something they want. However, understanding that Daddy and Dadda have a finite amount of money—and that we have to work for it—is a concept they haven't fully grasped. It's simply too broad for them to hold in context at the moment. Similarly, AI can excel at narrow applications but often delivers poor results when faced with a broader challenge.

This challenge becomes more pronounced when AI faces more open-ended requests that require contextual understanding.

Here’s where the disconnect happens: As soon as the problem becomes more open-ended, AI—like an 8-year-old—becomes unreliable. The intelligence doesn’t disappear, but rather the machine doesn’t understand the context of what it’s being asked to do, especially when the task involves nuance, judgment, and/or a high degree of variability.

Open-ended tasks reveal these weaknesses quickly.

Let’s say you want to use AI for an enterprise-level task, such as analyzing sales data to generate actionable business insights. AI performs beautifully when the job is simple, like summarizing sales performance for a specific product line or generating a basic report. However, it starts to falter with more complex tasks, such as identifying nuanced market trends across diverse data sets or accounting for unstructured feedback from different sources. This is akin to asking my twins to mediate a sibling argument without adult help. For example, when Katie and Frasier try to solve a disagreement over who gets to choose the next family movie, they start off well, but it quickly escalates if someone feels slighted. Sure, they’ll get some things right—they might remember the basic rule of fairness—but when emotions and complexity rise, they’ll likely misread the situation.
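To make the contrast concrete: the narrow version of that task—summarizing sales for a single product line—is so well defined that it barely needs AI at all. Here's a minimal sketch in Python (the records and field names are invented for illustration, not taken from any real system):

```python
# A narrowly scoped "sales summary" task: well-defined inputs, deterministic output.
# Sample data invented for illustration.
sales = [
    {"product": "Widget A", "region": "East", "units": 120, "revenue": 2400.0},
    {"product": "Widget A", "region": "West", "units": 80, "revenue": 1600.0},
    {"product": "Widget B", "region": "East", "units": 50, "revenue": 1500.0},
]

def summarize(records, product):
    """Total units and revenue for a single product line."""
    rows = [r for r in records if r["product"] == product]
    return {
        "product": product,
        "units": sum(r["units"] for r in rows),
        "revenue": sum(r["revenue"] for r in rows),
    }

print(summarize(sales, "Widget A"))
# → {'product': 'Widget A', 'units': 200, 'revenue': 4000.0}
```

Notice that everything here is specifiable in advance—fields, filters, arithmetic. "Identify nuanced market trends across diverse data sets" has no such specification, which is precisely why AI output on that kind of request needs a human checking its work.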

Another prime example of AI's struggle with broad and complex tasks is autonomous driving, where the hype has often outpaced actual capabilities.

Back in 2016, Tesla CEO Elon Musk promised that all Tesla vehicles would soon have the hardware necessary for full self-driving—even suggesting that owners could nap while their car drove them to work. But the reality has been far from that vision. Instead, we have seen numerous incidents involving Tesla's Full Self-Driving (FSD) and Autopilot systems, including hundreds of crashes and even fatalities. Musk's claims that fully autonomous vehicles were “just a year away” have become infamous, yet the technology has consistently failed to deliver on those promises.

Would you let a super smart 8-year-old drive your car just because they crush it on Roblox's Brookhaven RP? Probably not. They might understand the basics—how to steer, when to brake—but they lack the maturity and contextual awareness to handle unexpected events or difficult driving conditions. Similarly, autonomous driving systems are impressive in controlled environments, but struggle when the context becomes unpredictable—like sudden weather changes, erratic behavior from other drivers, or nuanced decision-making that requires more than just following rules.

Success when narrowing our focus

On the other hand, there are success stories in narrower applications of AI in this space.

Features like Adaptive Cruise Control (ACC) are excellent examples of AI performing well within a narrow focus—controlling speed and maintaining a safe distance from the car ahead. Liangyao Yu and Ruyue Wang reviewed ACC as an advanced driver assistance system that takes over the vehicle's longitudinal control to automatically adjust speed based on surrounding traffic, reducing drivers’ workload while improving safety, energy economy, and traffic flow. Lane assist is another useful, well-defined tool that helps keep drivers within their lanes. These features work effectively because they have a clear, specific scope. However, expecting our cars to fully drive themselves with AI’s current limitations is unrealistic. The broader and more complex the task, the more likely it is that AI will encounter situations beyond its current capabilities.
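Part of why a feature like ACC can be trusted is that its objective fits in a few lines of logic: cruise at a set speed unless the gap to the car ahead shrinks below a time-based safety margin. Here's a toy sketch of a simplified constant-time-gap policy (the gains and parameters are invented for illustration, not drawn from any production system):

```python
def acc_speed_command(own_speed, set_speed, gap, lead_speed,
                      time_gap=1.5, k_gap=0.2, k_speed=0.4):
    """Return the commanded speed (m/s) for one control step.

    A simplified constant-time-gap policy:
    - With no car ahead (gap is None), cruise at the set speed.
    - Otherwise, correct toward the desired gap (time_gap * own_speed)
      and toward the lead car's speed, never exceeding the set speed.
    """
    if gap is None:                       # free road: plain cruise control
        return set_speed
    desired_gap = time_gap * own_speed    # desired following distance (m)
    gap_error = gap - desired_gap         # positive if we have room to spare
    speed_error = lead_speed - own_speed  # positive if the lead car is faster
    command = own_speed + k_gap * gap_error + k_speed * speed_error
    return max(0.0, min(command, set_speed))  # clamp to [0, set_speed]
```

The point isn't the specific gains; it's that the whole task reduces to two measurable errors and one clamped output. That is exactly the kind of narrow, well-bounded problem today's systems handle well—and exactly what "drive me to work while I nap" is not.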

The risks of broader applications

When the scope of AI applications broadens, the risks increase, and it is crucial to understand this transition from narrow success to broader failure.

The broader the task, the more likely AI is to get some things right and others wrong, leaving you to correct and clarify along the way. In some cases, that's just fine; in others, it's life-threatening. A recent survey by the Upwork Research Institute found that 47 percent of workers using AI tools have no idea how to deliver the expected productivity gains. Workers are feeling overwhelmed as the demands increase, and the promise of AI productivity gains often falls short. Not something I'm leaving in the hands of my twins just yet.

Nowhere is the gap between AI's potential and its practical limitations clearer than in productivity, where AI can require far more oversight than initially expected.

We’ve been sold on the idea that AI is the ultimate productivity booster. It’s supposed to free us from menial tasks, allowing us to focus on higher-level work. But here’s the rub: just as I can’t expect my 8-year-olds to manage an entire day without some degree of supervision, we can’t expect AI to handle broad, complex workflows without significant human intervention.

Sure, AI can automate parts of your job. But when the problem gets too big or too vague, the back-and-forth between human and machine becomes a productivity sinkhole. You spend more time fixing the AI’s mistakes, clarifying instructions, or revising its output than if you’d done it yourself from the start. It’s like giving my kids a list of chores—say, organizing their bookshelves—and then spending the afternoon overseeing every detail. While they can follow simple tasks, they often get distracted or miss the bigger picture, like the pile of books still on the floor.

The belief that AI is on the cusp of eliminating busy work is premature. We’re still in a phase where AI needs more hand-holding than we like to admit.

Resetting our expectations to reality

Ultimately, the disparity between AI’s capabilities and public perception underscores the need for a more grounded understanding of what AI can truly achieve. The core issue isn’t the technology itself—it’s how we think about it. Much of the general public and media have leapt to the conclusion that AI is on the brink of becoming a self-sufficient entity, capable of reasoning and adapting like an adult. What we have is something much closer to my 8-year-olds—bright, promising, and capable of handling specific tasks but still far from autonomous and self-sufficient.

Today, AI is indeed pretty brilliant, but it still needs babysitting—much like an incredibly sharp 8-year-old who can shine in certain tasks but isn't ready to take on the complexities of adulthood. We need to adjust our expectations accordingly. AI's strengths lie in narrowly focused applications where it can perform exceptionally well, but as the scope broadens, so too do the challenges and limitations. Just as I wouldn't let my twins drive a car or manage the household alone, we shouldn't assume that AI is ready to handle complex, autonomous decision-making without significant human oversight.

The promise of AI is vast, but realizing its potential requires understanding its current boundaries. By setting realistic expectations and recognizing AI for what it is—a powerful tool that still needs guidance—we can better leverage its capabilities while avoiding the pitfalls of misplaced trust. As AI continues to evolve, it's crucial that we remain both optimistic about its potential and pragmatic about its current limitations, ensuring we use this powerful technology responsibly and effectively in our personal and professional lives.
