TL;DR: Humans tend to trust their own work over AI, even when AI delivers high-quality results. This article explores the subtle bias against AI, its strengths in precision and consistency, and how humans excel at creativity and intuition. By fostering collaboration, transparency, and education, we can bridge the trust gap and leverage the strengths of both to create smarter, more practical solutions.
The Human vs. AI Dilemma
Picture this: you ask AI to tackle a task for you.
The result? Pretty good, maybe even impressive.
But then you compare it to your own work, and that little voice creeps in—is this really as good as I could have done?
Sound familiar? You’re not alone.
As humans, we naturally trust our instincts and take pride in our own creations. That’s why AI-generated outputs, no matter how accurate or polished, often face skepticism.
And it’s not just a gut feeling—there’s research to back it up.
Studies reveal that we have a subtle bias. We tend to favor work done by humans—whether it’s our own or someone else’s—over work created by AI. Even when the AI produces high-quality results, it’s hard for us to let go of that human edge.
So why does this happen?
What makes us question AI’s capabilities even when it delivers exactly what we asked for?
This article dives into the fascinating reasons behind this bias.
We’ll explore how AI performs in extracting procedural knowledge, why humans remain skeptical, and what it will take to bridge the trust gap between people and machines.
The Growing Importance of Procedural Knowledge
What Is Procedural Knowledge?
Procedural knowledge is the “know-how” you rely on to complete tasks.
It’s the step-by-step process for assembling furniture, fixing a squeaky door, or even making the perfect cup of coffee.
Unlike theoretical knowledge (the “know-what”), procedural knowledge is practical and actionable.
For decades, we’ve stored this information in manuals, instructions, and how-to guides.
But let’s be honest—who hasn’t felt overwhelmed by vague diagrams or confusing steps in a printed manual?
The need for something better is why procedural knowledge is evolving in exciting ways.
Why Automate? The Role of Knowledge Graphs
Here’s a question for you: how much time have you wasted trying to decipher unclear instructions?
That’s where automation comes in to save the day.
Knowledge Graphs (KGs) take procedural information and organize it into a structured, machine-readable format.
Imagine having a digital assistant that pulls the exact steps you need—no more sifting through paragraphs or skipping back on a video.
KGs make that possible by turning unstructured text into clear, actionable instructions.
For instance, if you need to fix a leaking faucet, a Knowledge Graph can extract the exact tools, steps, and tips from a guide. It’s like having an expert walk you through the process.
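To make that concrete, here's a minimal sketch of what such a graph could look like under the hood, represented as subject–predicate–object triples. The step names, tools, and predicates below are illustrative inventions, not the schema from any particular system:

```python
# A tiny procedural Knowledge Graph for "fix a leaking faucet",
# stored as (subject, predicate, object) triples.
triples = [
    ("fix_leaking_faucet", "has_step", "shut_off_water"),
    ("fix_leaking_faucet", "has_step", "remove_handle"),
    ("fix_leaking_faucet", "has_step", "replace_washer"),
    ("remove_handle", "requires_tool", "screwdriver"),
    ("replace_washer", "requires_tool", "wrench"),
    ("shut_off_water", "precedes", "remove_handle"),
    ("remove_handle", "precedes", "replace_washer"),
]

def tools_for(task, triples):
    """Collect every tool required by any step of a task."""
    steps = {o for s, p, o in triples if s == task and p == "has_step"}
    return sorted({o for s, p, o in triples if s in steps and p == "requires_tool"})

print(tools_for("fix_leaking_faucet", triples))  # ['screwdriver', 'wrench']
```

Because the steps, tools, and their ordering are explicit relations rather than prose, a query like "which tools do I need?" becomes a one-line lookup instead of a read through the whole guide.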
And the tech behind this? Large Language Models (LLMs) like GPT-4.
These AI systems can comb through text, identify key details, and build structured Knowledge Graphs with incredible speed and precision.
Why Does This Matter?
Manually extracting procedural knowledge is slow and often inconsistent.
Different people might interpret the same instructions differently—or even miss crucial details.
Automation solves this by ensuring uniformity and reducing human error.
Think about healthcare, where every second counts.
A KG could instantly provide a surgeon with the steps for a complex procedure, ensuring accuracy and saving lives.
In industrial settings, automated procedural knowledge could prevent costly mistakes by ensuring maintenance tasks are executed precisely.
And let’s not forget the convenience factor for everyday life—how many of us would love to skip the frustration of confusing DIY instructions?
Key Benefits of Automation in Procedural Knowledge
- Speed: Faster extraction of critical steps saves time in high-stakes situations.
- Consistency: Uniform instructions reduce errors caused by misinterpretation.
- Accessibility: Easy-to-use formats make complex tasks simpler for everyone.
- Accuracy: Avoid missing essential details, whether for baking or engineering.
Automating procedural knowledge with Knowledge Graphs and LLMs isn’t just a tech upgrade—it’s a game-changer for accuracy, efficiency, and accessibility.
Human Evaluation of AI Outputs
How the Study Was Conducted
Imagine giving a detailed task to an AI and asking, “How well can it do this compared to a human?”
That’s exactly what a recent study by Carriero et al. (2024) explored.
The goal? To see how effectively Large Language Models (LLMs) could extract procedural knowledge—the kind of step-by-step information we rely on for everyday tasks.
The process was broken into two clear steps:
- The AI analyzed unstructured text and pulled out critical components like steps, actions, and tools.
- These were then transformed into a structured Knowledge Graph, ready for practical use.
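The two steps above can be sketched in code. The prompt wording, the JSON schema, and the `llm` callable below are all hypothetical stand-ins for whatever model API you use; only the overall shape (extract components, then link them into a graph) reflects the pipeline described:

```python
import json

def extract_components(text, llm):
    """Step 1: ask an LLM to pull steps, actions, and tools out of raw text.
    `llm` is any callable that takes a prompt and returns a JSON string."""
    prompt = (
        "Extract the procedural steps, actions, and tools from this text. "
        'Reply as JSON: {"steps": [{"action": ..., "tools": [...]}]}\n\n' + text
    )
    return json.loads(llm(prompt))

def build_graph(components):
    """Step 2: turn the extracted components into a Knowledge Graph edge list."""
    edges, previous = [], None
    for i, step in enumerate(components["steps"]):
        node = f"step_{i}"
        edges.append((node, "performs", step["action"]))
        edges.extend((node, "requires_tool", t) for t in step["tools"])
        if previous:
            edges.append((previous, "precedes", node))
        previous = node
    return edges

# Demo with a canned response standing in for a real model call.
def fake_llm(prompt):
    return ('{"steps": [{"action": "shut off water", "tools": []}, '
            '{"action": "replace washer", "tools": ["wrench"]}]}')

edges = build_graph(extract_components("Shut off the water, then...", fake_llm))
print(edges)
```

In a real pipeline the JSON would come back from the model and need validation, but the division of labor is the same: the LLM does the reading, and deterministic code does the graph-building.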
To evaluate its performance, human participants reviewed the AI’s outputs. They scored them for three key factors: accuracy, relevance, and usefulness.
The Findings
Quality Perception
The evaluators were pleasantly surprised.
They found the AI’s outputs to be both accurate and relevant, meaning the extracted steps were logical and aligned well with the original text.
But here’s the twist: while the AI excelled in extracting facts, participants were less sure about its real-world usability.
Would the outputs work seamlessly in an actual scenario, like assembling a chair or fixing a leak? The jury was still out.
Bias Unveiled
The most fascinating part of the study involved a clever A/B test.
Here’s how it worked:
Participants were shown identical outputs. One group was told the outputs were AI-generated, while the other believed they came from an experienced human.
The result?
The “human-created” label consistently scored higher.
This revealed a subtle but telling bias: we tend to favor what we think comes from humans, even when the quality is identical.
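The logic of that A/B comparison is simple to sketch: same outputs, different labels, and the gap between the mean ratings is the bias attributable to the label alone. The rating numbers below are invented for illustration, not the study's data:

```python
from statistics import mean

# Hypothetical 1-5 ratings for IDENTICAL outputs shown under two labels.
ratings = {
    "labelled_human": [4, 5, 4, 4, 5, 4],
    "labelled_ai":    [3, 4, 4, 3, 4, 3],
}

# Since the content is identical, any gap reflects the label, not quality.
bias = mean(ratings["labelled_human"]) - mean(ratings["labelled_ai"])
print(f"Mean gap attributable to the label alone: {bias:.2f}")
```

A real analysis would also test whether the gap is statistically significant, but the design is the point: holding the content fixed isolates perception from performance.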
It’s not that people outright distrust AI—it’s more about the perception that humans inherently understand nuance better.
While AI delivers accurate and relevant results, trust remains a hurdle. The findings show that humans are still biased toward their peers, which highlights the importance of building confidence in AI outputs.
Humans vs. AI: Who Wins the Annotation Game?
Where Humans Shine
Humans have a knack for reading between the lines and adding implicit knowledge that isn’t explicitly stated in the text.
For example, if a procedure mentions tightening screws but skips listing a screwdriver as a necessary tool, a human annotator instinctively fills in the gap. This kind of intuitive reasoning is invaluable, especially in contexts where missing details can cause confusion.
Another strength is creativity. Humans often rephrase instructions to make them clearer or more relatable. For instance, instead of a dry directive like “Apply adhesive,” a human might annotate it as “Spread glue evenly using a brush for best results.”
These thoughtful touches make procedural steps feel intuitive and practical, reducing the cognitive load on the end user.
Where AI Excels
AI thrives on precision and consistency, sticking rigidly to the guidelines it’s given. It doesn’t get distracted or interpret instructions differently based on personal context.
This means AI can extract information systematically, ensuring every action, tool, and step is represented exactly as outlined. For industries where adherence to specific standards is critical—like healthcare or manufacturing—this level of uniformity is a major advantage.
Unlike humans, AI doesn’t improvise. While this might limit its ability to infer or adapt, it guarantees that nothing deviates from the original instructions. That reliability makes AI an invaluable partner for repetitive, high-stakes annotation tasks.
Humans excel in adding context and creativity, while AI delivers precision and uniformity. The ideal solution lies in combining these strengths for the best of both worlds.
Building Trust in AI: A Collaborative Future
Why Trust Is Key
Trust is everything when it comes to adopting AI for procedural tasks.
Without trust, even the most powerful AI solutions can feel like a gamble—and who wants to rely on something they don’t fully believe in?
When users are skeptical of AI’s abilities, its potential to improve processes or save time diminishes. That’s a huge missed opportunity.
So, how do we bridge this trust gap?
Here are three practical strategies to get there:
- Human-AI Collaboration: Pairing AI with human reviewers is a winning formula. AI handles the precision work—extracting steps and details—while humans bring intuition and context to the table. Together, they create outputs that are accurate and practical.
- Transparent AI Outputs: People trust what they understand. If AI can show how it arrives at conclusions—by explaining its decisions or highlighting key data—it demystifies the process. This transparency makes users more confident in the results.
- Training Users: Educating users on AI’s strengths and limitations is critical. Knowing that AI is fantastic at consistency but might lack creative insight helps set realistic expectations. Think of it like setting the ground rules for a productive partnership.
Rethinking the Narrative
Here’s a new way to think about AI: it’s not here to replace us—it’s here to collaborate with us.
AI can crunch data, extract details, and ensure consistency faster than any human ever could. Meanwhile, humans bring the creativity, empathy, and nuanced thinking that machines can’t replicate.
Together, this human-AI duo can redefine how procedural knowledge is extracted. The result? Smarter, more actionable, and more accessible solutions.
Trust is built through collaboration, transparency, and education. By combining AI’s precision with human creativity, we can unlock the full potential of procedural knowledge extraction.
Conclusion: Humans and AI—Stronger Together
The research reveals something we’ve all suspected: humans trust their own work more than AI’s—even when the AI does a great job. It’s not because AI isn’t capable, but because we naturally lean toward what feels familiar.
And that’s okay. Trust takes time.
The good news? We don’t have to pick sides.
AI excels at precision and speed. Humans bring intuition and creativity. Together, they’re an unbeatable team.
By combining these strengths, we can take procedural knowledge extraction to a whole new level—making it smarter, faster, and more user-friendly.
The future isn’t about humans or AI. It’s about humans and AI, creating something greater than either could achieve alone.
References:
- Carriero, V. A., Azzini, A., Baroni, I., Scrocca, M., & Celino, I. (2024). Human Evaluation of Procedural Knowledge Graph Extraction from Text with Large Language Models. arXiv preprint arXiv:2412.03589. https://arxiv.org/abs/2412.03589
- Zhang, L., Lyu, Q., & Callison-Burch, C. (2020). Reasoning about Goals, Steps, and Temporal Ordering with WikiHow. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://doi.org/10.18653/v1/2020.emnlp-main.374