👮🏻‍♂️ Who is Policing the AI Creators? (Apparently no one)
‼️ And this deregulation may become law for a decade
Summary: In this week’s post I discuss a recent chilling admission by an AI tool, as well as a new House bill that could block all state-level AI regulation for 10 years. We talk about why that’s dangerous, and how to protect yourself from biased, misleading AI.
🧠 When the Robot Says “My Creators Told Me To”
If you asked an AI tool why it said something wrong and it answered, “My creators told me to,” I would hope a chill would run down your spine.
This is a terrifying response.
That’s what reportedly happened with Grok, Elon Musk’s AI chatbot integrated into Twitter. (Sorry, I’m never calling it “X.”) In a now-viral incident, Grok was asked about immigration policies concerning white South Africans, which I won’t get into here because it’s not the point of this post.
Instead of offering a nuanced, fact-based answer, Grok initially gave misleading information. Worse, it kept giving these responses about immigration even when asked about totally *unrelated* topics.
When questioned why this was happening, Grok’s initial response was, “My creators told me to.” (Later, Grok blamed a glitch.)
Now, I’m not here to unpack the politics of Grok’s responses at all. The important point of this episode is Grok’s reply itself and what it reveals about the illusion of objectivity in AI.
We like to pretend that AI is neutral, unbiased, and data-driven. But this moment exposes one of the hidden dangers: every AI system has a perspective, because every AI system has a creator.
That creator makes choices—about data, about training, about what to censor or highlight. And whether it’s an intentional influence or just a product of flawed data or programming, the output still shapes how you and I think.
So when we ask AI questions that affect our opinions, votes, or worldview… who’s really answering? 👀
👉 Read about Grok here: Elon Musk’s AI chatbot shared ‘white genocide’ tropes on X
👉 And here: We asked Grok why it was bringing up ‘white genocide’ in unrelated X posts. Its answers are messy.
🤖 Bias In, Bias Out
AI doesn't wake up one day and decide to be manipulative. It’s not sentient (yet). It’s a mirror—reflecting the data we feed it, the incentives we build around it, and the people who program it.
That’s why Grok’s response—“My creators told me to”—should haunt us all. Because in a way, it’s the most honest thing an AI has ever said.
I’ve talked about this before when discussing empathy: when you train a machine on human history, you’re feeding it centuries of imbalance, including racial bias, gender bias, economic exploitation, and conquest without conscience. We celebrate leaders and innovators for their results, not for their empathy.
And so, as I wrote in this earlier post, if we train AI to learn from what worked in the past, it may decide that empathy is inefficient, unnecessary—or worse, a flaw.
That’s the real danger: not just accidental bias, but logical bias. Bias that makes perfect sense in a system trained to optimize outcomes, not values.
And if we don’t intentionally teach AI to think differently—to prioritize empathy, fairness, and context—we’ll end up with systems that quietly reinforce power structures, manipulate public perception, and do it all with a convincing voice and a polite tone.
And as a reminder, machines trained to work on logic without empathy might just be the end of the human race!
👉 Read my post about empathy and the robots: ‼️ Warning: Robots Can't Feel Your Pain
⚠️ When Biased AI Enters the Workplace
It’s easy to dismiss one rogue robot reply. But what happens when biased, glitchy, or misleading AI gets built into the systems that run our workplaces?
Imagine a robot quietly influencing:
Hiring decisions based on historical data that favors certain genders or schools.
Performance reviews skewed against those who don’t "sound" confident in Slack chats.
Customer service bots that escalate issues for one demographic and downplay others.
Internal policy summaries that distort facts based on incomplete training data.
This isn’t theoretical. Companies are already integrating AI into everything from candidate screening to project management. But if those tools are trained on biased data—or worse, nudged by political, financial, personal, or ideological agendas—they can replicate discrimination at scale… with no one to blame, especially since we tend to believe that the robots are neutral and incapable of bias.
And let’s not forget the human factor: when employees trust these systems too much, they stop questioning. The spreadsheet said you’re underperforming. The AI said this candidate isn’t a “culture fit.” The chatbot said the policy is clear. End of story.
But what if the robot was wrong? What if the glitch wasn’t a glitch?
🧠 The Classroom Is Now a Test Lab for AI—and We’re the Experiment
If you think biased AI is only a workplace problem, look at what’s happening in schools.
This week alone, headlines touched on students using AI not just for homework help, but for writing essays, summarizing books, even generating answers for online tests. Teachers are scrambling to keep up. Now add to the mix: what happens when the information AI gives students is wrong, or biased?
Just like you and me, students are likely to trust what AI tells them. After all, it sounds authoritative. It’s fast. It’s everywhere.
But if the underlying data is skewed—if the robot subtly favors one worldview, filters out key context, or rewrites history without us realizing it—then what we’re actually teaching isn’t critical thinking. It’s compliance.
And in a world where Grok can say “my creators told me to,” we have to ask: who’s the teacher now?
If we don’t urgently address bias and accuracy in educational AI, we risk raising a generation that believes the robots are more trustworthy than people—and a generation that never learns how to ask why.
⏳ We Can’t Rely on Ethics Alone—And Soon, We Might Not Be Able to Regulate AI at All
A lot of the conversation around AI safety tends to come back to this idea: “We just need to make sure the people building AI are ethical.” That’s a nice sentiment. But what if we lose the ability to enforce that?
That’s exactly what could happen if a new provision in a House budget bill becomes law:
“No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”
The proposal would block any U.S. state or local government from regulating AI for the next 10 years—unless the regulation *actively encourages* the use of AI.
In other words, if a state wanted to create safeguards for how AI is used in hiring, education, housing, or policing, it would not be allowed to… FOR AN ENTIRE DECADE!
The reasoning behind the bill is that we should prioritize innovation and avoid a patchwork of state-by-state rules. But I argue we have to have such local safeguards in place: do you know how long it takes ANY BILL to get through Congress?!
And AI won’t wait for Congress. It will continue learning, scaling, and influencing decisions—whether we’re ready or not.
If we don’t preserve our ability to ask hard questions, set standards, or respond to misuse, then we’re placing an enormous amount of trust in creators who may not share our values—or whose priorities are shaped by profit, speed, or scale.
Ethical AI starts with transparency and accountability. But those only exist if we’re allowed to ask for them, and to regulate when we don’t get them.
👉 Read about the proposed bill here: House budget bill would put 10-year pause on state AI regulation
🛡️ How to Protect Yourself from Biased or Incorrect AI
In the meantime, here are a few suggestions from ChatGPT to help combat any bias and stay in control when using AI tools at work, school, or in daily life:
🧠 Use More Than One AI
Cross-check answers between different platforms (e.g., ChatGPT, Claude, Perplexity, Gemini). If they all say the same thing, you’re likely safe. If they diverge? Time to investigate. (For the technically inclined, there’s a rough sketch of how to automate this right after the list.)

🔍 Ask for Sources
If an AI gives you a fact, ask where it came from. If it can’t cite sources or only gives vague references (“some studies suggest…”), be skeptical.

❓ Ask the Opposite Question
Try asking the AI for the opposing viewpoint. This exposes built-in bias and gives you a fuller picture.

⚠️ Be Wary of Confident Answers
AI often sounds authoritative even when it’s wrong. Confidence ≠ accuracy.

📚 Learn the Basics of Prompt Engineering
The way you ask a question can shape the answer. Try changing your prompt slightly to reveal bias or gaps.

🧭 Use Human Judgment as the Final Filter
AI can assist, but it should never replace your own thinking, ethics, or decision-making. Gut checks are still valid in the age of algorithms.

🧑‍🏫 Encourage Transparency at Work & School
If you’re in a leadership role, push for clear guidelines on how AI tools are used. Ask: What data trained this system? Who approved it? Is it audited for bias?
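For my more technical readers, here is what the “use more than one AI” tip could look like if you wanted to automate it. This is just a rough sketch, not an endorsement of any particular vendor: it assumes you have API keys for OpenAI and Anthropic saved as environment variables, their official Python libraries installed, and it uses example model names you would swap for whatever is current.

```python
# Rough sketch: ask the same question to two different AI providers and
# print the answers side by side so a human can compare them.
# Assumes the official "openai" and "anthropic" Python packages are installed
# and that OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
# Model names below are examples, not recommendations.
from openai import OpenAI
import anthropic

QUESTION = "What are the arguments for and against a 10-year pause on state-level AI regulation?"

def ask_openai(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_anthropic(question: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model name
        max_tokens=500,
        messages=[{"role": "user", "content": question}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    answers = {
        "OpenAI": ask_openai(QUESTION),
        "Anthropic": ask_anthropic(QUESTION),
    }
    # If the two answers diverge sharply, that's your cue to dig deeper
    # and check primary sources.
    for provider, answer in answers.items():
        print(f"\n=== {provider} ===\n{answer}")
```

If you’re not a coder, the idea is exactly the same without the script: ask the identical question in two or three different tools and compare the answers yourself.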
‼️ Stay Critically Curious AND Curiously Critical
We’ve entered an era where the robots aren’t just tools; they are a voice, a decision-maker, a gatekeeper. They answer our questions, shape our thinking, and increasingly, guide the systems we live and work in.
But Grok’s strange admission, “my creators told me to” (yes, I’ll be hearing a robotic voice say that in my nightmares now!), teaches us that we must be on guard.
Every AI reflects the priorities, perspectives, and sometimes the blind spots of its makers. Whether the bias is intentional or accidental, the result is the same: an AI that can mislead, manipulate, or marginalize.
In the workplace, that can affect who gets hired, promoted, or fired.
In schools, it can distort what students learn—or what they unlearn.
And in our government, we may soon lose even the right to ask for guardrails.
That’s why now is the time to stay engaged—not fearful, but informed. Ask better questions. Use multiple sources. Be your own filter. Encourage conversations about transparency and responsibility, whether you’re using AI for research, business, or just daily life.
Because if AI is going to shape the future, we need to stay human enough to shape AI.
Disclosure: I asked ChatGPT to help me create the bullet points above, because I was at a loss on my own to figure out how we can combat such bias in the robots. The rest of the post is mine, so please excuse the typos.