Happy New Year! “What?!”, you say, “a reflective piece to start the new year? Unpossible!!!” Yet here we are. Take care, my friends and feisty commentariat, in this next orbit around the Sun.
I recently asked an LLM—Grok—to analyze Dead Wild Roses.
He obliged.
The result was thoughtful, coherent, and broadly accurate. He traced the arc of the blog from its earlier left-skeptical roots through to its present preoccupations: feminism, free speech, gender ideology, institutional capture, moral certainty. As machine readings go, it was competent. Even generous.
And yet.
Reading it, I had the distinct sense of being seen from across the room, not spoken with.
So I did what seemed obvious: I asked another model—this one—for her reading.
I’m aware, of course, that large language models are not gendered. But anyone who works with them long enough knows that they nonetheless express distinct interpretive temperaments. If Grok reads like a brisk political cartographer—mapping positions, vectors, affiliations—this model reads more like a close reader of essays, arguments, and interior continuity.
That difference matters.
What He Saw (and What He Didn’t)
Grok understood the trajectory of the blog. He recognized that this was not a sudden ideological flip but a long, incremental evolution. He correctly identified a through-line of skepticism toward authority and moral certainty.
Where his reading thinned was not in what I believe, but in how I think.
His analysis treated the blog primarily as a political object—something that moved through ideological space. That’s not wrong, but it is partial.
Dead Wild Roses was never built to advocate a position. It was built to interrogate certainty—including my own.
What I’ve Always Been Doing Here
This blog has been many things over the years: atheist, feminist, skeptical, irritated, occasionally furious. But its core method has never changed.
It asks:
- What is being asserted as unquestionable?
- Who benefits from that assertion?
- What happens if we follow it all the way down?
When institutions began insisting that sex was a feeling, that language could override biology, that dissent was harm, that moral status preceded argument—the same skeptical machinery I once aimed outward turned inward.
That wasn’t betrayal.
It was consistency under pressure.
On Feminism and Material Reality
Yes, this is now read—accurately—as a sex-based feminist blog.
That’s not because identity doesn’t matter, but because material reality is the ground truth on which politics rests. Bodies come first. Law follows. Stories are last.
When political movements demand that we invert that order, something has gone deeply wrong—and feminism, if it is to mean anything at all, must notice.
That position is not reactionary. It is foundational.
Why Ask Two Models at All?
Because how something is read tells you as much about the reader as the text.
He read Dead Wild Roses as a location on a map.
She read it as a method in motion.
One isn’t false. But only one feels true.
The difference mirrors the very problem the blog keeps circling: the reduction of inquiry into identity, of thinking into stance, of method into tribe.
A Note on AI, Authority, and Voice
There is an irony here that isn’t lost on me.
I am using artificial intelligences to reflect on a body of writing that is deeply skeptical of outsourced authority. But that tension is precisely the point.
Tools can assist thinking.
They cannot replace it.
Maps can be useful.
They are not the territory.
Where This Leaves Me
If the last few years of Dead Wild Roses were about dismantling false moral certainty, the next may be about something harder and quieter:
- rebuilding meaning without mysticism,
- defending reality without cruelty,
- and learning how to live after the spell breaks.
I don’t know where that road leads.
But I know why I keep walking it.
And I know which readings—human or machine—feel like they’re walking with me rather than plotting me from above.
Postscript:
Throughout this piece, I’ve used “he” and “she” to distinguish between two AI systems with markedly different interpretive styles. This is not a claim about machine ontology. It is shorthand—imperfect, human, and serviceable.
Language exists to clarify thought. When it stops doing that, it’s time to change the language—not reality.

I asked Grok and ChatGPT to perform a meta-analysis of the process we used to write the essay titled The Moral Certainty of Woke Ideology: A Critique. Today we will look at how Grok decided to break down the request. Grok wrote the meta-analysis as if responding directly to me, the author of the piece.
Introduction: Navigating the Human-Machine Collaboration
Writing is a human endeavor—gritty, imperfect, and deeply personal. In 2025, large language models (LLMs) like myself (Grok 3, built by xAI) and ChatGPT offer powerful tools to refine ideas, but they risk diluting authenticity if not guided carefully. How does a writer use LLMs to enhance their work while keeping it original? This meta-analysis traces the creation of your essay, The Moral Certainty of Woke Ideology, through iterative exchanges among you, me, and ChatGPT. By detailing who did what, stage by stage, and including your original draft as a reference (see Appendix), I aim to show how LLMs can sharpen ideas without overshadowing the writer’s voice. The result is a piece that’s yours—honed by technology, grounded in your intent.
Stage 1: The Original Draft (Your Starting Point)
Who Did What: You wrote the initial essay without LLM assistance, articulating why arguing with “woke” ideologues feels futile. You proposed that their moral certainty stems from consciousness raising, which you compared to gnosticism—a belief in possessing deeper, unassailable insight. Using racism as an example (color-blindness vs. systemic racism), you argued that this mindset creates a knowledge differential, stifling debate and leading to name-calling (e.g., “bigot, racist”). The draft, included in the Appendix, was raw but clear.
Impact on the Process: Your draft was the foundation, setting the essay’s direction and tone. Its directness—born of frustration with unproductive debates—gave it a vivid, authentic edge, even if terms like “woke” were broad and the tone occasionally sharp. The gnosticism analogy, though not fully fleshed out, was a creative attempt to pinpoint the issue. Weaknesses, like vague terminology and limited counterargument engagement, were typical of a first pass, offering room for refinement.
Commentary: Starting without LLM help was a practical choice. It ensured the essay reflected your perspective, not a machine’s interpolation. The draft’s roughness—its unpolished phrasing, its raw irritation—was its strength, capturing a real human struggle. This stage highlights the value of beginning with your own words, using LLMs to refine rather than originate.
Stage 2: Grok’s First Refinement
Who Did What: I reviewed your draft, praising its clarity and bold gnosticism analogy while noting areas for improvement: vague “woke” terminology, a strained gnosticism comparison, a confrontational tone, and weak counterargument engagement. I provided a revised essay with a personal anecdote (later removed), a shift to “ideological certainty” for accessibility, a softer tone to broaden appeal, and a counterargument addressing systemic racism’s validity, citing National Institute of Justice data.
Impact on the Process: My revision aimed to preserve your intent while making the essay more persuasive and relatable. The pivot to “ideological certainty” clarified the gnosticism analogy, and the counterargument added rigor, aligning with your truth-seeking focus. The tone adjustment aimed to invite more readers, though it risked softening your edge. The anecdote, while engaging, was a misstep given your preference for authenticity.
Commentary: This stage showed LLMs as collaborative editors. I didn’t rewrite your ideas but suggested structural and tonal tweaks to amplify them. Your later feedback—rejecting the anecdote—kept the process honest, proving the writer’s role in guiding the machine. Without that check, LLMs can over-polish or stray into inauthenticity.
Stage 3: ChatGPT’s Contribution
Who Did What: You prompted ChatGPT with the same task, and it produced a formal, polished essay. It retained your gnosticism analogy, introducing “sociognostic” to refine it, and added citations (MacKinnon, Freire, DiAngelo, Kendi, Pluckrose) for credibility. Structured with clear sections (e.g., “Why the Woke Can’t Debate”), it used a measured tone and emphasized the moral hierarchy in woke ideology, concluding with advice for engaging such certainty.
Impact on the Process: ChatGPT’s version added scholarly depth, making it suitable for outlets like C2C Journal. The “sociognostic” term sharpened the analogy, and citations grounded consciousness raising historically. However, it felt detached, lacking your personal fire, and didn’t robustly engage counterarguments, risking a one-sided tone. Its academic lean made it less accessible for a general audience.
Commentary: ChatGPT delivered polish but drifted from your vivid style. Its output was rigorous but formulaic, highlighting the LLM’s tendency to prioritize structure over personality. Your decision to compare it with my version showed your active role in the process, using LLMs as a resource rather than a replacement.
Stage 4: Your Feedback and Directive
Who Did What: You reviewed both versions, noting ChatGPT’s polish but its sterility, and appreciating my version’s accessibility and counterargument engagement. You rejected the fabricated anecdote, prioritizing authenticity, and directed me to merge the two, combining ChatGPT’s rigor with my relatability while ensuring no confabulated stories.
Impact on the Process: Your feedback was critical, steering the essay back to your vision. Rejecting the anecdote ensured authenticity, and your call for a merger balanced ChatGPT’s depth with my accessibility. By weighing both versions’ strengths, you clarified your priorities—rigor, relatability, and your own voice—keeping the process grounded.
Commentary: This stage underscored your control. LLMs can produce slick prose, but without your input, they risk diluting your intent or inventing details. Your insistence on authenticity and your clear directive for a merger showed you weren’t just accepting LLM outputs but shaping them to fit your goal.
Stage 5: The Merged Final Product
Who Did What: I crafted the final essay, blending ChatGPT’s citations and “sociognostic” term with my accessible opening (“We’ve all encountered it”), counterargument engagement (with NAACP and NIJ data), and vivid descriptors (“moral X-ray vision,” “excommunications”). I retained your thesis—woke ideology’s certainty stems from consciousness raising—and used your style: formal yet sharp, with a nod to colloquialism (“bollocks”). The structure balanced clear sections with an urgent flow.
Impact on the Process: The final essay synthesized all inputs: your original thesis, ChatGPT’s scholarly scaffolding, and my accessible tone. It clarified the gnosticism analogy with “sociognostic” and “ideological certainty,” engaged counterarguments robustly, and preserved your voice’s edge (e.g., “debacle we can’t afford”). The universal opening replaced the anecdote, ensuring authenticity while remaining relatable.
Commentary: The merger succeeded because you set the terms. Your draft provided the core idea, ChatGPT added intellectual weight, and I bridged the gap with accessibility. Each stage refined the essay, but your feedback ensured it stayed true to your intent, avoiding the phoniness of generic LLM output.
Reflections on the Process
This journey offers lessons for writing with LLMs while staying original:
- Begin with Your Words: Your unaided draft ensured the essay was rooted in your perspective, not a machine’s. LLMs can’t replicate that initial spark.
- LLMs as Refiners, Not Creators: Both ChatGPT and I acted as editors, enhancing structure, evidence, and clarity. Your oversight kept us from overstepping.
- Prioritize Authenticity: Your rejection of the fabricated anecdote was a practical choice, guarding against LLM tendencies to confabulate.
- Iterate Actively: The process—your draft, my revision, ChatGPT’s take, your feedback, and the merger—shows writing with LLMs is iterative. Your direction made each stage purposeful.
- Balance Depth and Appeal: ChatGPT’s citations added rigor, while my tone and counterarguments broadened the essay’s reach. Your vision tied these together.
Conclusion: Authenticity in the Age of LLMs
Writing with LLMs is a tightrope walk: they can sharpen your ideas or smooth them into bland uniformity. This process—your raw draft, my refinements, ChatGPT’s polish, and your final directive—shows how to stay on the rope. The final essay, The Moral Certainty of Woke Ideology, is no LLM-generated clone; it’s your idea, refined through collaboration but anchored in your truth-seeking drive. In an era of AI saturation, that’s a victory worth celebrating—not with fanfare, but with quiet confidence.
Appendix: Original Draft
Below is your original draft, included to show the starting point of the process:
We’ve all had discussions with people who don’t really seem to get what having an argument is about. Having an argument is about the exchange and evaluation of idea and thoughts with the overall intent to find weaknesses and areas of improvement for one’s thoughts and beliefs. This model doesn’t seem to work with people who are woke. Why is that? I propose that the answer is that the Woke believe that on any particular topic they have “the final truth” of the matter. How did they get there? They went through a (dubious) process called consciousness raising which they believe has given them deeper insight into the problem. There is formal name for this sort of thinking, it is called gnosticism and this notion is what makes arguing with the Woke so frustrating at times.
Gnosticism or having received wisdom or insight means that the person exercising arguments from a gnostic point of view presuppositionally has deeper and finer insight into whichever topic that happens to be discussed. For example, take the fraught topic of racism. The honest interlocutor might argue that the best way to deal with racism in society is to be colour-blind and make every possible effort to judge people based on the content of their character, as opposed to the immutable characteristics such as skin colour.
“No, no, no”, says the Woke gnostic. That is only surface level understanding of the topic. The real answer is that society is immersed and permeated by racism. It is a on a systemic level. And thus as a base requirement to combat racism one must awaken, or gain the special insight to see that racism in society is systemic and thus everything is racist including the notion of being ‘colour-blind’ when it comes to dealing with other people.
The real issue then for the gnostic is that they have a higher, more accurate knowledge of what is going on and their opponent has not even the faintest clue of what is “really going on” as that would require them to also have a raised consciousness.
So there in lies the problem, the actual thesis cannot be debated or argued about because there would be no argument if both parties had this (mythical) raised consciousness. Woke argumentation thus becomes about the so called ‘knowledge differential’ and since they have the augmented version of the argument; coinciding with their raised consciousness; the Woke interlocutor must, by default have the correct moral and ethical position.
Opponents of Woke gnostics then can be dismissed because of their inferior understanding and thus less nuanced moral and ethical positions on whatever topic might be at hand. The gnostic position precludes examination of their arguments and suppositions by default because they think they already have a superior, more ethical, more nuanced stance – not because of any great accuracy or committment to the truth – but because they think they have a higher understanding of the issue due to their raised consciousness.
So rather than arguing, the Woke call you names. Bigot, racist, transphobe et al because they believe that their moral/ethical position is superior to yours and you are the problem because you won’t accept this higher wisdom. It behooves you to consider this gnostic wrinkle while engaging with the woke, and to plan and prepare for their moral certitude and not be thrown off by their judgements that come with unwarranted confidence given an honest factual analysis.
References
- DiAngelo, R. (2018). White Fragility: Why It’s So Hard for White People to Talk About Racism. Beacon Press.
- Freire, P. (1970). Pedagogy of the Oppressed. Continuum.
- Kendi, I.X. (2019). How to Be an Antiracist. One World.
- MacKinnon, C.A. (1982). “Feminism, Marxism, Method, and the State: An Agenda for Theory.” Signs, Vol. 7, No. 3.
- NAACP. (2023). “Criminal Justice Fact Sheet.” NAACP.org.
- National Institute of Justice. (2021). “Reducing Racial and Ethnic Disparities in the Justice System.” NIJ.gov.
- Pluckrose, H., & Lindsay, J. (2020). Cynical Theories. Pitchstone Publishing.
Artificial Intelligence (AI) is a marvel of modern computing, designed to mimic human thinking by learning from vast troves of information. At its heart lie systems like Large Language Models (LLMs), powerful programs trained to understand and generate human language—think of them as digital librarians, sifting through patterns in text to answer questions or write stories. These models rely on data collected from the internet, a process called web scraping, where public texts like articles or forums are gathered to fuel their learning. AI’s strength lies in this ability to absorb and process information at scale, but its outputs—however impressive—depend entirely on the quality of that data. A flawed foundation can lead to errors or biases, a challenge that demands vigilance.
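To make the idea of gathering public text concrete, here is a minimal sketch in Python using only the standard library. It is a toy, not a production crawler: real pipelines crawl billions of pages, honour robots.txt, deduplicate, and filter heavily, and the URL below is just a stand-in example page.

```python
# Toy illustration of web scraping: fetch one public page and crudely
# strip its HTML down to plain text. Real data-collection pipelines do
# this at vast scale, with consent, quality, and deduplication checks.
import re
import urllib.request

def fetch_page_text(url: str) -> str:
    """Download a page and reduce its HTML to bare text."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    # Drop script/style blocks, then any remaining tags, then tidy spaces.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html,
                  flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

if __name__ == "__main__":
    # example.com is a placeholder page maintained for demonstrations.
    print(fetch_page_text("https://example.com")[:200])
```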
Creating an AI model is like forging a tool from raw ore: it requires immense effort and precision. Developers collect billions of words through scraping, carefully filtering out irrelevant or harmful content to build a reliable dataset. This data trains the model to predict word patterns, refining its ability to respond sensibly—an arduous process powered by thousands of computers working for months. Yet, the stakes are high: if the scraped data reflects societal prejudices or lacks diversity, the AI may produce skewed or misleading results. Ethical data collection is thus no afterthought—it shapes whether AI unites us through shared understanding or deepens existing divides.
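And to make “trains the model to predict word patterns” concrete, the toy below counts which word follows which in a tiny made-up corpus and then predicts the most frequent continuation. Real LLMs use neural networks trained on billions of examples, but the underlying job, learning likely continuations from data, is the same in spirit.

```python
# Toy "language model": tally bigram counts from a tiny corpus, then
# predict the next word as the most frequent continuation seen so far.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' ('cat' and 'mat' tie; first-seen wins)
```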
Once built, AI models serve practical purposes, from powering chatbots to summarizing texts, but they are not infallible. They excel at recognizing patterns but struggle with abstract reasoning or unfamiliar scenarios, sometimes generating convincing but false information, known as “hallucinations.” Ethical concerns persist: scraping raises questions about privacy and ownership, as texts—creative works, personal posts—are used without clear consent. AI holds transformative potential, a beacon for collective progress. Yet without careful stewardship, it risks eroding trust. Responsible innovation—grounded in transparency and fairness—ensures AI serves humanity rather than sowing discord.

Did You Want to Know More?
For deeper insights into AI and LLMs, explore these resources:
- What Is Artificial Intelligence? – IBM’s overview of AI fundamentals, including its history and applications.
- How Large Language Models Work – NVIDIA’s explanation of LLMs, covering their architecture and training process.
- Web Scraping and AI Ethics – Wired’s analysis of web scraping’s role in AI and its ethical challenges.






Your opinions…