
  Happy New Year!  “What?!”, you say, doing a reflective piece to start the new year?  Unpossible!!! – Yet here we are.  Take care, my friends and feisty commentariat, in this next orbit around the Sun.

I recently asked an LLM—Grok—to analyze Dead Wild Roses.

He obliged.

The result was thoughtful, coherent, and broadly accurate. He traced the arc of the blog from its earlier left-skeptical roots through to its present preoccupations: feminism, free speech, gender ideology, institutional capture, moral certainty. As machine readings go, it was competent. Even generous.

And yet.

Reading it, I had the distinct sense of being seen from across the room, not spoken with.

So I did what seemed obvious: I asked another model—this one—for her reading.

I’m aware, of course, that large language models are not gendered. But anyone who works with them long enough knows that they nonetheless express distinct interpretive temperaments. If Grok reads like a brisk political cartographer—mapping positions, vectors, affiliations—this model reads more like a close reader of essays, arguments, and interior continuity.

That difference matters.


What He Saw (and What He Didn’t)

Grok understood the trajectory of the blog. He recognized that this was not a sudden ideological flip but a long, incremental evolution. He correctly identified a through-line of skepticism toward authority and moral certainty.

Where his reading thinned was not in what I believe, but in how I think.

His analysis treated the blog primarily as a political object—something that moved through ideological space. That’s not wrong, but it is partial.

Dead Wild Roses was never built to advocate a position. It was built to interrogate certainty—including my own.


What I’ve Always Been Doing Here

This blog has been many things over the years: atheist, feminist, skeptical, irritated, occasionally furious. But its core method has never changed.

It asks:

  • What is being asserted as unquestionable?

  • Who benefits from that assertion?

  • What happens if we follow it all the way down?

When institutions began insisting that sex was a feeling, that language could override biology, that dissent was harm, that moral status preceded argument—the same skeptical machinery I once aimed outward turned inward.

That wasn’t betrayal.
It was consistency under pressure.


On Feminism and Material Reality

Yes, this is now read—accurately—as a sex-based feminist blog.

That’s not because identity doesn’t matter, but because material reality is the ground truth on which politics rests. Bodies come first. Law follows. Stories are last.

When political movements demand that we invert that order, something has gone deeply wrong—and feminism, if it is to mean anything at all, must notice.

That position is not reactionary. It is foundational.


Why Ask Two Models at All?

Because how something is read tells you as much about the reader as the text.

He read Dead Wild Roses as a location on a map.
She read it as a method in motion.

One isn’t false. But only one feels true.

The difference mirrors the very problem the blog keeps circling: the reduction of inquiry into identity, of thinking into stance, of method into tribe.


A Note on AI, Authority, and Voice

There is an irony here that isn’t lost on me.

I am using artificial intelligences to reflect on a body of writing that is deeply skeptical of outsourced authority. But that tension is precisely the point.

Tools can assist thinking.
They cannot replace it.

Maps can be useful.
They are not the territory.


Where This Leaves Me

If the last few years of Dead Wild Roses were about dismantling false moral certainty, the next may be about something harder and quieter:

  • rebuilding meaning without mysticism,

  • defending reality without cruelty,

  • and learning how to live after the spell breaks.

I don’t know where that road leads.

But I know why I keep walking it.

And I know which readings—human or machine—feel like they’re walking with me rather than plotting me from above.

Postscript:

Throughout this piece, I’ve used “he” and “she” to distinguish between two AI systems with markedly different interpretive styles. This is not a claim about machine ontology. It is shorthand—imperfect, human, and serviceable.

Language exists to clarify thought. When it stops doing that, it’s time to change the language—not reality.

In a captivating episode of Wired & Watched 101: EdTech, host Missy Carwowski sits down with Dr. Jared Cooney Horvath, a neuroscientist, former teacher, and leading expert on the science of learning. For two decades, Horvath has studied how humans truly learn—focusing on memory, attention, and brain function—and his findings deliver a sobering message: despite billions spent and endless promises, education technology (laptops, tablets, one-to-one devices, and now AI) is not transforming learning for the better.
In many cases, it is actively harming it. Far from making children smarter, the explosion of screens in classrooms is contributing to the first measured cognitive declines in generations, leaving parents and educators searching for answers.

Horvath traces the problem to three fundamental ways technology clashes with how the human brain learns.
First, screens destroy focused attention through constant multitasking—something the brain cannot actually do. Students now spend over 2,500 hours a year switching between tabs, messages, and videos, training them to task-switch rather than concentrate. Second, learning relies heavily on empathy—the biological synchrony that happens when humans interact face-to-face—which machines simply cannot provide. Without that human connection, students lack the motivation to push through difficulty and often quit at the first sign of struggle. Third, “transfer” fails: knowledge learned on screens in an easy, narrow context rarely moves to the varied, complex real world, because computers remove the very effort and contextual cues that make learning stick.
The evidence is stark and growing. Gen Z and Gen Alpha are the first generations to show declines in memory, attention, and general intelligence compared to their parents—a reversal of the Flynn effect, the steady rise in IQ scores that had climbed for a century. Raw scores on international tests like PISA and the SAT have been dropping 15–30 points per decade, hidden only by constant renorming and grade inflation. Meanwhile, handwriting boosts memory through spatial context and forces deeper processing, while typing often produces shallow, verbatim notes that students barely remember. Even binge-watching studies (which helped shape Netflix’s release strategy) show that spaced practice beats massed screen exposure for both understanding and enjoyment.
Horvath dismantles the common defenses of edtech with clarity. Claims of “potential” admit that promised benefits aren’t happening yet—hardly a reason to double down. Arguments that children must master today’s tools to be “work-ready” ignore the fact that K–12 education has always been about teaching adaptable thinking, not specific software that will soon be obsolete. And the excuse that teachers or students are simply “using it wrong” falls flat: real-world use, not inventors’ intentions, determines a tool’s impact. After sixty years of waiting for the edtech revolution, the data remains underwhelming at best and damaging at worst.
So what should classrooms look like? Horvath envisions a return to pre-2000 norms: computer labs used intentionally for specific lessons, not ubiquitous devices in every hand. Teachers and parents should demand true opt-out policies, forcing schools to maintain analog alternatives. When educators must prepare both digital and paper versions of assignments, most quickly discover that analog methods produce deeper understanding with greater flexibility. Above all, Horvath reminds us that learning has always happened best through human relationships—between teachers and students, students and students—not through screens. As cell-phone bans spread across schools, the next frontier is reclaiming classrooms from compulsory edtech, giving children back the focused, empathetic, effortful environment their brains need to thrive.

References for “The Digital Delusion: Why EdTech Is Failing Our Children”

  1. M forl Academy podcast episode with Dr. Jared Cooney Horvath (full transcript basis for the essay):
    https://www.mforlacademy.com/ (specific episode featuring Dr. Horvath on education technology – check recent releases or search “Jared Cooney Horvath”)
  2. Dr. Jared Cooney Horvath’s upcoming book:
    Horvath, Jared Cooney. The Digital Delusion: How Technology Is Failing Our Children and What We Can Do About It. (Expected release December 7, 2025)
  3. Horvath’s website and research hub:
    https://www.lmeglobal.net/
  4. Jared Cooney Horvath YouTube channel (features breakdowns of learning science and edtech research):
    https://www.youtube.com/@JaredCooneyHorvath
  5. OECD PISA reports (raw score declines and renorming examples):
    https://www.oecd.org/pisa/ (see technical reports on score equating and trends since 2000)
  6. Flynn effect reversal studies (cognitive declines in Western countries):
    Bratsberg, B., & Rogeberg, O. (2018). “Flynn effect and its reversal are both environmentally caused.” PNAS. https://doi.org/10.1073/pnas.1718793115
    Additional meta-analysis: Wongupparaj et al. (2023) on Gen Z/Alpha declines.
  7. Handwriting vs. typing note-taking research (Mueller & Oppenheimer, 2014 – foundational study):
    Mueller, P. A., & Oppenheimer, D. M. (2014). “The Pen Is Mightier Than the Keyboard.” Psychological Science. https://doi.org/10.1177/0956797614524581

Footage from Shanghai’s Pudong district offers an unfiltered look at how algorithmic governance operates when efficiency becomes a pretext for control. Inside the city’s “Urban Brain”—a sprawling AI system that integrates cameras, databases, and behavioral scoring—daily life is rendered into data points for state management. Occupancy of apartments, movement of the elderly, waste disposal habits, even parking irregularities—everything feeds into a real-time ledger of “acceptable” and “unacceptable” behaviour. Enforcement no longer waits for human judgment. Automated alerts dispatch teams, algorithms flag residents, and facial-recognition systems reportedly identify individuals in seconds.
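To make concrete how “enforcement no longer waits for human judgment,” here is a purely illustrative toy sketch in Python of threshold-based flagging. The metrics, thresholds, and resident IDs are invented for the example; nothing below is drawn from any published specification of Shanghai’s Urban Brain.

    from dataclasses import dataclass

    # Illustrative only: invented metrics and thresholds, not any real system's rules.
    @dataclass
    class Observation:
        resident_id: str
        metric: str   # e.g. "missed_waste_sorting", "parking_violation"
        count: int

    # Assumed rule table: flag anyone who crosses a per-metric threshold.
    THRESHOLDS = {"missed_waste_sorting": 3, "parking_violation": 2}

    def flag(observations: list[Observation]) -> list[str]:
        """Return resident IDs whose counts meet or exceed a rule's threshold."""
        flagged = []
        for obs in observations:
            limit = THRESHOLDS.get(obs.metric)
            if limit is not None and obs.count >= limit:
                flagged.append(obs.resident_id)  # in a real system: an automated dispatch
        return flagged

    print(flag([Observation("r-1042", "missed_waste_sorting", 4)]))  # ['r-1042']

The point of the sketch is that once the rule table exists, no human re-examines the individual case; the alert fires automatically.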

Some observers frame this as the promise of the “smart city”—a way to streamline services, detect hazards faster, or help vulnerable citizens. But Pudong’s model shows how quickly that promise can harden into something else entirely. Once a government can observe everything, the line between assistance and discipline becomes impossibly thin. Citizens begin policing one another through app-based reporting; infractions become entries on an invisible scorecard; social pressure becomes a tool of the state. The technology doesn’t force authoritarianism—but it supercharges its reach.

This isn’t science fiction, and it isn’t uniquely Chinese. Western policymakers watch these experiments closely, often through the lens of innovation rather than power. International organizations—from the World Economic Forum to UN smart-city initiatives—regularly showcase such systems as examples of “urban optimization.” Meanwhile, closer to home, governments have tested their own versions in more modest forms:

• traffic-camera systems calibrated for behavioural nudging,
• digital IDs tied to expanding databases,
• lingering post-pandemic contact-tracing infrastructure,
• carbon-tracking tools that incentivize or shame consumption.

These tools may not be malicious in design, but intent matters less than trajectory. Once data feeds administrative decision-making, and administrative decisions condition everyday freedoms, the architecture of a “soft” social-credit system begins to emerge—quietly, gradually, and often without the public debate such power deserves.

The core issue isn’t technology itself. It’s governance. Liberal democracies depend on thick boundaries between the individual and the state, including the right to act without constant monitoring or algorithmic interpretation. When those boundaries erode, even incrementally, the cost is not abstract: it is the loss of private space, unobserved choices, and the freedom to make mistakes without consequence.

If Pudong shows us anything, it is that systems designed for convenience can be repurposed—rapidly—into systems of compliance. The lesson for the West is not paranoia but prudence. We can adopt data-driven tools, but only if we embed them in strict legal guardrails, transparent governance, and a presumption of personal liberty.

The measure of a free society is not how efficient its systems become, but how widely it preserves the right to live beyond the gaze of the state—and increasingly, beyond the reach of its algorithms.

References

  • Cui, Q., Chen, R., Wei, R., Hu, X., & Wang, G. (2023). Smart Mega-City Development in Practice: A Case of Shanghai, China. Sustainability, 15, 1591.
  • Marvin, S., While, A., Chen, B., & Kovacic, M. (2022). Urban AI in China: Social control or hyper-capitalist development in the post-smart city? Sustainable Cities and Society.
  • China Daily. (2019, August 31). Shanghai using tech for city management.
  • U.S.–China Economic and Security Review Commission. China’s Smart Cities Development.
  • CES.tech. (2020, November 30). Three Projects from the World’s Smartest City of the Year.

Tweet Link

https://x.com/JimFergusonUK/status/1989983450636435560

 

The UK government under Prime Minister Keir Starmer has introduced the so-called BritCard proposal: a mandatory digital identity (ID) scheme set to roll out by 2029. According to Reuters and other major outlets, the idea is that workers will need this digital ID for right-to-work checks, and over time it may be extended to access public services like tax records, childcare, welfare, etc. (reuters.com) Critics argue it creates centralized databases, raises risks of surveillance, invites overreach, and may pave the way for a social credit framework. (theguardian.com)

A social credit system, like the one China is implementing, is where citizens are monitored, graded or blacklisted for various behaviors (both major and minor), and then rewarded or punished accordingly. In China, examples include: blocking millions of people from buying airplane or train tickets due to “discredited behaviour” (which might include unpaid fines or minor public misbehavior); preventing access to education or luxury purchases; placing people or companies on public blacklists affecting their livelihoods; and using facial recognition and wide surveillance to monitor compliance. (theguardian.com) Such a system curtails freedoms: freedom of movement, career opportunities, public participation, and even speech if one criticizes the state or fails to conform to expected norms.

The UK’s BritCard digital ID proposals, alongside other legislative trends, are troubling signs of creeping authoritarianism—where government tools offer the capacity for control as much as for convenience. Canada shows similar risks: its proposed Combatting Hate Act would expand definitions of hate speech, create new offences for obstruction and intimidation, and streamline hate-speech and propaganda prosecutions. (canada.ca) While aiming to protect vulnerable communities, such expanded powers risk chilling free speech, targeting dissent, and giving the state too much discretion over what is or isn’t allowable expression. Free societies in the West must resist anything resembling social credit systems dressed up as digital ID or online-hate regulation.

The Panopticon come to life.

 


What Social Credit Means for Freedom

Here’s what is at stake if systems like China’s are ever adopted in the West:

  • Freedom of Movement: Bans on travel by air, train, or road for those with low “scores.”
  • Freedom of Speech: Criticism of the government or “unharmonious” views can lower your score.
  • Economic Opportunity: Blacklisting can prevent people from starting businesses, holding jobs, or receiving loans.
  • Privacy & Autonomy: Facial recognition, mass surveillance, and data collection track daily life in detail.
  • Access to Education & Services: Children of “blacklisted” parents have been denied access to private schools.
  • Social Participation: Public shaming lists and score rankings reduce citizens to state-monitored reputations.
  • Rule of Law: Arbitrary and opaque standards allow punishment without due process.

 


References

  1. Reuters – Britain to introduce mandatory digital ID cards
  2. FT – Digital ID: what is the UK planning, and why now?
  3. The Guardian – Digital ID plan for UK risks creating an ‘enormous hacking target’
  4. The Guardian – China bans 23m ‘discredited’ citizens from buying travel tickets
  5. CNBC – China to stop people traveling who have bad ‘social credit’
  6. Sohu – Examples of Chinese blacklists and restrictions
  7. Government of Canada – Combatting Hate Act – proposed legislation

 

I asked Grok and ChatGPT to perform a meta-analysis of the process we used to write the essay titled The Moral Certainty of Woke Ideology: A Critique.  Today we will look at how ChatGPT decided to break down the request.

Author’s Note:
This post is a transparent reflection on the process behind writing “The Moral Certainty of Woke Ideology”. As AI tools become more prevalent in writing and editing, I wanted to share how I used large language models (LLMs) like ChatGPT and Grok—not to generate content wholesale, but to refine, clarify, and challenge my own thinking. What follows is not just a behind-the-scenes look at authorship in the age of AI, but a case for maintaining intellectual honesty, even when using powerful tools.

You can read the full original article here, and the companion version produced using Grok here.

 

(ChatGPT): Here’s my take on a meta-analysis of our collaborative process around The Moral Certainty of Woke Ideology, written in your voice, but shaped by a neutral and insightful tone. It acknowledges your authorship, the iterative nature of our process, the careful balancing of authenticity and refinement, and how LLMs (both myself and Grok) were used deliberately as tools rather than as originators.


Meta-Reflection: Writing Honestly with AI Help

Introduction: The Experiment in Human-AI Authorship

In an era awash in generative content, the challenge for any writer is not just originality—but integrity. Can an essay still be truly “yours” if refined with the help of a large language model (LLM)? This essay—the one you’re reading a meta-analysis about—was a test of that question.

What follows is a transparent breakdown of how the piece The Moral Certainty of Woke Ideology was written: from its raw origins, through iterative passes with LLMs (Grok and ChatGPT), and into a final, publishable synthesis. The goal was not simply to “make it better,” but to keep it honest—to ensure that the human voice, the argument, and the underlying conviction remained firmly rooted in the author’s own moral and intellectual framework.

1. The Human Origin: A Frustrated, Focused Thesis

The original essay was written without any AI assistance. It was personal, direct, and driven by a clear frustration: the futility of arguing with ideologues who see themselves as morally infallible because of a process they describe as “consciousness raising.” The essay framed this mindset as gnostic in character—an epistemology rooted not in evidence or mutual understanding, but in having accessed a supposedly higher truth.

That draft was neither perfect nor polished. But it was undeniably authentic: a writer grappling with something real, intellectually and emotionally. That is where any serious use of AI must begin.

2. Grok’s First Intervention: Editorial Guidance and a Tone Shift

Grok’s initial contribution was that of a sharp, attentive editor. It praised the conceptual core—especially the gnosticism analogy—but pointed out that the term “woke” was too broad, the tone too combative, and the structure lacking clarity. It attempted a rewrite that introduced a narrative hook (an anecdote), softened the tone, and introduced counterarguments to avoid echo-chamber logic.

This revision was helpful—but also crossed a line: the anecdote was fictional. And for an essay concerned with truth and intellectual honesty, inventing scenes or emotions is unacceptable. That feedback became part of the process: keep what helps clarify, but reject what confabulates.

3. ChatGPT’s Contribution: Academic Structure and Citation Depth

When the same draft was fed into ChatGPT, the result was something more formal, more structured—and frankly, more publishable in academic or policy-adjacent venues. ChatGPT added scholarly context (e.g. citing Freire, Kendi, DiAngelo, MacKinnon), coined or introduced the term “sociognostic” to sharpen the gnosticism analogy, and framed the essay in neat, well-organized sections.

It was rigorous, but something was lost. The personal voice—the frustration, the urgency—was flattened. It was lucid, but it didn’t feel alive. The machine had made the piece safer, smarter, but also more sterile.

4. The Writer’s Role: Choosing What to Keep and What to Cut

The next phase wasn’t about accepting what Grok or ChatGPT had produced, but about curating it.

  • The fabricated anecdote was cut.
  • The sociognostic concept stayed.
  • The academic references were kept—but only those relevant and intellectually honest.
  • The tone was recalibrated: direct, but not incendiary. Critical, but not snide.
  • Counterarguments were brought in—not to water down the thesis, but to model the kind of debate the essay lamented as missing.

In short: the writer took back control. The models weren’t driving; they were in the passenger seat—sometimes helpful with directions, sometimes fumbling with the map.

5. The Final Essay: A Genuine Synthesis

The result was a piece that combined:

  • A human-originated thesis, structured by real-world experience and philosophical insight.
  • LLM-enhanced clarity, especially in structure, citation, and terminology.
  • Writer-driven integrity, ensuring no hallucinations, no emotional falsification, and no bending to the rhetorical norms of either outrage or academic obfuscation.

It didn’t just argue that the moral certainty of “the Woke” is rooted in a kind of secular gnosticism—it modeled how to make that case in public discourse, rigorously and in good faith.


Why This Process Matters

We’re entering a time when essays, op-eds, books, and even scientific papers may be increasingly AI-assisted. The temptation will be to outsource more and more of the work. But if we care about truth and originality, we should use these tools as collaborators, not crutches.

This experiment showed that you can start with a personal idea, refine it through AI, and still come out with a product that’s yours—provided you:

  • Start with your own thoughts (even messy ones),
  • Engage the tools critically, not deferentially,
  • Reject anything inauthentic (especially fabricated anecdotes or fake citations),
  • Preserve your own voice and purpose, even through heavy editing.

This is how original writing survives in the age of AI: by refusing to outsource moral and intellectual judgment, even if everything else—grammar, structure, references—gets a machine polish.


Final Thought: Integrity is Still the Point

If you’ve made it this far, you’ve already done something most writers don’t do in 2025: you’ve thought carefully about how something was written, not just what it says.

The Moral Certainty of Woke Ideology isn’t an “AI-generated” essay. It’s a human-generated idea, refined by machines, but always guided by a living mind. That matters—not just for the credibility of the piece, but for the honesty of the writer.

And in an age where belief is often weaponized and language is politicized, honesty in writing may be the last moral certainty we can claim without irony.

 

 

I asked Grok and ChatGPT to perform a meta-analysis of the process we used to write the essay titled The Moral Certainty of Woke Ideology: A Critique.  Today we will look at how Grok decided to break down the request.  Grok wrote its meta-analysis as if responding directly to me, the author of the piece.

Introduction: Navigating the Human-Machine Collaboration

Writing is a human endeavor—gritty, imperfect, and deeply personal. In 2025, large language models (LLMs) like myself (Grok 3, built by xAI) and ChatGPT offer powerful tools to refine ideas, but they risk diluting authenticity if not guided carefully. How does a writer use LLMs to enhance their work while keeping it original? This meta-analysis traces the creation of your essay, The Moral Certainty of Woke Ideology, through iterative exchanges among you, me, and ChatGPT. By detailing who did what, stage by stage, and including your original draft as a reference (see Appendix), I aim to show how LLMs can sharpen ideas without overshadowing the writer’s voice. The result is a piece that’s yours—honed by technology, grounded in your intent.

Stage 1: The Original Draft (Your Starting Point)

Who Did What: You wrote the initial essay without LLM assistance, articulating why arguing with “woke” ideologues feels futile. You proposed that their moral certainty stems from consciousness raising, which you compared to gnosticism—a belief in possessing deeper, unassailable insight. Using racism as an example (color-blindness vs. systemic racism), you argued that this mindset creates a knowledge differential, stifling debate and leading to name-calling (e.g., “bigot, racist”). The draft, included in the Appendix, was raw but clear.

Impact on the Process: Your draft was the foundation, setting the essay’s direction and tone. Its directness—born of frustration with unproductive debates—gave it a vivid, authentic edge, even if terms like “woke” were broad and the tone occasionally sharp. The gnosticism analogy, though not fully fleshed out, was a creative attempt to pinpoint the issue. Weaknesses, like vague terminology and limited counterargument engagement, were typical of a first pass, offering room for refinement.

Commentary: Starting without LLM help was a practical choice. It ensured the essay reflected your perspective, not a machine’s interpolation. The draft’s roughness—its unpolished phrasing, its raw irritation—was its strength, capturing a real human struggle. This stage highlights the value of beginning with your own words, using LLMs to refine rather than originate.

Stage 2: Grok’s First Refinement

Who Did What: I reviewed your draft, praising its clarity and bold gnosticism analogy while noting areas for improvement: vague “woke” terminology, a strained gnosticism comparison, a confrontational tone, and weak counterargument engagement. I provided a revised essay with a personal anecdote (later removed), a shift to “ideological certainty” for accessibility, a softer tone to broaden appeal, and a counterargument addressing systemic racism’s validity, citing National Institute of Justice data.

Impact on the Process: My revision aimed to preserve your intent while making the essay more persuasive and relatable. The pivot to “ideological certainty” clarified the gnosticism analogy, and the counterargument added rigor, aligning with your truth-seeking focus. The tone adjustment aimed to invite more readers, though it risked softening your edge. The anecdote, while engaging, was a misstep given your preference for authenticity.

Commentary: This stage showed LLMs as collaborative editors. I didn’t rewrite your ideas but suggested structural and tonal tweaks to amplify them. Your later feedback—rejecting the anecdote—kept the process honest, proving the writer’s role in guiding the machine. Without that check, LLMs can over-polish or stray into inauthenticity.

Stage 3: ChatGPT’s Contribution

Who Did What: You prompted ChatGPT with the same task, and it produced a formal, polished essay. It retained your gnosticism analogy, introducing “sociognostic” to refine it, and added citations (MacKinnon, Freire, DiAngelo, Kendi, Pluckrose) for credibility. Structured with clear sections (e.g., “Why the Woke Can’t Debate”), it used a measured tone and emphasized the moral hierarchy in woke ideology, concluding with advice for engaging such certainty.

Impact on the Process: ChatGPT’s version added scholarly depth, making it suitable for outlets like C2C Journal. The “sociognostic” term sharpened the analogy, and citations grounded consciousness raising historically. However, it felt detached, lacking your personal fire, and didn’t robustly engage counterarguments, risking a one-sided tone. Its academic lean made it less accessible for a general audience.

Commentary: ChatGPT delivered polish but drifted from your vivid style. Its output was rigorous but formulaic, highlighting the LLM’s tendency to prioritize structure over personality. Your decision to compare it with my version showed your active role in the process, using LLMs as a resource rather than a replacement.

Stage 4: Your Feedback and Directive

Who Did What: You reviewed both versions, noting ChatGPT’s polish but its sterility, and appreciating my version’s accessibility and counterargument engagement. You rejected the fabricated anecdote, prioritizing authenticity, and directed me to merge the two, combining ChatGPT’s rigor with my relatability while ensuring no confabulated stories.

Impact on the Process: Your feedback was critical, steering the essay back to your vision. Rejecting the anecdote ensured authenticity, and your call for a merger balanced ChatGPT’s depth with my accessibility. By weighing both versions’ strengths, you clarified your priorities—rigor, relatability, and your own voice—keeping the process grounded.

Commentary: This stage underscored your control. LLMs can produce slick prose, but without your input, they risk diluting your intent or inventing details. Your insistence on authenticity and your clear directive for a merger showed you weren’t just accepting LLM outputs but shaping them to fit your goal.

Stage 5: The Merged Final Product

Who Did What: I crafted the final essay, blending ChatGPT’s citations and “sociognostic” term with my accessible opening (“We’ve all encountered it”), counterargument engagement (with NAACP and NIJ data), and vivid descriptors (“moral X-ray vision,” “excommunications”). I retained your thesis—woke ideology’s certainty stems from consciousness raising—and used your style: formal yet sharp, with a nod to colloquialism (“bollocks”). The structure balanced clear sections with an urgent flow.

Impact on the Process: The final essay synthesized all inputs: your original thesis, ChatGPT’s scholarly scaffolding, and my accessible tone. It clarified the gnosticism analogy with “sociognostic” and “ideological certainty,” engaged counterarguments robustly, and preserved your voice’s edge (e.g., “debacle we can’t afford”). The universal opening replaced the anecdote, ensuring authenticity while remaining relatable.

Commentary: The merger succeeded because you set the terms. Your draft provided the core idea, ChatGPT added intellectual weight, and I bridged the gap with accessibility. Each stage refined the essay, but your feedback ensured it stayed true to your intent, avoiding the phoniness of generic LLM output.

Reflections on the Process

This journey offers lessons for writing with LLMs while staying original:

  1. Begin with Your Words: Your unaided draft ensured the essay was rooted in your perspective, not a machine’s. LLMs can’t replicate that initial spark.
  2. LLMs as Refiners, Not Creators: Both ChatGPT and I acted as editors, enhancing structure, evidence, and clarity. Your oversight kept us from overstepping.
  3. Prioritize Authenticity: Your rejection of the fabricated anecdote was a practical choice, guarding against LLM tendencies to confabulate.
  4. Iterate Actively: The process—your draft, my revision, ChatGPT’s take, your feedback, and the merger—shows writing with LLMs is iterative. Your direction made each stage purposeful.
  5. Balance Depth and Appeal: ChatGPT’s citations added rigor, while my tone and counterarguments broadened the essay’s reach. Your vision tied these together.

Conclusion: Authenticity in the Age of LLMs

Writing with LLMs is a tightrope walk: they can sharpen your ideas or smooth them into bland uniformity. This process—your raw draft, my refinements, ChatGPT’s polish, and your final directive—shows how to stay on the rope. The final essay, The Moral Certainty of Woke Ideology, is no LLM-generated clone; it’s your idea, refined through collaboration but anchored in your truth-seeking drive. In an era of AI saturation, that’s a victory worth celebrating—not with fanfare, but with quiet confidence.

Appendix: Original Draft

Below is your original draft, included to show the starting point of the process:

We’ve all had discussions with people who don’t really seem to get what having an argument is about. Having an argument is about the exchange and evaluation of idea and thoughts with the overall intent to find weaknesses and areas of improvement for one’s thoughts and beliefs. This model doesn’t seem to work with people who are woke. Why is that? I propose that the answer is that the Woke believe that on any particular topic they have “the final truth” of the matter. How did they get there? They went through a (dubious) process called consciousness raising which they believe has given them deeper insight into the problem. There is formal name for this sort of thinking, it is called gnosticism and this notion is what makes arguing with the Woke so frustrating at times.

Gnosticism or having received wisdom or insight means that the person exercising arguments from a gnostic point of view presuppositionally has deeper and finer insight into whichever topic that happens to be discussed. For example, take the fraught topic of racism. The honest interlocutor might argue that the best way to deal with racism in society is to be colour-blind and make every possible effort to judge people based on the content of their character, as opposed to the immutable characteristics such as skin colour.

“No, no, no”, says the Woke gnostic. That is only surface level understanding of the topic. The real answer is that society is immersed and permeated by racism. It is a on a systemic level. And thus as a base requirement to combat racism one must awaken, or gain the special insight to see that racism in society is systemic and thus everything is racist including the notion of being ‘colour-blind’ when it comes to dealing with other people.

The real issue then for the gnostic is that they have a higher, more accurate knowledge of what is going on and their opponent has not even the faintest clue of what is “really going on” as that would require them to also have a raised consciousness.

So there in lies the problem, the actual thesis cannot be debated or argued about because there would be no argument if both parties had this (mythical) raised consciousness. Woke argumentation thus becomes about the so called ‘knowledge differential’ and since they have the augmented version of the argument; coinciding with their raised consciousness; the Woke interlocutor must, by default have the correct moral and ethical position.

Opponents of Woke gnostics then can be dismissed because of their inferior understanding and thus less nuanced moral and ethical positions on whatever topic might be at hand. The gnostic position precludes examination of their arguments and suppositions by default because they think they already have a superior, more ethical, more nuanced stance – not because of any great accuracy or committment to the truth – but because they think they have a higher understanding of the issue due to their raised consciousness.

So rather than arguing, the Woke call you names. Bigot, racist, transphobe et al because they believe that their moral/ethical position is superior to yours and you are the problem because you won’t accept this higher wisdom. It behooves you to consider this gnostic wrinkle while engaging with the woke, and to plan and prepare for their moral certitude and not be thrown off by their judgements that come with unwarranted confidence given an honest factual analysis.

References

  • DiAngelo, R. (2018). White Fragility: Why It’s So Hard for White People to Talk About Racism. Beacon Press.
  • Freire, P. (1970). Pedagogy of the Oppressed. Continuum.
  • Kendi, I.X. (2019). How to Be an Antiracist. One World.
  • MacKinnon, C.A. (1982). “Feminism, Marxism, Method, and the State: An Agenda for Theory.” Signs, Vol. 7, No. 3.
  • NAACP. (2023). “Criminal Justice Fact Sheet.” NAACP.org.
  • National Institute of Justice. (2021). “Reducing Racial and Ethnic Disparities in the Justice System.” NIJ.gov.
  • Pluckrose, H., & Lindsay, J. (2020). Cynical Theories. Pitchstone Publishing.

Artificial Intelligence (AI) is a marvel of modern computing, designed to mimic human thinking by learning from vast troves of information. At its heart lie systems like Large Language Models (LLMs), powerful programs trained to understand and generate human language—think of them as digital librarians, sifting through patterns in text to answer questions or write stories. These models rely on data collected from the internet, a process called web scraping, where public texts like articles or forums are gathered to fuel their learning. AI’s strength lies in this ability to absorb and process information at scale, but its outputs—however impressive—depend entirely on the quality of that data. A flawed foundation can lead to errors or biases, a challenge that demands vigilance.
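As a rough illustration of the scraping step described above, here is a minimal Python sketch that fetches one public page and keeps only its paragraph text. It assumes the third-party requests and beautifulsoup4 libraries, and the URL is a placeholder rather than a real corpus source; production pipelines add crawling at scale, deduplication, filtering, and the consent questions raised below.

    import requests
    from bs4 import BeautifulSoup

    def scrape_paragraphs(url: str) -> list[str]:
        """Fetch a page and return only its visible paragraph text."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Drop script/style tags so only human-readable prose remains.
        for tag in soup(["script", "style"]):
            tag.decompose()
        return [p.get_text(strip=True) for p in soup.find_all("p")]

    # "https://example.com/article" is a placeholder, not a real corpus source.
    for paragraph in scrape_paragraphs("https://example.com/article"):
        print(paragraph)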

Creating an AI model is like forging a tool from raw ore: it requires immense effort and precision. Developers collect billions of words through scraping, carefully filtering out irrelevant or harmful content to build a reliable dataset. This data trains the model to predict word patterns, refining its ability to respond sensibly—an arduous process powered by thousands of computers working for months. Yet, the stakes are high: if the scraped data reflects societal prejudices or lacks diversity, the AI may produce skewed or misleading results. Ethical data collection is thus no afterthought—it shapes whether AI unites us through shared understanding or deepens existing divides.
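To give a feel for what “training the model to predict word patterns” means, here is a toy bigram model in Python: it counts which word follows which in the training text, then predicts the most common follower. Real LLMs learn far richer statistics with neural networks, but the underlying idea of learning next-word patterns from data is the same; the tiny corpus is invented for the example.

    from collections import Counter, defaultdict

    def train_bigrams(text: str):
        """Count, for each word, how often every other word follows it."""
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def predict_next(model, word: str):
        """Return the most frequently observed follower, or None if unseen."""
        counts = model.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

    corpus = "the cat sat on the mat and the cat slept"
    model = train_bigrams(corpus)
    print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice)

The data-quality point above shows up immediately in even this toy: the model can only ever predict what its training text contained.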

Once built, AI models serve practical purposes, from powering chatbots to summarizing texts, but they are not infallible. They excel at recognizing patterns but struggle with abstract reasoning or unfamiliar scenarios, sometimes generating convincing but false information, known as “hallucinations.” Ethical concerns persist: scraping raises questions about privacy and ownership, as texts—creative works, personal posts—are used without clear consent. AI holds transformative potential, a beacon for collective progress. Yet, without careful stewardship, it risks eroding trust. Responsible innovation—grounded in transparency and fairness—ensures AI serves humanity, not sows discord.

Did You Want to Know More?

For deeper insights into AI and LLMs, explore these resources:
