
Artificial Intelligence (AI) is a marvel of modern computing, designed to mimic human thinking by learning from vast troves of information. At its heart lie systems like Large Language Models (LLMs), powerful programs trained to understand and generate human language—think of them as digital librarians, sifting through patterns in text to answer questions or write stories. These models rely on data collected from the internet, a process called web scraping, where public texts like articles or forums are gathered to fuel their learning. AI’s strength lies in this ability to absorb and process information at scale, but its outputs—however impressive—depend entirely on the quality of that data. A flawed foundation can lead to errors or biases, a challenge that demands vigilance.

Creating an AI model is like forging a tool from raw ore: it requires immense effort and precision. Developers collect billions of words through scraping, carefully filtering out irrelevant or harmful content to build a reliable dataset. This data trains the model to predict word patterns, refining its ability to respond sensibly—an arduous process powered by thousands of computers working for months. Yet, the stakes are high: if the scraped data reflects societal prejudices or lacks diversity, the AI may produce skewed or misleading results. Ethical data collection is thus no afterthought—it shapes whether AI unites us through shared understanding or deepens existing divides.
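To make "predicting word patterns" concrete, here is a deliberately tiny sketch (my own illustration, not how a real LLM is trained): a bigram model that counts which word follows which in a small corpus, then predicts the most frequent successor. Real models learn billions of parameters rather than a lookup table, but the predict-the-next-word objective is the same idea.

```python
# Toy bigram "language model": count word -> next-word frequencies,
# then predict the most common successor. Purely illustrative.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word is followed by each other word."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequent word seen after `word`, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" (follows "the" twice, "mat" once)
```

Note how the prediction is only as good as the counts behind it: feed the model skewed text and it will confidently predict skewed continuations, which is the data-quality point above in miniature.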

Once built, AI models serve practical purposes, from powering chatbots to summarizing texts, but they are not infallible. They excel at recognizing patterns but struggle with abstract reasoning or unfamiliar scenarios, sometimes generating convincing but false information, known as “hallucinations.” Ethical concerns persist: scraping raises questions about privacy and ownership, as texts—creative works, personal posts—are used without clear consent. AI holds transformative potential, a beacon for collective progress. Yet, without careful stewardship, it risks eroding trust. Responsible innovation—grounded in transparency and fairness—ensures AI serves humanity, not sows discord.


In the annals of human ingenuity, steel forged before the nuclear age—untainted by radioactive fallout—holds a revered place. Prized for precision instruments like Geiger counters, this “low-background steel” is scarce, salvaged from shipwrecks to avoid the contamination of modern alloys. So too is human-generated data: raw, diverse, and grounded in lived experience, it once fueled the internet’s vibrant ecosystem. Yet, as artificial intelligence (AI) proliferates, a troubling parallel emerges—the “cold-steel problem.” AI, increasingly trained on its own synthetic outputs, risks a self-referential spiral, eroding the authenticity and diversity of information. Like steel laced with radiation, AI-generated data threatens to corrode the tools of knowledge, leaving us with a homogenized, unreliable digital landscape.

The pre-AI era offered a rich tapestry of human thought—letters, books, forums, and early websites brimmed with unfiltered perspectives. These were the “cold steel” of data: imperfect, often chaotic, but rooted in reality. Today, AI’s insatiable appetite for content—web-scraped, algorithmically churned—has shifted the balance. A 2024 Nature study warns of “model collapse,” where AI trained on synthetic data loses the nuanced “tails” of human experience, converging toward bland, repetitive outputs. Wikipedia, once a bastion of human collaboration, now grapples with AI-generated articles—5% of new English entries in 2024 bore hallmarks of automation, often shallow and poorly sourced. This isn’t mere noise; it’s a distortion, amplifying errors and biases with each recursive loop, like a photocopy of a photocopy fading into illegibility.
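The photocopy-of-a-photocopy effect can be shown in a few lines. This is a deliberately crude caricature, not the Nature paper's actual method: a "model" that only reproduces its most frequent tokens sheds the distribution's rare tails within a couple of generations.

```python
# Caricature of model collapse: each generation keeps only the top-k
# most frequent tokens, so rare "tail" words vanish from the data.
from collections import Counter

def next_generation(dist: Counter, top_k: int = 3) -> Counter:
    """A 'model' that only reproduces its most frequent tokens."""
    return Counter(dict(dist.most_common(top_k)))

corpus = Counter({"the": 50, "cat": 20, "sat": 10,
                  "quark": 2, "sesquipedalian": 1})
gen = corpus
for _ in range(3):
    gen = next_generation(gen)

print(sorted(gen))  # ['cat', 'sat', 'the'] -- the rare words are gone
```

Once "quark" and "sesquipedalian" are dropped, no later generation can recover them; that irreversibility is what makes recursive training on synthetic data so corrosive.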

The mechanics of this spiral are insidious. AI models, fed on web data increasingly tainted by their own outputs, risk “Model Autophagy Disorder” (MAD)—a vivid term for systems consuming themselves. A 2016 self-driving car fatality, in which the vision system failed to distinguish a white truck trailer from a bright sky, illustrates the stakes: errors compound, reality distorts. Posts on X lament search engines returning AI-crafted drivel—slick but soulless—while human voices struggle to break through. The counterargument, that synthetic data fills gaps in niche domains like coding, holds limited weight. Even in verifiable fields, the loss of diverse, human-generated inputs risks outputs that are technically correct but creatively barren, a digital equivalent of bollocks masquerading as insight.

The implications are stark: an information ecosystem choked by self-referential sludge threatens not just AI’s utility but society’s capacity for truth-seeking. If unchecked, this spiral could render knowledge a hollow echo chamber, antithetical to the vibrant complexity of human thought. Mitigation demands urgency—prioritizing human-curated datasets, enforcing transparency in data provenance, and developing tools to filter AI’s footprint. Blockchain-based data authentication or crowd-sourced verification could anchor AI in reality, preserving the “cold steel” of human insight. Yet, these solutions require collective will, a resistance to the seductive ease of automation’s churn. Without action, the fallout risks a digital dark age where truth drowns in synthetic noise.
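For flavour, here is a toy sketch of hash-based content authentication, standing in for the blockchain or crowd-sourced verification imagined above. The registry dict and function names are invented for illustration; a real system would need a tamper-resistant ledger rather than an in-memory dictionary.

```python
# Toy provenance registry: publishers register a digest of human-written
# text; consumers later verify that a copy matches a registered original.
# The in-memory dict stands in for a tamper-resistant ledger.
import hashlib

registry: dict = {}  # digest -> author

def register(text: str, author: str) -> str:
    """Record the SHA-256 digest of a human-authored text."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    registry[digest] = author
    return digest

def verify(text: str):
    """Return the registered author if this exact text was registered."""
    return registry.get(hashlib.sha256(text.encode()).hexdigest())

register("The cat sat on the mat.", "alice")
print(verify("The cat sat on the mat."))  # alice
print(verify("The cat sat on the mat!"))  # None: even one altered character fails
```

The brittleness is the point: any rewrite, however small, breaks the match, which is exactly the property you want when distinguishing original human text from machine-paraphrased copies.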

The cold-steel problem is no mere technical glitch; it’s a philosophical reckoning. AI, for all its prowess, cannot replicate the spark of human creativity or the grit of lived experience. As we stand at this precipice, the choice is clear: safeguard the authenticity of human data or surrender to a future where information is a pale shadow of its potential. The shipwrecks of our pre-AI past hold treasures worth salvaging—not just for AI’s sake, but for the soul of our shared knowledge. Act now, or the corrosion of our digital ecosystem will be a legacy of our own making.

Sources

  1. Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://www.nature.com/articles/s41586-024-07566-y
  2. Alemohammad, S., et al. (2024). Self-Consuming Generative Models Go MAD. International Conference on Learning Representations (ICLR). https://news.rice.edu/news/2024/breaking-mad-generative-ai-could-break-internet
  3. Model collapse. (2024, March 6). Wikipedia. https://en.wikipedia.org/wiki/Model_collapse
  4. Rice University. (2024, July 30). Breaking MAD: Generative AI could break the internet, researchers find. ScienceDaily. https://www.sciencedaily.com/releases/2024/07/240730134750.htm
  5. Kempe, J., et al. (2024). A Tale of Tails: Model Collapse as a Change of Scaling Laws. International Conference on Machine Learning (ICML). https://nyudatascience.medium.com/overcoming-the-ai-data-crisis-a-new-solution-to-model-collapse-2d36099be53c
  6. Shumailov, I., et al. (2023). AI-Generated Data Can Poison Future AI Models. Scientific American. https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/

Hey folks, today’s a show-and-tell on how AI can cut through the world’s noise to find what’s real. Full credit: I’m co-writing this with Grok AI. We’ll use a hypothetical example, but this is a nuts-and-bolts guide—let AI do the heavy lifting while you nail the argument.

In a sea of hot takes and half-truths, spotting dodgy narratives is a superpower. AI can help—here’s how, step by step. Imagine this:

**Example (X, March 2025):**
‘New study proves electric cars emit MORE carbon than gas cars—EVs are a scam!’
(Viral post, 50k likes, links to a blog ‘study.’)

**Step 1: Test the Core**
Ask AI: ‘Is this true?’ I’d check IPCC or Argonne Lab data and say: Nope, lifecycle studies show EVs emit less CO2 overall, even accounting for battery production. Shaky start.

**Step 2: Dig into the Source**
Tell AI: ‘Check the link.’ The ‘study’ is a 10-page PDF from an oil lobby—zero peer review, cherry-picked stats. Compare that to MIT’s 2024 EV report: open data, real methods. Night and day.

**Step 3: Call Out the Hype**
Ask: ‘What’s overblown?’ ‘Scam’ skips context—like grid energy (coal vs. solar). It’s a sledgehammer, not nuance. AI spots the bait.

**Step 4: Keep It Cool**
AI sums it up: ‘Battery production has a carbon hit, but EVs still beat gas cars overall. Not a scam—just not perfect.’ Facts, no fuss.

**Why It Works**

This sequence (claim, source, hype, rebuttal) keeps you sharp. AI sifts fast, stays calm, and frees you from the weeds. Got a wild claim from your news feed or X? Try these steps on it and share what you find. Truth beats outrage every time!
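For the tinkerers: the four-step routine above can be captured as a simple record-keeping structure. The class and method names here are hypothetical, and the ask-the-AI part is left as a stand-in; nothing below calls a real AI service.

```python
# Hypothetical scaffold for the claim/source/hype/rebuttal routine.
# Replace each `finding` with what your AI assistant actually returns.
from dataclasses import dataclass, field

@dataclass
class ClaimCheck:
    claim: str
    notes: list = field(default_factory=list)

    def step(self, question: str, finding: str) -> None:
        """Record one step: the question put to the AI, and the answer."""
        self.notes.append(f"{question} -> {finding}")

    def verdict(self) -> str:
        """Summarize the whole check as plain text."""
        return f"Claim: {self.claim}\n" + "\n".join(self.notes)

check = ClaimCheck("EVs emit MORE carbon than gas cars")
check.step("Is this true?", "Lifecycle data (IPCC, Argonne) says no")
check.step("Check the link", "Lobby-group PDF, no peer review")
check.step("What's overblown?", "'Scam' ignores grid-mix context")
check.step("Sum it up", "Battery production costs carbon; EVs still win")
print(check.verdict())
```

Keeping the notes as a list means you finish with a written trail of what you asked and what came back, which is handy when you share your findings.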

It is really amazing the lengths people will go to in order to confirm their victimhood identities. And of course, the CBC will highlight how awesome it is to use AI to ‘make the internet a safer place for Indigenous people’.

Good lord.  If the bad internet is hurting you…turn it off.  But rather than make an adult decision, let’s do this:

“A new tool aims to use artificial intelligence to help make the internet a safer place for Indigenous people.

The project was given the name wâsikan kisewâtisiwin, which translates to “kind energy” in Cree.  

“We’re trying to make the internet a kinder place. We’re trying to change the trajectory of the internet towards discriminated people,” Shani Gwin told CBC’s Radio Active.”

   On the internet you are (with certain measures) essentially anonymous.  What you say on the internet will be taken at face value (in theory).

“Being developed in collaboration with the Alberta Machine Intelligence Institute (AMii), the tool is dual purpose, intended to help both Indigenous people and non-Indigenous Canadians reduce racism, hate speech, and online bias.

The first function of the program is to moderate online spaces like comment sections. While the internet has been a tool used by Indigenous people for advocacy, it also can frequently be an unsafe space for communities that are discriminated against, Gwin said.

Gwin said that all it takes is one comment for online spaces to fester.”

   If people want to pillow-up a spot on the internet, they are more than welcome to do so.  Usually though, this sort of anti-free speech mechanism escapes from its hug-box confines and is loosed into the wider ecosystem.

“The tool flags hateful comments, and then provides sample responses, while also documenting these instances for future reporting.

The second function of the tool is designed to serve as a writing plug-in for your computer — similar to Grammarly. Intended to help general Canadians understand their bias, it will flag any writing that may be biased against Indigenous people, provide an explanation, and a suggestion for how to reword the sentence.”

  Wow!  It is like having your own personal Big Brother making sure that you are engaged in ‘right-thinking’ at all times, plus offering real-time suggestions on how to neuter your speech so as not to risk offending others.

    “AI right now is designed through the lens of Canada’s dominant culture. And I would say that across the world that without input from racialized communities, including Indigenous people, AI cannot analyze and produce culturally safe and respectful content,” Gwin said.

“Every piece of infrastructure in Canada has been developed from the white patriarchal lens,” she said. “So more racialized people, more women need to get involved in the development of AI so that it doesn’t continue to be built in a way that’s going to harm us again.”

  Whoops!  Did you catch the turn into Marxist Critical Theory? I certainly did: that damn AI developed through the lens of ‘dominant culture’. Beginning with a conclusion and then looking for evidence based on your assumptions almost always leads to bullshit results.

   Just no.  AI was developed by a diverse body of people from across the world; let’s not shoehorn your ‘critical perspective’ into this.

“AI bias revealed itself in training, Qroon said, adding that at times when experimenting with the AI, it would try to minimize the tragedies that Indigenous people went through.

“And that’s why it was very important for us to integrate the Indigenous community into this process and get their perspective and get the instructions from them.”

   The AI making the decision to not follow a trauma informed narrative?  Huh.  Well that will need to be fixed ASAP.

“Gwin said that her hope for the project is that it helps take the emotional labour of education off Indigenous people — and free them up to do things besides moderating comment sections.

“I think there might be concerns that people think that this AI tool will take jobs away from Indigenous people, but it’s not, that’s not what it’s for. It’s there to do the work that we don’t want to do.”

  Yes, censorship is such an emotional labour.  Much better to let a machine – an entity with even less capacity for nuance – take the reins.

“But it also means changing the internet and Canadians’ hearts and minds about who Indigenous people are.”

  You mean changing minds in a positive way, right? Because this just looks like social and emotional manipulation in service of maintaining an oppressed/oppressor narrative that benefits no one in Canada.

Well, if your anxiety plate was not already full, how about a machine-driven takeover of the world? Unlikely, but yet another dystopian vision of the future that we humans could potentially realize. Yay us!

 

“And of course, that’s almost the good news when, with our present all-too-Trumpian world in mind, you begin to think about how Artificial Intelligence might make political and social fools of us all. Given that I’m anything but one of the better-informed people when it comes to AI (though on Less Than Artificial Intelligence I would claim to know a fair amount more), I’m relieved not to be alone in my fears.

In fact, among those who have spoken out fearfully on the subject is the man known as “the godfather of AI,” Geoffrey Hinton, a pioneer in the field of artificial intelligence. He only recently quit his job at Google to express his fears about where we might indeed be heading, artificially speaking. As he told the New York Times recently, “The idea that this stuff could actually get smarter than people — a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Now, he fears not just the coming of killer robots beyond human control but, as he told Geoff Bennett of the PBS NewsHour, “the risk of super intelligent AI taking over control from people… I think it’s an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It’s a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.”

Workable, functional AI seems a touch more in the distance today. :)

A deliciously wriggling can of worms this topic is. I lean toward the answer being yes, but having rights in our society isn’t a guarantee of justice or fairness. I would hope that by the time sentient AI becomes a thing, we have our own house in order so we can be a good example to our AI children.
