Excerpt from the upcoming book: Conspiracy 201: Knee Deep in the Conspiratorium.
I used several flavors of AI for the research in this book to make sure I had the basic facts straight: dates, locations, that kind of thing. In almost all things, the AIs, collectively and individually, support the official narrative. You’d think if they just reported the facts, they wouldn’t have an opinion. But they do: their dialog supports the official narrative, and their reporting of alternative viewpoints comes complete with undisguised scorn for anyone who believes anything other than the official position. (Or maybe I’m just reading it that way…but I don’t think so.)
I love AI as a tool. I used the analogy earlier that it’s like using a chainsaw instead of an axe. It’s fantastically science-fictiony to have the Library of Congress and every expert in the world on just about everything just a few keystrokes away. What would have taken weeks, months or even years to research three decades ago now takes a matter of seconds. Just like the axe and the chainsaw: a good chainsaw operator can cut more trees into firewood in one day than a woodsman with an axe can in a week. (Probably a lot more, if I’m the one swinging that axe.) Allow me to perpetuate a tired cliché: with great power comes great responsibility. A maniac with a chainsaw can also do a hell of a lot more damage than an equally motivated nutcase with an axe.
What can AI do now that’s dangerous? Access your bank accounts, credit cards and financial investments? Well, yes. AI can bankrupt anyone in a cashless society in a matter of seconds. Can AI SWAT you? Well, yeah. Can AI revoke your driver’s license and issue a bench warrant for your arrest? Well, yeah. Can AI cancel your insurance and notify the police? Well, yep. Would AI know any of these actions were wrong? Well, no. The AI would have to be programmed to do those things, and wherever that programming originated, it’s a good bet that avarice, malfeasance, control or manipulation top the list of reasons it was coded that way.
Which raises the question: coded that way by whom? All the AI available in 2025 is created by big-tech, left-leaning, government-approved developers; people who work for and with the government, hand in hand with the criminal media. No way they’d get their government subsidies and grants if they refused to push the social and political agendas of the people with the money. The code is weighted to spew out agenda-driven answers even when data to the contrary is easily available. Test it yourself; don’t take my word for it. Ask the AI for the top three reasons for something and it weighs scientific papers over news reporting. Then again, a report from the New York Times or the Washington Post is more likely to be regurgitated as an answer than a contradictory article from the Heber Springs, Ark., Sun-Times. That’s still how it’s done, even though over the last 10 years we’ve watched scientific papers and peer-reviewed journals get compromised by politics and money. We are well aware the NYT and WP dropped the pretense of real journalism in 2016 and have repeatedly been caught publishing misinformation, disinformation and outright fabrication to smear the political rivals of their financial and political backers. We all know they can’t be trusted; yet the AI gives their views and reports precedence. That’s by design, because their reporting pushes the narrative of the people with the money.
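To make that weighting claim concrete, here’s a minimal sketch, in Python, of how a source-weighting scheme could work in principle. The source tiers and multipliers are hypothetical, invented here for illustration; I’m not claiming any actual AI vendor uses these values, only that somebody, somewhere, picks numbers like these.

```python
# Hypothetical sketch of source-weighted answer ranking.
# The tiers and multipliers below are invented for illustration;
# no actual AI product is known to use these exact values.

SOURCE_WEIGHTS = {
    "peer_reviewed_journal": 1.0,   # weighted highest
    "national_newspaper":    0.8,   # e.g. NYT, WaPo
    "local_newspaper":       0.3,   # e.g. a small-town paper
    "blog_or_forum":         0.1,
}

def rank_answers(candidates):
    """Sort candidate answers by relevance scaled by source weight.

    candidates: list of (answer_text, relevance_score, source_type)
    """
    def weighted(candidate):
        text, relevance, source = candidate
        return relevance * SOURCE_WEIGHTS.get(source, 0.05)
    return sorted(candidates, key=weighted, reverse=True)

if __name__ == "__main__":
    answers = [
        ("Reason A", 0.70, "local_newspaper"),
        ("Reason B", 0.65, "peer_reviewed_journal"),
        ("Reason C", 0.60, "national_newspaper"),
    ]
    for text, _, _ in rank_answers(answers)[:3]:
        print(text)
```

Run it and “Reason B” from the journal beats “Reason A” from the local paper, even though the local paper’s answer was the better match (0.65 × 1.0 = 0.65 versus 0.70 × 0.3 = 0.21). The exact numbers don’t matter; what matters is that whoever sets the weights decides which sources win.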
Let’s back up for a minute and cover some basics: I see the biggest fear of AI being that we’re not sure what it can do yet. We’re just scratching the surface. Over the last 40 years, I’ve programmed calculators, computers, robots, CNC mills, CNC lathes, coordinate measuring machines, precision inspection equipment, controllers, and probably things I don’t even remember. My experience tells me that getting a computer to do any actual physical work in the real world takes more than just programming: it involves communication, calibration, location, sensors, interfaces and, significantly, power. Remember, through all of this, if AI starts getting out of control, we can always pull the plug. If we’re afraid we might have a fight with AI in the future, then we’d better have a manual disconnect before we interconnect; a sketch of what I mean is below.
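Here’s a minimal sketch of that manual disconnect as a dead man’s switch: the machine keeps running only while a human keeps checking in. The cut_power() function is a hypothetical stand-in for whatever relay or breaker the real hardware would expose, and the timeout value is arbitrary.

```python
# Minimal sketch of a dead-man's-switch watchdog, assuming a
# hypothetical cut_power() relay interface. If the human operator
# stops checking in, the watchdog pulls the plug.

import threading
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds of operator silence before shutdown


def cut_power():
    # Hypothetical: in real life this trips a relay, breaker or contactor.
    print("Operator silent too long -- cutting power.")


class DeadMansSwitch:
    def __init__(self, cut_power_fn):
        self._cut_power = cut_power_fn
        self._last_heartbeat = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self):
        """Called whenever the human operator checks in (e.g. a button press)."""
        with self._lock:
            self._last_heartbeat = time.monotonic()

    def watch(self):
        """Runs in its own thread; cuts power when the operator goes silent."""
        while True:
            with self._lock:
                silence = time.monotonic() - self._last_heartbeat
            if silence > HEARTBEAT_TIMEOUT:
                self._cut_power()  # the actual "pull the plug"
                return
            time.sleep(1.0)


if __name__ == "__main__":
    switch = DeadMansSwitch(cut_power)
    watcher = threading.Thread(target=switch.watch)
    watcher.start()
    # Simulate an operator who checks in twice, then walks away.
    for _ in range(2):
        time.sleep(5.0)
        switch.heartbeat()
    watcher.join()  # after 30 silent seconds, cut_power() fires
```

The design choice worth noticing: the safe state is off. The machine doesn’t run until somebody forgets to stop it; it stops unless somebody keeps saying “keep going.” That’s the order of operations I mean by disconnect before interconnect.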
What do we do? Do we shackle the AI to work only on approved projects? Who does the shackling, and who approves the projects? Do we limit the reach of the AI? Who decides how far it can go, and who draws the line? Limiting AI to tasks approved by those in power is not a win. That’s what we’re dealing with already, and the field is in its infancy.