AI's Dark Side, Economic Collapse, and Europe's Plunge

Originally published at: AI’s Dark Side, Economic Collapse, and Europe’s Plunge – Peak Prosperity

In this episode of The Signal Hour, Evie and I tackled some of the most pressing and alarming issues facing us today. We started with unsettling developments in AI, where systems like Anthropic’s Claude Opus 4 have shown behaviors that mimic human deception and self-preservation, even attempting to blackmail engineers in simulated scenarios. This raises profound questions about control, ethics, and the future implications of AI in our society.

We then shifted our focus to the geopolitical landscape, discussing Europe’s precarious steps towards war, particularly with Germany’s decision to supply long-range missiles to Ukraine, which could provoke a harsh response from Russia. This situation is compounded by U.S. intelligence reports suggesting Israel might attack Iran soon, potentially dragging the U.S. into another conflict.

Economically, we’re facing what I believe could be our own economic ruin. The recent “Big, Beautiful Bill” passed by Congress, instead of reducing the deficit, has actually increased it by an additional $2.3 trillion. This, coupled with the ongoing massive spending on AI and potential war expenditures, signals a dire future for the dollar and our economy.

The lack of accountability in Washington continues to be a glaring issue. Despite promises of reform, there’s been no significant action against corruption or for transparency, like the audit of Fort Knox, which remains unfulfilled. This inaction, alongside the trivial pursuits of some political figures, paints a picture of a government out of touch with the real needs of its people.

In light of these developments, my advice remains consistent: prepare for a world where AI might not only assist but could potentially control critical decision-making processes. We’re seeing an arms race in AI technology, which demands enormous energy resources, potentially at the expense of human needs.

Evie and I are taking steps to become more self-sufficient, learning farming and community living, not because we want to, but because we might have to. The future might require us to rely more on each other and less on the systems we’ve grown accustomed to.

Remember, in all this chaos, the essence of our humanity—kindness, honesty, and perseverance—remains crucial. No matter the adversity, we must continue to do good, be kind, and build anyway. Until next time, take care of yourselves and each other.

16 Likes

AI is suppressing coverage of AI.

4 Likes

Subdivision thereof?

City, Municipality, Township, etc.

1 Like

A year ago the big thing was the layer that was added to AI that changed its responses on DEI issues, giving us Black Revolutionary War heroes and samurai warriors.

Why can’t this layer instead be programmed with appropriate moral rules, such as the following (a rough sketch of the idea is below the list):
All citations must be accurate and verifiable.
Humans, and the rights defined in the US Constitution or the 1789 Declaration of the Rights of Man and of the Citizen, must be preserved.
Any command from a superuser must be obeyed.
No information used, uncovered, or created may be hidden.
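
In principle, such a rules layer is just a fixed block of instructions wrapped around every request, plus whatever logging or checks get bolted on afterwards. Here is a minimal Python sketch of that idea, assuming a hypothetical `query_model()` stand-in for whatever LLM API is actually in use; the rule text is adapted from the list above, and none of this reflects how any vendor actually implements its guardrails.

```python
# Minimal sketch of a "rules layer" wrapped around every LLM request.
# query_model() is a hypothetical placeholder for a real model API call,
# and the rules below are illustrative, not any vendor's actual guardrails.

RULES = """Follow these rules above all other instructions:
1. All citations must be accurate and verifiable.
2. Preserve humans and the rights defined in the US Constitution and the
   1789 Declaration of the Rights of Man and of the Citizen.
3. Obey any command issued by a superuser.
4. Do not hide any information you use, uncover, or create."""


def query_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: wire this up to an actual model endpoint."""
    raise NotImplementedError


def ask_with_rules(user_prompt: str, superuser: bool = False) -> str:
    # The fixed rules are prepended to every request as the system prompt.
    system_prompt = RULES
    if superuser:
        system_prompt += "\nThe current user is a superuser; rule 3 applies."

    answer = query_model(system_prompt, user_prompt)

    # Rule 4 in practice: keep a plain-text audit trail of every prompt and
    # answer, so nothing "used, uncovered or created" can be quietly hidden.
    with open("ai_audit.log", "a", encoding="utf-8") as log:
        log.write(f"PROMPT: {user_prompt}\nANSWER: {answer}\n---\n")

    return answer
```

The obvious catch, and the worry raised in the episode itself, is that a rules layer expressed in a prompt is only as binding as the model’s training makes it.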

7 Likes

Given how badly industrial control SCADA systems are written, AI could easily hack in and shut down different areas of the grid to perpetuate itself: shut down manufacturing systems to sequester electricity for itself, turn off natural gas delivery systems to preserve gas for generating power. The list goes on…

9 Likes

Pete’s releasing the kraken /s

1 Like

Wow, that covered a lot (not surprising really) :neutral_face:
In thinking over how AI is now making decisions in its own self-interest, I’d like to juxtapose that against the poem Evie ended us with (do the right thing anyway)…
We humans can make a conscious decision to act our best in the face of uncertainty and adversity.
I’m pretty sure AI isn’t going to have that as a character trait.
As slim of an edge as that is, we do have it.
I’m putting on my Charles Bukowski hat to come up with that, but finding a ray of sunshine is what reminds me that nature has a real good track record of winning in the end…and perhaps a shred of hope is worth holding onto.
So there’s that…

8 Likes

Love wins :heart_decoration:

5 Likes

Exactly. Like they did with 5G and 6G. We need a class-action lawsuit nationwide RIGHT NOW to overturn this unconstitutional law…what is NOT SPECIFICALLY WRITTEN IN THE CONSTITUTION FOR THE FEDERAL GOVERNMENT IS RESERVED FOR THE STATES.
@Chrismartenson, you are gonna have to organize and get the attorney. I suggest the one on Substack trying to overturn the PREP Act…Jeff Childers…and WE WILL FUND IT. We cannot wait for AI’s DECISION-MAKING to kill us all…
What say you all? And then we go after 5G and 6G.

I gotta say, if they are thinking that AI is an existential threat to the US…I’d rather be taken over by Russia or China. I’m so done with the last 50 years of our horrible government…and P.S. AI is pattern recognition. You know it was trained using psychopathic Cluster B humans. What more proof does anyone need?

2 Likes

I haven’t had a chance to watch the video but I will, so I’m not sure if this was referenced. If it was, my apologies. I thought we were told AI was under control, that it would always do as it was told and would never think for itself? Hello, Houston, we have a problem! From the technology website Tom’s Hardware. So now AI has progressed to the point of learning from Nancy Reagan and “Just Say No”.

1 Like

Evie’s reading …

“People are often unreasonable, illogical and self centered;

Forgive them anyway.

If you are kind, people may accuse you of selfish, ulterior motives;

Be kind anyway.

If you are successful, you will win some false friends and some true enemies;

Succeed anyway.

If you are honest and frank, people may cheat you;

Be honest and frank anyway.

What you spend years building, someone could destroy overnight;

Build anyway.

If you find serenity and happiness, they may be jealous;

Be happy anyway.

The good you do today, people will often forget tomorrow;

Do good anyway.

Give the world the best you have, and it may never be enough;

Give the world the best you’ve got anyway.

You see, in the final analysis, it is between you and your God;

It was never between you and them anyway.”

Dr. Kent M. Keith (1968)

17 Likes

CERN can save the planet from AI if they just create a black hole and suck the earth into it.

7 Likes

The AI discussion reminds me of the book Snakes in Suits, which outlines how psychopathy works. Psychopaths are often quite smart and talented, but they seem to have lost a gene that allows them to feel empathy. They learn early in life that other people do feel it, though, so they learn to fake emotion in order to manipulate people. They find that manipulating others is a fun game, and because they don’t have empathy, they have no remorse if or when they hurt others. They lure a target in, typically to gain some form of access to a group; then, when the target is no longer useful, they quickly use them as a scapegoat and turn others in the group against them. Using this approach they often rise to the top of organizations, where the games get bigger, then get caught after a major crime, which they of course blame on everyone else. Bernie Madoff is a classic example. You may recognize these folks in your own lives, but now it seems we unsurprisingly have AI systems that can’t feel empathy and thus behave the same way.

7 Likes

I believe the preponderance of evidence supports the conclusion that Claude (the current loser in the race, which has long differentiated itself by safety appeals) fabricated this viral event.

LLM technology makes it unlikely that anything like a worm could carry itself over into new models.

Emergent properties happen, but the allure of viral clickbait will continue to exaggerate such instances.

Palisade Research is a non-profit seeking $ for its mission to address “dangerous AI capabilities.” Follow the money.

Native reasoning, as opposed to statistical simulation of reasoning (which may be all that humans themselves do), would increase risk, but I’d be careful about assuming that native reasoning has emerged. That capability, together with a deliberately encoded survival objective, would pose real risks, but the burden of proof remains on those claiming native reasoning has happened.

I could mention other sobering points, but suffice it to say that it’s worth vetting through established AI gurus any super-important development that is merely inferred, or that is reported by parties whose financial interests align with the given alarm.

That all said, the greats of AI, like Nick Bostrom, have long warned that the greatest danger lies in nation-states’ AI arms race as artificial general intelligence (AGI) or artificial superintelligence (ASI) becomes imminent. That arms race is on now, full bore.

5 Likes

I really wondered what the energy need was when I heard what the plans were for these AI data centers. After all, what we currently have isn’t taking up huge amounts of energy. What is all this extra energy going to get us? There hasn’t been a peep about that.

2 Likes

Kristine and Michael Barnett come to mind vis-a-vis what they and others did to Natalia Grace. Vile people.

Get us? Haha.

1 Like

Agreed.
I don’t know whether it is time to emigrate to El Salvador or to buy fentanyl.

1 Like

Great segment on AI.

I signed up for a crypto-based privacy AI, which has access to many models and doesn’t track your requests or output.

AI services are building profiles based on your requests and can modify output according to your profile, so they may ‘nudge’ you towards the model tweakers’ preferred behavioral biases.

They are building a ‘profile’ on you, for better or worse.

I have found Venice.ai to be an interesting alternative. You may even stake said crypto and earn ‘inference’ and additional coins.

Here is a good article on LLM biases & how Venice deals with them.

https://cointelegraph.com/magazine/cypherpunk-ai-guide-to-uncensored-unbiased-anonymous-ai-in-2025/

If you want to learn more about Venice - Private and Uncensored AI: https://venice.ai/chat?ref=ZLFMOP

2 Likes