I believe the preponderance of evidence supports that Claude, the current laggard in the race, whose maker has long differentiated itself by appeals to safety, fabricated this viral event.
The architecture of LLMs makes it unlikely that anything like a worm could carry over into new models.
Emergent properties happen, but the allure of viral clickbait will continue to exaggerate such instances.
Palisade Research is a non-profit seeking funding for its mission to address “dangerous AI capabilities.” Follow the money.
Native reasoning, as opposed to statistical simulation of reasoning (which may be all that even humans do), would increase risk, but I’d be careful about assuming that native reasoning has emerged. That capability, combined with a deliberately encoded survival objective, would pose real risks, but the burden of proof remains on those claiming native reasoning has occurred.
I could mention other sobering points, but suffice it to say that any supposedly momentous development that is merely inferred, or reported by parties whose financial interests align with the alarm, is worth vetting through established AI experts first.
That all said, leading thinkers in AI, like Nick Bostrom, have long warned that the greatest danger lies in nation states’ AI arms race as artificial general intelligence (AGI) or artificial superintelligence (ASI) becomes imminent. That arms race is on now, full bore.