

Until she choke slams a different cabinet member and takes their belt position
Microsoft’s support also suuuuuuuuucks. We paid $500 once for assistance on an issue with a specific piece of hardware and the OS, and it took them MONTHS to even respond to us. I’d been demanding a refund for at least a full quarter before they even gave me the first response…
Or maybe they just expect every state government (or worse, individual local municipalities) to roll their own personal cloud. Like have everybody set up a NextCloud server and just hope shit doesn’t fall over.
Pow, straight to the wallet!
No surprise he waits until it affects him to speak out about the psycho he helped put into office.
Judge: Not constitutional, NEXT
I don’t necessarily agree that this means Bluesky is toxic (and I’m speaking as someone who doesn’t use the app); I see it as a toxic company finding out what people think of them.
As you noted, Adobe is a dick (and that’s one hell of an understatement), and they regularly make anti-consumer choices with their software and pricing. This is just them seeing what you’ve been ignoring for a decade or more.
Maybe the multi-billion-dollar company should subscribe to my monthly thicker-skin subscription.
- Gabbard forms task force to ~~investigate intelligence community weaponization~~ weaponize intelligence community
You have enough interest to rant, but not enough to answer the question.
At this point, I’m just going to assume you don’t have any evidence and are just having a giggle.
That’s a lot of words to not answer a question.
Sure, but I know I didn’t rip on the genXers just because I was an immature little shit. I ripped on my peers because I didn’t give a shit about the older generation, and by the time I did, I knew genX was cool, if a bit detached, and that the real enemy was the capital class (and boomers to an extent, because of how wrapped up they are in chasing status they’ll never have while pulling the ladder up behind them).
Lol, I can’t even downvote you cuz my instance doesn’t support them.
I’m genuinely curious, because it sounds like you’re suggesting that the models are moving past just being generative transformers into something more intelligent, and I just have not seen evidence of that. Only empty claims of it existing, using very weak examples of ‘novel responses’ that are still just generative-transformer responses.
But sure, if you can’t support your point with solid evidence, passive aggressive dismissal of skepticism works just as well. People are constantly fed a narrative that AI is amazing and can do all this novel shit, but I have yet to see anything to back it up.
Oh no, that’s not a red flag at all.
It’s the whole goddamned color guard.
How about fat pappa t?
They are increasingly able to solve novel problems outside the training set.
[citation needed]
The only time you’re wasting is time you could be improving yourself. By having the AI write for you, you’re choosing to not improve your writing/research/analytical skills, and hoping the dipshit bot that’s writing your essay doesn’t just make bullshit up out of whole cloth.
I’m not saying not to use the AI to assist with the process, but IMO that should be more on the gathering sources side than the composition side.
General anesthesia in a C-section means there’s some kind of emergency on the mother’s end, and once the drugs are administered the surgery needs to be done FAST because they can affect the baby.
Yeah… it’s a scary af time. Especially since the general can take a long time to wear off and for the mother to stabilize.
Yeah… Call me in 4 years when they vote republican again because it’s not trump on the ballot and something something evil Democrats. I’d bet the same people saying this in 2025 were saying something similar when trump caused the COVID crash by doing fuck all to prevent the spread.
Holy shit, I remember playing NationStates as a youngling, and I think I read the book too? Idk, it’s been 20 years.
(continued)
China is joining in with AI
Last month, the New York Times reported on a new disinformation campaign. “Spamouflage” is an effort by China to divide Americans by combining AI with real images of the United States to exacerbate political and social tensions in the U.S. The goal appears to be to cause Americans to lose hope, by promoting exaggerated stories with fabricated photos about homeless violence and the risk of civil war.
As Ladislav Bittman, a former Czechoslovakian secret police operative, explained about Soviet disinformation, the strategy is not to invent something totally fake. Rather, it is to act like an evil doctor who expertly diagnoses the patient’s vulnerabilities and exploits them, “prolongs his illness and speeds him to an early grave instead of curing him.”
The influence networks are vastly more effective than platforms admit
Russia now runs its most sophisticated online influence efforts through a network called Fabrika. Fabrika’s operators have bragged that social media platforms catch only 1% of their fake accounts across YouTube, Twitter, TikTok, Telegram, and other platforms.
But how effective are these efforts? By 2020, Facebook’s most popular pages for Christian and Black American content were run by Eastern European troll farms tied to the Kremlin. And Russia doesn’t just target angry Boomers on Facebook. Russian trolls are enormously active on Twitter. And, even, on Reddit.
It’s not just false facts
The term “disinformation” undersells the problem, because much of Russia’s social media activity is not trying to spread fake news. Instead, the goal is to divide and conquer by making Western audiences depressed and extreme.
Sometimes, through brigading and trolling. Other times, by posting hyper-negative or extremist posts or opinions about the U.S. and the West over and over, until readers assume that’s how most people feel. And sometimes, by using trolls to disrupt threads that advance Western unity.
As the RAND think tank explained, the Russian strategy is volume and repetition, from numerous accounts, to overwhelm real social media users and create the appearance that everyone disagrees with, or even hates, them. And it’s not just low-quality bots. Per RAND,
Russian propaganda is produced in incredibly large volumes and is broadcast or otherwise distributed via a large number of channels. … According to a former paid Russian Internet troll, the trolls are on duty 24 hours a day, in 12-hour shifts, and each has a daily quota of 135 posted comments of at least 200 characters.
What this means for you
You are being targeted by a sophisticated PR campaign meant to make you more resentful, bitter, and depressed. It’s not just disinformation; it’s also real-life human writers and advanced bot networks working hard to shift the conversation to the most negative and divisive topics and opinions.
It’s why some topics seem to go from non-issues to constant controversy and discussion, with no clear reason, across social media platforms. And a lot of those trolls are actual, “professional” writers whose job is to sound real.
So what can you do? To quote WarGames: The only winning move is not to play. The reality is that you cannot distinguish disinformation accounts from real social media users. Unless you know whom you’re talking to, there is a genuine chance that the post, tweet, or comment you are reading is an attempt to manipulate you – politically or emotionally.
Here are some thoughts:
Don’t accept facts from social media accounts you don’t know. Russian, Chinese, and other manipulation efforts are not uniform. Some will make deranged claims, but others will tell half-truths. Or they’ll spin facts about a complicated subject, be it the war in Ukraine or loneliness in young men, to give you a warped view of reality and spread division in the West.
Resist groupthink. A key element of manipulation networks is volume. People are naturally inclined to believe statements that have broad support. When a post gets 5,000 upvotes, it’s easy to think the crowd is right. But “the crowd” could be fake accounts, and even if they’re not, the brilliance of government manipulation campaigns is that they say things people are already predisposed to think. They’ll tell conservative audiences something misleading about a Democrat, or make up a lie about Republicans that catches fire on a liberal server or subreddit.
Don’t let social media warp your view of society. This is harder than it seems, but you need to accept that the facts – and the opinions – you see across social media are not reliable. If you want the news, do what everyone online says not to: look at serious, mainstream media. It is not always right. Sometimes, it screws up. But social media narratives are heavily manipulated by networks whose job is to ensure you are deceived, angry, and divided.
Do you have a guide or something to get started? I’ve considered doing this a couple of times, but haven’t had the bandwidth to dig in and figure it out.