everything is moving too quickly. The transformation has gotten so obvious that even on normie sites they have to admit that half of internet traffic is bots.
everyone has been made aware of the dead internet theory, and thus was created the paralysis of lame, hollow discussion and ridiculous dichotomies of singularity-level or nothingburger AI. It seems we can't react to monstrous change with the vitality and human-ness we once did.
consensus is like gold these days but I think we can all agree that everything is becoming more and more weird.
ITT we try to make sense of this weirdness by taking a step back and searching for some insights in this mess. Has the amount of AI slop content increased during your daily scroll? Are the comments you read getting more and more incoherent? Can you spot AI like a Voight-Kampff test, or have you been fooled more often than you would like to admit? How crazy are you getting? LET YOUR CRIES, HOPES, AND CONCERNS LOOSE AND SHARE SOME OF YOUR EXPERIENCES AND OBSERVATIONS DURING THIS REAL WEIRD TRANSFOR-INFORMATION.
The only social media I use is Instagram, which I use to keep up with a niche sport that I'm into. I never see any AI stuff, but that said, I only look at my feed and stories, never the explore page or suggested posts.
The only social media I use is 4chan, Wired-7 and Petrarchan. In this regard, there are two ways AI makes its appearance for me: either it's blatantly obvious (its use here may be intended to be noticed), or it is covert enough that I can't feel 100% sure about its AI origins: I just get a "weird feeling".
IRL:
I'm a college student. Since I began my studies, I have had to participate in group projects. More often than not, someone uses AI, usually in a very obvious way. People who can't write a proper paragraph during class end up writing a 10/10 contribution to the work, for example. Of course, people may write better at home, but in general their usage is obvious to me, although I prefer not to complain about it.
This year I had to take a writing workshop, and the weird thing is that the teacher apparently was using AI (he showed us examples to guide our writing). I kek'd hard at first, but then I felt uneasy about it.
Lastly, googling things is annoying. I swear if I click on a link at random there's a high probability of it being AI (an excessive number of subheadings, each consisting of one or two short paragraphs; verbosity and redundancy, etc.). It doesn't matter if what I'm looking for is a recipe or instructions for using certain software.
Oh! This is my experience, of course. From time to time I hear what my brothers watch on their phones, and 90% of the time it's AI slop. Artificial voices, images, scripts, etc. There's not a shred of human-ness in those videos.
I suppose some may benefit from using it. However, my opinion is that since the beginning of the AI "boom" everything has gotten worse.
I think it's undeniable we breached some sort of AI threshold and the rate at which it can more successfully replicate (or resemble, more accurately) human-created works is increasing exponentially. Will it ever be able to create something indistinguishable from a human masterpiece? Probably not, due to a computer being unable to breach human-defined parameters and being inherently limited by the data fed into the model.
When the photorealistic AI imagery came on the scene, I'll admit to being fooled once. But once you understand the pattern, there are obvious tells which distinguish them. I think maybe the more interesting thing, more than the realism and ability of AI, is the way that we are now able to "offshore" a more significant portion of traditionally human labor onto a machine. I mean, the conjuring aspect of AI is really not that different from Photoshop or rendering, except insofar as a machine is doing it all, and in a fraction of the time a human would take.
It's very much a spectacle, an illusion, but one that is being bought into wholesale. I cannot tell what the ramifications will be yet, if it will just be a toy, or another machine that leads to the demise of labor.
Right now though, the widespread naming of AI-made content as "slop" is accurate, as it is mostly a bucket of feed for the masses who either cannot tell or don't care to tell if it is AI.
i feel like AI is making me a bit schizo these days. i see a lot of things and think, was that real or fake, even though there's no way it was anything except real.
How long until AI can write bioweapon code?
What the fuck is "bioweapon code"
I don't know if you're familiar with the idea that we often get trapped by virtue rather than vice, but I think it applies to AI.
Yes, some use it to cheat or trump, but the danger is the trust that it asks from the user. You have to talk to it like a person, so you extend it a lot of privileges usually reserved for humans. Once I use it, I trust it, and that's the issue because it always lies.
More and more, it feels like a pact with the Devil. You can get any information you want instantly, but you will never know if it is true, and everyone will forget it can be wrong.
(I guess I could talk to it like a machine, but it is made to emulate another human and I don't want to learn how to dehumanize something that acts like a human; it would just be another way to dehumanize myself.)
Stolen from r/redscarepod... This image gives me a strange sense of comfort against the obvious impending doom of AI taking over the human faculty for thought and action. Even its most avid users would rather do nothing than have to do something, even if that something is made easier with an LLM. It's just a tool kids at school use to bypass all the stupid busywork that's thrown at them, which they know is all crap anyway. The more they associate it with this sort of fake busywork, the less they'll come to absorb it into their personal or social lives.
This feels more and more like the dotcom bubble.
Yes, it is useful and it will change things, like the Internet did, but there is a lot of overpromising and unrealistic expectations fueled by market euphoria.
I've been receiving a daily summary of AI news and new tools. The tone is epic, there are revolutions every day, yada yada. They recently decided to publish testimonies from actual users, and the discrepancy between the two is pretty massive.
Honestly? Kinda relatable. Them tuning the LLM to have hyperperfectionist anxieties is rather endearing. Whomst among us hasn't wanted to jump out a window after a minor mistake?
It's giving overly-serious Japanese businessman, I kinda love it
The biggest impact of AI (as it exists) will be the death of the take-home essay. Schools are really running out of homework options; there will have to be a fallback to testing. Parents will be mad that their conscientious-but-dumb kids get Cs now. This adjustment will take up to 10 years, and the decade of mass learning loss will be visible on charts forever.
Web developers can output a lot more feasible, bloated features per day (I hesitate to call it "slop" because the status quo was just as bad). More serious programmers will find limited use for code generators, mostly relying on them for autocomplete and utility functions.
Using AI for emails and image generation will continue to be déclassé and may become widely understood as a status identifier.
The anti-AI movement will become a punchline, synonymous with tilting at windmills.
When it becomes clear that AGI is not on the horizon there will be a miniature dotcom crash. Leaked email from Sam Altman or something.
Regular people will continue to find it occasionally useful in their personal lives. Specialized applications will prove useful to some office jobs.
Maybe in 20 years they find a breakthrough towards AGI and you'd have some serious impacts on the job market. Probably not.
While I am deeply sceptical of AGI, it does seem that current AIs could be more deeply integrated into our lives than they are now, and this is something which concerns me. Reflecting on society's loss of reading skills earlier (after reading https://kittenbeloved.substack.com/p/college-english-majors-cant-read), it struck me that we could just as easily hand off many menial tasks to LLMs, which could enfeeble us. The ancient Greeks identified that literacy was reducing young people's ability to memorise texts (true, but certainly worth it). What about AI? What about losing the ability to write simple letters and handing that off to the AI? What is being gained here? Already young people just call the AI 'Chat': let me ask Chat this, let me ask Chat that. Why not stop and think? If you have to ask Chat, you are introducing at least 5 seconds of latency in whatever you are doing. And Chat is fucking stupid too.
I now realise that what I can't find on the Internet nowadays, I ask the AI agent; but these are things I used to be able to find, generally in obscure tutorials made by some geek - all those websites that disappeared with time and/or under deplorable SEO cheats.
This is a second death of the old Internet.
You people are so dramatic