"We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!".
I know, Dune, and yeah, I get it - science fiction ain't real life - but I'm still into these vibes.
"Once, men turned their thinking over to machines in hopes that this would set them free, but that only permitted other men with machines to enslave them."
Yeah, I'm in. Let me know when and where the meetings are held.
We're on Salusa Secundus, second pylon to the right, just off the Imperial Penal Complex. 6pm, Wednesday. Use the green parrot to enter, otherwise you'll be cast out. We don't want to let anybody from IX in.
Well, I've been surrounded by "machines that think" for my entire life, formed of unfathomably complex swarms of nanobots. So far we seem to get along.
If there were a new kind of "machine that thinks"--one that isn't a dangerous predator--it could be a contrast to help us understand ourselves and be better.
The danger from these (dumber) machines is that they may be used for reflecting, laundering, and amplifying our own worst impulses and confusions.
It's you: a swarm of ~37 trillion cooperating nanobots, each one complex beyond human understanding, constructing and animating a titanic mobile megafortress that shambles across a planet consumed by a prehistoric grey-goo event.
I wish we had machines that actually thought, because they'd at least put an end to whatever this is. In the words of Schopenhauer, this is the worst of all possible worlds not because it couldn't be worse but because if it were a little bit worse it'd at least cease to exist. It's just bad enough that we're stuck with the same dreck forever. This isn't the Dune future but the Wall-E future. The problem with the Terminator franchise and all those Eliezer Yudkowsky folks is that they are too optimistic.
Funny thing is that we still have handmade fabric today, and we're still employing a frightening number of people in the manufacture of clothing. The issue is that we're making more lower-quality products rather than higher-quality items.
I'm not sure that this message is meant to be taken as viable, let alone sacrosanct.
<spoiler>
I interpreted Thufir Hawat's massive misunderstanding of Lady Jessica's motivation (which was a huge plot point in the book but sadly didn't make it into the films) as evidence for the conclusion that humans are capable of the exact same undesirable patterns as machines.
The point of Dune, or the Butlerian Jihad within Dune, isn't that humans are more capable than the Thinking Machines. It is that humans should be the authors of their own destiny, and that the Thinking Machines were enslaving humanity and going to exterminate it. Just as the Imperium was enslaving all of humanity and was going to lead to its extinction. This was seen, incompletely, by Paul and later, completely, by Leto II, who then spent 10,000 years working through a plan to allow humanity to escape extinction and enslavement.
I had never made that exact connection, but my impression of the Dune universe was that it was hopelessly dark and horrific, basically humans being relentlessly awful to each other with no way out.
I don't think it's quite reducible to that. Take a step back and look at how you have an Emperor whose power is offset by both the Landsraad and the Navigators' Guild. This arrangement has all come about because of a scarce resource, only available on one desert planet, which makes space travel possible, and that planet has a population who have been fiercely fighting a guerrilla war for centuries. It was all bound to come undone whether Paul accepted his part or not.
It's kind of presented that way from the view of anyone under the oppression (Paul, the Fremen, Jessica, etc.). Therefore Paul's vision is the way out, right? The whole thing is a mechanic to subdue the reader before the reveal that Paul doesn't care; he just wants to do things the way he sees fit, and to control it.
The golden path was to break prescient vision and prevent any possible extinction of the human race. Paul actually turned his back on the golden path and became the preacher, trying to prevent it, but after his death, his son, Leto II, followed the path.
I just look forward to the point where this is so common it becomes oversaturated and the original incentives go away/scare off the folks doing this stuff (inevitable as, like parasites, they only stay around as long as the host is providing them sustenance).
If you are old enough, you remember when having a CD player in your car made you a target for a break-in. Many models were removable so you could take the CD player with you when you left your car. Once it became a standard option, there was no point in stealing them anymore due to the saturation effect you brought up. Now having a CD player in your car is steampunk.
When the goal is "harassing someone into depression and suicide" though, the incentive will never go away. People are going to start doing things like this to be deliberately malicious, sending videos of their dead parents saying horrible things about them and so on.
The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.
I saw that Robin Williams set up a deed to prevent his image or likeness being used for film or publicity for a period of up to 25 years after his death.
Similarly, it drives me up the wall when people post black-and-white "historical photographs" of historical happenings that are AI slop, and from the wrong era.
Just yesterday someone posted a "photo" from 1921 of a submarine that lost power, whose crew built sails out of bedsheets to get home.
But the photo posted looked like a post-WWII submarine rigged like a clipper ship, rather than the real-life janky 1920s bedsheet rig, with characters everywhere.
It is no surprise to me that AI images have become an aesthetic of ascendant fascism. AI contains the same distaste for the actual life and complexity of history and preference for a false memory of the past with vaseline smeared on the lens.
It also rhymes with past fascism's obsession with futurism; the intertwining of a call back to a romanticised past with an inhuman futurism is very much a pillar of the ideology.
Or people who frequent a questions-and-answers website, only to copy AI answer slop as if it were their own.
I mean, thank you I guess, but anyone can do that with the littlest of effort; and anyone with an actual intention of understanding and answering the question would have recognized it as slop and stopped right there.
My Dad passed away two years ago, and for a time I thought it would be nice to have an AI of him just to hear his voice again, but then I realized that the reason I'd want that is because I don't have really any video or audio of him from when he was alive. The reason I'd want the AI is because I have no training data for it...
If I did have sufficient training data for an AI Dad, I'd much rather just listen to the real recordings, rather than some AI slop generated from it. This is just another dumb application of AI that sounds alright on paper, but makes no sense in real life.
As a fan of his work, I too wish it all to stop. People always go headlong for the people who we all miss the most, yet don't understand that it was their underlying humanity that made them so special.
Rather than "pleading" for them to stop, wouldn't she have more success going after the AI content creation companies via legal process? I thought actors have the right to control commercial uses of their name, image, voice, likeness, or other recognizable aspects of their persona, so if people are paying for the AI creation, wouldn't the companies be wrongly profiting off his likeness? Although I'm sure some laws haven't yet been explicitly updated to cover AI replicas.
> Rather than "pleading" for them to stop, wouldn't she have more success going after the ai content creation companies via legal process?
But that shouldn’t be the first step. Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper than contacting lawyers and preparing for a possibly multi-year case where all the while you’ll have to be reminded and confronted with the very thing you don’t want to deal with.
If asking doesn’t work, then think of other solutions.
You can't tell everyone. Barely anyone will know this was published. And then there will be lots of people thinking "whatever, I don't care". And a not-insignificant number of people thinking "lol, time to organise a group of people who will send her creepy genAI of Robin Williams every day!"
Asking solves it for you, biting the hand makes them think twice about doing it to others, but good luck doing that to the many-headed serpent of the internet.
I don't know. I mean, things could have gone very differently (in a better or worse way) and the world may be unrecognizable today if certain key events did not happen.
Like if nothing sparked the World Wars (conversely: or if Hitler won). Or Greece harnessed steam or electricity to spur an industrial revolution 2200 years ago. Or if Christianity etc. never caught on.
> Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper
It's not, because just telling people on the internet to stop doing something doesn't actually stop them from doing it. This is basic internet 101: Streisand effect at full power.
No, she would not have any success. Take a look at this list and think about the sheer number of companies she would need to sue: https://artificialanalysis.ai/text-to-video/arena?tab=leader.... You'll see Google, one of the richest companies on the planet, and OpenAI, the richest private company on the planet. You'll see plenty of Chinese companies (Bytedance, Alibaba, Tencent, etc.). You'll also see "Open Source" - these models can't be sued, and removing them from the internet is obviously impossible.
The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos. This direct plea has that risk too, but I think "please don't do this" will feel a lot less adversarial and more genuine to most people than "it should be illegal for you to do this".
> The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos.
It's not fruitless, and it doesn't only generate publicity. Some states, like California and Indiana, recognize and protect the commercial value of a person's name, voice, image, and likeness for 70 years after death, which in this case would apply for Robin Williams's daughter.
Tupac's estate successfully sued Drake to take his AI generated voice of Tupac out of his Kendrick Lamar diss track.
There is going to be a deluge of copyright suits against OpenAI for their videos of branded and animated characters. Disney just sent a cease and desist to Character.ai last week for using copyrighted characters without authorization.
What I'm saying is that successfully suing individual companies or people would have zero impact on her actual problem. If California says it's illegal and OpenAI says they'll ban anyone who tries it, then these people can effortlessly switch to a Grok or Alibaba or open source model, and they'll be extra incentivized to do so because they'll find it fun to "fight back" against California or luddites or whatever. Do you see the difference? Tupac's estate successfully stopped one guy from using Tupac's voice, but they have not and cannot stop the rest of the world from doing so. The same is true for Disney, it is trivial for anyone to generate images and videos using Disney characters today, and it will be forever. Their lawsuit can only hope to prevent a specific very small group of people from making money off of that.
The problem she wants to solve is "people are sending me AI videos of my dad". She will not have any success solving this problem using lawsuits, even if the lawsuits themselves succeed in court.
Is that really the problem she wants to solve? She could just turn off her phone to accomplish that. The problem is multi-layered and complex. Holy shit, it's her dad. He's dead. I don't have her phone number, but let's pretend I did and we were friends: why would I be texting her videos of her dead father? He's Robin Williams, sure, but why? Why! would I be making AI videos and sending them to her? Forget Sora; if I made a puppet of her father and made a video of him saying things he didn't say, and then sent it to her, I think I'd still be a psychopath for sending it to her. I think she should sue OpenAI, and California should make it illegal without a license, and yeah, there's always gonna be a seedy underbelly. I'm sure there's Mickey Mouse porn out there somewhere. A lawsuit is going to make it official that she is a person, that she's saying hey, I don't like that and would like people to stop, and that the rest of us agree with her.
Asking people to stop seems like the first step. Especially since this is specific to people sending them to her in particular. People think they are being nice and showing some form of affection, but as she mentions she finds it disturbing instead.
So I don't think there was actually malicious intent and asking people to stop will probably work.
I think there is also a major distinction between creating the likeness of someone and sending that likeness to the family of the deceased.
If AI somehow allowed me to create videos of the likeness of Isaac Newton or George Washington, that seems far less a concern because they are long dead and none of their grieving family is being hurt by the fakes.
How long would the legal process take? How much would it cost? Does she have to sue all proprietors of commercial video generator models? What about individuals using open source models? How many hours of her time will these suits take to organize? How many AI videos of her dad will she have to watch as part of the proceedings? Will she be painted as a litigious villain by the PR firms of these very well-capitalized interests?
Her goal seems to be to reduce the role in her life played by AI slop portrayals of her dad. Taking the legal route seems like it would do the opposite.
Not really. From what I understood of the interview, her complaint is not about money or compensation (which she may be entitled to), but about how people use the technology and how they interact with it and with her. Legal process, or even companies implementing policies, won't change that problematic societal behavior.
Since the rise of generative AI we have seen all sorts of pathetic usages: "reviving" murdered people and making them speak to the alleged killer in court, training LLMs to mimic deceased loved ones, generative nudification, people who no longer use their brains because they need to ask ChatGPT/Grok... Some of these are crimes, others not. Regardless, most of them should stop.
It's so frustrating that "just call the cops" is the answer, at the very same time that the cops are creating a massive disruption to our society.
And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
We need systems that work without this one neat authoritarian trick. If your solution requires that you lean on the violence of the state, it's unlikely to be adopted by the internet.
Legal process is not an "authoritarian trick"; it's the primary enforceable framework for wide-scale, lasting societal change, as it's the only method that actually has teeth.
Also, calling legal enforcement "leaning on the violence of the state" is hyperbolic and a false dichotomy. Every system of rights for and against companies (contracts, privacy, property, speech) comes down to enforceable legal policies.
Examples of cases that have shaped society: Brown v. Board of Education, pollution lawsuits against 3M and Dow Chemical, Massachusetts v. EPA (which affirmed the EPA's authority to regulate greenhouse gases under the Clean Air Act), the DMCA, FOSTA-SESTA, the EU Right to Be Forgotten, Reno v. ACLU which outlined speech protections online, interracial marriage protected via Loving v. Virginia, and Carpenter v. US, the ruling that now requires police to have a warrant to access cell phone data. And these are just a few!
> And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
Jurisdictional challenges don't mean a law is pointless. Yes, bad actors can operate from other jurisdictions, but this is true for all transnational issues, from hacking to human smuggling to money laundering. DMCA takedowns work globally, as does GDPR for non-EU companies.
Nobody's arguing for blind criminalization or over-policing of AI. But perhaps there should be some legal frameworks to protect safe and humane use.
To my mind this is no different to other forms of spam or harassment.
Back in the '00s I remember friends being sent "slimming club" junk mail on paper, made to look like a handwritten note from "a concerned friend". It was addressed to them but sent at random.
Unfortunately it can be very distressing for those with body image issues.
We’re going to have to treat this slop like junk mail as it grows.
It is definitely changing it. We were already experiencing the move from a "celebrity" being an individual with huge talent to just a marketing tool that gets giant financial rewards for merely existing. These larger-than-life pop culture icons that hundreds of millions or billions of people care about are a recent phenomenon, and I welcome generative AI killing them off.
If media had one-shot generated actors we could just appreciate whatever we consumed and then forget about everyone involved. Who cares what a generated character likes to eat for breakfast or who they are dating? They don't exist.
Is this really a change? Haven't people loved celebrities for as long as they've existed? Before this, characters in books, poems and songs commanded the same level of attention.
> I welcome generative AI killing it off.
It probably will, but that pushes us in the direction that Neal Stephenson describes in Fall - millions of people sitting alone consuming slop content generated precisely for them, perfect neuro-stimulation with bio-feedback, just consuming meaningless blinking lights all day and loving it. Is that better than what we have now? It's ad-absurdum, yes, but we live in a very absurd world now.
> Who cares what that generated character likes to eat for breakfast or who they are dating they don't exist.
You never needed to know this about celebrities, and you still don't need to now. Yes, others are into it; let them have their preferences. No doubt you're into something they would decry.
I definitely draw a distinction between like... famous medieval leaders and generals being well known versus what we do with people like Michael Jackson, Madonna, Kim Kardashian etc. I am sure they had local celebrities back then but there is no way they extended much beyond a small region or a single nation.
In my ideal world, generating content becomes so easy that it loses all value unless you have some relation to the creator. Who cares about the new Avengers movie? Rick from down the street also made an action movie that we're gonna check out. Local celebrities return. Global figures are generated, because why would Coke pay $100m for LeBron to dunk a can of Coke on camera or some dumb shit when his image has been absolutely trashed by floods of gen content?
- How do I know Rick down the street, anymore, if I'm inside consuming perfectly addicting slop all day?
- How do I ensure that I am also consuming content that has artistic merit that is outside my pure-slop comfort zone?
- Why would I notice or care about a local artist when they can't produce something half as perfect "for me" as a recommendation algorithm trained on all human artistic output?
> Global figures are generated
This I agree with. Ian McDonald wrote about "aeai" characters in his early-2000s novel River of Gods, where a subplot concerns people who watch TV shows starring AI characters (as actors, who also have an outside-of-the-show life). The thing is, right now we see LeBron in the ad and it's an endorsement; how can an AI character of no provenance endorse anything?
You're describing commodification. Too bad it doesn't work out that way in practice because people are not interchangeable. Look at all the "ship of Theseus" entertainment companies we have today. They still have the IP rights, but the people who actually made it good are long gone. Franchises running on fumes.
> Williams continued: "To watch the legacies of real people be condensed down to 'this vaguely looks and sounds like them so that's enough', just so other people can churn out horrible TikTok slop puppeteering them is maddening."
> "You're not making art, you're making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it. Gross."
> She concluded: "And for the love of EVERY THING, stop calling it 'the future,' AI is just badly recycling and regurgitating the past to be re-consumed. You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume."
If you're currently making a fortune working for Anthropic et al., maybe find some form of charity you can do as penance for your day job. Certainly there are people on this site who should atone for this.
Sending someone unwanted fake clips of their dead family is unhinged, but it's a bit much to just blanket assume you can't do cool things with generative models or that the artists themselves won't approve. e.g. Liam Gallagher's reaction to AISIS[0] was that it was "mad as fuck I sound mega". I don't know what he'd have to say about his AI vocals on a High Flying Birds song (maybe they'd be okay with that collab now?), but this "cover" is also pretty awesome[1]. As usual, it's up to the wielder of a tool to use it well.
Complaining about ML-augmented art "recycling and regurgitating the past to be re-consumed" strikes me as similar to complaining about mashups. If it's not for you, just move on. It's still a form of creative expression that others enjoy.
The entire industry, from the CEOs dreaming this up, all the way down to junior engineers making it reality, desperately needs a thorough self-reflection session, and everyone involved with this needs to be asking "Are we the baddies?" I know, I know, all those $100 bills stuffed in your mattresses give you a nice soft, comfortable night sleep, but a lot of it comes from doing harm (or at least enabling it).
Yes, but taking your self-reflection euphemism literally, and why it wouldn't be enough, here's a rough sketch of several categories of us workers, in an industry that became the new "greed is good" mecca, maybe 25 years ago:
* Reasonably well-intentioned, but we're not particularly bright, and not much for introspection. We never thought about where this money came from, and accepted executive chatter about creating value or whatever, but it was boring to us.
* We see a survival threat, and ourselves as doing things we're not proud of, and don't like to think about it. (It's much easier for an observer to be sympathetic with this today, than for most of the last 25-30 very comfortable years in tech, when most of us were chasing the jobs that were much more than comfortable, at companies that were in the news for being sketchy. We even twisted the entire field's way of interviewing for all jobs, to match the extensive practicing that everyone was doing for the interview rituals of the strictly best-paying, well-more-than-comfortable jobs.)
* Like the above, but we're not actually feeling a threat, just rationalizing greed.
* We're greedy sociopaths, who don't even care to rationalize, except to manipulate others.
* We're not a sociopath, but we've been fed a self-serving aggressive libertarian philosophy/religion, and ate it up. Like the first category in this list, we're not particularly bright or introspective, but we're much less likeable.
* We question the field-wide festival of sociopathic greed, and we've been implicitly assembling a list of things we apparently won't do, companies we apparently won't work with, etc. And it's because of reasoned values, not just appearances or fashion.
> Certainly there are people on this site who should atone for this.
Mate, if we are going down this route, a whole fucktonne of us should be atoning for this.
Look, technology, our technology, has enabled, and continues to enable, the ever-forward march of authoritarianism. It has made the destruction of entire cultures easier, more practical, and more efficient.
WE have managed to trick the populace into sharing all their thoughts, opinions, friendships, and realtime location with a handful of corporations who are hell-bent on complete domination.
Not only that, but we have cheered as those corporations have undermined our rights to privacy, moral rights[1] and freedom of speech.
[1] as in copyright, not morals as in killing is bad mmkay
I mean, I both agree and disagree with her here. Everything said is unambiguously right, but then I'm still using LLMs to great effect in my day job.
> AI is just badly recycling and regurgitating the past to be re-consumed
That’s just correct.
Unfortunately it doesn’t make the reconsumption any less entertaining.
I'd say wishing people weren't the way they are is a bit more... intrusive? than just wishing LLMs had been slept on. Like, if you had a genie in a bottle, the less evil wish would be for LLMs never to have been developed.
It's not just an algorithm. It's also a massive application of computing resources and a complete indifference to the numerous people pointing out why x, y, and z are really bad ideas.
This is a bad argument. You may as well say “It’s just a thing that has been invented. Do you wish that all things that have been invented hadn’t been invented?”.
Science has allowed us to destroy our planet at industrial scale and provided the tools to destroy ourselves; as a result, all humans will likely be dead long before the next planet-killer asteroid.
It seems that most of the toxic AI stuff has really only been coming out of a handful of companies: OpenAI, Meta, Palantir, Anduril. I am aware this is a layman take.
Well, a lot of us have a curious mind. Like, fission is a property of this universe. Gradient descent is a property of this universe. All you're saying is you'd rather not know about it.
I'm happy that nuclear weapons and AI have been invented, and I'm excited about the future.
Ok, but regardless of your feelings about AI, I don’t understand why you wouldn’t wish that nuclear weapons had never been invented. (Well, maybe it ended the combat between the US and Japan faster…, and maybe prevented the Cold War from becoming a hot war, but still, is that really worth the constant looming threat of nuclear Armageddon?)
Well, you can learn what a cat looks like on the inside from a book. But someone did have to go around cutting up neighborhood cats; you're just benefiting from their work. Which is the _whole reason_ why I maintain my position that inventing AI and nuclear weapons is a net positive for mankind.
If you're curious about that, are you curious about hypotheses like the Great Filter (Fermi paradox), and are you concerned that certain technologies could actually function as the filter?
I mean, what if the nuclear bomb actually did burn up the atmosphere? What if AI does turn into a runaway entity that eventually functions to serve its own purposes and comes to see humans the same way we see ants: as a sort of indifferent presence that's in the way of its goals?
There is a sort of people who read 1984 and blame the protagonist for being an idiot who called the fire upon himself, or who still don't get what's wrong with ice-nine and the people behind it when turning the last page of Cat's Cradle.
And a sort of people who sympathize with Winston and blame Felix Hoenikker, but still fail to see any parallels between "fiction" and life.
I don't know for certain if, when you say "a sort of people", you're referring to me, but... The sort of people you're describing sound like fascists, which is the opposite of me.
I suppose I'd rather see us understand nuclear weapons without vaporizing civilians and causing an arms race. I'm not claiming to have a solution to preventing the arms race aspect of technology, but all the same: I'd rather these weapons weren't built.
It's real hard for me to conjure up "good" uses for, say, mustard gas or bioweapons or nuclear warheads.
"Technology is neutral" is a cop-out and should be seen as such. People should, at the very least, try to ask how people / society will make use of a technology and ask whether it should be developed/promoted.
We are all-too-often over-optimistic about how things will be used or position them being used in the best possible light rather than being realistic in how things will be used.
In a perfect world, people might only use AI responsibly and in ways that largely benefit mankind. We don't live in that perfect world, and it is/was predictable that AI would be used in the worst ways more than it's used in beneficial ones.
Why? The Soviets tried to re-route rivers with nuclear blasts in their infinite scientifically-based wisdom and godlike hubris. How much illness their radioactive sandbox would cause among people was clearly too minuscule a problem for them to reflect on.
But not in equal quantity. Technology does not exist in a contextless void. Approaching technology with this sort of moral solipsism is horrifying, in my opinion.
I strongly disagree. Many technologies aren't neutral, with their virtue dependent on the use given them. Some technologies are close to neutral, but there are many that are either 1) designed for evil or 2) very vulnerable to misuse. For some of the latter, it'd be best if they'd never even been invented. An example of each kind of technology:
1) Rolling coal. It's hard for me to envision a publicly-available form of this technology that is virtuous. It sounds like it's mostly used to harass people and exert unmerited, abusive power over others. Hardly a morally-neutral technology.
2) Fentanyl. It surely has helpful uses, but maybe its misuse is so problematic that humanity might be significantly better off without the existence of this drug.
Right. But a machine that helps plant seeds at scale could be used for bad by running someone over, yet its core purpose is to do something helpful. AI's core purpose isn't to do anything good right now. It's about how many jobs it can take, how many artists it can steal from and put out of work, and so on. How many people die from computer mice each year? How many from guns? They're both technology and can be used for good or bad. To hand-wave the difference away is dangerous and naive.
It only "takes jobs" because it's useful. It's useful for transcription at scale, text revision, marketing material, VFX, all those things. It also does other things that don't "take jobs", like computer voice control. It's just a tool, useful for everyone, and not harmful at all at its purpose. Comparing it to guns is just ridiculous.
But... the machine that plants seeds also takes away the livelihood of a bunch of folks. I mean, in my country, we were an agrarian society 100 years ago. I don't have the actual stats, but it was close to 90% agrarian. Now it's at about 5%. Sure, people found other jobs, and that will likely be the case here. I will do the dishes while the AI programs.
I understand the industrial revolution happened. To say this revolution is the same and will produce the same benefits is already factually wrong. One revolution created a net positive of jobs. One has only taken jobs.
I would say we don't know that yet. Comparing the current state of LLMs to what they can lead to or what they might enable later on is like comparing early machine prototypes to what we have today.
I can also 100% tell you that the farming folk of 100 years ago also felt like the farming machines took away their jobs. They saw 0 positives. The ones that could (were young) went into industry, the others... well, at the same time we instituted pensions, which were of course paid for by the active population, so it kind of turned out ok in the end.
I do wonder what the repercussions of this technology will be. It might turn out to be a dud or it might truly turn into a revolution.
However, aren't there now a lot of job openings out there for LLM-whisperers and other kinds of AI domain experts? Surely these didn't exist in the same quantity 10 years ago.
(I'm just picking nits. I do agree that this "revolution" is not the same and will not necessarily produce the same benefits as the industrial revolution.)
Maybe someday we will grow so tired of AI, that people will leave social media entirely. The most interesting thing about social media, the ability to build real human connections, is quickly becoming a relic of the past. So without that, what is left? Just slop content, rage bait.
This is already happening in my immediate surroundings: friends who have long complained of phone addiction are now feeling more able to act on it, as it isn't this oasis of escape for them anymore. My only comment there is that they were not ready for, or didn't remember, how bad T9 was for alphabetical input; even nostalgia can't cover that unfortunate marriage of convenience.
Not really. We might not be in the golden era of human connections, but you can still find people out there who think the same way somehow and are off-grid from social media.
It's all downside. I've seen cases where this is used to 'give murder victims a voice' recently, both on CNN where they 'interview' an AI representation of the victim and in court, where one was used for a victim impact statement as part of the sentencing. Those people would laugh you out of the room if you suggested providing testimony via spirit medium, but as soon as it comes out of a complicated machine their brains seize up and they uncritically accept it.
Hold on, really? That seems wildly crazy to me, but sadly I'd believe it. I'd love it if you had a source or two to share of some of the more egregious examples of this.
I've seen a lot of horrible uses of AI, but this particular application is the most sickening to my very core (schoolchildren using AI to generate porn of their classmates is #2 for me).
What are your thoughts on James Earl Jones giving license for his voice after his passing for Disney to use for Darth Vader? Or Proximo having a final scene added in the movie Gladiator upon the passing of Oliver Reed during production?
I see both of these as entirely valid use cases, but I'd be curious to know where you stand on them / why you might think recreating the actors here would be detrimental.
We have an actor dress up as $historical_figure and do bits in educational stuff all the time. Changing that to AI-generated doesn't seem all that wrong to me.
Maybe there needs to be a rule that, lacking express permission, no AI-generated characters can represent a real person who has lived in the last 25 years or similar.
I agree, though I think creating AI simulacra of living people against their will and making them say or do things they wouldn't is in some ways even worse.
It's not really a story. This is an Instagram post from someone who can be tagged and forwarded items on Instagram by strangers, for those of you who aren't familiar.
This is not about any broader AI thing, and it's not news at all. A journalist made an article out of someone's Instagram post.
I think it's definitely newsworthy that so-called fans are sending AI slop of Robin Williams to his own daughter! It's sadly indicative of the general state of fandom that they didn't even think of how it would land, or that she would be anything other than appreciative.
If Robin Williams wrote a book as a teenager his elderly grandchildren could still own the rights to that work.
However a video likeness of him has virtually no restrictions.
It's too bad we have a dysfunctional government which is struggling to say no to dictatorial martial law, and which has decided that, instead of passing legislation to reform anything as the result of careful compromise, the preferred method is refusing to pay the bills and shutting everything down until one side caves.
We could have government that actually tried to address real issues if people actually cared.
Robin Williams lived in California which has legal protection for celebrity likenesses. But likenesses rights aren’t going to stop the problem, because it’s individual people who are recreating the likenesses en masse.
If people see someone’s request to stop sending them AI slop of their dead father and it causes them to send the person more of it, that goes beyond the Streisand effect (which is driven by curiosity) and into outright cruelty.
> "Please, just stop sending me AI videos of Dad," Zelda Williams posted on her Instagram stories.
> "Stop believing I wanna see it or that I'll understand, I don't and I won't. If you're just trying to troll me, I've seen way worse, I'll restrict and move on.
> "But please, if you've got any decency, just stop doing this to him and to me, to everyone even, full stop. It's dumb, it's a waste of time and energy, and believe me, it's NOT what he'd want."
Or maybe ignore, block or, heaven forbid, even get off social media if it affects one negatively?
It's interesting: when I first got a cell phone (probably 1998) it took me years to figure out that if you received an unwanted call you could just ignore the call. This wasn't really possible back in the landline days (most people did not have caller ID), and so it wasn't really a scenario you were trained for.
Obviously this is only a metaphorical comparison, but I do wonder if people are going to figure this out with regard to social media. A lot of people are talking about how to "fix" social media, but almost no one is saying "maybe I'll delete it and just read a book or go for a walk or something."
It all reminds me of the Blaise Pascal quote: "All of humanity's problems stem from man's inability to sit quietly in a room alone."
I installed a landline in my house, and today my wife told me it was scary. Why?
Because it was ringing. I suggested she pick it up to find out why it was ringing, but apparently that’s not something you do in the age of mobile phones.
> "All of humanity's problems stem from man's inability to sit quietly in a room alone."
In this situation, who's the person unable to sit quietly in a room—the person who is receiving unsolicited artificial videos of her dead father, or the people who are generating artificial videos of a dead man and sending them to his daughter?
In this situation the person sending unsolicited videos more aptly fits, but I think you could argue it's both. Being connected to social media in some sense is a refusal to just sit alone in your room. And when bad things happen to you on social media -- even when blame should strictly be assigned to the people actually taking the action -- there's a sense in which you have failed to "sit quietly in a room alone."
People definitely read the parent comment as blaming Williams' daughter for the actions of others. I agree that the blame rests with the people actually sending the videos, but I think there's another reading of the parent comment: why do we subject ourselves to this? Why don't we just walk away, when it would be very easy to do so? I'm never going to be able to stop the flood of assholes online, and no one commenting on this thread will ever be able to stop it either. What's in my control is whether or not I engage in that system.
Oh sure, side with the harassers of a grieving person. Being harassed? Well you could just go away! Just stop using this service the rest of society uses! Too bad your dad died and people are harassing you about it!
A ghoulish take. Victim blaming. A compassionate person would delete it.
Well, he’s right. Entering the public square means exposure not only to what someone chooses to see but also to what others choose to share with them. That isn’t harassment, despite the victim mentality you’re promoting. If she doesn’t like it, she’s free to leave social media, simple as that. The same principle applies here on Hacker News: we all have to read posts from people who dislike AI. I believe those people will eventually be left behind, I don't care if they are left behind, and I have no interest in reading anti-AI arguments in 2025. After all, where were they when AI algorithms were being developed back in the 50s and 60s?
> That isn’t harassment, despite the victim mentality you’re promoting.
Are you stating that this isn't harassment, or that you are incapable of being harassed on social media?
> Entering the public square means exposure not only to what someone chooses to see but also to what others choose to share with them.
This isn't a public square, let alone even a physical one. Even in a physical space, I am not beholden to look at what others share. I am under no obligation to take the flyer as I pass by. I am under no obligation to stop and listen as they prance around and try to get into my face.
These companies could provide a set level of tools for their users to be able to appropriately weed their own gardens, but they mostly chose not to. If she could set her app to not show her any video links from parties she didn't follow that pinged her, this probably wouldn't be an issue. There could easily be a middle ground between accepting all harassment and not visiting the social club.
If I compiled the voice recordings of my dead father and created an AI replica that you could call and speak to, and then sent it to my extended family, that's not harassment even if they decide they don't like it.
Suppose I (a stranger who never met your father) do it, the AI replica I make is obviously fake and wrong, but it becomes popular and people send you versions of it 10 times a day?
> After all, where were they when AI algorithms were being developed back in the 50s and 60s?
...living in a world where AI wasn't an enabler for abuse? What a bafflingly weird take - "you didn't object to <thing> when it was first thought of, so now it's being used badly you can't object to it"
> I have no interest in reading anti-AI arguments in 2025. After all, where were they when AI algorithms were being developed back in the 50s and 60s?
They were in the library. Philip K Dick among others wrote extensively on likely downsides of such technology during the period you mention. Even were this not the case, you're basically arguing that nobody under the age of 60 has any right to complain because they weren't around when the concepts were first articulated. This is asinine.
"We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!".
I know, Dune, and yeah, I get it - science fiction ain't real life - but I'm still into these vibes.
Anyone wanna start a club?
"Once, men turned their thinking over to machines in hopes that this would set them free, but that only permitted other men with machines to enslave them."
Yeah, I'm in. Let me know when and where the meetings are held.
We're on Salusa Secundus, second pylon to the right, just off the Imperial Penal Complex. 6pm, Wednesday. Use the green parrot to enter, otherwise you'll be cast out. We don't want to let anybody from IX in.
> other men with machines to enslave them
The machines didn't enslave anyone in this scenario. "Men with machines" did. I think of the techbro oligarchs who decide what a feed algorithm shows.
Well, I've been surrounded by "machines that think" for my entire life, formed of unfathomably complex swarms of nanobots. So far we seem to get along.
If there were a new kind of "machines that think"--and they aren't a dangerous predator--they could be a contrast to help us understand ourselves and be better.
The danger from these (dumber) machines is that they may be used for reflecting, laundering, and amplifying our own worst impulses and confusions.
> unfathomably complex swarms of nanobots
???
It's you: a swarm of ~37 trillion cooperating nanobots, each one complex beyond human understanding, constructing and animating a titanic mobile megafortress that shambles across a planet consumed by a prehistoric grey-goo event.
They are referring to the bacteria we animals need to survive.
I assumed it was a reference to humans being multicellular life as each cell is nanobot sized and automata
>We must negate the machines-that-think
I wish we had machines that actually thought because they'd at least put an end to whatever this is. In the words of Schopenhauer, this is the worst of all possible worlds not because it couldn't be worse but because if it was a little bit worse it'd at least cease to exist. It's just bad enough so that we're stuck with the same dreck forever. This isn't the Dune future but the Wall-E future. The problem with the Terminator franchise and all those Eliezer Yudkowsky folks is that they are too optimistic.
https://i.ytimg.com/vi/NdN153giLdI/sddefault.jpg
Ted Kaczynski went that way, but was more of a lone wolf guy.
Yes, I am in
I will never bow before the machine god
> Anyone wanna start a club?
You would not be the first, see: https://en.wikipedia.org/wiki/Luddite
Funny thing is that we still have hand-made fabric today, and we're still employing a frightening number of people in the manufacturing of clothing. The issue is that we're making more lower-quality products rather than higher-quality items.
Count me in!
I'm not sure that this message is meant to be taken as viable, let alone sacrosanct.
<spoiler>
I interpreted Thufir Hawat's massive misunderstanding of Lady Jessica's motivation (which was a huge plot point in the book but sadly didn't make it into the films) as evidence that humans are capable of the exact same undesirable patterns as machines.
Did I read that wrong?
</spoiler>
The point of Dune, or the Butlerian Jihad within Dune, isn't that humans are more capable than the Thinking Machines. It is that humans should be the authors of their own destiny, and that the Thinking Machines were enslaving humanity and going to exterminate it. Just like how the Imperium was enslaving all of humanity and was going to lead to its extinction. This was seen, incompletely, by Paul and later, completely, by Leto II, who then spent 10,000 years working through a plan to allow humanity to escape extinction and enslavement.
Dune's a wild ride man!
I had never made that exact connection, but my impression of the Dune universe was that it was hopelessly dark and horrific, basically humans being relentlessly awful to each other with no way out.
I don't think it's quite reducible to that. Take a step back and look at how you have an Emperor whose power is offset by both the Landsraad and the navigators' guild. This arrangement has all come about because of a scarce resource which makes space travel possible, available only on one desert planet whose population has been fiercely fighting a guerrilla war for centuries. It was all bound to come undone whether Paul accepted his part or not.
It’s kind of presented that way from the view of anyone under the oppression (Paul, Fremen, Jessica, etc). Therefore Paul’s vision is the way out, right? The whole thing is a mechanism to subdue the reader before they reveal that Paul doesn’t care; he just wants to do things the way he sees it and controls it.
The golden path was to break prescient vision and prevent any possible extinction of the human race. Paul actually turned his back on the golden path and became the preacher, trying to prevent it, but after his death, his son, Leto II, followed the path.
I just look forward to the point where this is so common it becomes oversaturated and the original incentives go away/scare off the folks doing this stuff (inevitable as, like parasites, they only stay around as long as the host is providing them sustenance).
If you are old enough, you remember when having a CD player in your car made you a target for a break-in. Many models were removable so you could take the CD player with you when you left your car. Once it became a standard option, there was no point in stealing them anymore due to the saturation effect you brought up. Now having a CD player in your car is steampunk.
When the goal is "harassing someone into depression and suicide" though, the incentive will never go away. People are going to start doing things like this to be deliberately malicious, sending videos of their dead parents saying horrible things about them and so on.
The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.
I saw that Robin Williams setup a deed to prevent his image or likeness being used for film or publicity that covers a period of up to 25 years after his death.
https://www.theguardian.com/film/2015/mar/31/robin-williams-...
I don't know if it could be extended further, but I feel like there is merit for it to be considered in this case.
In Denmark people own the copyright to their own voice, imagery and likeness:
https://www.theguardian.com/technology/2025/jun/27/deepfakes...
I think that it is probably the right way to go.
Similarly, it drives me up the wall when people post black-and-white "historical photographs" of historical events that are AI slop, and from the wrong era.
Just yesterday someone posted a "photo" of a 1921 incident where a submarine lost power and built sails out of bedsheets to get home.
But the photo posted looked like a post-WWII submarine, rigged like a clipper ship, rather than the real-life janky 1920s bedsheet rig and characters everywhere.
Actual incident (with actual photo): https://en.wikipedia.org/wiki/USS_R-14
It is no surprise to me that AI images have become an aesthetic of ascendant fascism. AI contains the same distaste for the actual life and complexity of history and preference for a false memory of the past with vaseline smeared on the lens.
While also rhyming with the obsession for futurism from past fascism, the intertwining of calling back to a romanticised past with an inhuman futurism is very much a pillar of the ideology.
Or people that frequent a questions and answers website, only to copy the AI answer slop as if it was their own.
I mean, thank you I guess, but anyone can do that with the littlest of effort; and anyone with an actual intention of understanding and answering the question would have recognized it as slop and stopped right there.
My Dad passed away two years ago, and for a time I thought it would be nice to have an AI of him just to hear his voice again, but then I realized that the reason I'd want that is because I don't have really any video or audio of him from when he was alive. The reason I'd want the AI is because I have no training data for it...
If I did have sufficient training data for an AI Dad, I'd much rather just listen to the real recordings, rather than some AI slop generated from it. This is just another dumb application of AI that sounds alright on paper, but makes no sense in real life.
I remember her having a similar reaction to that actor who uploaded "test footage" of him impersonating Robin in the hopes of landing a biopic deal:
https://www.latimes.com/entertainment-arts/movies/story/2021...
It's not necessarily disgusting by itself, but sending clips to the guy's daughter is very weird.
As a fan of his work, I too wish it all to stop. People always go headlong for the people who we all miss the most, yet don't understand that it was their underlying humanity that made them so special.
Rather than "pleading" for them to stop, wouldn't she have more success going after the AI content creation companies via legal process? I thought actors have the right to control commercial uses of their name, image, voice, likeness, or other recognizable aspects of their persona, thus if people are paying for the AI creation wouldn't the companies be wrongly profiting off his likeness? Although I'm sure some laws haven't yet been explicitly updated to cover AI replicas.
> Rather than "pleading" for them to stop, wouldn't she have more success going after the ai content creation companies via legal process?
But that shouldn’t be the first step. Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper than contacting lawyers and preparing for a possibly multi-year case where all the while you’ll have to be reminded and confronted with the very thing you don’t want to deal with.
If asking doesn’t work, then think of other solutions.
You can't tell everyone. Barely anyone will know of this being published. And then there will be lots of people thinking "whatever, I don't care". And a not insignificant number of people thinking "lol, time to organise a group of people who will send Robin Williams creepy genAI to her every day!"
Asking solves it for you, biting the hand makes them think twice about doing it to others, but good luck doing that to the many-headed serpent of the internet.
How about just praying for an asteroid to reset us and hope we get shit right the next time around
If we can't get it right this time, there's no indication a reboot would be any better (because humans).
who says the next time will be humans? there was no next time for the dinosaurs. maybe humans are not the end, but just a rung on the ladder.
How about squirrels? I think it would be fun to have a tail.
I don't know. I mean, things could have gone very differently (in a better or worse way) and the world may be unrecognizable today if certain key events did not happen.
Like if nothing sparked the World Wars (conversely: or if Hitler won). Or Greece harnessed steam or electricity to spur an industrial revolution 2200 years ago. Or if Christianity etc. never caught on.
> Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper
It's not, because just telling people on the internet to stop doing something doesn't actually stop them from doing it. This is basic internet 101, Streisand effect at full power.
The Streisand effect is a reactive effect, not an ongoing condition.
No, she would not have any success. Take a look at this list and think about the sheer number of companies she would need to sue: https://artificialanalysis.ai/text-to-video/arena?tab=leader.... You'll see Google, one of the richest companies on the planet, and OpenAI, the richest private company on the planet. You'll see plenty of Chinese companies (Bytedance, Alibaba, Tencent, etc.). You'll also see "Open Source" - these models can't be sued, and removing them from the internet is obviously impossible.
The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos. This direct plea has that risk too, but I think "please don't do this" will feel a lot less adversarial and more genuine to most people than "it should be illegal for you to do this".
> The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos.
It's not fruitless and doesn't only generate publicity. Some states like California and Indiana recognize and protect the commercial value of a person's name, voice, image, and likeness after death for 70 years, which in this case would apply for Robin Williams's daughter.
Tupac's estate successfully sued Drake to take his AI generated voice of Tupac out of his Kendrick Lamar diss track.
There is going to be a deluge of copyright suits against OpenAI for their videos of branded and animated characters. Disney just sent a cease and desist to Character.ai last week for using copyrighted characters without authorization.
What I'm saying is that successfully suing individual companies or people would have zero impact on her actual problem. If California says it's illegal and OpenAI says they'll ban anyone who tries it, then these people can effortlessly switch to a Grok or Alibaba or open source model, and they'll be extra incentivized to do so because they'll find it fun to "fight back" against California or luddites or whatever. Do you see the difference? Tupac's estate successfully stopped one guy from using Tupac's voice, but they have not and cannot stop the rest of the world from doing so. The same is true for Disney, it is trivial for anyone to generate images and videos using Disney characters today, and it will be forever. Their lawsuit can only hope to prevent a specific very small group of people from making money off of that.
The problem she wants to solve is "people are sending me AI videos of my dad". She will not have any success solving this problem using lawsuits, even if the lawsuits themselves succeed in court.
Is that really the problem she wants to solve? She could just turn off her phone to accomplish that. The problem is multi-layered and complex. Holy shit, it's her dad. He's dead. I don't have her phone number, but let's pretend I did and we were friends: why would I be texting her videos of her dead father? He's Robin Williams, sure, but why? Why! would I be making AI videos and sending them to her? Forget Sora, if I made a puppet of her father and made a video of him saying things he didn't say, and then sent it to her, I think I'd still be a psychopath for sending it to her. I think she should sue OpenAI and California should make it illegal without a license, and yeah, there's always gonna be a seedy underbelly. I'm sure there's Mickey Mouse porn out there somewhere. A lawsuit is going to make it official that she is a person, that she's saying "hey, I don't like that" and would like people to stop it, and that the rest of us agree with that.
Asking people to stop seems like the first step. Especially since this is specific to people sending them to her in particular. People think they are being nice and showing some form of affection, but as she mentions she finds it disturbing instead.
So I don't think there was actually malicious intent and asking people to stop will probably work.
what if the creators are not in the same legal jurisdiction or in some place that does not care about whatever rights you think are being wronged?
There's two things potentially at stake here:
1. Whether there is an effective legal framework that prevents AI companies from generating the likenesses of real people.
2. The shared cultural value that, this is not cool actually, not respectful, and in fact somewhat ghoulish.
Establishing a cultural value is probably more important than any legal structures.
I think there is also a major distinction between creating the likeness of someone and sending that likeness to the family of the deceased.
If AI somehow allowed me to create videos of the likeness of Isaac Newton or George Washington, that seems far less a concern because they are long dead and none of their grieving family is being hurt by the fakes.
https://sora.chatgpt.com/p/s_68e57c1b22708191be1d249c1a52b2b...
How long would the legal process take? How much would it cost? Does she have to sue all proprietors of commercial video generator models? What about individuals using open source models? How many hours of her time will these suits take to organize? How many AI videos of her dad will she have to watch as part of the proceedings? Will she be painted as a litigious villain by the PR firms of these very well-capitalized interests?
Her goal seems to be to reduce the role in her life played by AI slop portrayals of her dad. Taking the legal route seems like it would do the opposite.
Not really. From what I understood of the interview, her complaint is not about money or compensation (which she may be entitled to), but about how people use the technology and how they interact with it and with her. Legal process, or even the companies implementing policies, won't change that problematic societal behavior.
Since the rise of generative AI we have seen all sorts of pathetic usages, like "reviving" murdered people and making them speak to the alleged killer in court, training LLMs to mimic deceased loved ones, generative nudification, people who no longer use their brains because they need to ask ChatGPT/Grok... some of them are crimes, others not. Regardless, most of them should stop.
It's so frustrating that "just call the cops" is the answer, at the very same time that the cops are creating a massive disruption to our society.
And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
We need systems that work without this one neat authoritarian trick. If your solution requires that you lean on the violence of the state, it's unlikely to be adopted by the internet.
Legal process is not an “authoritarian trick," it's the primary enforceable framework for wide scale, lasting societal change as it's the only method that actually has teeth.
Also, calling legal enforcement as “leaning on the violence of the state” is hyperbolic and a false dichotomy. Every system of rights for and against companies (contracts, privacy, property, speech) comes down to enforceable legal policies.
Examples of cases that have shaped society: Brown v Board of Ed, pollution lawsuits against 3M and Dow Chemical, Massachusetts v. EPA resulted in the clean air act, DMCA, FOSTA-SESTA, the EU Right to Be Forgotten, Reno v. ACLU which outlined speech protections online, interracial marriage protected via Loving v. Virginia, the ruling that now requires police have a warrant to access cell phone data was Carpenter v. US, and these are just a few!
> And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
Jurisdictional challenges don't mean a law is pointless. Yes, bad actors can operate from other jurisdictions, but this is true for all transnational issues, from hacking to human smuggling to money laundering. DMCA takedowns work globally, as does GDPR for non-EU companies.
Nobody’s arguing for blind criminalization or over policing AI. But perhaps there should be some legal frameworks to protect safe and humane use.
To my mind this is no different to other forms of spam or harassment.
Back in the 00s I remember friends being sent “slimming club” junk mail on paper, made to look like a handwritten note from “a concerned friend”. It was addressed but sent at random.
Unfortunately it can be very distressing for those with body image issues.
We’re going to have to treat this slop like junk mail as it grows.
AI is killing society now.
It is definitely changing it. We were already experiencing the move from a "celebrity" being an individual with huge talent to just a marketing tool that gets giant financial rewards for merely existing. These larger-than-life pop culture icons that hundreds of millions or billions of people care about are a recent phenomenon, and I welcome generative AI killing it off.
If media had one-shot generated actors we could just appreciate whatever we consumed and then forget about everyone involved. Who cares what that generated character likes to eat for breakfast or who they are dating they don't exist.
Is this really a change? Haven't people loved celebrities for as long as they've existed? Before this, characters in books, poems and songs commanded the same level of attention.
> I welcome generative AI killing it off.
It probably will, but that pushes us in the direction that Neal Stephenson describes in Fall: millions of people sitting alone consuming slop content generated precisely for them, perfect neuro-stimulation with bio-feedback, just consuming meaningless blinking lights all day and loving it. Is that better than what we have now? It's reductio ad absurdum, yes, but we live in a very absurd world now.
> Who cares what that generated character likes to eat for breakfast or who they are dating they don't exist.
You never needed to know this about celebrities, and you still don't need to now. Yes, others are into it; let them have their preferences. No doubt you're into something they would decry.
I definitely draw a distinction between like... famous medieval leaders and generals being well known versus what we do with people like Michael Jackson, Madonna, Kim Kardashian etc. I am sure they had local celebrities back then but there is no way they extended much beyond a small region or a single nation.
In my ideal world generating content becomes so easy that it loses all value unless you have some relation to the creator. Who cares about the new Avengers movie, Rick from down the street also made an action movie that we are gonna check out. Local celebrities return. Global figures are generated because why would Coke pay 100m for Lebron to dunk a can of coke on camera or some dumb shit when his image has been absolutely trashed by floods of gen content.
The problem is:
- How do I know Rick down the street, anymore, if I'm inside consuming perfectly addicting slop all day?
- How do I ensure that I am also consuming content that has artistic merit that is outside my pure-slop comfort zone?
- Why would I notice or care about a local artist when they can't produce something half as perfect "for me" as a recommendation algorithm trained on all human artistic output?
> Global figures are generated
This I agree with. Ian McDonald wrote about "aeai" characters in his early 2000s novel River of Gods, where a subplot concerns people who watch TV shows starring AI characters (as actors, who also have an outside-of-the-show life). The thing is, right now we see LeBron in the ad and it's an endorsement. How can an AI character of no provenance endorse anything?
You're describing commodification. Too bad it doesn't work out that way in practice because people are not interchangeable. Look at all the "ship of Theseus" entertainment companies we have today. They still have the IP rights, but the people who actually made it good are long gone. Franchises running on fumes.
Can we put Pandora back in the box?
Nit: Pandora wasn't in the box; she was the keeper of the box and told not to open it.
Pandora's monster, Pandora is the scientist.
I snorted. Thanks
If we are going to go full pedantry, it wasn't a box, it was a jar. Blame Erasmus for that one.
I was waiting for someone to point this out.
HN came through for me.
> Williams continued: "To watch the legacies of real people be condensed down to 'this vaguely looks and sounds like them so that's enough', just so other people can churn out horrible TikTok slop puppeteering them is maddening."
> "You're not making art, you're making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it. Gross."
> She concluded: "And for the love of EVERY THING, stop calling it 'the future,' AI is just badly recycling and regurgitating the past to be re-consumed. You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume."
if you're currently making a fortune working for Anthropic et al, maybe find some form of charity you can do as penance for your day job. Certainly there are people on this site who should atone for this.
Sending someone unwanted fake clips of their dead family is unhinged, but it's a bit much to just blanket assume you can't do cool things with generative models or that the artists themselves won't approve. e.g. Liam Gallagher's reaction to AISIS[0] was that it was "mad as fuck I sound mega". I don't know what he'd have to say about his AI vocals on a High Flying Birds song (maybe they'd be okay with that collab now?), but this "cover" is also pretty awesome[1]. As usual, it's up to the wielder of a tool to use it well.
Complaining about ML-augmented art "recycling and regurgitating the past to be re-consumed" strikes me as similar to complaining about mashups. If it's not for you, just move on. It's still a form of creative expression that others enjoy.
[0] https://www.youtube.com/watch?v=whB21dr2Hlc
[1] https://www.youtube.com/watch?v=R4R4VLQM0P4
> it's up to the wielder of a tool to use it well.
Wouldn't "using it well" include not using it to puppet dead people unless they've given permission to do that?
The entire industry, from the CEOs dreaming this up, all the way down to junior engineers making it reality, desperately needs a thorough self-reflection session, and everyone involved with this needs to be asking "Are we the baddies?" I know, I know, all those $100 bills stuffed in your mattresses give you a nice soft, comfortable night sleep, but a lot of it comes from doing harm (or at least enabling it).
Yes, but taking your self-reflection euphemism literally, and why it wouldn't be enough, here's a rough sketch of several categories of us workers, in an industry that became the new "greed is good" mecca, maybe 25 years ago:
* Reasonably well-intentioned, but we're not particularly bright, and not much for introspection. We never thought about where this money came from, and accepted executive chatter about creating value or whatever, but it was boring to us.
* We see a survival threat, and ourselves as doing things we're not proud of, and we don't like to think about it. (It's much easier for an observer to be sympathetic with this today than for most of the last 25-30 very comfortable years in tech, when most of us were chasing jobs that were much more than comfortable, at companies that were in the news for being sketchy. We even twisted the entire field's way of interviewing for all jobs, to match the extensive practicing everyone was doing for the interview rituals of the strictly best-paying, much-more-than-comfortable jobs.)
* Like the above, but we're not actually feeling a threat, just rationalizing greed.
* We're greedy sociopaths, who don't even care to rationalize, except to manipulate others.
* We're not sociopaths, but we've been fed a self-serving, aggressive libertarian philosophy/religion, and we ate it up. Like the first category in this list, we're not particularly bright or introspective, but we're much less likeable.
* We question the field-wide festival of sociopathic greed, and we've been implicitly assembling a list of things we apparently won't do, companies we apparently won't work with, etc. And it's because of reasoned values, not just appearances or fashion.
Good for her. I wish more people in a position like hers would speak out about this.
> Certainly there are people on this site who should atone for this.
Mate, if we are going down this route, a whole fucktonne of us should be atoning for this.
Look, our technology has enabled, and continues to enable, the ever-forward march of authoritarianism. It has made the destruction of entire cultures easier, more practical, and more efficient.
WE have managed to trick the populace into sharing all their thoughts, opinions, friendships, and real-time location with a handful of corporations who are hell-bent on complete domination.
Not only that, but we have cheered as those corporations have undermined our rights to privacy, moral rights[1] and freedom of speech.
[1] as in copyright, not morals as in killing is bad mmkay
I mean, I both agree and disagree with her here. Everything said is unambiguously right, but then I’m still using the LLM’s to great effect in my day job.
> AI is just badly recycling and regurgitating the past to be re-consumed
That’s just correct.
Unfortunately it doesn’t make the reconsumption any less entertaining.
I wish AI had never been invented.
Well it's just an algorithm, like any other algorithm, do you wish all algorithms had never been invented? How about science in general?
It's a technology that enables a set of behaviors. Seems reasonable to wish that didn't happen if you don't like the behaviors.
How about just wishing that the behaviors didn't happen?
I see wishing people weren't the way they are as a bit more... intrusive than just wishing LLMs had been slept on. Like, if you had a genie in a bottle, the less evil wish would be for LLMs not to have been developed.
Thoughts and prayers.
It's not just an algorithm. It's also a massive application of computing resources and a complete indifference to the numerous people pointing out why x, y, and z are really bad ideas.
This is a bad argument. You may as well say “It’s just a thing that has been invented. Do you wish that all things that have been invented hadn’t been invented?”.
> it's just an algorithm, like any other algorithm
Can you bubble sort your way into a video resembling Robin Williams?
Science has allowed us to destroy our planet at industrial scale and provided the tools to destroy ourselves, and as a result all humans will likely be dead long before the next planet-killer asteroid.
Science rules! (I'm only half joking)
It seems that most of the toxic AI stuff has really only been coming out of a handful of companies: OpenAI, Meta, Palantir, Anduril. I am aware this is a layman take.
If not them, it would just have been others doing the same thing.
Do you wish that nuclear weapons had never been invented?
My point is that like any technology, it's how you use it.
Absolutely yes.
The wheel may have been a better example. It can be used for good or evil.
Nuclear weapons just destroy stuff at scale, by design.
Yes.
Well, a lot of us have a curious mind. Like, fission is a property of this universe. Gradient descent is a property of this universe. All you're saying is you'd rather not know about it.
I'm happy that nuclear weapons and AI have been invented, and I'm excited about the future.
Ok, but regardless of your feelings about AI, I don’t understand why you wouldn’t wish that nuclear weapons had never been invented. (Well, maybe it ended the combat between the US and Japan faster…, and maybe prevented the Cold War from becoming a hot war, but still, is that really worth the constant looming threat of nuclear Armageddon?)
If only having a curious mind would imply having a far-sighted and responsible one.
It normally does. That's why I can consider that nuclear weapons might have better uses in the future, presently unknown to us, and you can't.
Nuclear weapons could have a better use in the future? Pray tell, what exactly have you envisioned here?
My point is that you don't know what the future holds, but it's better to know more than less. My point is valid even if I can't provide examples.
However, if you ask me to, I can imagine using those weapons against meteors headed for Earth, or possibly aliens. We don't know.
Phew, I never thought that "it's better to know more than less" would be controversial on HN.
Lack of imagination often results in preconceived answers to open-ended questions.
The curious child takes apart an animal and learns surgery. The animal, however, is nonetheless killed.
I have a curious mind too but I don't go cutting up neighbourhood cats to see what they look like on the inside.
Well, you can learn what a cat looks like on the inside from a book. But someone did have to go around cutting up neighborhood cats, you're just benefiting from them. Which is the _whole reason_ why I maintain my position that inventing AI and nuclear weapons is a net positive for mankind.
If you're curious about that, are you curious about hypotheses like the Great Filter (Fermi paradox), and are you concerned that certain technologies could actually function as the filter?
I mean, what if the nuclear bomb actually did burn up the atmosphere? What if AI does turn into a runaway entity that eventually functions to serve its own purposes and comes to see humans the same way we see ants: as a sort of indifferent presence that's in the way of its goals?
There is a sort of people who read 1984 and blame the protagonist for being an idiot who called the fire upon himself, or who still don't get what's wrong with ice-nine and the people behind it when turning the last page of Cat's Cradle.
And a sort of people who sympathize with Winston and blame Felix Hoenikker, but still fail to see any parallels between "fiction" and life.
I don't know for certain if, when you say "a sort of people", you're referring to me, but... The sort of people you're describing sound like fascists, which is the opposite of me.
We're on the same side then, even if our opinions on the subject differ. Please take no offence.
I mean, sure. I'm not saying "let's be reckless". I'm saying "let's understand everything about everything, more rather than less".
I suppose I'd rather see us understand nuclear weapons without vaporizing civilians and causing an arms race. I'm not claiming to have a solution to preventing the arms race aspect of technology, but all the same: I'd rather these weapons weren't built.
ALL technology can be used for good or bad. It's the usage, not the invention.
It's real hard for me to conjure up "good" uses for, say, mustard gas or bioweapons or nuclear warheads.
"Technology is neutral" is a cop-out and should be seen as such. People should, at the very least, try to ask how people / society will make use of a technology and ask whether it should be developed/promoted.
We are all-too-often over-optimistic about how things will be used or position them being used in the best possible light rather than being realistic in how things will be used.
In a perfect world, people might only use AI responsibly and in ways that largely benefit mankind. We don't live in that perfect world, and it is/was predictable that AI would be used in the worst ways more than it's used in beneficial ones.
https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...
Why? The Soviets tried to re-route rivers with nuclear blasts in their infinite scientifically-based wisdom and godlike hubris. How much illness their radioactive sandbox would cause among people was clearly too minuscule a problem for them to reflect on.
https://www.bbc.com/future/article/20250523-the-soviet-plan-...
While this is true, the horrific usage of a tool can vastly outweigh pitifully minimal benefits.
I'm not implying those adjectives apply to AI, but merely presenting a worst-case scenario.
Dismissing the question of "does this benefit us?" with "it's just a tool" evokes Jurassic Park for me.
But not in equal quantity. Technology does not exist in a contextless void. Approaching technology with this sort of moral solipsism is horrifying, in my opinion.
I strongly disagree. Many technologies aren't neutral, with their virtue dependent on the use given them. Some technologies are close to neutral, but there are many that are either 1) designed for evil or 2) very vulnerable to misuse. For some of the latter, it'd be best if they'd never even been invented. An example of each kind of technology:
1) Rolling coal. It's hard for me to envision a publicly-available form of this technology that is virtuous. It sounds like it's mostly used to harass people and exert unmerited, abusive power over others. Hardly a morally-neutral technology.
2) Fentanyl. It surely has helpful uses, but maybe its misuse is so problematic that humanity might be significantly better off without the existence of this drug.
Maybe AI is morally neutral, but maybe it isn't.
We have very little idea what is "good or bad," especially over the long term.
Right. But a machine that helps plant seeds at scale could be used for bad by running someone over, yet its core purpose is to do something helpful. AI's core purpose isn't to do anything good right now. It's about how many jobs it can take, how many artists it can steal from and put out of work, and so on and so on. How many people die from computer mice each year? How many from guns? They're both technology and can be used for good or bad. To hand-wave the difference away is dangerous and naive.
It only "takes jobs" because it's useful. It's useful for making transcription at scale, text revision, marketing material, VFX, all those things. It also does other things that don't "take jobs", like computer voice control. It's just a tool, useful for everyone, and not harmful at all at its purpose. Comparing it to guns is just ridiculous.
But... the machine that plants seeds also takes away the livelihood of a bunch of folks. I mean, in my country, we were an agrarian society 100 years ago. I don't have the actual stats but it was close to 90% agrarian. Now, it's at about 5%. Sure, people found other jobs and that will likely be the case here. I will do the dishes while the AI programs.
I understand the industrial revolution happened. To say this revolution is the same and will produce the same benefits is already factually wrong. One revolution created a net positive of jobs. One has only taken jobs.
I would say we don't know that yet. Comparing the current state of LLMs to what they can lead to or what they might enable later on is like comparing early machine prototypes to what we have today.
I can also 100% tell you that the farming folk of 100 years ago also felt like the farming machines took away their jobs. They saw 0 positives. The ones that could (were young) went into industry, the others... well, at the same time we instituted pensions, which were of course paid for by the active population, so it kind of turned out ok in the end.
I do wonder, what will be the repercussions of this technology. It might turn into a dud or it might truly turn into a revolution.
However, aren't there now a lot of job openings out there for LLM-whisperers and other kinds of AI domain experts? Surely these didn't exist in the same quantity 10 years ago.
(I'm just picking nits. I do agree that this "revolution" is not the same and will not necessarily produce the same benefits as the industrial revolution.)
Maybe someday we will grow so tired of AI, that people will leave social media entirely. The most interesting thing about social media, the ability to build real human connections, is quickly becoming a relic of the past. So without that, what is left? Just slop content, rage bait.
This is already happening in my immediate surroundings; friends who have long complained of phone addiction are now feeling more able to act on it, as it isn't this oasis of escape for them anymore. My only comment there is that they were not ready for, or didn't remember, how bad T9 was for alphabetical input; even nostalgia can't cover that unfortunate marriage of convenience.
Unfortunately this does mean it becomes hard(er?) to make human connections again
Which sucks
Not really. We might not be in the golden era of human connections, but you can still find people out there that think the same way somehow and are off grid from social media.
I honestly can't see any upside whatsoever in creating AI Simulacra of dead people. It kind of disgusts me actually.
It's all downside. I've seen cases where this is used to 'give murder victims a voice' recently, both on CNN, where they 'interview' an AI representation of the victim, and in court, where one was used for a victim impact statement as part of the sentencing. Those people would laugh you out of the room if you suggested providing testimony via spirit medium, but as soon as it comes out of a complicated machine their brains seize up and they uncritically accept it.
I saw that too. It's such unbelievably transparent emotional manipulation. It would be comical if it wasn't so sad and terrifying.
Hold on, really? That seems wildly crazy to me, but sadly I'd believe it. I'd love it if you had a source or two to share of some of the more egregious examples of this.
This was the main one I saw https://judicature.duke.edu/articles/ai-victim-impact-statem...
I've seen a lot of horrible uses of AI, but this particular application is the most sickening to my very core (schoolchildren using AI to generate porn of their classmates is #2 for me).
What are your thoughts on James Earl Jones giving license for his voice after his passing for Disney to use for Darth Vader? Or Proximo having a final scene added in the movie Gladiator upon the passing of Oliver Reed during production?
I see both of these as entirely valid use cases, but I'd be curious to know where you stand on them / why you might think recreating the actors here would be detrimental.
We have an actor dress up as $historical_figure and do bits in educational stuff all the time. Changing that to ai generated doesn't seem all that wrong to me.
Maybe there needs to be a rule that, lacking express permission, no ai generated characters can represent a real person who has lived in the last 25 years or similar.
Not my upside, but there is a big one: money.
I agree, though I think creating AI simulacra of living people against their will and making them say or do things they wouldn't is in some ways even worse.
Because it's easier than doing it without AI.
stop sending them to her
It's not really a story; this is an Instagram post by someone who can be tagged and sent items on Instagram by strangers, for those of you who aren't familiar.
This is not about any broader AI thing, and it's not news at all. A journalist made an article out of someone's Instagram post.
I think it's definitely newsworthy that so-called fans are sending AI slop of Robin Williams to his own daughter! It's sadly indicative of the general state of fandom that they didn't even think of how it would land, or that she would be anything other than appreciative.
If Robin Williams wrote a book as a teenager his elderly grandchildren could still own the rights to that work.
However a video likeness of him has virtually no restrictions.
It's too bad we have a dysfunctional government that is struggling to say no to dictatorial martial law, and that has decided that instead of passing reform legislation as the result of careful compromise, the preferred method is refusing to pay the bills and shutting everything down until one side caves.
We could have government that actually tried to address real issues if people actually cared.
Robin Williams lived in California, which has legal protection for celebrity likenesses. But likeness rights aren't going to stop the problem, because it's individual people who are recreating the likenesses en masse.
It is technically correct that Sam Altman and everyone on OpenAI's board are individual people, but I don't see how that would prevent legal action?
I can't help but think that this will inevitably lead to the Streisand effect.
If people see someone’s request to stop sending them AI slop of their dead father and it causes them to send the person more of it, that goes beyond the Streisand effect (which is driven by curiosity) and into outright cruelty.
That sounds like more of a 4chan effect.
> "Please, just stop sending me AI videos of Dad," Zelda Williams posted on her Instagram stories.
> "Stop believing I wanna see it or that I'll understand, I don't and I won't. If you're just trying to troll me, I've seen way worse, I'll restrict and move on.
> "But please, if you've got any decency, just stop doing this to him and to me, to everyone even, full stop. It's dumb, it's a waste of time and energy, and believe me, it's NOT what he'd want."
Or maybe ignore, block or, heaven forbid, even get off social media if it affects one negatively?
It's interesting, when I first got a cell phone (probably 1998) it took me years to figure out that if you received an unwanted call you could just ignore the call. This wasn't really possible back in the landline days, (most people did not have caller ID) and so it wasn't really a scenario you were trained for.
Obviously this is only a metaphorical comparison, but I do wonder if people are going to figure this out with regard to social media. A lot of people are talking about how to "fix" social media, but almost no one is saying "maybe I'll delete it and just read a book or go for a walk or something."
It all reminds me of the Blaise Pascal quote: "All of humanity's problems stem from man's inability to sit quietly in a room alone."
I installed a landline in my house, and today my wife told me it was scary. Why?
Because it was ringing. I suggested she pick it up to find out why it was ringing, but apparently that’s not something you do in the age of mobile phones.
Making phone calls is heading toward extinction at this point. I'm waiting for the day Android/Apple remove the actual dialer app.
> "All of humanity's problems stem from man's inability to sit quietly in a room alone."
In this situation, who's the person unable to sit quietly in a room—the person who is receiving unsolicited artificial videos of her dead father, or the people who are generating artificial videos of a dead man and sending them to his daughter?
In this situation the person sending unsolicited videos more aptly fits, but I think you could argue it's both. Being connected to social media in some sense is a refusal to just sit alone in your room. And when bad things happen to you on social media -- even when blame should strictly be assigned to the people actually taking the action -- there's a sense in which you have failed to "sit quietly in a room alone."
People definitely read the parent comment as blaming Williams' daughter for the actions of others. I agree that the blame rests with the people actually sending the videos, but I think there's another reading of the parent comment: why do we subject ourselves to this? Why don't we just walk away, when it would be very easy to do so? I'm never going to be able to stop the flood of assholes online, and no one commenting on this thread will ever be able to stop it either. What's in my control is whether or not I engage in that system.
Maybe she likes social media? Why would your first response be for her to quit doing something she enjoys to avoid accidental harassment?
Oh sure, side with the harassers of a grieving person. Being harassed? Well you could just go away! Just stop using this service the rest of society uses! Too bad your dad died and people are harassing you about it!
A ghoulish take. Victim blaming. A compassionate person would delete it.
Well, he’s right. Entering the public square means exposure not only to what someone chooses to see but also to what others choose to share with them. That isn’t harassment, despite the victim mentality you’re promoting. If she doesn’t like it, she’s free to leave social media, simple as that. The same principle applies here on Hacker News: we all have to read posts from people who dislike AI. I believe those people will eventually be left behind, I don't care if they are left behind, and I have no interest in reading anti-AI arguments in 2025. After all, where were they when AI algorithms were being developed back in the 50s and 60s?
> That isn’t harassment, despite the victim mentality you’re promoting.
Are you stating that this isn't harassment, or that you are incapable of being harassed on social media?
> Entering the public square means exposure not only to what someone chooses to see but also to what others choose to share with them.
This isn't a public square, let alone a physical one. Even in a physical space, I am not beholden to look at what others share. I am under no obligation to take the flyer as I pass by. I am under no obligation to stop and listen as they prance around and try to get into my face.
These companies could provide a set of tools for their users to appropriately weed their own gardens, but they mostly chose not to. If she could set her app to not show her any video links from parties she didn't follow that pinged her, this probably wouldn't be an issue. There could easily be a middle ground between "accept all harassment" and "don't visit the social club".
If I compile the voice recordings of my dead father and create an AI replica that you could call and speak to, and then I send it to my extended family, that's not harassment just because they decide they don't like it.
Suppose I (a stranger who never met your father) do it, the AI replica I make is obviously fake and wrong, but it becomes popular and people send you versions of it 10 times a day?
> After all, where were they when AI algorithms were being developed back in the 50s and 60s?
...living in a world where AI wasn't an enabler for abuse? What a bafflingly weird take - "you didn't object to <thing> when it was first thought of, so now it's being used badly you can't object to it"
> I have no interest in reading anti-AI arguments in 2025. After all, where were they when AI algorithms were being developed back in the 50s and 60s?
They were in the library. Philip K Dick among others wrote extensively on likely downsides of such technology during the period you mention. Even were this not the case, you're basically arguing that nobody under the age of 60 has any right to complain because they weren't around when the concepts were first articulated. This is asinine.
How a block list with robust and easily accessible ways of extending it is not the first thing that gets implemented for any social app is beyond me.
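The mechanism being asked for is genuinely simple. Here's a minimal sketch of such a filter in Python (the `BlockList` class and its rules are hypothetical, not any platform's actual API), combining a user-extensible block list with the stricter "hide media from accounts I don't follow" rule suggested upthread:

```python
class BlockList:
    """A user-extensible filter: blocked senders are always hidden,
    and media from accounts the user doesn't follow can be hidden too."""

    def __init__(self, following=None):
        self.blocked = set()
        self.following = set(following or [])

    def block(self, sender):
        # Extending the list is one call, which is the whole point.
        self.blocked.add(sender)

    def allows(self, sender, has_media=False):
        if sender in self.blocked:
            return False
        # Stricter optional rule from the thread: hide media (e.g. video
        # links) sent by accounts the user doesn't follow.
        if has_media and sender not in self.following:
            return False
        return True

bl = BlockList(following={"friend"})
bl.block("troll")
print(bl.allows("friend", has_media=True))    # True
print(bl.allows("troll"))                     # False
print(bl.allows("stranger", has_media=True))  # False
```

Real platforms would need persistence and abuse-resistant identity, but the filtering logic itself is a few set lookups per item, so the barrier is product priorities, not engineering difficulty.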