I liked the original title better: "The Left Doesn't Hate Technology, We Hate Being Exploited". I think that sums up my grievances towards AI - amazing technology and certainly a booster to anyone's life, but what is the cost? Why do AI companies get to download, consume, and transform all copyrighted works essentially for free (I think there were some lawsuits that resulted in the companies paying), while normal people would have to pay millions if they wanted to access all that data and pay the original creators? I'm also not so OK with the workforce being displaced, but that's what happens with technological progress. What I am not OK with is that it's displacing writers while benefiting from their prior work without paying them a cent.
I care a lot more about the environmental harm, the impact on computer hardware prices, and what AI is doing to energy prices, which somehow becomes everyone else's burden to pay, than I do about the rampant copyright infringement.
The hypocrisy in how copyright is enforced for AI companies vs. everybody else is pretty infuriating though. We have courts ruling against people for downloading YouTube videos to enable them to use clips for fair-use purposes (https://torrentfreak.com/ripping-clips-for-youtube-reaction-...) while Nvidia is free to violate the DMCA in the exact same way to take YouTubers' content in full (https://www.tomsguide.com/ai/nvidia-accused-of-scraping-80-y...).
> but normal people have to pay millions if they wanted to access all that data and pay to the original creators
please! you can go to anna's archive right now and do what they did. i find it truly strange to victimise oneself to such a degree!
This is a false equivalency. If I just share torrented data I can go to prison. These companies downloaded and seeded copyrighted material and then sold a product made from that data. If I, a civilian, did this I would face time in prison. If you think this is fine, great, but what people are mad about is the hypocrisy of the current moment.
As the title says, "Techno-cynics are wounded techno-optimists".
> These companies downloaded and seeded copy righted material and then sold a product made from that data
but no company did this.
I'm not a lawyer and I don't follow this area super closely, but it sure sounds like they did?
https://www.tomshardware.com/tech-industry/artificial-intell...
> Facebook parent-company Meta is currently fighting a class action lawsuit alleging copyright infringement and unfair competition, among others, with regards to how it trained LLaMA. According to an X (formerly Twitter) post by vx-underground, court records reveal that the social media company used pirated torrents to download 81.7TB of data from shadow libraries including Anna’s Archive, Z-Library, and LibGen. It then used this information to train its AI models.
> Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.
where did they seed?
My second quote includes,
> so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta.
(emphasis added)
If you'd like it from another source using different words, https://masslawblog.com/copyright/copyright-ai-and-metas-tor... has
> According to the plaintiffs’ forensic analysis, Meta’s servers re-seeded the files back into the swarm, effectively redistributing mountains of pirated works.
and specifically talks about that being a problem.
I will grant that until/unless the cases are decided, this is all alleged, so we'll see.
If that's what you believe, then your understanding of how the training data for these companies came to be is that of a monkey, and not a bright one at that.
can you share a source for this please? i'll gladly fix my comment.
OpenAI did, and this is so uncontroversial, I'm surprised you are saying it didn't happen.
can you share a source? if it is credible, i'll gladly update and say i was wrong.
OpenAI, Meta, and Anthropic are all known to have done this. It's even been exposed in company internal communications. Anthropic already settled their court case. You're an 11-month-old account and I suspect you are some sort of bot or user meant to spread misinformation on the forum.
it's a big allegation, can you share any source for OpenAI and Anthropic seeding torrents?
I can say for certain that Meta did it. They admitted to it. (https://www.wired.com/story/new-documents-unredacted-meta-co...)
Do you think that OpenAI or Anthropic should get a pass for using torrents if they used special BitTorrent clients that only leeched? Do you think the RIAA would be cool with me if I did the same?
incorrect.
> There is no dispute that Meta torrented LibGen and Anna's Archive, but the parties dispute whether and to what extent Meta uploaded (via leeching or seeding) the data it torrented. A Meta engineer involved in the torrenting wrote a script to prevent seeding, but apparently not leeching. See Pls. MSJ at 13; id. Ex. 71 ¶¶ 16–17, 19; id. Ex. 67 at 3, 6–7, 13–16, 24–26; see also Meta MSJ Ex. 38 at 4–5. Therefore, say the plaintiffs, because BitTorrent's default settings allow for leeching, and because Meta did nothing to change those default settings, Meta must have reuploaded “at least some” of the data Meta downloaded via torrent. The plaintiffs assert further that Meta chose not to take any steps to prevent leeching because that would have slowed its download speeds. Meta responds that, even if it reuploaded some of what it downloaded, that doesn't mean it reuploaded any of the plaintiffs’ books. It also notes that leeching was not clearly an issue in the case until recently, and so it has not yet had a chance to fully develop evidence to address the plaintiffs’ assertions.
They leeched but did not seed. https://caselaw.findlaw.com/court/us-dis-crt-n-d-cal/1174228...
> If I a civilian did this I would face time in prison
no, if you had leeched it is very unlikely that you would face time in prison.
> A Meta engineer involved in the torrenting wrote a script to prevent seeding, but apparently not leeching.
Wrong. Michael Clark testified under oath that they tried to minimize seeding, not that they prevented it entirely. His words were: "Bashlykov modified the config setting so that the smallest amount of seeding possible could occur" (https://storage.courtlistener.com/recap/gov.uscourts.cand.41...)
They could have used or written a client that was incapable of seeding but they didn't.
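To make the distinction concrete, here is a hypothetical sketch using libtorrent's Python bindings. This is an illustration of the general technique, not Meta's actual script: throttling uploads to a minimum is a couple of settings, and a standard client still uploads pieces to peers while downloading (leeching) unless uploads are blocked outright.

    # Hypothetical illustration only, not Meta's actual script: minimizing
    # uploads in a BitTorrent client via libtorrent's Python bindings.
    import libtorrent as lt

    ses = lt.session()
    ses.apply_settings({
        "upload_rate_limit": 1,  # bytes/sec; caution: 0 means *unlimited*
        "active_seeds": 0,       # don't schedule completed torrents for seeding
    })

    # Dummy magnet link; a real info-hash would go here.
    params = lt.parse_magnet_uri(
        "magnet:?xt=urn:btih:0000000000000000000000000000000000000000")
    params.save_path = "./downloads"
    handle = ses.add_torrent(params)
    handle.set_upload_limit(1)   # per-torrent cap: minimized, not eliminated

Nothing above makes uploading impossible; it only makes it slow, which is roughly the "smallest amount of seeding possible" described in the testimony.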
> no, if you had leeched it is very unlikely that you would face time in prison.
Not the one who claimed that, but I think it's fair to say that doing what they did, at that scale, could easily result in me (and most people) being bankrupted by fines and/or legal expenses.
wow, ragebait, who could've thought
"you can go to anna's archive right now and do what they did"
This is such a troll statement.
Anybody could be OpenAI, all you need is Anna's Archive and a couple of PCs. All you losers could have been billionaires if you'd just done it.
> I’ll be generous and say that sure, words like “understanding” and “meaning” have definitions that are generally philosophical, but helpfully, philosophy is an academic discipline that goes all the way back to ancient Greece. There’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
The strawberry thing has been solved, and LLMs have moved way beyond that, now helping in mathematics and physics. It's easy for the blog author to pick on this, but let's try something different.
It would be a good idea to come up with a question that trips up a modern LLM like GPT with reasoning enabled. I don't think there exists such a question that can fool an LLM but not fool a reasonably smart person. Of course, it has to be in text.
Like others have said, I relate to the title at least. I can look at most technological advances with a very optimistic perspective, BUT as I've aged I've learned that these advancements are often driven by, and increasingly controlled by, people with bad intentions. A quote that resonates with me, which I've seen on social media over the last year or so, is "I don't want AI to make art for me, I want it to do my laundry." It would be a dream for technology to advance to a stage where we all work less and live more fulfilling lives, but when I look at history, the powers that be don't ever let that happen and instead manipulate the technology to keep most of us stuck in the rat race.
One of the hardest hitting George Carlin observations:
“Scratch any cynic and you will find a disappointed idealist.”
>I do feel more and more like the Luddites were right
It seems to me that this take will start to resonate with more and more people
dang the top comments here are about the title being editorialized. Isn’t that against HN guidelines?
The title resonates with me. The post does not.
Cynicism is the mind's way of protecting itself from repeating unproductive loops that can be damaging. Anyone who ever had a waking dream come crashing down more than once likely understands this.
It doesn't necessarily logically follow that you wholesale reject entire categories of technology which have already shown multiple net positive use cases just because some people are using it wastefully or destructively. There will always be someone who does that. The severity of each situation is worth discussing, but I'm not a big fan of the thought-terminating cliché.
I don't think that there are very many people who do reject the technology itself. I think that under different circumstances even the people who seem the most hostile towards it would be mostly fine with the chatbots and image generation technology we call AI.
There are understandably some concerns over how it will impact people's jobs in the future, but that's a societal issue, not a problem with the technology.
I think the problem people have is with how that technology was created by people looking to privately profit from the hard work of others without compensation, how it is massively destructive to the environment, how it is being used to harm others, and how the people controlling it are indifferent to the harms they cause at best and at worst are trying to destroy or undermine our society. These are valid concerns to have, and it's only natural for them to shape people's attitudes towards the technology as it's been implemented and as it's used today.
You have to ask the question of "what exactly is Capitalism?"
By putting capital ahead of everything else of course capitalism gives you technological progress. If we didn't have capitalism we'd still be making crucible steel and the bit would cost more than the horse [1] -- but if you can license the open hearth furnace from Siemens and get a banker to front you to buy 1000 tons of firebricks it is all different, you can afford to make buildings and bridges out of steel.
Similarly, a society with different priorities wouldn't have an arms race between entrepreneurs to spend billions training AI models.
[1] an ancient "sword" often looks like a moderately sized knife to our eyes
> By putting capital ahead of everything else of course capitalism gives you technological progress. If we didn't have capitalism we'd still be making crucible steel and the bit would cost more than the horse [1] -- but if you can license the open hearth furnace from Siemens and get a banker to front you to buy 1000 tons of firebricks it is all different, you can afford to make buildings and bridges out of steel.
The history of how steel got cheap is not really capital-based. It wasn't done by throwing money at the problem, not until the technology worked. The Bessemer Converter was a simple, but touchy beast. The Romans could have built one, but it wouldn't have worked. The metallurgy hadn't been figured out, and the quantitative analysis needed to get repeatability had to be developed. Once it was possible to know what was going into the process, repeatability was possible. Then it took a lot of trial and error, about 10,000 heats. Finally, consistently good steel emerged.
That's when capitalism took over and scaled it up. The technological progress preceded the funding.
The trial and error was fueled by capitalism, trying to get the best product possible.
If it goes into a codified state system, it's regulated, resulting in a lack of motivation to take risks to make it better.
Eh, this is BS also.
What do investors want? Returns on their investment, right?
So, as an investor, do you throw your money blindly at a high-risk endeavor that is likely to fail due to competition, or
do you invest in setting up a limited, rent-seeking market that guarantees income in the future?
Unregulated free-market capitalism always turns into one large bully that dominates everyone else, because one large bully that dominates everyone else is a very effective system. Vote-based governments such as democracies are a means of attempting to ensure that the government is somewhat controlled by the people and not by a king or corporations in the first place.
You can see examples of both.
For instance on Matt Stoller's blog there are endless articles about how private equity is buying up medical practices, veterinary practices, cheerleading leagues, all sorts of low-risk, high-reward rollups. You also see things like the current AI bubble where there is very much an "arms race" where it seems quite likely that investors are willing to risk wasting their money because of the fear of missing out.
Some other kind of social system is going to face the same trade-offs and note that "communism" in the sense of the USSR and China might not be a true alternative. I mean, Stalin's great accomplishment was starving his peasants to promote rapid industrialization (capital formation!) so they could fight off Germany and then challenge the US for world supremacy. People who are impressed with China today are impressed that they're building huge solar farms, factories that build affordable electric cars, have entrepreneurial companies that develop video games and social media sites, etc. That is, they seem to out-capitalize us.
The actual title of the article is "The Left Doesn't Hate Technology, We Hate Being Exploited" and I think anyone can agree with that sentiment regardless of your political leanings.
LLMs are amazing math systems. Give them enough input and they can replicate that input with exponential variations. That in and of itself is amazing.
If they were all trained on public domain material, or if the original authors of that material were compensated for having the corpus of their work tossed into the shredder, then the people who complain about it could easily be described as Luddites afraid of having their livelihood replaced by technology.
But you add in the wholesale theft of the content of almost every major, minor, great, and mediocre work of fiction and non-fiction alike, shredded and used as logical paper mache to wholesale replace the labor of living human beings for nickels on the dollar, and their complaints become much more valid and substantial in my opinion.
It's not that LLMs are bad. It's that the people running them are committing ethical crimes that have not been formally illegalized. We can't use the justice system to properly punish the people who have literally photocopied the soul of modern media for an enormously large quick buck. The frustration and impotence they feel is real and valid and yet another constant wound for them in a life full of frustrating constant wounds, which in itself is a lesser but still substantial portion of what we created society to guard the individual against.
It's a small group of ethically amoral people injuring thousands of innocent people and making money from it, mind thieves selling access to their mimeographs of the human soul for $20/month, thank you very much.
If some parallel of this existed in ancient Egypt or Rome, surely the culprits would be cooked alive in a brazen bull or drawn and quartered in the town square, but in the modern era they are given the power and authority and wealth of kings. Can you not see how that might cause misery?
All that being said, if the 20-year outcome of this misery is that everyone ends up in a GAI-assisted beautiful world of happiness and delight, then surely the debt will be paid, but that is at best a 5% likely outcome.
More likely, the tech will crash and burn, or the financial stability of the world that it needs to last for 20 years will crash and burn, or WWIII will break out and in a matter of days we will go from the modern march towards glory to irradiated survivors struggling for daily survival on a dark poisoned planet.
Either way, the manner in which we are allowing LLMs to be fed, trained, and handled is not one that works to the advantage of all humanity.
"It's that the people running them are committing ethical crimes that have not been formally illegalized."
I think it's even worse than that - they are committing actual crimes that many people were punished severely for in the previous decades, (for example, https://en.wikipedia.org/wiki/Capitol_Records,_Inc._v._Thoma...)
> One thing I do believe in are the words of Karl Marx: from each according to their ability, to each according to their need. The creation of a world where that is possible is not dependent on advanced technology but on human solidarity.
The author doesn't understand Marx but merely parrots leftist talking points. Marx strongly claimed that without changes in technology, feudalism would not have given way to capitalism.
For me, the change from optimism to cynicism happened when I realized the value of tech companies came primarily from being able to find new rules exploits. Not from any of the actual, y'know, technology. Like, sure, Apple invented the iPhone, but Uber found a way to turn your iPhone into a legal weapon aimed directly at your city's local taxi licensing scheme.
That's also why Apple is so worried about their App Store revenue above all else. The legal argument they make is that the 30% take is an IP licensing scheme, but the value of IP is Soviet central planning nonsense. Certainly, if the App Store was just there to take 30% from games, Apple wouldn't be defending it this fiercely[0], and they wouldn't have burned goodwill trying to impose the 30% on Patreon.
Likewise, the value of generative AI is not that the AI is going to give us post-scarcity mental labor or even that AI will augment human productivity. The former isn't happening and the latter is dwarfed by the fact that AI is a rules exploit to access a bunch of copyrighted information that would have otherwise cost lots of money. In that environment, it is unethical to evaluate the technology solely on its own merits. My opinion of your model and your thinly-veiled """research""" efforts will depend heavily on what the model is trained for and on, because that's the only intelligent way to evaluate such a thing.
Did you train on public domain or compensated and consensually provided data? Good for you.
Did you train an art generator on a bunch of artists' deviantART or Dribbble pages? Fuck off, slopmonger.
Did you train on a bunch of Elsevier journals? You know what? Fuck them, they deserve it, now please give me the weights for free.
Humans can smell exploitation a mile away, and the people shitting on AI are doing so because they smell the exploitation.
[0] As a company, Apple has always been mildly hostile to videogames. Like, strictly speaking, operating a videogame platform requires special attention to backwards compatibility that only Microsoft and console vendors have traditionally been willing to offer. The API stability guarantees Apple and Google provide - i.e. "we don't change things for dumb reasons, but when we do change them we expect you to move within X years" are not acceptable to anything other than perpetually updated live service games. The one-and-done model of most videogames is not economically compatible with the moving target that is Apple platforms.
The title resonates a lot with me as well.
I think this hazard extends up and down too; a balance we each have between how we regard possibility & value and whether we default to looking for problems or denial. This becomes a pattern of perspective people adopt. And I worry so much about how doubt & denial pervade. In our hearts and… well… in the comments, everywhere.
I get it and I respect it; it's true: we need to be aware, alert, and on guard. Everything is very complicated. Hazards and bad patterns abound. But especially as techies, finding possibility is enormously valuable to me. Being willing to believe and amplify the maybe, even when it's a challenging situation. I cherish that so much.
Thank you very much, Steve Yegge, for the life-changing experience of Notes from the Mystery Machine Bus. I did not realize, did not have the framing to understand, the base human motivations of tech & building & the comments. I see the world so much differently for grokking the thesis there, and see much more of the outlooks people come from than I did. It has pushed me in life to look for higher possibility & reach, & to avoid closings of the mind, to avoid rejecting, to avoid fear, uncertainty, and doubt. https://gist.github.com/cornchz/3313150
It's one of the most Light Side vs Dark Side noospherically illuminating pieces I've ever read. The article here touches upon those who care, and what they see: it frames the world. Yegge's post, I think, reflects further, back at the techie, on what happens to caring, thoughtful people - Carlin's arc of idealist -> disappointed -> cynic. And to me, Notes was a rallying cry to have fortitude, & to keep a certain purity of hope close, and to work against thought-terminating fear, uncertainty, and doubt.
> I will spare you some misery: you do not have to read this blog. It is fucking stupid as hell, constantly creating ideas to shadowbox with then losing to them.
OK. Closed tab.
They weren't talking about their blog.
Correct. But it applies to their own screed.
So brave.
>Yes it is. It is still exactly as simple as it sounds. If I’m doing math billions of times that doesn’t make the base process somehow more substantial. It’s still math, still a machine designed to predict the next token without being able to reason, meaning that yes, they are just fancy pattern-matching machines.
I find this argument even stranger. Every system can be reduced to its parts and made to sound trivial thereby. My brain is still just neurons firing. The world is just made up of atoms. Humans are just made up of cells.
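To be fair to the reductionists, the "just math" in question is easy to exhibit. Here is a toy sketch of a single next-token step (all numbers made up, assuming only numpy); the dispute is over whether doing this at scale amounts to more than its parts:

    # Toy sketch of one next-token step: a dot product and a softmax over a
    # made-up 4-word vocabulary. All numbers are invented for illustration.
    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]
    hidden = np.array([0.2, -1.3, 0.7])               # model state (made up)
    W = np.random.default_rng(0).normal(size=(3, 4))  # output weights (made up)

    logits = hidden @ W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print(dict(zip(vocab, np.round(probs, 3))))       # distribution over next token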
>here’s actually a few commonly understood theories of existence that are generally accepted even by laypeople, like, “if I ask a sentient being how many Rs there are in the word ‘strawberry’ it should be able to use logic to determine that there are three and not two,” which is a test that generative AI frequently fails.
This shows that the author is not very curious, because it's easy to take the worst examples from the cheapest models and extrapolate. It's like asking a baby some questions and judging humanity's potential on that basis. What's the point of this?
> The questions leftists ask about AI are: does this improve my life? Does this improve my livelihood? So far, the answer for everyone who doesn’t stand to get rich off AI is no.
I'll spill the real tension here for all of you. There are people who really like their comfy jobs and have got attached to their routine. Their status, self-worth, and everything else is attached to it. Anything that disrupts this routine is obviously worth opposing. It's quite easy to see how AI can make a person's life better - I have so many examples. But that's not what "leftists" care about - it's about the security of their jobs.
The rest of the article is pretty low quality and full of errors.
<< This shows that the author is not very curious because its easy to take the worst examples from the cheapest models and extrapolate.
I find this line of reasoning compelling. Curiosity (and trying to break things) will get you a lot of fun. The issue I find is that people don't even try to break things (in interesting ways), but repeat common failure modes more as gospel than as observed experiment. The fun thing is that even the strawberry issue tells us more about the limitations of LLMs than not. In other words, that error is useful...
<< Their status, self worth and everything is attached to it. Anything that disrupts this routine is obviously worth opposing.
There is some of that for sure. Of all days, today I had my manager argue against use of AI for a use case that would affect his buddy's workflow. I let it go, because I am not sure what it actually means, but some resistance is based on 'what we have always done'.
> The fun thing is that even the strawberry issue tells us more about the limitations of llms than not. In other words, that error is useful
That's a fair way to look at it - failure modes tell us something useful about the underlying system. In this case it tells us something about how LLMs work at the token level (see the sketch below).
But if you go a step beyond that, you would realise that this problem has been solved at a _general_ level by the reasoning models. OpenAI's o1 was internally codenamed Strawberry, as far as I remember. This would be a nice discussion to have, but instead we get a shallow dismissal of AI as a technology over a failure mode that has been pretty much solved.
What really has not been solved is long context and continual learning (and world model stuff but I don't find that interesting).
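For the token-level point, a minimal sketch (assuming the tiktoken package; exact splits vary by encoding) of why letter-counting is a known blind spot:

    # Why counting letters is awkward for an LLM: the model consumes token
    # IDs, not characters, so 'r' is never an explicit unit of its input.
    # (Assumes the tiktoken package; token splits vary by encoding.)
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    pieces = [enc.decode([i]) for i in ids]
    print(ids, pieces)  # a few multi-character chunks, not letters

    # Counting 'r's requires decoding back to characters first.
    print(sum(piece.count("r") for piece in pieces))  # 3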
<< What really has not been solved is long context and continual learning (and world model stuff but I don't find that interesting).
I wonder about that. In a sense, the solution seems simple: allow more context. One of the issues, based on the progression of ChatGPT models, was that too much context allowed for much easier jailbreaks, and the fear most corporations have over that makes me question the service. Don't get me wrong, I am not one of those people missing 4o for telling me "I love you". I do miss its now-nerfed capability to draw on all conversations; the working context has been made narrower. For a paid sub, that kind of limitation is annoying.
My point is, I know there are some interesting trade-offs to be made (mostly because I am navigating those on a local inference machine), but with all those data centers one would think providers have enough power to solve that... if they so chose.
>Every system can be reduced to its parts and made to sound trivial thereby
But the trivialization does not come from being reduced to parts; it comes from what parts you end up with.
It is like realizing that the toy that seems to be able to figure out a path around obstacles cannot actually "see", but works by a clever arrangement of gears.
>It is like realizing that the toy that seems to be able to figure out a path around obstacles cannot actually see, but works by a clever arrangement of gears.
in this case can you come up with things that the toy can't do but a toy with eyes could do?
Yes? When you reverse-engineer a machine, it's obviously much easier to draw up an edge case it can't handle.
Can you draw up an edge case for LLMs?