The byte-for-byte identical output requirement is the smartest part of this whole thing. You basically get to run the old and new pipelines side by side and diff them, which means any bug in the translation is immediately caught. Way too many rewrites fail because people try to "improve" things during the port and end up chasing phantom bugs that might be in the old code, the new code, or just behavioral differences.
Also worth noting that "translated from C++" Rust is totally fine as a starting point. You can incrementally make it more idiomatic later once the C++ side is retired. The Rust compiler will still catch whole classes of memory bugs even if the code reads a bit weird. That's the whole point.
I hope, with the velocity unlocked by these tools, that more pure ports will become the norm. Before, migrations could be so costly that “improving” things “while I’m here” helped sell doing the migration at all, especially in business settings. Only to lead to more toil chasing those phantom bugs.
One of the biggest point of rewriting is you know better by then so you create something better.
This is a HUUUGE reason code written in rust tended to be so much better than the original (which was probably written in c++).
Human expertise is the single most important factor and is more important than language.
Copy pasting from one language to another is way worse than complete rewrite with actual idiomatic and useful code.
Best option after proper rewrite is binding. And copy-paste with LLM comes way below these options imo.
If you look at real world, basically all value is created by boring and hated languages. Because people spent so much effort on making those languages useful, and other people spent so much effort learning and using those languages.
Don’t think anyone would prefer to work in a rust codebase that an LLM copy-pasted from c++, compared to working on a c++ codebase written by actual people that they can interact with.
I did several web framework conversions exactly like this. Make sure the http output string matches in the new code exactly as the old code and then eventually deleted the old code with full confidence.
> I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
> The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output. Zero regressions across the board
This is the way. Coding assistants are also really great at porting from one language to the other, especially if you have existing tests.
> Coding assistants are also really great at porting from one language to the other
I had a broken, one-off Perl script, a relic from the days when everyone thought Drupal was the future (long time ago). It was originally designed to migrate a site from an unmaintained internal CMS to Drupal. The CMS was ancient and it only ran in a VM for "look what we built a million years ago" purposes (I even had written permission from my ex-employer to keep that thing).
Just for a laugh, I fed this mess of undeclared dependencies and missing logic into Claude and told it to port the whole thing to Rust. It spent 80 minutes researching Drupal and coding, then "one-shotted" a functional import tool. Not only did it mirror the original design and module structure, but it also implemented several custom plugins based on hints it found in my old code comments.
It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
> 10/10 - would generate tens of thousands of lines of useless code again.
Me too! A couple days ago I gave claude the JMAP spec and asked it to write a JMAP based webmail client in rust from scratch. And it did! It burned a mountain of tokens, and its got more than a few bugs. But now I've got my very own email client, powered by the stalwart email server. The rust code compiles into a 2mb wasm bundle that does everything client side. Its somehow insanely fast. Honestly, its the fastest email client I've ever used by far. Everything feels instant.
I don't need my own email client, but I have one now. So unnecessary, and yet strangely fun.
Its quite a testament to JMAP that you can feed the RFC into claude and get a janky client out. I wonder what semi-useless junk I should get it to make next? I bet it wouldn't do as good a job with IMAP, but maybe if I let it use an IMAP library someone's already made? Might be worth a try!
Same here. I had Claude write me a web based RSS feed reader in Rust. It has some minor glitches I still need to iron out, but it works great, is fast as can be, and is easy on the eyes.
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
This is the biggest bottleneck at this point. I'm looking forward to RAM production increasing, and getting to a point where every high-end PC (workstation & gaming) has a dedicated NPU next to the GPU. You'll be able to do this kind of stuff as much as you want, using any local model you want. Run a ralph loop continuously for 72 hours? No problem.
I bet RAM production will only increase to meet AI demand and there will be none left for you. Or me. Or anyone. Crucial is already going probably forever and I'm sure more will follow...
> a relic from the days when everyone thought Drupal was the future (long time ago).
Drupal is the future. I never really used it properly, but if you fully buy into Drupal, it can do most everything without programming, and you can write plugins (extensions? whatever they're called...) to do the few things that do need programming.
> The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
This is the problem! Fickle halfwits mindlessly buying into whatever "next big thing" is currently fashionable. They shoulda just learned Drupal...
I'm not sure if you're serious or not, but while I never liked Drupal (even used to hate it once upon a time), I always liked the pragmatism surrounding it, reaching to the point of saving php code into the mysql database and executing from there.
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
Pardon me, and, yes, I know we're on HN, but I guess you're... rich? I imagine a single run like this probably burns through tens or hundreds of dollars. For a joke, basically.
I guess I understand why some people really like AI :-)
Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved. I don't want "Fire one prompt and get somewhat working code", I want a UX tailored for long sessions with back and forth, letting me leverage my skills, rather than agents trying to emulate what I already can do myself.
It was said a long time ago about computing in general, but more fitting than ever, "Augmenting the human intellect" is what we should aim for, not replacing the human intellect. IA ("Intelligence amplification") rather than AI.
But I'm guessing the target market for such tools would be much smaller, basically would require you to already understand software development, and know what you want, while all AI companies seem to target non-developers wanting to build software now. It's no-code all over again essentially.
Is it any surprise that the cocaine cartels really want you to buy more cocaine, so they don't focus on its usefulness in pain relief and they refine it and cut it with the cheapest substances that will work rather than medical-grade reagents?
It's surprising that the ones who are producing the cocaine, don't try to find the best use of the cocaine, yes. But then these are VC-fueled businesses, then it all goes out the window, unfortunately. Otherwise they'd actually focus on usefulness, not just "usage" or whatever KPI they go by and share with their investors.
"All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved."
I want less ambitious LLM powered tools than what's being offered. For example, I'd love a tool that can analyse whether comments have been kept up to date with the code they refer to. I don't want it to change anything I just want it to tell me of any problems. A linter basically. I imagine LLMs would be a good foundation for this.
Any terminal tool like Claude Code or Codex (I assume OpenCode too, but I haven't tried) can do it, by using as a prompt pretty much exactly what you wrote, and if it still wants to edit, just don't approve the tool calls.
One problem I've noticed is that both claude models and gpt-codex variants make absolutely deranged tool calls (like `cat <<'EOF' >> foo...EOF` pattern to create a file, or sed to read a couple lines), so it's sometimes hard to see what is it even trying to do.
Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor now can anticipate the next three lines of code I intend to write complete with what values I want to feed to the function I was about to invoke. It all shows up in an autocomplete annotation for me. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice.
Is that what you mean by IA?
For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function for which I am currently building the body. So it offers to complete the rest of the loop condition for me. Not only did it anticipate that I am writing a for loop. It figures out what I want to iterate over, and perhaps even that I want to enumerate the iteration so I have the index and the value. Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
I think this could be a decent interface with one addition, a way to comment on the completion being suggested. You could ask it for a different completion or to extend the completion, do something different, do a specific thing, whatever. An active way to "explain my intent" with the AI (besides leaving comments hinting at what you want) in addition to the passive completion system.
To be honest, I'm not quite sure what the ideal UX looks like yet. The AI assisted autocomplete is too little, but the idea of saying "Build X for purpose Y" is too high-level. Maintaining Markdown documents that the AI implements, also feels too high-level, but letting the human fully drive the implementation probably again too low-level.
I'm guessing the direction I'd prefer, would be tooling built to accept and be driven by humans, but allowed to be extended/corrected by AI, or something like that, maybe.
Maybe a slight contradiction, and very wish-washy/hand-wavey, but I haven't personally quite figured out what I think would be best yet either, what the right level actually is, so probably the best I could say right now :) Sorry!
>Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
This is because, regardless of the current state of things, the endgame which will justify all the upfront investment is autonomous, self-improving, self-maintaining systems.
Yeah, Douglas Engelbart was also a huge believer in that, and I think from various stuff I've read from him and the Augmentation Research Center put me on this track of really agreeing with it.
"Bicycle for the mind", as always when it involves Jobs, sounds more fitting for the masses though, so thanks for sharing that :)
Agents are a "self-driving car for the mind". I don't enjoy or dislike driving, but lots of Americans love to drive. In the future they will lament their driving skills' decline.
I am learning rust myself and one of the things I definetly didn't want to do was let Claude write all the code. But I needed guidance.
I decided to create a Claude skill called "teach". When I enable it, Claude never writes any code. It just gives me hints - progressively more detailed if I am stuck. Then it reviews what I write.
I am finding it very satisfying to work this way - Rust in particular is a language where there's little space to "wing it". Most language features are interlaced with each other and having an LLM supporting me helps a lot. "Let's not declare a type for this right now, we would have to deal with several lifetime issues, let's add a note to the plan and revisit this later".
FYI: Claude has output styles, one of them is called `learning`. Instead of writing the code itself, it will add `TODO(human)` and comments to explain how to. Also adds `Insights` explaining concepts to you in its output.
This link also has a comparison to Skills further down.
I had a bash spaghetti code script that I wrote a few years ago to handle TLS certificates(generate CSRs, bundle up trust chains, match keys to certs, etc). It was fragile, slow, extremely dependent on specific versions of OpenSSL, etc.
I used Claude to rewrite it in golang and extend its features. Now I have tests, automatic AIA chain walking, support for all the DER and JKS formats, and it’s fast. My bash script could spend a few minutes churning through a folder with certs and keys, my golang version does a few thousand in a second.
So I basically built a limited version of OpenSSL with better ergonomics and a lot of magic under the hood because you don’t have to specify input formats at all. I wasn’t constrained by things like backwards compatibility and interface stability, which let me make something much nicer to use.
I even was able to build a wasm version so it can run in the browser. All this from someone that is not a great coder. Don’t worry, I’m explicitly not rolling my own crypto.
It's how most of us are actually going to end up using AI agents for the foreseeable future, perhaps with increasing degrees of abstraction as we move to a teams-of-agents model.
The industry hasn't come up with a simple meme-format term to explain this workflow pattern yet, so people aren't excited about it. But don't worry, we'll surely have a bullshit term for it soon, and managers everywhere will be excited. In the meantime, we can just continue doing work with these new tools.
This is an opportunity to select some stupid words that you would like to hear repeated a million times. The process is like patiently nurturing a well-contained thing, so how about "egg coding"?
I havent quite dealt with "teams of agents" yet outside of Claude Code itself spawning subagents, but I have some ideas as to how to achieve it in a meaningful way without giving a developer 10 claude code licenses, I think the real approach that makes more sense to me is to still have humans in the loop, but have their respective agents sync together and divide work towards one goal, but being able to determine which tasks are left to be worked one and tested. I do think for the foreseeable future you will need human validation for AI.
I'm not sure there's going to be a term, because there's no difference from normal, good quality engineering. You iterate on design, validate results, prioritise execution. It's just that you hand over the writing code part. It's as boring as it gets.
Sure, but they're going to be stuck writing software for yesterday's problems. As our tools become more powerful, we're going to unlock new problems and expectations that would be impossible or impractical to solve with yesterday's tooling.
Thinking people who disagree with you hate you or hate the thing you like is a recipe for disaster. It's much better to not love or hate things like this, and instead just observe and come to useful, outcome-based conclusions.
Look at any HN thread that has a project that uses AI in any way, shape or form. People quickly remark that it is slop, without even reviewing the code. If that's not blind hatred of AI, I don't know what is.
There's a huge distinction between Vibe Coding, and actual software engineers using AI tooling effectively. I vibe code for fun sometimes too, nothing wrong with it, helps me figure out how the model behaves in some instances, and to push the limits of what I understand.
We keep seeing this pattern over and over as well. Despite LLM companies' almost tangible desperation to show that they can replace software engineers, the real value comes from domain experts using the tools to enhance what they're already good at.
I haven’t done a ton of porting. And when I did, it was more like a reimplementation.
> We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output.
Is this a conventional goal? It seems like quite an achievement.
My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.
Porting a project from PHP7 to PHP8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc..
It’s a very good way of getting LLMs to work autonomously for a long time; give it a spec and a complete test suite, shut the door; and ask it to call you when all the tests pass.
I had a script in another language. It was node, took up >200MB of RAM that I wanted back. "claude, rewrite this in rust". 192MB of memory returned to me.
I used to have a bunch of bespoke node express server utilities that I liked to keep running in the background to have access to throughout the day but 40-50mb per process adds up quickly.
I’ve been throwing codex at them and now they’ve all been rewritten in Go - cut down to about 10mb per process.
This is sad to see. Node was originally one of the memory efficient options – it’s roots are solving the c10k problem. Mind sharing what libraries/frameworks you were using?
It was an express server. I don't think c10k is particularly interesting since it mostly just involves having cooperating scheduling. Doesn't really impact flat memory overhead etc. I mean, the binary for node alone, without any libraries etc, is larger than the produced rust binary.
This is the way. This exact workflow is my sweet spot.
In my coding agent std::slop I've optimized for this workflow
https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode... basically the idea is that you are the 'maintainer' and you get bisect safe, git patches that you review (or ask a code reviewer skill or another agent to review). Any change re-rolls the whole stack. Git already supports such a flow and I added it to the agent. A simple markdown skill does not work because it 'forgets'. A 'github' based PR flow felt too externally dependent. This workflow is enforced by a 'patcher' skill, and once that's active, tools do not work unless they follow the enforced flow.
I think a lot of people are going to feel comfortable using agents this way rather than going full blast. I do all my development this way.
- on the Lua integration https://x.com/hsaliak/status/2022911468262350976 (I've since disabled the recursion, not every code file is long and it seems simpler to not do it), but the rest of it is still there
Also /review and /feedback. /feedback (the non code version) opens up the LLM's last response in an editor so you can give line by line comments. Inspired by "not top posting" from mailing lists.
I am having immense success with the latest models developing a personal project that I open sourced and then got burned off by.I can't write anymore by hands but I do enjoy writing prompts with my voice.I have been shipping the best code the project has ever seen.The revolution is real.
Coding assistants are great at pattern matching and pattern following. This is why it’s a good idea to point them at any examples or demos that come with the libraries you want to use, too.
> Coding assistants are also really great at porting from one language to the other
No, they are quite terrible at doing that.
They may (I guess?) produce code that compiles, but they will, almost certainly not produce the appropriate combination of idioms and custom abstractions that may the code "at home" in the target language.
PS - Please fix your blockquote... HN ignores single linebreaks, so you have to either using pairs of them, or possibly go with italicization of the quoted text.
How does he solve the Fruit of the Poison Tree problem? For all he know, his LLMs included a bunch copyrighted or patented code throughout the codebase. How is he going to convince serious people that this port is not just a transformation of an _asset_ into a _liability_?
And you might say that this is a hypothetical problem, one that is not practically occurring. Well, we had a similar problem like this in the recent past, that LLMs are close to _making actual_. When it comes to software patents, they were considered a _hypothetical_ problem (i.e. nobody is going to bother suing you unless you were so big that violating a patent was a near certainty). We were instructed (at pretty much all jobs), to never read patents, so that we cannot incriminate ourselves in the discovery process.
That is going to change soon (within a year). I have friend, whom I won't name, who is working on a project, using LLMs, to discover whether software (open source and proprietary) is likely to be violating a software patent from a patent database. And it is designed to be used, not by programmers, but by law firms, patent attorneys, etc. Even though it is not marketed this way, it is essentially a target acquisition system for use by patent trolls. It is hard for me to tell if this means that we will have to keep ignoring patents for that plausible deniability, or if this means that we will have to become hyper informed about all patents. I suppose, we can just subscribe to the patent-agent, and hope that it guides the other coding agents into avoiding the insertion of potentially infringing code.
(I also have a friend who built a system in 2020 that could translate between C++ and Python, and guarantee equivalent results, and code that looks human-written. This was a very impressive achievement, especially because of how it guarantees the equivalence (it did not require machine-learning nor GPUs, just CPUs and some classic algorithms from the 80s). The friend informs me that they are very disheartened to see that now any toddler with a credit card can mindlessly do something similar, invalidating around a decade of unpublished research. They tell me that it will remain unpublished, and if they could go back in time, they would spend that decade extracting as much surplus from society as possible, by hook or by crook (apparently they had the means and the opportunity, but lacked the motive); we should all learn from my friend's mistake. The only people who succeed are, sadly, perversely, those who brazenly and shamelessly steal -- and make no mistake, the AI companies are built on theft. When millionaires do it, they become billionaires -- when Aaron Swartz does it, he is sentenced to federal prison. I'm not quite a pessimist yet, but it really is saddening to watch my friend go from a passionate optimist to a cold nihilist.).
If there was value (the guarantees) to this tech he buried a bunch of time in, he should be wrapping a natural language prompt around it and selling it.
Not even the top providers are giving any sort of tangible safety or reliability guarantees in the enterprise…
I'm a long-time Rust fan and have no idea how to respond. I think I need a lot more info about this migration, especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I don't know if it's a good fit. Not because they're writing a browser engine in Rust (good), but because Ladybird praises CPP/Swift currently and have no idea what the contributor's stance is.
At least contributing will be a lot nicer from my end, because my PR's to Ladybird have been bad due to having no CPP experience. I had no idea what I was doing.
Yeah that is the thing I struggle with. I am really happy for people falling in love with Rust. It is a amazing language when used for the right use case.
The problem is that had my Rust adventures a few years ago and I am over the hype cycle and able to see both the advantages and disadvantages. Plus being generally older and hopefully wiser I don't tie my identity towards any specific programming language that much.
So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
> So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
This rings very true, and I've actually disadvantaged myself somewhat here. I was involved in projects that made very dubious decisions to rewrite large systems in Rust. This caused me to actively stay away from the language, and stick to C++, investing lots of time in overcoming its shortcomings.
Now years later, I started with Rust in a new project. And I must say, I like the language, I really like the tools, and I like the ecosystem. On some dimension I wish I would have done this sooner (but on the other hand, I think I have a better justification of "why Rust" now).
I'm contemplating diving into Rust for a smallish project, a daemon with super-basic UI intended for Linux, MacOS and Windows. Do you mind expanding on what disadvantages you encountered? Or use-cases that aren't appropriate for Rust?
It's all the stuff that people always mention; they are not wrong. You spend a decent amount of time... conversing with the compiler about lifetimes and, in my experience, even more so about the type system, which is _extremely_ complicated. But you also have to keep in mind that Rust got very popular, very fast, and the tail end of something like that is always a negative reaction. The language is the same, despite the hype roller coaster.
I'm not OP but here's my disadvantages. Rust is the way I earn my living, and also my open source tool of choice. And my background is 25 years of SWE career:
1. build / compile times can be atrocious
2. crates.io inherits the npm philosophy, which means fairly unmoderated space of third party deps and because the Rust stdlib doesn't have a lot in it, extensive third party crate (lib) usage is strong in Rust. As a result most Rust projects have rather sprawling dependency trees, often with duplicated functionality (multiple Base64, rand, sha256, etc crates). I personally have a problem with this (auditability, accountability, security, complexity etc). Others don't.
3. Despite being nominally runtime agnostic, Rust async basically is tokio and it's almost impossible to use another runtime once you factor in third party deps. In many ways Rust is the language that tokio ate. In fact even if you opt out of async entirely, you often end up with tokio as a dependency simply because the community just seems to expect it.
4. Despite advertising itself as a "systems" language, some basic systems programming facilities I expect from my C++ background are still fundamentally not there. In particular, per-container/struct pluggable allocators still isn't a thing and the feature to add it (allocator-api) has sat unmerged in nightly for almost ten years at this point and it doesn't look good for it landing any time soon.
5. If you're working in the embedded space, there's still plenty of devices that will not have a workable Rust toolchain option yet.
I still choose it for new projects instead of its competitors C++ or Zig. But I think it's important to recognize there are compromises like any other tool.
As much as people might insist otherwise, there will in fact come a day when there are "multiple Rusts" by which I mean multiple styles and ways of doing things -- just like C++. For myself, for example... if it were my repository and my team and my hiring, and I was starting from scratch... I'd be extremely careful about third party crate adoption and have an extremely minimalistic approach there. And I don't use tokio. Though my paying jobs do.
It’s a pretty good language and ecosystem. Downside was always the community which every ten seconds someone will start asking to tax everyone to fund Rust Software Foundation or constantly argue that you have to donate a percentage of income to it. Now with LLM I don’t have to talk to community. Huge improvement.
Problem with community is it has experts and groupies mixed in. Ideally experts can talk somewhere and groupies can go somewhere else and talk about funding RSF etc. but now is unnecessary. Expert is available on demand via chatbot.
Its possible to dislike Rust but pragmatically use it. Personally, I do not like Rust, but it is the best available choice for some work and personal stuff.
Personally I think most programming languages have really ... huge problems. And the languages that are more fun to use, ruby or python, are slow. I wonder if we could have a great, effective, elegant language that is also slow. All that try end up with e. g. with a C++ like language.
Honestly I find writing Rust more fun than writing Python. Python just doesn't scale, any non-trivial quantity of it has a habit of turning into spaghetti however hard I try to be disciplined.
Rust, although annoying at a micro scale, does at least enforce some structure on your code, although like Kling I miss OO.
AI has made Rust approachable to a new audience of programmers who didn't want to dedicate their life to learning the ins and outs of the language. Especially for C++ developers who already learned the ins and outs of a hyper complex programming language and don't want to go through that a second time.
Before AI, writing Rust was frustrating experience that involved spending 90% of your time reading documentation and grumbling that "I could do this in 5 minutes in C++"
Now I can write Rust in a way that makes sense to my C++ addled brain and let the AI do the important job of turning it into an idiomatic Rust program that compiles.
Its for the time being is stuck with LLVM, so I can't currently LTO with GCC objects. Its got a lot higher complexity than I perfer in a language. A lot of features I find important seem perma-unstable. Pin is unnessesarily confusing. No easy way to define multiple compilation units for use with linker object selection and attribute constructor. The easy path is downloading binary toolchains with rustup and not using your disto package manager. You can't use unstable features without the bootstrap env var on distro rust toolchains. Cargo leads to dependency bloat. The std/core crates are prebuilt binaries and bloat binary sizes. Bindgen doesn't translate static inline code. The language has a ton of stuff it exposes just to std and not user code. Unsafe code is unergonomic. No easy way to model a cleanup function that needs more args. No support for returns_twice. No ability to use newer stuff like preserve_none. Can't go-to-definition from a bindgen binding to original header file. Macros pollute global namespace. Can't account for platforms where size_t and uintptr_t are different. Traits can only be relied on if marked unsafe. Can't implement something like defer since it holds a borrow. no_std code still can pull in core::fmt. Can't enforce dependencies are also no_std. Panics are considered safe. No way to add non-function fields to dyn vtables. No way to declare code separately from definition. No way to have duplicate type definitions that merge, making interop between different bindgen generated modules annoying.
They've been stuck with swift adoption for a long time, abandoning that was the reasonable decision. That only leaves Rust as the second language to C++
To me, a project's "hype-ness" is the ratio of how much attention it gets over how useful it actually is to users.
As a browser, Ladybird usefulness is currently quite limited for obvious reasons. This is not meant to dismiss its achievements, nor to overlook the fact that building a truly useful browser for everyday users is something few open source teams can accomplish without the backing of a billion dollar company. Still, in its present state, its practical utility remains limited.
I am somewhat concerned about the volatility. All three languages have their merits and each has a stable foundation that has been developed and established over many years. The fact that the programming language has been “changed” within a short period of time, or rather that the direction has been altered, does not inspire confidence in the overall continuity of Ladybird's design decisions.
Not just volatility but also flip-flopping. Rust was explicitly a contender when they decided to go with Swift 18 months ago, and they've already done a 180 on it despite the language being more or less the same as it was.
they tried swift, it didn't work, and they figured rust was the best remaining option. that's not "flip-flopping" (by which I assume you mean random indecisiveness that leads to them changing their mind for no reason)
They made a very pragmatic and sensible decision after reviewing Swift that it wouldn't be suitable for their purposes, so they shifted to the next best alternative. I think they reasoned it very well and made a great decision.
There's been some fun volatility with the author over the years. I told him once that he might want to consider another language to which he replied slightly insultingly. Then he tried to write another language. Then he tried to switch from C++ to Swift, and now to Rust :P
> I think I need a lot more info about this migration
Doesn't sound like it's some Fish-style, full migration to Rust of everything. Seems like they are just moving a couple parts over for evaluation, and then, going forward, making it an official project language that folks are free to use. They note that basically every browser already does that, so this isn't a huge shakeup.
But not the stance on Rust, which is something I'm wondering. I understand there's a core team assigned, but are the ~200 contributors okay with this migration?
Swift adoption had been dead long before the actual announcement. It's likely Rust was being considered long before this two week experiment with LLMs.
it's very odd that someone with no experience would take a big project like this and just jump to another language because he trusts the AI generated code of current models
if it works it works i guess, but it seems mad to me on the surface
Why do you think the creator behind SerenityOS has no experience? I mean it’s not the most popular OS out there but he seems like a capable individual.
in case it's not glaringly obvious from the comment, he has plenty of cpp experience and little rust experience, and that's according to his own comments
the relevant bit here is that he's porting from a language in which he has plenty of experience into another one in which he doesn't, in a large project
that in itself sounds like putting a lot of faith in LLMs but maybe there are factors not mentioned here, which is why i said "on the surface"
Looks like Andreas is a mighty fine engineer, but he's even better entrepreneur. Doesn't matter if intentional or not, but he managed to create and lead a rather visible passion project, attract many contributors and use that project's momentum to detach Ladybird into a separate endeavor with much more concrete financial prospects.
The Jakt -> Swift -> Rust pivots look like the same thing on a different level. The initial change to Swift was surely motivated by potential industry support gain (i believe it was a dubious choice from purely engineering standpoint).
It's awe-inspiring to see how a person can carve a job for himself, leverage hobbyists'/hackers' interest and contributions, attract industry attention and sponsors all while doing the thing he likes (assuming, browsers are his thing) in a controlling position.
Can't fully rationalize the feeling, but all of this makes me slightly wary. Doesn't make it less cool to observe from a side, though.
Eh, he's given an interview where he talks about the Swift decision. He and several maintainers tried building some features in Swift, Rust, and C++, spending about two weeks on each one IIRC. And all the maintainers liked the experience of Swift better. That might have ended up wrong, but it's a pretty reasonable way to make a decision.
Two weeks with Rust and you're still fighting with the compiler. I think the LLM pulled a lot of weight selling the language, it can help smooth over the tricky bits.
idk man it's rare to fight the compiler once you've used Rust for long enough unless you're doing something that's the slightest bit complex with async.
You get to good at schmoozing the compiler you start to create actual logical bugs faster.
Probably, Roy was born agentic as a part of a package which included an disregard for intellectual growth.
This doesn't mean that being agentic cannot be cultivated by regular people.
In 2026, yes, agency matters more than skill/wisdom/intelligence to get VC funds.
But what's the point of agency alone if you are leading such a life?
What gives me hope is that in 2026, skillful people can delegate a lot of their work to LLMs, which gives them time to learn the "agentic" part which is basically marketing and talking with people.
This is less about languages and more about so-called AI. One thing’s for sure: it’s becoming harder and harder to deny that agentic coding is revolutionizing software development.
We’re at the point where a solid test suite and a high-quality agent can achieve impressive results in the hands of a competent coder. Yes, it will still screw up, needs careful human review and steering, etc, but there is a tangible productivity improvement. I don’t think it makes sense putting numbers on it, but for many tasks, it looks like there’s a tangible benefit.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Correct me if I’m wrong since I don’t know these two languages, but like some other languages, doing things the idiomatic way could be dramatically different. Is “cleanup” doing a lot of heavy lifting here? Could that also mean another complete rewrite from scratch?
A startup switching languages after years of development is usually a big red flag. “We are rewriting it in X” posts always preceded “We are shutting down”. I wish them luck though!
A mitigating factor in this case is the C++ and Rust are both multi-paradigm languages. You can quite reasonably represent most C++ patterns in Rust, even if it might not be quite how you'd write Rust in the first place.
In addition, C++ and Rust are very, very similar languages. Almost everything in C++ translates easily, including low level stuff and template shenanigans. There's only a few "oh shit there's no analog" things, like template specialization or virtual inheritance.
Out of all the languages rust takes inspiration from, id rank C++ at the top of the list.
Strong disagree. Rust copied C++ syntax to avoid looking weird to C++ programmers, but the similarity is skin deep. C can be tamed, because it's mostly a subset of Rust, but C++ idioms are a death from papercuts.
OOP, weakly-typed templates, and mutable aliasing create impedance mismatch in almost every C++ API.
Rust doesn't have data inheritance, and what looks like interface inheritance is merely extra requirements in a flat list of traits, so subclassing won't behave like C++ APIs expect. When you translate a class hierarchy to Rust, it needs lots of crutches which make it weird, boilerplatey, and tedious to use. There's no good recipe for OOP hierarchy in Rust, because the idioms are so different. The mismatch feels like writing an ORM.
For some C++ APIs mutability and circular references can be a pain too. Rust works well with DAG data structures and clear mostly-immutable data flow. Objects with some "parent" pointer are common in C++, but Rust sees them as potentially dangling, with shared mutable state, and requires much heavier control of them. It can be done, but it's ugly. Idiomatic Rust designs go to great lengths to avoid it unless necessary, but C++ APIs can have the extra pointers "for convenience".
There's a reason why Rust doesn't have typical GUI libraries - an arbitrary web of references between widgets and event handlers make it ugly in Rust, and that's on top of a view class inheritance.
C++ templates sit very uncomfortably between Rust's macros (duck typed) and Rust's generics (strictly typed at point of declaration).
C++ templates almost always are a mix of types they're attached to and some duck-typing in their expansion.
Rust's generics do not allow any duck typing at all. This makes translation of even a tiny bit clever C++ templates a chore. There's no specialization. No way to deal with SFINAE and such.
Rust macros have flexibility for all the syntax shenanigans (and even similarly bad errors at instantiation time), but macros can't see any types. Idiomatic Rust has very deliberate division between traits (usually much simpler and smaller in scope), macros and proc macros/derives. Splitting C++ templates like that can be a major redesign.
This is the famous trap that Joel on Software talked about in a blog post long time ago.
If you do a rewrite you essentially put everything else on halt while rewriting.
If you keep doing feature dev on the old while another "tiger team" is doing the rewrite port then these two teams are essentially in a race against each other and the port will likely never catch up. (Depending on relative velocities)
Maybe they think that they can to this LLM assisted tools in a big bang approach quickly and then continue from there without spending too much time on it.
I’ve been part of at least 2 successful rewrites. I think that Joel’s post is too often taken as gospel. Sometimes a rewrite is the best way forward.
Moving Ladybird from C++ to a safer more modern language is a real differentiator vs other browsers, and will probably pay dividends. Doing it now is better than doing it once ladybird is fully established.
One last point about rewrites: you can look at any industry disruptor as essentially a team that did a from-scratch rewrite of their competitors and won because the rewrite was better.
> I’ve been part of at least 2 successful rewrites. I think that Joel’s post is too often taken as gospel. Sometimes a rewrite is the best way forward.
HN nerd-snipe alert! OK, you got me good. Can you share some battle stories? I have also been part of rewrites in my career, but my experience is mixed. I'm not here to simple brush away your experience; I want to know more about why you think (in retrospective) it was a good idea and why it was successful.
I can recall recently, listening to an Oxide and Friends podcast where they spent 30 minutes dumping all over "Agile Dev", only to have a very senior, hands-on guy join from AWS and absolutely deliver the smack down. (Personally, I have no positive experiences with Agile Dev, but this guy really stunned the room into silence.) The best part: The Oxide crew immediately recognized the positive experence and backed off the give this guy the space he needed to tell and interesting story. (Hats off the Ox crew for doing that... even if I, personally, have zero love for Agile Dev.)
The good news is as of now ladyboy doesn't have any competition.
Rarely if ever is anything able to compete simply by being "better". As far as USPs go it's just not enough. I reckon for ladyboy the USP (if any) is going to be it being open and NOT chrome (or derivative). So "safe" "modern" language is not going to mean much to the end users.
What's different today really is the LLMs and coding agents. The reason to never rewrite in another language is that it requires you to stop everything else for months or even years. Stopping for two weeks is a lot less likely to kill your project.
He's still right if you don't have good automated testing and you lost most of the original developers (or you don't have other seniors ceva familiar with the domain).
> then these two teams are essentially in a race against each other and the port will likely never catch up
Ladybird appears to have the discipline to have recognized this: “[Rust] is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.”
Firefox is already spying on you with a lot of telemetry, and they have recently amended their terms of use to remove the obligation to "never sell your data" [1]. So perhaps you should reconsider that statement.
A lot of the previous calculus around refactoring and "rewrite the whole thing in a new language" is out the window now that AI is ubiquitous. Especially in situations where there is an extensive test suite.
For a personal thing I had AI write some python libraries to power a cli. It has to do with simple excel file filtering, grouping and aggregating. Nothing too fancy. However since it's backed by a library, I am playing with different UIs for the same thing and it's fun to say.. Do it with streamlit. Oh it can't do this particular thing. Fine do it with shiny. No? OK Dash. It takes only like an hour to prototype with a whole new UI library then I get to say "nah" like a spoiled child. :)
> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I have the opposite opinion. As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?
I use LLM at work in Python. It can, and will, easily use hacks upon hacks to get around things.
Thus I maintain that as code generation is cheap, it is more important to constraint that code generation.
All of this assume that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging the LLM to fix that binary blob for you.
> The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?
As a very practical problem the assembly would consume the context window like no other. And another is having some static guardrails; sometimes LLMs make mistakes, and without guard rails it debugging some of them becomes quite a big workload.
So to keep things efficient, an LLM would first need to create its own programming language. I think we'll actually see some proposals for a token-effective language that has good abstraction abilities for this exact use.
> As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
I think, changes to languages/tooling to accomodate Agentic loops will become important.
> All of this assume that you care even a tiny bit about what is happening in your code. If you don't...
I mean, as software engineers, we most certainly do. I suspect there'll be a new class of "developers" who will have their own way of making software, dealing with bugs, building debugging tools that suit their SDLC etc. LLMs will be to software development what Relativity was to Astrophysics, imo: A fundamental & permanent shift.
I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".
IMO Rust will strike a very strong balance here for LLMs.
I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.
I see the opposite. New languages have more difficulty breaking into popularity due to lack of enough existing codename and ecosystems.
Im already using models to reason about and summarize part of the code from programming language to prose. They are good at that. I can see the process being something like english to machine lang, machine lang to english if the human needs to understand. However amother truism is that compilers are a great guardrail against bad generated code. More deterministic guardrails are good for llms. So yeah im not there yet where i trust binaries to the statistical text generators.
> This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.
I don't like this bit. Wouldn't it be better to decide on a memory-safe language, and then commit to it by writing all new code in Rust, or whatever. This looks like doing double the work.
It doesn't have to all-or-nothing. Firefox has been a mixed C++ and Rust codebase for years now. It isn't like the code is written twice. The C++ components are written in C++, and the Rust components are written in Rust.
I suspect that'll also be what happens here. And if the use of Rust is successful, then over time more components may switch over to Rust. But each component will only ever be in one language at a time.
You can't compare the choices made to evolve a >20 years old codebase with a brand new one. Firefox also as Rust support for XPCOM components, so you can use and write them in Rust without manual FFI (this comes with some baggage of course).
The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
> The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser
That choice was never made. C++ was selrcted as the language of choice for SerenityOS. Since the goal of the OS was to make its founder happy, and C++ was his faviourite language at the time, that seems like an obvious choice. Later, as part of SerenityOS, there was a need for an HTML parser. It was written in C++ as was the rest of the operating system. Then that HTML parser evolved into a full web browser. As part of the SerenityOS project, that browser was written completely in C++. Then that web browser forked off into an independent project...
Ladybird was already a fully functioning browser (not finished of course but complete enough to surf many web pages) when it was forked from SerenityOS to create a stand-alone web browser. The choice at that point was "keep evolving the current C++ code base" or start-over. I doubt the second option was even considered.
They have been evaluating other languages since before the fork. Rust was evaluated and rejectd early-on. They even created their own language at one point.
https://github.com/SerenityOS/jakt
> The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
Perhaps, but in fairness the project was started in 2018 when Rust was still new and unproven.
> You can't compare the choices made to evolve a >20 years old codebase with a brand new one.
I guess not, but I'm pretty optimistic about Ladybird's ability to adopt Rust if they want to. It's a much smaller codebase than Firefox (~650K LoC).
This initial PR is already ~25k LoC, so approximately 4% of the codebase. It took 1 person 2 weeks to complete. If you extrapolate from that, it would take 1 person-year to port the whole thing, which is not so bad considering that you could spread that work out over multiple years and multiple people.
And Firefox has shown that the intermediate state where you have a mix of languages is viable over the long term, even in a much larger and more complex codebase.
Firefox was special in that Mozilla created Rust to build Servo and then backported parts of Servo to Firefox and ultimately stopped building Servo.
Thankfully Servo has picked up speed again and if one wants a Rust based browser engine what better choice than the one the language was built to enable?
But I'm also cheering along Ladybird's progress. There's definitely room for more than one project in the space. And IMO the more browsers being built in Rust in the better.
> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
I feel like you just know it’s doomed. What this is saying is “I didn’t want to and cannot review the code it generated” asking models to find mistakes never works for me. It’ll find obvious patterns, a tendency towards security mistakes, but not deep logical errors.
Somehow they did use this as part of their approach to get to 0 regressions across 65k tests + no performance regressions though + identical output for AST and bytecode though. How much manual review was part of the hundreds of rounds of prompt steering is not stated, but I don't think it's possible to say it couldn't find any deep logical errors along the way and still achieve those results.
The part that concerns me is whether this part will actually come in time or not:
> The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Of course, it wouldn't be the first time Andreas delivered more than I expected :).
That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth then finding complicated issues isn’t being tested imo.
Your argument is just as applicable on human code reviewers. Obviously having others review the code will catch issues you would never have thought of. This includes agents as well.
They’re not equal. Humans are capable of actually understanding and looking ahead at consequences of decisions made, whereas an LLM can’t. One is a review, one is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning)
I keep hearing people say "but as humans we actually understand". What evidence do you have of the material differences in what understanding an LLM has, and what version a human has? What processes do we fundamentally do, that an LLM does not or cannot do? What here is the definition of "understanding", that, presumably an LLM does not currently do, that humans do?
Yeah, I lost all interest in the ladybird project now that it is AI slop.
No one wants to work with this generated, ugly, unidiomatic ball of Rust. Other than other people using AI. So you dependency AI grows and grows. It is a vicious trap.
Considering David Tolnay's indefensible treatment of JeanHeyd Meneide, I'm inclined to agree with Kling on the toxicity of the Rust community. Evangelical fervor does not excuse douchebaggery.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
I wonder what kind of tech debt this brings and if the trade off will be worth whatever problems they were having with C++.
the tech debt risk in this case is mostly in the cleanup phase, not the port itself. non-idiomatic Rust that came from C++ tends to have a lot of raw pointer patterns and manual lifetime management that works fine but hides implicit ownership assumptions. when you go to make it idiomatic, the borrow checker forces those assumptions to be explicit, and sometimes you discover the original structure doesn't compose well with Rust's aliasing rules. servo went through this. the upside is you catch real latent bugs in the process.
It depends. I migrated a 20k loc c++ project to rust via AI recently and I would say it did so pretty well. There is no unsafe or raw pointer usage. It did add Rc<RefCell in a bunch of places to make things happy, but that ultimately caught some real bugs in the original code. Refactoring it to avoid shared memory (and the need for Rc<RefCell<>> wasn't very difficult, but keeping the code structure identical at first allowed us to continue to work on the c++ code while the rust port was ongoing and keep the rust port aligned without needing to implement the features twice.
I would say modern c++ written by someone already familiar with rust will probably be structured in a way that's extremely easy to port because you end up modeling the borrow checker in your brain.
Yes, I just translated a Rust library from non-idiomatic and unsafe Rust to idiomatic and safe Rust and it was as much work as if I had rewritten it from scratch.
yeah, matches what I'd expect. when you're porting idiomatic -> idiomatic within a language, the cleanup is mechanical. crossing from C++ to Rust means the borrow checker surfaces assumptions that were latent in the original code, so you end up redesigning rather than translating. that's not a complaint about Rust -- it's actually doing its job.
Andreas Kling mentioned many times they would prefer a safer language, specifically for their js runtime garbage collector.
But since the team were already comfortable with cpp that was the choice, but they were open and active seeking alternatives.
The problem was strictly how cpp is perceived as an unsafe language, and this problem rust does solve!
Not being sarcastic, this truly looks like a mature take. Like, we don't know if moving to rust would improve quality or prevent vulnerabilities, here's our best effort to find out and ignore if the claim has merits for now. If the claim maintains, well, you're better prepared, if it doesn't, but the code holds similar qualities...what is the downside?
> We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited.
Why was there ever any expectation for Swift having good platform support outside Apple? This should have been (and was to me) already obvious when they originally announced moving to Swift.
Apple actually did put some resources behind it, the toolchain is reasonably pleasant to use outside macOS and Xcode, they have people building an ecosystem in the Swift Server Workgroup, and arguably some recent language design decisions don't seem to be purely motivated by desktop/mobile usage.
But in the end I can't help but feel Swift has become an absolute beast of a multi-paradigm language with even worse compile times than Rust or C++ for dubious ergonomics gains.
A language is more than a compiler. All of the Swift frameworks you would need to do anything actually useful or interesting in the language are macOS-only. You cannot develop in Swift for Windows/Linux/Android the way that you develop in Swift for macOS/iOS. That matters.
Have you actually used .NET on Linux/macOS? I have (both at home and work) and there isn't anything that made me think it was neglected on those platforms. Everything just works™
This is really YOLOing as the original author doesn't know Rust well so what happens if they hit some complex production issue LLM aren't aware of? Hiring an expensive consultant to fix that until the next LLM iteration?
I'm as anti LLM use as they come, but this appears to be migrating libraries from already funcitoning C++ code. In the case of your hypothetical I suspect the course of action will be "shelve this library port until someone with domain expertise and Rust experience can look at it". Its not like he chucked the whole codebase at the GenaI gods and said "Port it to Rust!".
The "human-directed, not autonomous" framing is the part people keep glossing over. Claude Code here is a compiler-level translation tool, you are still the architect deciding what gets ported and in what order.
The real question is what this does to migrations that never happened because 18 months of rewrite did not pencil out. A 2-week port fundamentally changes that calculus.
Someone should try this with the “Ralph Wiggum loop” approach. I suspect it would fail spectacularly, but it would be fascinating to watch.
Personally, I can’t get meaningful results unless I use the tool in a true pair-programming mode—watching it reason, plan, and execute step by step. The ability to clearly articulate exactly what you want, and how you want it done, is becoming a rare skill.
Given the quality of their existing test suite I'm confident the Ralph Wiggum loop would produce a working implementation... but the code quality wouldn't be anywhere near what they got from two weeks of hands-on expert prompting.
All the best to them, however this feels like yah shaving instead of focusing into delivering a browser than can become an alternative to Safari/Chrome duopoly.
Part of browser experience is safety and migrating their JS library to Rust is probably one of the best ways to gain advantage over any other existing engine out there in this aspect. Strategically this may and likely will attract 3rd party users of the JS library itself, thus helping its adoption and further improving it.
They're not porting the browser itself to Rust, for the record.
Javascript is a self contained sub system, if the public API stays the same, then they can rewrite as much as they want, also I suppose this engine now will attract new contributors that will want to contribute to Ladybird just because they enjoy working with Rust.
Don't forget that the Rust ecosystem around browsers is growing, Firefox already uses it for their CSS engine[0], AFAIK Chrome JPEG XL implementation is written in Rust.
So I don't see how this could be seen as a negative move, I don't think sharing libraries in C++ is as easy as in Rust.
Not only is Firefox using it for their CSS engine but Mozilla created Rust to build Servo and sadly only the CSS engine and maybe some other parts is what they kept around when they offloaded Rust.
“the Rust ecosystem around browsers is growing” – in the beginning pretty much 100% of the ecosystem around Rust was browser oriented
Thankfully Servo is picking up speed again and is a great project to help support with some donations etc: https://servo.org/
Agreed. They said they ruled out rust in 2024, I believe the article they published was near the end of 2024 because I remember reading it fairly recently.
Seems like a lot of language switches in a short time frame. That'd make me super nervous working on such a project. There will be rough parts for every language and deciding seemingly on whims that 1 isn't good enough will burn a lot of time and resources.
Woah, this is a wild claim. @dang: Is this a thing? I don't believe it. I, myself, have submitted many articles and never once did I see some auto-magical "title shortening algorithm" at work!
A LLM-assisted codebase migration is perhaps one of the better use cases for them, and interestingly the author advocates for a hands-on approach.
Adding the "with help from AI" almost always devolves the discussion from that to "developers must adopt AI or else!" on the one hand and "society is being destroyed by slop!" on the other, so as long as that's not happening I'm not complaining about the editorialized title.
I think we've come to the point when it should be the opposite for any new code, something in line of: "done without AI". Bein an old fart working in software development I have many friends working as very senior developers. Every single one of them including yours truly uses AI.
I use AI more and more. Goes like create me classes A,B,C with such and such descriptive names, take this state machine / flowchart description to understand the flow and use this particular sets of helpers declared in modules XYZ
I then test the code and then go over and look at any un-optimal and other patterns I prefer not to have and asking to change those.
After couple of iterations code usually shines. I also cross check final results against various LLMs just in case
Very happy to see this. Ladybird's engineering generally seems excellent, but the decision to use Swift always seemed pretty "out there". Rust makes a whole lot more sense.
Cool, that seems like a rational choice. I hope this will help Ladybird and Servo benefit from each other in the long run, and will make both of them more likely to succeed
You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts if you want to.
Firefox is a mixed C++ / Rust codebase with a relatively close coupling between Rust and C++ components in places (layout/dom/script are in C++ while style is in Rust, and a mix of WebRender (Rust) and Skia (C++) are used for rendering with C++ glue code)
My understanding from a brief read of the Swift issue is that they kept running into bugs in the Swift compiler which, in practice, prevented them from doing the things that they ought to be do in theory. This went on for long enough, that they got fed up and abandoned Swift.
The Rust compiler is incredibly solid (across all target platforms), and while it's C/C++ interop is relatively simplistic, what does exist is extensively battle tested in production codebases.
I’m curious what issues people were running into with Swift’s built in C++ interop? I haven’t had the chance to use it myself, but it seemed reasonable to me at a surface level.
Yeah, that part doesn't make much sense to me. IMO, Swift has reasonably good C++ interop[1] and Swift's C interop has also significantly improved[2] since Swift 6.2.
> albeit you have to struggle sending `std` types back and forth a bit
Firefox solves this partly by not using `std` types.
For example, https://github.com/mozilla/thin-vec exists in large part because it's compatible with Firefox's existing C++ Vec/Array implementation (with the bonus that it's only 8 bytes on the stack compared to 24 for the std Vec).
Porting the JS parser to Rust and adopting Rust in other parts of the engine while continuing to use C++ heavily is unlikely to make Ladybird meaningfully more secure.
Attackers are surprisingly resilient to partial security.
Servo has a distinct design goal that sets it apart from its predecessor within Mozilla and has already had offsprings that has made its way directly into Firefox.
Its purpose is not to reinvent everything. It’s not a hype project.
Unfortunately licence incompatibility may prevent that. Ladybird is BSD and Servo is MPL. This is also why there is only limited collaboration between Servo and the Rust GUI ecosystem.
Based on the origins of Rust as a tool for writing the really thorny, defensive parsers of potentially actively hostile code for firefox, I have to imagine that another web browser is the most at-home place the language could ever be.
Is there any discussion on why D or even Ada was not considered? These languages have been around for long time. If they were willing to use llm to break the initial barrier to entry for a new language, then a case can be made for these languages as well.
They already made the mistake picking a niche language twice (first their own language, then Swift as a cross-platform language), why would you want them to make it a third time?
What kind of response is this? I was asking if there was any technical evaluation on other languages. And D and Ada are not niche. They have been battle tested in critical software.
Swift had/has some problems in the language itself. It's not because of the niche nature of Swift that was the problem iirc.
I don't think this is the right response because certainly a meaningful discussion could've definitely taken place and given how they were already open to other languages which was the reason why they picked Swift in the first place.
I remember Andreas video where he talked about how people used rust in his codebase and they were so happy but later it became very difficult whereas they found with swift that it became easier to manage. That was the reason why they picked swift that time.
Certainly their goal wasn't to pick a popular language (because if that's what you want use python or JS) but rather a language that was relevant to what they were building.
So if D and Ada were relevant or not, that's the main point of discussion imo.
I've dabbled a bit in Ada, but it wouldn't be my choice either. It's still susceptible to memory errors. It's better behaved than C, but you still have to be careful. And the tooling isn't great, and there isn't a lot in terms of libraries. I think Ladybird also has aspirations to build their own OS, so portability could also be an issue.
Not the case with spark. But I understand it requires writing lot of things from scratch for browsers. But I don’t think portability will be an issue with Ada, it is cross platform.
However, this is where d shines. D has a mature ecosystem. Offers first class cpp abi and provides memory safety guarantees, which the blog mentioned as a primary factor. And d is similar to cpp, low barrier for cpp devs to pick up.
Unfortunately a really good question gets downvoted instead of causing a relevant discussion, as so often in recent HN. It would be really interesting to know, why Ada would not be considered for such a large project, especially now when the code is translated with LLMs, as you say. I was never really comfortable that they were going for the most recent C++ versions, since there are still too many differences and unimplemented parts which make cross-compiler compatibilty an issue. I hope that with Rust at least cross-compilation is possible, so that the resulting executable also runs on older systems, where the toolchain is not available.
I personally think that people might've framed it as use Ada/D over rust comment which might have the HN people who prefer rust to respond with downvotes.
I agree that, this might be wrong behaviour and I don't think its any fault of rust itself which itself could be a blanket statement imo. There's nuance in both sides of discussions.
Coming to the main point, I feel like the real reason could be that rust is this sort of equilibra that the world has reached for, especially security related projects. Whether good or bad, this means that using rust would definitely lead to more contributor resources and the zeal of rustaceans can definitely be used as well and also third party libraries developed in rust although that itself is becoming a problem nowadays from what I hear from people in here who use rust sometimes (ie. too many dependencies)
Rust does seem to be good enough for this use case. I think the question could be on what D/Ada (Might I also add Nim/V/Odin) will add further to the project but I honestly agree that a fruitful discussion b/w other languages would've been certainly beneficial to the project (imo) and at the very least would've been very interesting to read personally
> which might have the HN people who prefer rust to respond with downvotes.
This completely misses the purpose of the downvoting feature, which is not surprising, since upvoting seems no longer to indicate quality or truth of the comment neither.
> rust is this sort of equilibra that the world has reached for, especially security related projects
Which is amazing, since Rust only covers a fraction of safety/security concerns covered by Ada/SPARK. Of course this language has some legacy issues (e.g. the physical separation of interface and body in two separate files; we have better solutions today), but it is still in development and more robust than the C/C++ (and likely Rust) toolchain. And in the age of LLMs, robustness and features of a toolchain should matter more than the language syntax/semantics.
> Rust does seem to be good enough for this use case.
If you compare it to the very recend C++ implementations they are using, I tend to agree. But if you compare it to a much more mature technology like e.g. Ada, I have my doubts.
> We’ve been searching for a memory-safe programming language to replace C++ in
Ladybird for a while now.
The article fails to explain why. What problems (besides the obvious) have been found in which "memory-safe languages" can help. Do these problems actually explain the need of adding complexity to a project like this by adding another language?
I guess AI will be involved which, at this early point in the project would make ladybird a lot less interested (at least to me).
Browsers are incredibly security-sensitive projects. Downloading untrusted code from the internet and executing is part of their intended functionality! If memory safety is needed anywhere it's in browsers.
I know he doesn't make live coding videos anymore, but it'd be cool if Andreas showed off how this worked a little more. I'm curious how much he had to fix by hand (vs reprompting or spinning a different model or whatever).
What happened? It’s been awhile since I checked in but it seems he doesn’t work on serenity and doesn’t live stream anymore (and is now into lifting weights)
He got his serenity and at the same time ladybird browser started getting somewhere, so he separated it out and went full on with it. From what I know, he was working on browsers before at Apple, so it was like he got ready to return
My intuition is that they will convert to zig again when it stables. If it is possible to do it using LLM in
2 weeks for rust, then it would be the same for zig, too.
While rust is nice on paper, writting complex software in it is mentally consuming. You can not do it for a long time.
If they do, it could be because safety is a gradient and one variable among many in software development, albeit a very important one when it comes to browsers.
If it is this easy, surely the trend is Rust output being an intermediate pass of the LLM super compiler. A security subset if you will (like other kinds of optimization), it will move from Rust specs to some deeper level of analysis and output the final executable. Some brave souls will read the intermediate Rust output (just like people used to read the assembler output from compilers) but the LLM super compiler will just translate a detailed English like spec into final executables.
I have my doubts it'll ever be "finished". Servo gives strong vibes of a project that will avoid performance hacks, because they're not nice/state of the art code. I have no evidence, it's just the energy I've picked up from it
Any word on how much more memory safe the implementation is? If passing a previous test suite is the criteria for success, what has changed, really? Are there previous memory safety tests that went from failing to passing?
I am very interested to know if this time and energy spent actually improved memory safety.
Other engineers facing the same challenges want to know!
If the previous impl had known memory safety issues I'd imagine they'd fix them as a matter of priority. It's hard to test for memory safety issues you don't know about.
On the rust side, the question is how much `unsafe` they used (I would hope none at all, although they don't specify).
It is entirely possible a Rust port could have caught previously unknown memory safety issues. Furthermore, a Rust port that looks and feels like C++ may be peppered with unsafe calls to the point where the ROI on the port is greatly reduced.
I am not trying to dunk on the effort; quite the contrary. I am eager to hear more about the goals it originally set out to achieve.
Interesting in context of that some time ago Andreas said that they failed on porting TypeScript compiler from TypeScript itself to Go lang by using LLMs and they went with manual port https://youtu.be/uMqx8NNT4xY?si=Vf1PyNkg3t6tmiPp&t=1423
I wonder what is gained by this port though, if the C++ codebase already employed modern approaches to memory management. It's entirely possible that the Rust version will perform worse too as compilers are less mature.
Maybe, but it's certainly possible to write memory safe code in C++. It may be more or less difficult, but it isn't typically the ONLY objective of a project. C++ has other advantages too, such as seamless integration with C APIs and codebases, idiomatic OOP, and very mature compilers and libraries.
That's a pivot, iirc they wanted to swift (I'm very glad they didn't do that). It's cool to see something like claude be useful for large scale projects like that
> Porting LibJS
> Our first target was LibJS , Ladybird’s JavaScript engine. The lexer, parser, AST, and bytecode generator are relatively self-contained and have extensive test coverage through test262, which made them a natural starting point.
> Results
> The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand.
I'm not here to troll the LLM-as-programmer haters, but Ladybird (and Rust!) is loved by HN, and this is a big win.
How long until Ladybird begins to impact market dominance for Chrome and Firefox? My guess: Two years.
Note that Firefox doesn't have market dominance. It is under 5% market share. That said I imagine Firefox users to be the most likely to make the jump. However, the web is a minefield of corner cases. It's hard to believe it will be enough to make the browser largely useful enough to be a daily driver.
Why do you think Firefox users would be most likely to make the jump? The main reason I see people give for supporting Ladybird is challenging the dominance of the incumbents. That's not really a great reason to switch from Firefox because, as you note, it doesn't have any dominance. And there's also an argument that splitting the non-Chrome market into two only increases Chrome's dominance.
From what I can tell from HN, Brave seems to be popular with those users who hate Google but for whatever reason hate Mozilla even more, and I suspect those will be the most likely users to switch.
I don't get it, and I don't have a dog in the C/C++ vs. Rust race.
Ladybird has ~1200 contributors with a predominance of C++ contributions, followed by HTML, and with "other" lying at 0.5%.
That's a lot of people contributing.
How many of them will be less willing to contribute in the future, and less productive when they do if a sizable portion is in Rust?
Maybe there'll be more contributions and maybe there'll be less. I don't know.
If you've managed to develop a community of 1200 developers who are willing to advance the project why upset the applecart?
I must admit to being somewhat confused by the article's claim that Rust and C++ emit bytecode. To my knowledge, neither do (unless they're both targeting WASM?) - is there something I'm missing or is the author just using the wrong words?
EDIT: bramhaag pointed out the error of my ways. Thanks bramhaag!
By 'Rust compiler' and 'C++ compiler', they refer to the LibJS bytecode generator implemented in those languages. This is about the generated JS bytecode.
This is sort of hilarious if you think about it. The Firefox browser is completely written in Rust. Now Ladybird is a "human-directed ai" Rust browser. Makes you wonder how much of the code the two browsers will share going forward given llm assisted autocompletes will pull from the same Rust Browser dataset.
Probably not much: the requirement is exact equivalence of program inputs to outputs, and as such the agents are performing very mechanical translation from the existing C++ code to Rust. Their prompts aren't "implement X browser component in rust", they're "translate this C++ code to Rust, with these extra details that you can't glean from the code itself."
Only a small portion of Firefox is written in Rust. Apparently some of the most performant and least buggy parts are those in Rust, but again, only parts like the CSS engine.
This will be another bad decision just like with Swift. From what I heard, Rust is notoriously bad at letting people define their own structure and instead beats you up until you satisfy the borrow checker. I think it'll make development slow and unpleasant. There are people out there who enjoy that, but it's not a fit for when you need to deliver a really huge codebase in reasonable time. I remember Andreas mentioning he just wanted something like C++, but with a GC and D would be absolutely perfect for this job.
Maybe, but will they have to fight with borrow checker for doing some other than (the very OOP) DOM components? They'll obviously use both for a long time in the future, so more functional places can get Rust, while more OOP places can benefit from C++
And? Does it work? Because it does. It's a lot closer to C++ and you literally need like a week to start being productive and it's insanely flexible as a language. Nobody uses Swift also, but the additional problem with Swift was that it's entirely Apple-centric.
Probably not unless using Rust present some particular challenge for this type of project. But having eaten this proverbial apple they would probably use AI more and more assuming they have a budget and in this case being less rich than C++ might not mean much for productivity
I wouldn't mind if one result of this was a writeup on what patterns/antipatterns are there when converting code and concepts that used to be very aligned with C++-style OOP, deep inheritance and all that jazz, to what feels natural in Rust, and how you can rephrase those concepts without loss in the substance of what you need to do.
I guess it's a long way off, since the LLM translation would need to be refactored into natural Rust first. But the value of it would be in that it's a real world project, and not a hypothetical "well, you could probably just...".
Sigh agents keep killing all the passion I have for programming. It can do things way faster than me, and better than me in some cases. Soon it will do everything better and faster than me.
> Soon it will do everything better and faster than me
There is no evidence of that coming from this post. The work was highly directly by an extremely skilled engineer. As he points out, it was small chunks. What chunks and in what order were his decision.
Is AI re-writing those chunks much faster than he could. Yes. Very much so. Is it doing it better? Probably not. So, it is mostly just faster when you are very specific about what it should do. In other words, it is not a competitor. It is a tool.
And the entire thing was constrained by a massive test suite. AI did not write that. It does not even understand why those tests are the way they are.
This is a long way from "AI, write me a JavaScript engine".
Id put it as a example of a carpenter preparing their material with a lathe and circular saw vs one working with a handsaw and chisel.
Both will get a skilled craftsman to the point where thie output is a quality piece of work. Using the autotoools to prepare the inputs allows velocity and consistency.
Main issue is the hype and skiddies who would say - feed this tree into a machine and get a cabinet.Producing non-detrministic outputs with the operator being unable to adjust requirements on the fly or even stray from patterns/designs that havent been trained yet.
The tools have limitiations and the operators as well , and the hype does adisservice to what would be establishing reasonable patterns of usage and best practices.
Is a migration from language X to Y or refactoring from pattern A to B really the kind of task that makes you look forward to your day when you wake up?
Personally my sweet spot for LLM usage is for such tasks, and they can do a much better job unpacking the prompt and getting it done quickly.
In fact, there's a few codebases at my workplace that are quite shit, and I'm looking forward to make my proposal to refactor these. Prior to LLMs, I'm sure I'd have been laughed off, but now it's much more practical to achieve this.
Right. I had a 100% manual hobby project that did a load of parametric CAD in Python. The problem with sharing this was either actively running a server, trying to port the stack to emscripten including OCCT, or rewriting in JS, something I am only vaguely experienced in.
In ~5 hours of prompting, coding, testing, tweaking, the STL outputs are 1:1 (having the original is essential for this) and it runs entirely locally once the browser has loaded.
I don’t pretend that I’m a frontend developer now but it’s the sort of thing that would have taken me at least days, probably longer if I took the time to learn how each piece worked/fitted together.
It's the opposite for me, most of the time it's first rough pass it generates is awful and if you don't have good taste and a solid background of years of experience programming you won't notice it and I keep having to tell it to steer into better design choices.
I'm not sure 25,000 lines translated in 2 weeks is "fast", for a naive translation between languages as similar as C++ and Rust (and Ladybird does modern RAII smart-pointer-y C++ which is VERY similar to Rust). You should easily be able to do 2000+ lines/day chunks.
Yeah, it also a lot that the person doing the translation is the lead developer of the project who is very familiar with the original version.
I imagine LLMs do help quite a bit for these language translation tasks though. Language translation (both human and programming) is one of the things they seem to be best at.
Agreed, however, I'm quite sure 25,000 lines translated in "multiple months" is very "slow", for a naive translation between languages as similar as C++ and Rust.
Despite the many claims to the contrary, agents can't do anything better than a human yet. Faster, certainly, but the quality is always poor compared to what a human would produce. You aren't obsolete yet, brother.
Dunno, that probably doesn't hold for webapps with backend as they are typically complete garbage and LLMs (even local ones) would give you about the same result but in 1 hour.
Look into platforms like Workato, Boomi, or similar iPaaS products, unfortunely it feels like those of us that like coding have to be happy turning into architect roles, with AI as brick layers.
It automates both the fun and the boring parts equally well. Now the job is like opening a box of legos and they fall out and then auto-assemble themselves into whatever we want..
Rather like opening a box of legos and reading them the instruction sheet while they auto assemble based on what they understood. Then you re-read and clarify where the assembly went wrong. Many times, if needed.
I am unsure if I can rationally justify saying this, but I am left with disappointment and unease. Comparable to when a series I care about changes showrunner and jumps the shark.
Hate to tell you this, but it's cults all the way down. Plato understood this, and his disdain for caves and wall-shadows, is really a disdain for cults. The thing is, over the last 2300 years, we have gotten really good at making our caves super cozy -- much cozier than the "real world" could ever be. Our wall-shadows have become theme parks, broadway theaters, VR headsets, youtube videos, books, entire cities even. In Plato's day, it made sense to question the cave, to be suspicious of it. But today, the cave is not just at parity with reality, it is superior to it (similar to how a video game is a precisely engineered experience, one that never has too little signal and never has too much noise, the perfect balance to keep you interested and engaged).
I'm no mind reader, and certainly no anthropologist, but I suspect that what separates humans from other (non extinct) animals, is that we compulsively seek caves that we can decorate with moving shadows and static symbols. We even found a series of prime numbers (sequences of dots, ". ... ..... .......") in a cave from the _ice age_. Mathematics before writing. We seek to project what we see with our mind's eye into the world itself, thereby making it communicable, shareable. Ever tell someone you had a dream, and they believed you? You just planted the seed for a cult, a shared cave. Even though you cannot photograph the dream, or offer any evidence that you can dream at all.
The industrial and scientific revolutions have distanced our consciousness from this idea, even as they enabled ever more perfect caves to manifest. Our vocabulary has become corrupted and unclear. We started using words like "reality", and "literally", and "truth", when we mean the exact opposite.
The conspiracy theorists and cultists, are just people who wandered into a new cave, with a different kind of fire, and differently curved walls, and they want to tell people from their old cave that they have found a way out of the cave into reality -- they do not yet realize (or do not want to accept), that they live in a network of caves, a network of different things in the same category.
During the early 2020s, we did a lot of talking about the disappearance of "consensus reality". This is scientific terminology mapped over the idea of caves and cults. You can tell, because the phrase is an oxymoron. It is not reality, if it requires consensus. It is fantasy, it is fiction, it is a dream. The cave has indeed become so widespread that we even _call_ it reality.
If you speak language, and read words, you are participating in a cult (we even call caves that had a kind of altar in the center a cult -- in Eurasia, there was a cave-cult called _the cult of the bear_, which had a bear skull placed in its center during the last ice age, and I would not be surprised if people spoke to it, with the help of hallucinogens). The only question is whether the cult is nourishing you or cannibalizing you.
To the person you are responding to (user ocd): your cave (ladybird, your hypothetical tv-series), no longer nourishes you like it once did. Maybe find a new cave, build a fire in it. Unlike a television series, you can fork a code base. You make it into the perfect cave, just for you. And if another person likes this cave, chooses to sit by the fire with you, well, now you have a cult.
I feel similar about the potential of this technique and have heard this from other C++ developers too.
Rust syntax is a PITA and investing a lot of effort in the language doesn’t seem worth the trouble for an experienced C++ developer, but with AI learning, porting and maintenance all become more accessible.
It’s possible to integrate Rust in an existing codebase or write subparts of larger C++ projects in Rust where it makes sense.
I was recently involved in an AI porting effort, but using different languages and the results were fine. Validating and reviewing the code took longer than writing it.
Some time ago I was perma-banned from the Ladybird github repository. One can say it is warranted, or not (people have their own opinion; I completely disagree with their decision). Now that this has happened, I can speak more freely about Ladybird.
Naturally this will be somewhat critical, but I need to first put things into context. I do believe that we really need an alternative to Google dominating our digital life. So I don't object that we need alternatives; whether Ladybird will be an alternative, or not, will be shown in the future. Most assuredly we need competition as otherwise the Google empire moves forward like Darth Vader and the empire (but nowhere near as cool as that; I find Google boring and lame. Even skynet in Terminator was more fun than Google. Google just annoys the heck out of me, but back to the topic of browsers).
So with that out of the way ... Ladybird is kind of ... erratic.
Some time ago, perhaps two months or three, Andreas suddenly announced "Swift WILL BE THE FOREVER FUTURE! C++ sucks!!!". People back then were scratching heads. It was not clear why Swift is suddenly our saviour.
Ok, now we learn - "wait ... swift is NOT the future, but RUST is!!!". Ok ... more head-scratching. We are having a deja-vu moment here... but it gets stranger:
"We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story."
and then:
"I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation"
So ... the expertise will be with regards to ... relying on AI to autogenerate code in ... Rust.
I am not saying this is a 100% fail strategy, mind you. AI can generate useful code, we could see that. But I am beginning to have more and more doubts about the Ladybird project. Add to this the breakage of URLs that are used by thousands or million people world-wide (see the issues reported on the github tracker); or also the fact that, once you scale-up and more and more people use ladybird, will you be able to keep up with issue trackers? Will you ban more people?
In a way it is actually good that I am no longer allowed to make comments on their repository because I can now be a lot more critical and ask questions that the ladybird team will have to evaluate. Will ladybird blend? Will it succeed? Will it fail? Yes, it is way too early to make an evaluation, so we should evaluate in some months or so, perhaps end of this year. But I am pretty certain the criticism will increase, at the least the moment they decide to leave beta (or alpha or whatever model they use; they claimd they want a first working version in this year for Linux users, let's see whether that works).
i rememebr seeing interviews saying rust is not suited for this project because of recursion and dom tree. how they tested multiple languages and settled on swift. then they abandon swift and now they shift towards rust.
this entire project starts to look like "how am i feeling today?" rather than a serious project.
> The browser and libraries are all written in C++. (While our own memory-safe Jakt language is in heavy development, it’s not yet ready for use in Ladybird.)
From the link it seems that Ladybird architecture is very modular, in this case LibJS is one of the subsystems that has less external dependencies, said that they don't need to migrate everything, only the parts that makes sense.
Completely ignoring the Rust aspect, I’m disappointed that two weeks were spent on something that isn’t getting Ladybird to a state where it can be used as a daily driver. Ladybird isn’t usable right now, and if it was usable, improving the memory safety would be a commendable goal. Right now I just feel like this is premature.
10x programmers become 100x with the power of AI. Not an unexpected outcome. But the world is going to suck for ordinary people. 10x programmers will gladly embrace this future become it empowers them more.
We have to accept this reality and act accordingly.
Yes you will downvote me. I have accepted this reality and will hack on my own projects in the woods or in a cave, on my own terms.
------ I wrote the following after a bit of thought:
It was with a heavy heart that I learned that the author of "Ladybird Browser" managed to convert the JavaScript compiler from C++ to Rush in 2 weeks, with the help of AI. It was a mix of awe and depression. 10x programmers leveraged AI to achieve a great feat in only 2 weeks, passing all tests. This was not a surprise to me as we all saw the writing on the wall a couple of years ago, but reality hit hard still. I'm a very average programmer, a very average person, and perhaps worse than the median in many perspectives. The gap between an ordinary people, with a 10X whatever, is getting much larger due to the evolution of tools. No, I do not believe AI can ever replace humans completely, at least not in the near future. But the point is, we the ordinary people are getting less and less relevant. The gate of professional work, the gate from which we drink satisfaction by knowing that many are using our work, is closing. I have no ill feeling towards any 10X programmers who is enjoying this. They are much better than me. They have earned it. They deserve it. And I deserve it, too, to have allowed myself to be mediocre. Being mediocre is a lesser evil then and now, but is a major sin in the future.
I soaked myself in "Crypto-zoologist" (Disco Elysium) to savor the moment. It is fine. Perhaps I will never get a professional job as a system programmer, and this is fine. I'll go into the woods, stay in a cave, and hack on my own projects, on my own terms. I do no care about the end products, and neither do I care whether people use them at all. Programming is a ritual to dispel the daemons from my soul, and I must keep doing it, until the last moment.
They ported an existing project from CPP to Rust using AI because the porting would've been too tedious. I don't think they're planning on vibe coding PRs the way you're imagining.
Yeah, some weekends ago I tried writing a cross-platform browser without any Rust crates, this weekend I made my own self-hosted compile to Rust Clojure-like lisp, maybe next weekend attempting to create a OS that uses my language to run on bare-metal would actually be a challenge. Thanks for the inspiration :)
This comment raises an interesting question: Would Serenity OS have brought Andreas the same kind of serenity had it been developed with AI? Open candid question.
I don't think so because if I remember it correctly, Andreas suffered from alcoholism and serenity prayer helped him to go on the right path and iirc he honored that and created an os named serenityos.
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
(courage to change the things I can;):- I think that this line must've given Andreas the strength, the passion to make the project reality.
but if AI made the change. Would the line be changed to courage to prompt an all powerful entity to change the things I asked it to.
Would that give courage? Would that inspire confidence in oneself?
I have personally made many projects with LLM's (honestly I must admit that I am a teenager and so I have been sort of using it from the start)
and personally, I feel like there are some points of curiosity that I can be prideful of in my projects but there is still a sense of emptiness and I think I am not the only one who observes it as such.
I think in the world of AI hype, it takes true courage & passion to write by hand.
Obviously one tries to argue that AI is the next bytecode but that is false because of the non deterministic nature of AI but even that being said, I think I personally feel as if the people who write assembly are definitely likely to be more passionate of their craft than Nodejs (and I would consider myself a nodejs guy and there's still passion but still)
Coding was definitely a form of art/expression/sense-of-meaning for Mr Andreas during a time of struggle. To automate that might strip him of the joy derived from stroking brush on an empty canvas.
Honestly, I really don't know about AI the more I think about it so I will not pretend that I know a thing/two about AI. This message is just my opinion in the moment. Opinions change with time but my opinion right now is that coding by hand definitely is more meaningful than not if the purpose of the project is to derive meaning.
Cool project, but I'm a bit curious hearing how the rest of the project feels about this?
I'm not sure how I'd feel if I woke up and found a system I worked on had been translated into an another language I'm not neccessarily familiar with. And I'm not sure I'd want to fix an non-idiomatic "mess" just because it's been translated into a language I'm familiar with either (although I suspect they'll have no problem attracting rust developers).
The byte-for-byte identical output requirement is the smartest part of this whole thing. You basically get to run the old and new pipelines side by side and diff them, which means any bug in the translation is immediately caught. Way too many rewrites fail because people try to "improve" things during the port and end up chasing phantom bugs that might be in the old code, the new code, or just behavioral differences.
Also worth noting that "translated from C++" Rust is totally fine as a starting point. You can incrementally make it more idiomatic later once the C++ side is retired. The Rust compiler will still catch whole classes of memory bugs even if the code reads a bit weird. That's the whole point.
I hope, with the velocity unlocked by these tools, that more pure ports will become the norm. Before, migrations could be so costly that “improving” things “while I’m here” helped sell doing the migration at all, especially in business settings. Only to lead to more toil chasing those phantom bugs.
One of the biggest point of rewriting is you know better by then so you create something better.
This is a HUUUGE reason code written in rust tended to be so much better than the original (which was probably written in c++).
Human expertise is the single most important factor and is more important than language.
Copy pasting from one language to another is way worse than complete rewrite with actual idiomatic and useful code.
Best option after proper rewrite is binding. And copy-paste with LLM comes way below these options imo.
If you look at real world, basically all value is created by boring and hated languages. Because people spent so much effort on making those languages useful, and other people spent so much effort learning and using those languages.
Don’t think anyone would prefer to work in a rust codebase that an LLM copy-pasted from c++, compared to working on a c++ codebase written by actual people that they can interact with.
> Copy pasting from one language to another is way worse than complete rewrite with actual idiomatic and useful code.
But translating with automated tools is a much faster experiment.
Sometimes (not always), rewriting from scratch ends up in a big loss of time and resources and never replaces the old version.
It depends on your goals. If your only initial goal is to ensure the safety of your code, and that is rather important for a browser!
I did several web framework conversions exactly like this. Make sure the http output string matches in the new code exactly as the old code and then eventually deleted the old code with full confidence.
Works even better if you have a good test suite, which is surely the case here with Ladybird
> I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation. I decided what to port, in what order, and what the Rust code should look like. It was hundreds of small prompts, steering the agents where things needed to go. After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns. > The requirement from the start was byte-for-byte identical output from both pipelines. The result was about 25,000 lines of Rust, and the entire port took about two weeks. The same work would have taken me multiple months to do by hand. We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output. Zero regressions across the board
This is the way. Coding assistants are also really great at porting from one language to the other, especially if you have existing tests.
> Coding assistants are also really great at porting from one language to the other
I had a broken, one-off Perl script, a relic from the days when everyone thought Drupal was the future (long time ago). It was originally designed to migrate a site from an unmaintained internal CMS to Drupal. The CMS was ancient and it only ran in a VM for "look what we built a million years ago" purposes (I even had written permission from my ex-employer to keep that thing).
Just for a laugh, I fed this mess of undeclared dependencies and missing logic into Claude and told it to port the whole thing to Rust. It spent 80 minutes researching Drupal and coding, then "one-shotted" a functional import tool. Not only did it mirror the original design and module structure, but it also implemented several custom plugins based on hints it found in my old code comments.
It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
> 10/10 - would generate tens of thousands of lines of useless code again.
Me too! A couple days ago I gave claude the JMAP spec and asked it to write a JMAP based webmail client in rust from scratch. And it did! It burned a mountain of tokens, and its got more than a few bugs. But now I've got my very own email client, powered by the stalwart email server. The rust code compiles into a 2mb wasm bundle that does everything client side. Its somehow insanely fast. Honestly, its the fastest email client I've ever used by far. Everything feels instant.
I don't need my own email client, but I have one now. So unnecessary, and yet strangely fun.
Its quite a testament to JMAP that you can feed the RFC into claude and get a janky client out. I wonder what semi-useless junk I should get it to make next? I bet it wouldn't do as good a job with IMAP, but maybe if I let it use an IMAP library someone's already made? Might be worth a try!
Same here. I had Claude write me a web based RSS feed reader in Rust. It has some minor glitches I still need to iron out, but it works great, is fast as can be, and is easy on the eyes.
https://github.com/AdrianVollmer/FluxFeed
Just curious, does it look anything like this library?
https://docs.rs/jmap-client/latest/jmap_client/
Please post this. I'd love to play with it and, especially, see how fast it is.
Can you release it as open source code?
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
This is the biggest bottleneck at this point. I'm looking forward to RAM production increasing, and getting to a point where every high-end PC (workstation & gaming) has a dedicated NPU next to the GPU. You'll be able to do this kind of stuff as much as you want, using any local model you want. Run a ralph loop continuously for 72 hours? No problem.
Wasting electricity to "generate tens of thousands of lines of useless code" at will? Why is that in any way a desirable future?
I bet RAM production will only increase to meet AI demand and there will be none left for you. Or me. Or anyone. Crucial is already going probably forever and I'm sure more will follow...
> a relic from the days when everyone thought Drupal was the future (long time ago).
Drupal is the future. I never really used it properly, but if you fully buy into Drupal, it can do most everything without programming, and you can write plugins (extensions? whatever they're called...) to do the few things that do need programming.
> The Epilogue: That site has since been ported to WordPress, then ProcessWire, then rebuilt as a Node.js app. Word on the street is that some poor souls are currently trying to port it to Next.js.
This is the problem! Fickle halfwits mindlessly buying into whatever "next big thing" is currently fashionable. They shoulda just learned Drupal...
I'm not sure if you're serious or not, but while I never liked Drupal (even used to hate it once upon a time), I always liked the pragmatism surrounding it, reaching to the point of saving php code into the mysql database and executing from there.
There are plenty of SMEs trapped into that future. :)
> It burned through a mountain of tokens, but 10/10 - would generate tens of thousands of lines of useless code again.
Pardon me, and, yes, I know we're on HN, but I guess you're... rich? I imagine a single run like this probably burns through tens or hundreds of dollars. For a joke, basically.
I guess I understand why some people really like AI :-)
It was below 100$, but only after burning through the 20x max session limit.
The subsidized Codex/Claude subscriptions make it not so bad.
Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved. I don't want "Fire one prompt and get somewhat working code", I want a UX tailored for long sessions with back and forth, letting me leverage my skills, rather than agents trying to emulate what I already can do myself.
It was said a long time ago about computing in general, but more fitting than ever, "Augmenting the human intellect" is what we should aim for, not replacing the human intellect. IA ("Intelligence amplification") rather than AI.
But I'm guessing the target market for such tools would be much smaller, basically would require you to already understand software development, and know what you want, while all AI companies seem to target non-developers wanting to build software now. It's no-code all over again essentially.
Is it any surprise that the cocaine cartels really want you to buy more cocaine, so they don't focus on its usefulness in pain relief and they refine it and cut it with the cheapest substances that will work rather than medical-grade reagents?
Same thing.
It's surprising that the ones who are producing the cocaine, don't try to find the best use of the cocaine, yes. But then these are VC-fueled businesses, then it all goes out the window, unfortunately. Otherwise they'd actually focus on usefulness, not just "usage" or whatever KPI they go by and share with their investors.
LLMs are drugs because they’re addictive and sap your abilities, is it?
(or generally: “Is the cocaine cartel comparison fair or unfair?”)
"All of them are moving into the direction of "less human involved and agents do more", while what I really want is better tooling for me to work closer with AI and be better at reviewing/steering it, and be more involved."
I want less ambitious LLM powered tools than what's being offered. For example, I'd love a tool that can analyse whether comments have been kept up to date with the code they refer to. I don't want it to change anything I just want it to tell me of any problems. A linter basically. I imagine LLMs would be a good foundation for this.
Any terminal tool like Claude Code or Codex (I assume OpenCode too, but I haven't tried) can do it, by using as a prompt pretty much exactly what you wrote, and if it still wants to edit, just don't approve the tool calls.
One problem I've noticed is that both claude models and gpt-codex variants make absolutely deranged tool calls (like `cat <<'EOF' >> foo...EOF` pattern to create a file, or sed to read a couple lines), so it's sometimes hard to see what is it even trying to do.
Of course there are tools focusing on this. It takes a little getting used to how prevalent it is. My editor now can anticipate the next three lines of code I intend to write complete with what values I want to feed to the function I was about to invoke. It all shows up in an autocomplete annotation for me. I just type the first two or three characters and press tab to get everything exactly how I was about to type it in--including an accurate comment worded exactly in my voice.
Is that what you mean by IA?
For example, I type "for" and my editor guesses I want to iterate over the list that is the second argument of the function for which I am currently building the body. So it offers to complete the rest of the loop condition for me. Not only did it anticipate that I am writing a for loop. It figures out what I want to iterate over, and perhaps even that I want to enumerate the iteration so I have the index and the value. Imagine if I had written a comment to explain my intent for the function before I started writing the function body. How much better could it augment my intellect?
I think this could be a decent interface with one addition, a way to comment on the completion being suggested. You could ask it for a different completion or to extend the completion, do something different, do a specific thing, whatever. An active way to "explain my intent" with the AI (besides leaving comments hinting at what you want) in addition to the passive completion system.
To be honest, I'm not quite sure what the ideal UX looks like yet. The AI assisted autocomplete is too little, but the idea of saying "Build X for purpose Y" is too high-level. Maintaining Markdown documents that the AI implements, also feels too high-level, but letting the human fully drive the implementation probably again too low-level.
I'm guessing the direction I'd prefer, would be tooling built to accept and be driven by humans, but allowed to be extended/corrected by AI, or something like that, maybe.
Maybe a slight contradiction, and very wish-washy/hand-wavey, but I haven't personally quite figured out what I think would be best yet either, what the right level actually is, so probably the best I could say right now :) Sorry!
Still magical a few years in?
>Imagine if I had written a comment to explain my intent for the function before I started writing the function body.
This in particular is not dissimilar from opening a chat with a model and giving it a prompt as usual but then adding at the end:
Begin your response below:
Which editor?
> Imagine if I had written a comment to explain my intent for the function before I started writing the function body.
The loon programming language (a Lisp) has "semantic functions", where the body is just the doc comment.
>Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
This is because, regardless of the current state of things, the endgame which will justify all the upfront investment is autonomous, self-improving, self-maintaining systems.
I think it was Steve Jobs who said computers should be like a bicycle for the mind, I tend to agree
Yeah, Douglas Engelbart was also a huge believer in that, and I think from various stuff I've read from him and the Augmentation Research Center put me on this track of really agreeing with it.
"Bicycle for the mind", as always when it involves Jobs, sounds more fitting for the masses though, so thanks for sharing that :)
Agents are a "self-driving car for the mind". I don't enjoy or dislike driving, but lots of Americans love to drive. In the future they will lament their driving skills' decline.
I love this Jobs quote for two reasons:
(1) It captures the ideal so well
(2) The bitter irony of how thoroughly pre-OS X Macintosh computers failed to live up to it
I feel like there's a similar dichotomy in LLM tools now
> Agree, and it's also such a shame that none of the AI companies actually focus on that way of using AI.
their valuations are replaced on getting rid of you entirely, along with everyone else
the "humans can use it to increase their productivity" is an interim step
I am learning rust myself and one of the things I definetly didn't want to do was let Claude write all the code. But I needed guidance.
I decided to create a Claude skill called "teach". When I enable it, Claude never writes any code. It just gives me hints - progressively more detailed if I am stuck. Then it reviews what I write.
I am finding it very satisfying to work this way - Rust in particular is a language where there's little space to "wing it". Most language features are interlaced with each other and having an LLM supporting me helps a lot. "Let's not declare a type for this right now, we would have to deal with several lifetime issues, let's add a note to the plan and revisit this later".
FYI: Claude has output styles, one of them is called `learning`. Instead of writing the code itself, it will add `TODO(human)` and comments to explain how to. Also adds `Insights` explaining concepts to you in its output.
This link also has a comparison to Skills further down.
https://code.claude.com/docs/en/output-styles#built-in-outpu...
I had a bash spaghetti code script that I wrote a few years ago to handle TLS certificates(generate CSRs, bundle up trust chains, match keys to certs, etc). It was fragile, slow, extremely dependent on specific versions of OpenSSL, etc.
I used Claude to rewrite it in golang and extend its features. Now I have tests, automatic AIA chain walking, support for all the DER and JKS formats, and it’s fast. My bash script could spend a few minutes churning through a folder with certs and keys, my golang version does a few thousand in a second.
So I basically built a limited version of OpenSSL with better ergonomics and a lot of magic under the hood because you don’t have to specify input formats at all. I wasn’t constrained by things like backwards compatibility and interface stability, which let me make something much nicer to use.
I even was able to build a wasm version so it can run in the browser. All this from someone that is not a great coder. Don’t worry, I’m explicitly not rolling my own crypto.
This is also how some of us use Claude despite what the haters say. You dont just go “build thing” you architect, review, refine, test and build.
It's how most of us are actually going to end up using AI agents for the foreseeable future, perhaps with increasing degrees of abstraction as we move to a teams-of-agents model.
The industry hasn't come up with a simple meme-format term to explain this workflow pattern yet, so people aren't excited about it. But don't worry, we'll surely have a bullshit term for it soon, and managers everywhere will be excited. In the meantime, we can just continue doing work with these new tools.
This is an opportunity to select some stupid words that you would like to hear repeated a million times. The process is like patiently nurturing a well-contained thing, so how about "egg coding"?
I havent quite dealt with "teams of agents" yet outside of Claude Code itself spawning subagents, but I have some ideas as to how to achieve it in a meaningful way without giving a developer 10 claude code licenses, I think the real approach that makes more sense to me is to still have humans in the loop, but have their respective agents sync together and divide work towards one goal, but being able to determine which tasks are left to be worked one and tested. I do think for the foreseeable future you will need human validation for AI.
I thought the term was "agentic engineering"
https://youtu.be/JV-wY5pxXLo?si=ga-9Gg8IZfU6g8Tg
It's vibe engineering
I'm not sure there's going to be a term, because there's no difference from normal, good quality engineering. You iterate on design, validate results, prioritise execution. It's just that you hand over the writing code part. It's as boring as it gets.
> how some of us
Operative word being “some”. The issue is that too many aren’t doing it that way.
> You dont just go “build thing”
Tell that to the overwhelming majority of posters discussing vibe coding, including on HN.
Sure, but they're going to be stuck writing software for yesterday's problems. As our tools become more powerful, we're going to unlock new problems and expectations that would be impossible or impractical to solve with yesterday's tooling.
I suppose to some extent those people have always existed. The ones who would choose the most expedient solution.
The difference now is they can get much further along.
> despite what the haters say
Thinking people who disagree with you hate you or hate the thing you like is a recipe for disaster. It's much better to not love or hate things like this, and instead just observe and come to useful, outcome-based conclusions.
LLMs really do attract haters in the classic sense though. You'll find them in almost every thread on here.
Look at any HN thread that has a project that uses AI in any way, shape or form. People quickly remark that it is slop, without even reviewing the code. If that's not blind hatred of AI, I don't know what is.
There's a huge distinction between Vibe Coding, and actual software engineers using AI tooling effectively. I vibe code for fun sometimes too, nothing wrong with it, helps me figure out how the model behaves in some instances, and to push the limits of what I understand.
We keep seeing this pattern over and over as well. Despite LLM companies' almost tangible desperation to show that they can replace software engineers, the real value comes from domain experts using the tools to enhance what they're already good at.
I haven’t done a ton of porting. And when I did, it was more like a reimplementation.
> We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output.
Is this a conventional goal? It seems like quite an achievement.
My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.
Porting a project from PHP7 to PHP8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc..
It’s a very good way of getting LLMs to work autonomously for a long time; give it a spec and a complete test suite, shut the door; and ask it to call you when all the tests pass.
I had a script in another language. It was node, took up >200MB of RAM that I wanted back. "claude, rewrite this in rust". 192MB of memory returned to me.
Solving the big RAM shortage one prompt at a time.
I used to have a bunch of bespoke node express server utilities that I liked to keep running in the background to have access to throughout the day but 40-50mb per process adds up quickly.
I’ve been throwing codex at them and now they’ve all been rewritten in Go - cut down to about 10mb per process.
This is sad to see. Node was originally one of the memory efficient options – it’s roots are solving the c10k problem. Mind sharing what libraries/frameworks you were using?
It was an express server. I don't think c10k is particularly interesting since it mostly just involves having cooperating scheduling. Doesn't really impact flat memory overhead etc. I mean, the binary for node alone, without any libraries etc, is larger than the produced rust binary.
This is the way. This exact workflow is my sweet spot.
In my coding agent std::slop I've optimized for this workflow https://github.com/hsaliak/std_slop/blob/main/docs/mail_mode... basically the idea is that you are the 'maintainer' and you get bisect safe, git patches that you review (or ask a code reviewer skill or another agent to review). Any change re-rolls the whole stack. Git already supports such a flow and I added it to the agent. A simple markdown skill does not work because it 'forgets'. A 'github' based PR flow felt too externally dependent. This workflow is enforced by a 'patcher' skill, and once that's active, tools do not work unless they follow the enforced flow.
I think a lot of people are going to feel comfortable using agents this way rather than going full blast. I do all my development this way.
your patch queue approach is very clever. Solves a huge tech debt poblem with llm code gen. Should work with jujitsu too probably.
Would be curious to see more about how you save tokens with lua too.
Do you blog?
Thanks for your interest in this work - I do not blog(maybe I should?) but i have posted a bit more on X about this work.
- A bit more on mail mode https://x.com/hsaliak/status/2020022329154420830
- on the Lua integration https://x.com/hsaliak/status/2022911468262350976 (I've since disabled the recursion, not every code file is long and it seems simpler to not do it), but the rest of it is still there
- hotwords for skill activation https://x.com/hsaliak/status/2024322170353037788
Also /review and /feedback. /feedback (the non code version) opens up the LLM's last response in an editor so you can give line by line comments. Inspired by "not top posting" from mailing lists.
I am having immense success with the latest models developing a personal project that I open sourced and then got burned off by.I can't write anymore by hands but I do enjoy writing prompts with my voice.I have been shipping the best code the project has ever seen.The revolution is real.
Coding assistants are great at pattern matching and pattern following. This is why it’s a good idea to point them at any examples or demos that come with the libraries you want to use, too.
> Coding assistants are also really great at porting from one language to the other
No, they are quite terrible at doing that.
They may (I guess?) produce code that compiles, but they will, almost certainly not produce the appropriate combination of idioms and custom abstractions that may the code "at home" in the target language.
PS - Please fix your blockquote... HN ignores single linebreaks, so you have to either using pairs of them, or possibly go with italicization of the quoted text.
> This was human-directed, not autonomous code generation.
All my vibe coded projects are human directed, unless explicitly stated otherwise
Quite good. I ported my codebase from Go to Rust in a fraction of the time it would have taken me to rewrite it.
How does he solve the Fruit of the Poison Tree problem? For all he know, his LLMs included a bunch copyrighted or patented code throughout the codebase. How is he going to convince serious people that this port is not just a transformation of an _asset_ into a _liability_?
And you might say that this is a hypothetical problem, one that is not practically occurring. Well, we had a similar problem like this in the recent past, that LLMs are close to _making actual_. When it comes to software patents, they were considered a _hypothetical_ problem (i.e. nobody is going to bother suing you unless you were so big that violating a patent was a near certainty). We were instructed (at pretty much all jobs), to never read patents, so that we cannot incriminate ourselves in the discovery process.
That is going to change soon (within a year). I have friend, whom I won't name, who is working on a project, using LLMs, to discover whether software (open source and proprietary) is likely to be violating a software patent from a patent database. And it is designed to be used, not by programmers, but by law firms, patent attorneys, etc. Even though it is not marketed this way, it is essentially a target acquisition system for use by patent trolls. It is hard for me to tell if this means that we will have to keep ignoring patents for that plausible deniability, or if this means that we will have to become hyper informed about all patents. I suppose, we can just subscribe to the patent-agent, and hope that it guides the other coding agents into avoiding the insertion of potentially infringing code.
(I also have a friend who built a system in 2020 that could translate between C++ and Python, and guarantee equivalent results, and code that looks human-written. This was a very impressive achievement, especially because of how it guarantees the equivalence (it did not require machine-learning nor GPUs, just CPUs and some classic algorithms from the 80s). The friend informs me that they are very disheartened to see that now any toddler with a credit card can mindlessly do something similar, invalidating around a decade of unpublished research. They tell me that it will remain unpublished, and if they could go back in time, they would spend that decade extracting as much surplus from society as possible, by hook or by crook (apparently they had the means and the opportunity, but lacked the motive); we should all learn from my friend's mistake. The only people who succeed are, sadly, perversely, those who brazenly and shamelessly steal -- and make no mistake, the AI companies are built on theft. When millionaires do it, they become billionaires -- when Aaron Swartz does it, he is sentenced to federal prison. I'm not quite a pessimist yet, but it really is saddening to watch my friend go from a passionate optimist to a cold nihilist.).
One or both of you have the story very wrong.
If there was value (the guarantees) to this tech he buried a bunch of time in, he should be wrapping a natural language prompt around it and selling it.
Not even the top providers are giving any sort of tangible safety or reliability guarantees in the enterprise…
I'm a long-time Rust fan and have no idea how to respond. I think I need a lot more info about this migration, especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I don't know if it's a good fit. Not because they're writing a browser engine in Rust (good), but because Ladybird praises CPP/Swift currently and have no idea what the contributor's stance is.
At least contributing will be a lot nicer from my end, because my PR's to Ladybird have been bad due to having no CPP experience. I had no idea what I was doing.
> I guess more anti-hype, where Rust was the hype
Yeah that is the thing I struggle with. I am really happy for people falling in love with Rust. It is a amazing language when used for the right use case.
The problem is that had my Rust adventures a few years ago and I am over the hype cycle and able to see both the advantages and disadvantages. Plus being generally older and hopefully wiser I don't tie my identity towards any specific programming language that much.
So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
> So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting. Really not sure how to solve it. It is good when people get excited about a language. It just can be very annoying for everyone else sometimes.
This rings very true, and I've actually disadvantaged myself somewhat here. I was involved in projects that made very dubious decisions to rewrite large systems in Rust. This caused me to actively stay away from the language, and stick to C++, investing lots of time in overcoming its shortcomings.
Now years later, I started with Rust in a new project. And I must say, I like the language, I really like the tools, and I like the ecosystem. On some dimension I wish I would have done this sooner (but on the other hand, I think I have a better justification of "why Rust" now).
I'm contemplating diving into Rust for a smallish project, a daemon with super-basic UI intended for Linux, MacOS and Windows. Do you mind expanding on what disadvantages you encountered? Or use-cases that aren't appropriate for Rust?
It's all the stuff that people always mention; they are not wrong. You spend a decent amount of time... conversing with the compiler about lifetimes and, in my experience, even more so about the type system, which is _extremely_ complicated. But you also have to keep in mind that Rust got very popular, very fast, and the tail end of something like that is always a negative reaction. The language is the same, despite the hype roller coaster.
I'm not OP but here's my disadvantages. Rust is the way I earn my living, and also my open source tool of choice. And my background is 25 years of SWE career:
1. build / compile times can be atrocious
2. crates.io inherits the npm philosophy, which means fairly unmoderated space of third party deps and because the Rust stdlib doesn't have a lot in it, extensive third party crate (lib) usage is strong in Rust. As a result most Rust projects have rather sprawling dependency trees, often with duplicated functionality (multiple Base64, rand, sha256, etc crates). I personally have a problem with this (auditability, accountability, security, complexity etc). Others don't.
3. Despite being nominally runtime agnostic, Rust async basically is tokio and it's almost impossible to use another runtime once you factor in third party deps. In many ways Rust is the language that tokio ate. In fact even if you opt out of async entirely, you often end up with tokio as a dependency simply because the community just seems to expect it.
4. Despite advertising itself as a "systems" language, some basic systems programming facilities I expect from my C++ background are still fundamentally not there. In particular, per-container/struct pluggable allocators still isn't a thing and the feature to add it (allocator-api) has sat unmerged in nightly for almost ten years at this point and it doesn't look good for it landing any time soon.
5. If you're working in the embedded space, there's still plenty of devices that will not have a workable Rust toolchain option yet.
I still choose it for new projects instead of its competitors C++ or Zig. But I think it's important to recognize there are compromises like any other tool.
As much as people might insist otherwise, there will in fact come a day when there are "multiple Rusts" by which I mean multiple styles and ways of doing things -- just like C++. For myself, for example... if it were my repository and my team and my hiring, and I was starting from scratch... I'd be extremely careful about third party crate adoption and have an extremely minimalistic approach there. And I don't use tokio. Though my paying jobs do.
It’s a pretty good language and ecosystem. Downside was always the community which every ten seconds someone will start asking to tax everyone to fund Rust Software Foundation or constantly argue that you have to donate a percentage of income to it. Now with LLM I don’t have to talk to community. Huge improvement.
Problem with community is it has experts and groupies mixed in. Ideally experts can talk somewhere and groupies can go somewhere else and talk about funding RSF etc. but now is unnecessary. Expert is available on demand via chatbot.
> So sometimes when some Junior dev discovers Rust and they get really obnoxious with their evangelicalism it can be very off putting.
And experience doesn't equal correct decision making. People just get traumatized in different ways.
Its possible to dislike Rust but pragmatically use it. Personally, I do not like Rust, but it is the best available choice for some work and personal stuff.
I think this is a good, realistic point of view.
Personally I think most programming languages have really ... huge problems. And the languages that are more fun to use, ruby or python, are slow. I wonder if we could have a great, effective, elegant language that is also slow. All that try end up with e. g. with a C++ like language.
Honestly I find writing Rust more fun than writing Python. Python just doesn't scale, any non-trivial quantity of it has a habit of turning into spaghetti however hard I try to be disciplined.
Rust, although annoying at a micro scale, does at least enforce some structure on your code, although like Kling I miss OO.
AI has made Rust approachable to a new audience of programmers who didn't want to dedicate their life to learning the ins and outs of the language. Especially for C++ developers who already learned the ins and outs of a hyper complex programming language and don't want to go through that a second time.
Before AI, writing Rust was frustrating experience that involved spending 90% of your time reading documentation and grumbling that "I could do this in 5 minutes in C++"
Now I can write Rust in a way that makes sense to my C++ addled brain and let the AI do the important job of turning it into an idiomatic Rust program that compiles.
So what don't you like about it?
Its for the time being is stuck with LLVM, so I can't currently LTO with GCC objects. Its got a lot higher complexity than I perfer in a language. A lot of features I find important seem perma-unstable. Pin is unnessesarily confusing. No easy way to define multiple compilation units for use with linker object selection and attribute constructor. The easy path is downloading binary toolchains with rustup and not using your disto package manager. You can't use unstable features without the bootstrap env var on distro rust toolchains. Cargo leads to dependency bloat. The std/core crates are prebuilt binaries and bloat binary sizes. Bindgen doesn't translate static inline code. The language has a ton of stuff it exposes just to std and not user code. Unsafe code is unergonomic. No easy way to model a cleanup function that needs more args. No support for returns_twice. No ability to use newer stuff like preserve_none. Can't go-to-definition from a bindgen binding to original header file. Macros pollute global namespace. Can't account for platforms where size_t and uintptr_t are different. Traits can only be relied on if marked unsafe. Can't implement something like defer since it holds a borrow. no_std code still can pull in core::fmt. Can't enforce dependencies are also no_std. Panics are considered safe. No way to add non-function fields to dyn vtables. No way to declare code separately from definition. No way to have duplicate type definitions that merge, making interop between different bindgen generated modules annoying.
> Ladybird praises CPP/Swift currently
Not anymore.
https://news.ycombinator.com/item?id=47067678
They are moving fast.
Next month it will be yet-another-language.
Eventually they come full circle and settle for either C or C++.
They've been stuck with swift adoption for a long time, abandoning that was the reasonable decision. That only leaves Rust as the second language to C++
I guess I missed this, thanks!
I'd argue Ladybird itself is a "hype" project.
Anything trying to break the browser monopolies in a meaningful way deserves the hype, IMO.
Fair point. What does Ladybird need to achieve in your opinion to shake the "hype" label? Honestly, I, myself, don't have a good answer!
To me, a project's "hype-ness" is the ratio of how much attention it gets over how useful it actually is to users.
As a browser, Ladybird usefulness is currently quite limited for obvious reasons. This is not meant to dismiss its achievements, nor to overlook the fact that building a truly useful browser for everyday users is something few open source teams can accomplish without the backing of a billion dollar company. Still, in its present state, its practical utility remains limited.
> What does Ladybird need to achieve in your opinion to shake the "hype" label?
A release (?)
I am somewhat concerned about the volatility. All three languages have their merits and each has a stable foundation that has been developed and established over many years. The fact that the programming language has been “changed” within a short period of time, or rather that the direction has been altered, does not inspire confidence in the overall continuity of Ladybird's design decisions.
Ladybird as a project is not that old, and it's still in pre-alpha, if they are going to make important changes then it's better now than later.
> I am somewhat concerned about the volatility.
Not just volatility but also flip-flopping. Rust was explicitly a contender when they decided to go with Swift 18 months ago, and they've already done a 180 on it despite the language being more or less the same as it was.
they tried swift, it didn't work, and they figured rust was the best remaining option. that's not "flip-flopping" (by which I assume you mean random indecisiveness that leads to them changing their mind for no reason)
They made a very pragmatic and sensible decision after reviewing Swift that it wouldn't be suitable for their purposes, so they shifted to the next best alternative. I think they reasoned it very well and made a great decision.
I guess they bet on Swift being more than Apple's blessed way of writing UI software.
It's not that they are loving Rust, but they realized going all-in on Swift means becoming sharecroppers on massa Tim Apple's plantation.
There's been some fun volatility with the author over the years. I told him once that he might want to consider another language to which he replied slightly insultingly. Then he tried to write another language. Then he tried to switch from C++ to Swift, and now to Rust :P
Upside: he's learning?
> I think I need a lot more info about this migration
Doesn't sound like it's some Fish-style, full migration to Rust of everything. Seems like they are just moving a couple parts over for evaluation, and then, going forward, making it an official project language that folks are free to use. They note that basically every browser already does that, so this isn't a huge shakeup.
TFA mentions "the contributor's" stance on Swift.
But not the stance on Rust, which is something I'm wondering. I understand there's a core team assigned, but are the ~200 contributors okay with this migration?
Why would 200 contributors have to be okay with this migration? The project has a leader, the leader makes decisions.
They abandoned Swift recently.
The public announcement was less then a week ago. Meanwhile in TFA:
> ... the entire port took about two weeks.
So he was ~halfway in when he made the Swift announcement.
Doesn’t sound like a bad thing to evaluate the most obvious alternative to build confidence before officially pulling the plug.
Swift adoption had been dead long before the actual announcement. It's likely Rust was being considered long before this two week experiment with LLMs.
it's very odd that someone with no experience would take a big project like this and just jump to another language because he trusts the AI generated code of current models
if it works it works i guess, but it seems mad to me on the surface
Why do you think the creator behind SerenityOS has no experience? I mean it’s not the most popular OS out there but he seems like a capable individual.
in case it's not glaringly obvious from the comment, he has plenty of cpp experience and little rust experience, and that's according to his own comments
the relevant bit here is that he's porting from a language in which he has plenty of experience into another one in which he doesn't, in a large project
that in itself sounds like putting a lot of faith in LLMs but maybe there are factors not mentioned here, which is why i said "on the surface"
It's hard to articulate, but as someone who knows first hand, I just want to say that manic productivity is not the same as solid engineering.
Did you read the OP? No trust, only thorough verification.
I did, and the point stands because reading someone else's code is not the same as writing it, esp. when you're not able to do so to the same standard
> especially since Ladybird devs have been very vocal about being "anti-rust" (I guess more anti-hype, where Rust was the hype).
I mean, they seem mostly to be against anything that isn't C++'s peculiar brand of Object Oriented Programming?
(also against women and immigrants, but that's a different story)
Looks like Andreas is a mighty fine engineer, but he's even better entrepreneur. Doesn't matter if intentional or not, but he managed to create and lead a rather visible passion project, attract many contributors and use that project's momentum to detach Ladybird into a separate endeavor with much more concrete financial prospects.
The Jakt -> Swift -> Rust pivots look like the same thing on a different level. The initial change to Swift was surely motivated by potential industry support gain (i believe it was a dubious choice from purely engineering standpoint).
It's awe-inspiring to see how a person can carve a job for himself, leverage hobbyists'/hackers' interest and contributions, attract industry attention and sponsors all while doing the thing he likes (assuming, browsers are his thing) in a controlling position.
Can't fully rationalize the feeling, but all of this makes me slightly wary. Doesn't make it less cool to observe from a side, though.
Yeah, this is glorified yak-shaving if we're being real. I'm not getting my hopes up for a true new browser
Eh, he's given an interview where he talks about the Swift decision. He and several maintainers tried building some features in Swift, Rust, and C++, spending about two weeks on each one IIRC. And all the maintainers liked the experience of Swift better. That might have ended up wrong, but it's a pretty reasonable way to make a decision.
Two weeks with Rust and you're still fighting with the compiler. I think the LLM pulled a lot of weight selling the language, it can help smooth over the tricky bits.
idk man it's rare to fight the compiler once you've used Rust for long enough unless you're doing something that's the slightest bit complex with async.
You get to good at schmoozing the compiler you start to create actual logical bugs faster.
This looks like guerrilla advertising for sure.
LLM and rust rewrite together. And it does work so hopefully they get more attention and build it so I have an alternative browser to use
> but all of this makes me slightly wary.
Wary of what?
I'd say it's the idea/fact/feeling that, in 2026, agency matters more than skill/wisdom/intelligence.
Long read on the topic (quite funny, covers Cluely): https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
Probably, Roy was born agentic as a part of a package which included an disregard for intellectual growth.
This doesn't mean that being agentic cannot be cultivated by regular people.
In 2026, yes, agency matters more than skill/wisdom/intelligence to get VC funds. But what's the point of agency alone if you are leading such a life?
What gives me hope is that in 2026, skillful people can delegate a lot of their work to LLMs, which gives them time to learn the "agentic" part which is basically marketing and talking with people.
(just thinking out loud)
This is less about languages and more about so-called AI. One thing’s for sure: it’s becoming harder and harder to deny that agentic coding is revolutionizing software development.
We’re at the point where a solid test suite and a high-quality agent can achieve impressive results in the hands of a competent coder. Yes, it will still screw up, needs careful human review and steering, etc, but there is a tangible productivity improvement. I don’t think it makes sense putting numbers on it, but for many tasks, it looks like there’s a tangible benefit.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Correct me if I’m wrong since I don’t know these two languages, but like some other languages, doing things the idiomatic way could be dramatically different. Is “cleanup” doing a lot of heavy lifting here? Could that also mean another complete rewrite from scratch?
A startup switching languages after years of development is usually a big red flag. “We are rewriting it in X” posts always preceded “We are shutting down”. I wish them luck though!
A mitigating factor in this case is the C++ and Rust are both multi-paradigm languages. You can quite reasonably represent most C++ patterns in Rust, even if it might not be quite how you'd write Rust in the first place.
In addition, C++ and Rust are very, very similar languages. Almost everything in C++ translates easily, including low level stuff and template shenanigans. There's only a few "oh shit there's no analog" things, like template specialization or virtual inheritance.
Out of all the languages rust takes inspiration from, id rank C++ at the top of the list.
Strong disagree. Rust copied C++ syntax to avoid looking weird to C++ programmers, but the similarity is skin deep. C can be tamed, because it's mostly a subset of Rust, but C++ idioms are a death from papercuts.
OOP, weakly-typed templates, and mutable aliasing create impedance mismatch in almost every C++ API.
Rust doesn't have data inheritance, and what looks like interface inheritance is merely extra requirements in a flat list of traits, so subclassing won't behave like C++ APIs expect. When you translate a class hierarchy to Rust, it needs lots of crutches which make it weird, boilerplatey, and tedious to use. There's no good recipe for OOP hierarchy in Rust, because the idioms are so different. The mismatch feels like writing an ORM.
For some C++ APIs mutability and circular references can be a pain too. Rust works well with DAG data structures and clear mostly-immutable data flow. Objects with some "parent" pointer are common in C++, but Rust sees them as potentially dangling, with shared mutable state, and requires much heavier control of them. It can be done, but it's ugly. Idiomatic Rust designs go to great lengths to avoid it unless necessary, but C++ APIs can have the extra pointers "for convenience".
There's a reason why Rust doesn't have typical GUI libraries - an arbitrary web of references between widgets and event handlers make it ugly in Rust, and that's on top of a view class inheritance.
C++ templates sit very uncomfortably between Rust's macros (duck typed) and Rust's generics (strictly typed at point of declaration).
C++ templates almost always are a mix of types they're attached to and some duck-typing in their expansion.
Rust's generics do not allow any duck typing at all. This makes translation of even a tiny bit clever C++ templates a chore. There's no specialization. No way to deal with SFINAE and such.
Rust macros have flexibility for all the syntax shenanigans (and even similarly bad errors at instantiation time), but macros can't see any types. Idiomatic Rust has very deliberate division between traits (usually much simpler and smaller in scope), macros and proc macros/derives. Splitting C++ templates like that can be a major redesign.
This is the famous trap that Joel on Software talked about in a blog post long time ago.
If you do a rewrite you essentially put everything else on halt while rewriting.
If you keep doing feature dev on the old while another "tiger team" is doing the rewrite port then these two teams are essentially in a race against each other and the port will likely never catch up. (Depending on relative velocities)
Maybe they think that they can to this LLM assisted tools in a big bang approach quickly and then continue from there without spending too much time on it.
I’ve been part of at least 2 successful rewrites. I think that Joel’s post is too often taken as gospel. Sometimes a rewrite is the best way forward.
Moving Ladybird from C++ to a safer more modern language is a real differentiator vs other browsers, and will probably pay dividends. Doing it now is better than doing it once ladybird is fully established.
One last point about rewrites: you can look at any industry disruptor as essentially a team that did a from-scratch rewrite of their competitors and won because the rewrite was better.
I can recall recently, listening to an Oxide and Friends podcast where they spent 30 minutes dumping all over "Agile Dev", only to have a very senior, hands-on guy join from AWS and absolutely deliver the smack down. (Personally, I have no positive experiences with Agile Dev, but this guy really stunned the room into silence.) The best part: The Oxide crew immediately recognized the positive experence and backed off the give this guy the space he needed to tell and interesting story. (Hats off the Ox crew for doing that... even if I, personally, have zero love for Agile Dev.)
The good news is as of now ladyboy doesn't have any competition.
Rarely if ever is anything able to compete simply by being "better". As far as USPs go it's just not enough. I reckon for ladyboy the USP (if any) is going to be it being open and NOT chrome (or derivative). So "safe" "modern" language is not going to mean much to the end users.
I still don’t buy this “safer more modern” mentality. Modern C++ pretty much solves the safety issues. People need to learn how to use tools properly.
If you ask me, Go is a better Rust. Rust is an ugly version of C++ with longer compile times and a band of zealous missionaries.
I mean the keywords mut and fn very annoying to read just get rid of them or spell the f*n thing function.
Nearly 26 years ago! https://www.joelonsoftware.com/2000/04/06/things-you-should-...
What's different today really is the LLMs and coding agents. The reason to never rewrite in another language is that it requires you to stop everything else for months or even years. Stopping for two weeks is a lot less likely to kill your project.
> What's different today really is the LLMs and coding agents.
In Ladybird's case, tests they could rely upon.
He's still right if you don't have good automated testing and you lost most of the original developers (or you don't have other seniors ceva familiar with the domain).
> then these two teams are essentially in a race against each other and the port will likely never catch up
Ladybird appears to have the discipline to have recognized this: “[Rust] is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.”
And I might suggest that there's the possibility that the C++ code could end up being more cleanly ported to a memory-safe subset of C++. plug: https://github.com/duneroadrunner/scpptool/blob/master/appro...
> A startup switching languages after years of development is usually a big red flag.
Startups are not a good comparison here. They have a different relationship with code than software projects.
Linux has rewriten entire stacks over and over again.
The PHP engine was rewritten completely at least twice.
The musl libc had entire components rewritten basically from scratch and later integrated.
Exactly my thought! I guess I'll keep Firefox for the foreseeable future...
Firefox is already spying on you with a lot of telemetry, and they have recently amended their terms of use to remove the obligation to "never sell your data" [1]. So perhaps you should reconsider that statement.
[1] : https://news.ycombinator.com/item?id=43213612
Spending weeks porting (presumably) working code with LLM is a bit strange
that's only the mechanical translation too
the hard bit (borrow checker) has still to be done...
Twitter is the canonical startup rewrite. It worked.
A lot of the previous calculus around refactoring and "rewrite the whole thing in a new language" is out the window now that AI is ubiquitous. Especially in situations where there is an extensive test suite.
Testing has become 10x as important as ever.
For a personal thing I had AI write some python libraries to power a cli. It has to do with simple excel file filtering, grouping and aggregating. Nothing too fancy. However since it's backed by a library, I am playing with different UIs for the same thing and it's fun to say.. Do it with streamlit. Oh it can't do this particular thing. Fine do it with shiny. No? OK Dash. It takes only like an hour to prototype with a whole new UI library then I get to say "nah" like a spoiled child. :)
Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I am already using low code tooling with agents for some projects, in iPaaS products.
> Well, I am on the provocative side that as AI tooling matures current programming languages will slowly become irrelevant.
I have the opposite opinion. As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?
I use LLM at work in Python. It can, and will, easily use hacks upon hacks to get around things.
Thus I maintain that as code generation is cheap, it is more important to constraint that code generation.
All of this assume that you care even a tiny bit about what is happening in your code. If you don't, I suppose you can keep banging the LLM to fix that binary blob for you.
> The problem with LLM for me is that it is now possible to write anything using only assembly. While technically possible, who can possibly read and understand the mountain of code that it is going to generate?
As a very practical problem the assembly would consume the context window like no other. And another is having some static guardrails; sometimes LLMs make mistakes, and without guard rails it debugging some of them becomes quite a big workload.
So to keep things efficient, an LLM would first need to create its own programming language. I think we'll actually see some proposals for a token-effective language that has good abstraction abilities for this exact use.
Lets say years of offshoring projects have helped to reach that opinion.
> As LLM become ubiquitous and code generation becomes cheap, the choice of language becomes more important.
I think, changes to languages/tooling to accomodate Agentic loops will become important.
> All of this assume that you care even a tiny bit about what is happening in your code. If you don't...
I mean, as software engineers, we most certainly do. I suspect there'll be a new class of "developers" who will have their own way of making software, dealing with bugs, building debugging tools that suit their SDLC etc. LLMs will be to software development what Relativity was to Astrophysics, imo: A fundamental & permanent shift.
I don't agree. For one thing, the language directly impacts things like iteration speed, runtime performance, and portability. For another, there's a trade-off between "verbose, eats context" and "implicit, hard to reason about".
IMO Rust will strike a very strong balance here for LLMs.
Formal specifications and automated testing, will beat any language specific tooling.
Hardly much different than dealing with traditional offshoring projects output.
I would say that current programming languages have a better chance due to the huge amount of code that AI can train on. New languages do not have that leverage. Moreover, current languages have large ecosystems that still matter.
I see the opposite. New languages have more difficulty breaking into popularity due to lack of enough existing codename and ecosystems.
Im already using models to reason about and summarize part of the code from programming language to prose. They are good at that. I can see the process being something like english to machine lang, machine lang to english if the human needs to understand. However amother truism is that compilers are a great guardrail against bad generated code. More deterministic guardrails are good for llms. So yeah im not there yet where i trust binaries to the statistical text generators.
Interesting take, what do you think comes next? A programming language optimized for coding agents?
> This is not becoming the main focus of the project. We will continue developing the engine in C++, and porting subsystems to Rust will be a sidetrack that runs for a long time.
I don't like this bit. Wouldn't it be better to decide on a memory-safe language, and then commit to it by writing all new code in Rust, or whatever. This looks like doing double the work.
It doesn't have to all-or-nothing. Firefox has been a mixed C++ and Rust codebase for years now. It isn't like the code is written twice. The C++ components are written in C++, and the Rust components are written in Rust.
I suspect that'll also be what happens here. And if the use of Rust is successful, then over time more components may switch over to Rust. But each component will only ever be in one language at a time.
You can't compare the choices made to evolve a >20 years old codebase with a brand new one. Firefox also as Rust support for XPCOM components, so you can use and write them in Rust without manual FFI (this comes with some baggage of course).
The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
> The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser
That choice was never made. C++ was selrcted as the language of choice for SerenityOS. Since the goal of the OS was to make its founder happy, and C++ was his faviourite language at the time, that seems like an obvious choice. Later, as part of SerenityOS, there was a need for an HTML parser. It was written in C++ as was the rest of the operating system. Then that HTML parser evolved into a full web browser. As part of the SerenityOS project, that browser was written completely in C++. Then that web browser forked off into an independent project...
Ladybird was already a fully functioning browser (not finished of course but complete enough to surf many web pages) when it was forked from SerenityOS to create a stand-alone web browser. The choice at that point was "keep evolving the current C++ code base" or start-over. I doubt the second option was even considered.
They have been evaluating other languages since before the fork. Rust was evaluated and rejectd early-on. They even created their own language at one point. https://github.com/SerenityOS/jakt
> The Ladybird devs painted themselves in a corner when choosing C++ for a new web browser, with many anti-Rust folks claiming that "modern C++ was safe". Well...
Perhaps, but in fairness the project was started in 2018 when Rust was still new and unproven.
> You can't compare the choices made to evolve a >20 years old codebase with a brand new one.
I guess not, but I'm pretty optimistic about Ladybird's ability to adopt Rust if they want to. It's a much smaller codebase than Firefox (~650K LoC).
This initial PR is already ~25k LoC, so approximately 4% of the codebase. It took 1 person 2 weeks to complete. If you extrapolate from that, it would take 1 person-year to port the whole thing, which is not so bad considering that you could spread that work out over multiple years and multiple people.
And Firefox has shown that the intermediate state where you have a mix of languages is viable over the long term, even in a much larger and more complex codebase.
Firefox was special in that Mozilla created Rust to build Servo and then backported parts of Servo to Firefox and ultimately stopped building Servo.
Thankfully Servo has picked up speed again and if one wants a Rust based browser engine what better choice than the one the language was built to enable?
https://servo.org/
As a Servo contributor, I am aware of Servo :)
But I'm also cheering along Ladybird's progress. There's definitely room for more than one project in the space. And IMO the more browsers being built in Rust in the better.
One could do that but then they'd lose all momentum and the project would never get finished.
> Wouldn't it be better to decide on a memory-safe language,
it is totally possible to use some strict subset of C++, which will be memory safe.
Ladybird already does that
> After the initial translation, I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.
I feel like you just know it’s doomed. What this is saying is “I didn’t want to and cannot review the code it generated” asking models to find mistakes never works for me. It’ll find obvious patterns, a tendency towards security mistakes, but not deep logical errors.
Somehow they did use this as part of their approach to get to 0 regressions across 65k tests + no performance regressions though + identical output for AST and bytecode though. How much manual review was part of the hundreds of rounds of prompt steering is not stated, but I don't think it's possible to say it couldn't find any deep logical errors along the way and still achieve those results.
The part that concerns me is whether this part will actually come in time or not:
> The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
Of course, it wouldn't be the first time Andreas delivered more than I expected :).
That’s convincing and impressive, but I wouldn’t say it proves it can spot deep errors. If it’s incredible at porting files and comparing against the source of truth then finding complicated issues isn’t being tested imo.
Your argument is just as applicable on human code reviewers. Obviously having others review the code will catch issues you would never have thought of. This includes agents as well.
They’re not equal. Humans are capable of actually understanding and looking ahead at consequences of decisions made, whereas an LLM can’t. One is a review, one is mimicking the result of a hypothetical review without any of the actual reasoning. (And prompting itself in a loop is not real reasoning)
I keep hearing people say "but as humans we actually understand". What evidence do you have of the material differences in what understanding an LLM has, and what version a human has? What processes do we fundamentally do, that an LLM does not or cannot do? What here is the definition of "understanding", that, presumably an LLM does not currently do, that humans do?
With humans though, I wouldn't have to review 20k lines of code at once.
So ask the AI to just translate one little chunk at a time, right?
>Your argument is just as applicable on human code reviewers.
The tests many of us use for how capable a model or harness is is usually based around whether they can spot logical errors readily visible to humans.
Hence: https://news.ycombinator.com/item?id=47031580
That is what the testing suite is there to check, no?
No. Testing generally can only falsify, not verify. It’s complementary to code review, not a substitute for it.
You mean the testing suite generated by AI?
The primary JS test suite is maintained by the authors of the specification itself: https://github.com/tc39/test262
It isn’t, in this case.
No, a real test suite, either their own which they developped or the official ECMA one
Yeah, I lost all interest in the ladybird project now that it is AI slop.
No one wants to work with this generated, ugly, unidiomatic ball of Rust. Other than other people using AI. So you dependency AI grows and grows. It is a vicious trap.
From their post on Twitter in 2024 when they adopted Swift, with a comment on Rust.
My general thoughts on Rust:
- Excellent for short-lived programs that transform input A to output B
- Clunky for long-lived programs that maintain large complex object graphs
- Really impressive ecosystem
- Toxic community
https://xcancel.com/awesomekling/status/1822241531501162806
Mayhaps he had a Damascene conversion? Not that I ever understood the need to change from C++ in the first place though.
Considering David Tolnay's indefensible treatment of JeanHeyd Meneide, I'm inclined to agree with Kling on the toxicity of the Rust community. Evangelical fervor does not excuse douchebaggery.
Most likely some big sponsor requires them turn to AI slops.
> We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline. That cleanup will come in time.
I wonder what kind of tech debt this brings and if the trade off will be worth whatever problems they were having with C++.
the tech debt risk in this case is mostly in the cleanup phase, not the port itself. non-idiomatic Rust that came from C++ tends to have a lot of raw pointer patterns and manual lifetime management that works fine but hides implicit ownership assumptions. when you go to make it idiomatic, the borrow checker forces those assumptions to be explicit, and sometimes you discover the original structure doesn't compose well with Rust's aliasing rules. servo went through this. the upside is you catch real latent bugs in the process.
It depends. I migrated a 20k loc c++ project to rust via AI recently and I would say it did so pretty well. There is no unsafe or raw pointer usage. It did add Rc<RefCell in a bunch of places to make things happy, but that ultimately caught some real bugs in the original code. Refactoring it to avoid shared memory (and the need for Rc<RefCell<>> wasn't very difficult, but keeping the code structure identical at first allowed us to continue to work on the c++ code while the rust port was ongoing and keep the rust port aligned without needing to implement the features twice.
I would say modern c++ written by someone already familiar with rust will probably be structured in a way that's extremely easy to port because you end up modeling the borrow checker in your brain.
Yes, I just translated a Rust library from non-idiomatic and unsafe Rust to idiomatic and safe Rust and it was as much work as if I had rewritten it from scratch.
yeah, matches what I'd expect. when you're porting idiomatic -> idiomatic within a language, the cleanup is mechanical. crossing from C++ to Rust means the borrow checker surfaces assumptions that were latent in the original code, so you end up redesigning rather than translating. that's not a complaint about Rust -- it's actually doing its job.
This is what I was trying to highlight in my post.
I don't think they were having problems with C++, they moved to Rust for memory safety. Mind that they migrated LibJS, their JavaScript library.
Andreas Kling mentioned many times they would prefer a safer language, specifically for their js runtime garbage collector. But since the team were already comfortable with cpp that was the choice, but they were open and active seeking alternatives.
The problem was strictly how cpp is perceived as an unsafe language, and this problem rust does solve! Not being sarcastic, this truly looks like a mature take. Like, we don't know if moving to rust would improve quality or prevent vulnerabilities, here's our best effort to find out and ignore if the claim has merits for now. If the claim maintains, well, you're better prepared, if it doesn't, but the code holds similar qualities...what is the downside?
> We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited.
Why was there ever any expectation for Swift having good platform support outside Apple? This should have been (and was to me) already obvious when they originally announced moving to Swift.
Apple’s own marketing speak has Swift as a cross platform language. Just like, I suppose, C# is a cross platform language.
Apple puts zero resources into making that claim reality, however.
Apple actually did put some resources behind it, the toolchain is reasonably pleasant to use outside macOS and Xcode, they have people building an ecosystem in the Swift Server Workgroup, and arguably some recent language design decisions don't seem to be purely motivated by desktop/mobile usage.
But in the end I can't help but feel Swift has become an absolute beast of a multi-paradigm language with even worse compile times than Rust or C++ for dubious ergonomics gains.
A language is more than a compiler. All of the Swift frameworks you would need to do anything actually useful or interesting in the language are macOS-only. You cannot develop in Swift for Windows/Linux/Android the way that you develop in Swift for macOS/iOS. That matters.
Was it Apple, or community driven projects?
> Just like, I suppose, C#
Have you actually used .NET on Linux/macOS? I have (both at home and work) and there isn't anything that made me think it was neglected on those platforms. Everything just works™
It didn't use to be that way, for a very long time.
This is really YOLOing as the original author doesn't know Rust well so what happens if they hit some complex production issue LLM aren't aware of? Hiring an expensive consultant to fix that until the next LLM iteration?
I'm as anti LLM use as they come, but this appears to be migrating libraries from already funcitoning C++ code. In the case of your hypothetical I suspect the course of action will be "shelve this library port until someone with domain expertise and Rust experience can look at it". Its not like he chucked the whole codebase at the GenaI gods and said "Port it to Rust!".
> what happens if they hit some complex production issue
they learn Rust
it takes a couple of years
it's not that hard.
The "human-directed, not autonomous" framing is the part people keep glossing over. Claude Code here is a compiler-level translation tool, you are still the architect deciding what gets ported and in what order.
The real question is what this does to migrations that never happened because 18 months of rewrite did not pencil out. A 2-week port fundamentally changes that calculus.
Someone should try this with the “Ralph Wiggum loop” approach. I suspect it would fail spectacularly, but it would be fascinating to watch.
Personally, I can’t get meaningful results unless I use the tool in a true pair-programming mode—watching it reason, plan, and execute step by step. The ability to clearly articulate exactly what you want, and how you want it done, is becoming a rare skill.
Given the quality of their existing test suite I'm confident the Ralph Wiggum loop would produce a working implementation... but the code quality wouldn't be anywhere near what they got from two weeks of hands-on expert prompting.
Sure yeah, I can buy that, but that would be like collecting tech debt for generations.
https://ghuntley.com/loop/ and https://github.com/anthropics/claude-code/blob/main/plugins/...
Looks like it's been renamed to ralph-loop for legal reasons. :D
https://github.com/anthropics/claude-plugins-official/pull/1...
All the best to them, however this feels like yah shaving instead of focusing into delivering a browser than can become an alternative to Safari/Chrome duopoly.
Part of browser experience is safety and migrating their JS library to Rust is probably one of the best ways to gain advantage over any other existing engine out there in this aspect. Strategically this may and likely will attract 3rd party users of the JS library itself, thus helping its adoption and further improving it.
They're not porting the browser itself to Rust, for the record.
Yet, they are open to further rewrites.
Your rant was about their loss of focus, not about them being open to other changes. Moving goalposts!
Javascript is a self contained sub system, if the public API stays the same, then they can rewrite as much as they want, also I suppose this engine now will attract new contributors that will want to contribute to Ladybird just because they enjoy working with Rust.
Don't forget that the Rust ecosystem around browsers is growing, Firefox already uses it for their CSS engine[0], AFAIK Chrome JPEG XL implementation is written in Rust.
So I don't see how this could be seen as a negative move, I don't think sharing libraries in C++ is as easy as in Rust.
[0] https://github.com/servo/stylo
Not only is Firefox using it for their CSS engine but Mozilla created Rust to build Servo and sadly only the CSS engine and maybe some other parts is what they kept around when they offloaded Rust.
“the Rust ecosystem around browsers is growing” – in the beginning pretty much 100% of the ecosystem around Rust was browser oriented
Thankfully Servo is picking up speed again and is a great project to help support with some donations etc: https://servo.org/
Maybe it is my cynicism, but I always suspect such projects to be endless rabbit chasing. It is not about catching it.
Agreed. They said they ruled out rust in 2024, I believe the article they published was near the end of 2024 because I remember reading it fairly recently.
Seems like a lot of language switches in a short time frame. That'd make me super nervous working on such a project. There will be rough parts for every language and deciding seemingly on whims that 1 isn't good enough will burn a lot of time and resources.
think of it as axe sharpening rather than yak shaving
Interestingly editorialized title omits “with help from AI”.
That’s probably just the classic HackerNews title shortening algorithm at work.
I went to check if this was documented in the list of undocumented HN features on GitHub but it’s not.
There is an open PR (by simonw btw): https://github.com/minimaxir/hacker-news-undocumented/pull/4...
It's been confirmed by @dang many times before. I'm not sure if that's what cut the title here but I've seen it many times in the last 10 years.
I've seen it happen a couple times, iirc, it removes things after commas, and removes certain words as well
A LLM-assisted codebase migration is perhaps one of the better use cases for them, and interestingly the author advocates for a hands-on approach.
Adding the "with help from AI" almost always devolves the discussion from that to "developers must adopt AI or else!" on the one hand and "society is being destroyed by slop!" on the other, so as long as that's not happening I'm not complaining about the editorialized title.
I think we've come to the point when it should be the opposite for any new code, something in line of: "done without AI". Bein an old fart working in software development I have many friends working as very senior developers. Every single one of them including yours truly uses AI.
I use AI more and more. Goes like create me classes A,B,C with such and such descriptive names, take this state machine / flowchart description to understand the flow and use this particular sets of helpers declared in modules XYZ
I then test the code and then go over and look at any un-optimal and other patterns I prefer not to have and asking to change those.
After couple of iterations code usually shines. I also cross check final results against various LLMs just in case
Very happy to see this. Ladybird's engineering generally seems excellent, but the decision to use Swift always seemed pretty "out there". Rust makes a whole lot more sense.
Servo makes a whole lot more sense: https://servo.org/
Can you send a Gmail in Servo? No?
Ladybird is much further ahead in terms of actually rendering web pages that people use.
The biggest advantage to Servo was that it is written in Rust. This move begins to nullify that advantage as well.
Why exactly does Servo make more sense?
I hope they both succeed. But Ladybird is more likely to become a usable browser first.
This move was only to port a part of their JS runtime pieces to Rust, that's it.
Cool, that seems like a rational choice. I hope this will help Ladybird and Servo benefit from each other in the long run, and will make both of them more likely to succeed
I hope it does not -> because we don't more browser crossbreeding
> We previously explored Swift, but the C++ interop never quite got there
But Rust doesn't have C++ interop at all?
You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts if you want to.
Firefox is a mixed C++ / Rust codebase with a relatively close coupling between Rust and C++ components in places (layout/dom/script are in C++ while style is in Rust, and a mix of WebRender (Rust) and Skia (C++) are used for rendering with C++ glue code)
> You can do it via the C ABI, and use opaque pointers to represent higher-level Rust/C++ concepts
Yeah but, you can do the same in Swift
My understanding from a brief read of the Swift issue is that they kept running into bugs in the Swift compiler which, in practice, prevented them from doing the things that they ought to be do in theory. This went on for long enough, that they got fed up and abandoned Swift.
The Rust compiler is incredibly solid (across all target platforms), and while it's C/C++ interop is relatively simplistic, what does exist is extensively battle tested in production codebases.
>But Rust doesn't have C++ interop at all?
It also doesn't have the disadvantages of Swift. Once the promise of Swift/C++ interop is gone there isn't enough left to recommend it.
I’m curious what issues people were running into with Swift’s built in C++ interop? I haven’t had the chance to use it myself, but it seemed reasonable to me at a surface level.
There's a list of unsolved problems in this Ladybird issue, now closed because they dropped Swift: https://github.com/LadybirdBrowser/ladybird/issues/933
for example: "Swift fails to import clang modules with #include <math.h> with libstdc++-15 installed. Workaround: None (!!)"
Yeah, that part doesn't make much sense to me. IMO, Swift has reasonably good C++ interop[1] and Swift's C interop has also significantly improved[2] since Swift 6.2.
[1]: https://www.swift.org/documentation/cxx-interop/
[2]: https://www.swift.org/blog/improving-usability-of-c-librarie...
It may have in the future. Crubit is one effort in this direction: https://crubit.rs/
There is also cxx.rs, which is quite nice, albeit you have to struggle sending `std` types back and forth a bit
> albeit you have to struggle sending `std` types back and forth a bit
Firefox solves this partly by not using `std` types.
For example, https://github.com/mozilla/thin-vec exists in large part because it's compatible with Firefox's existing C++ Vec/Array implementation (with the bonus that it's only 8 bytes on the stack compared to 24 for the std Vec).
Luckily, ladybird also does not use `std` types
Rust has cxx which I would argue is "good enough" for most use cases. At least all C++ use cases I have. Not perfect, but pretty damn reasonable.
It's technically Rust -> C -> C++ as it stands right now
Porting the JS parser to Rust and adopting Rust in other parts of the engine while continuing to use C++ heavily is unlikely to make Ladybird meaningfully more secure.
Attackers are surprisingly resilient to partial security.
I hope that this opens the door for collaboration between Ladybird and Servo, no need to reinvent the wheel for core components.
I thought the entire point of Ladybird was precisely to reinvent the wheel?
This is also the case for Servo, so it makes sense to collaborate.
Servo has a distinct design goal that sets it apart from its predecessor within Mozilla and has already had offsprings that has made its way directly into Firefox.
Its purpose is not to reinvent everything. It’s not a hype project.
Unfortunately licence incompatibility may prevent that. Ladybird is BSD and Servo is MPL. This is also why there is only limited collaboration between Servo and the Rust GUI ecosystem.
Commenting about not reinventing the wheel on a Ladybird post is ironic
Based on the origins of Rust as a tool for writing the really thorny, defensive parsers of potentially actively hostile code for firefox, I have to imagine that another web browser is the most at-home place the language could ever be.
Is there any discussion on why D or even Ada was not considered? These languages have been around for long time. If they were willing to use llm to break the initial barrier to entry for a new language, then a case can be made for these languages as well.
They already made the mistake picking a niche language twice (first their own language, then Swift as a cross-platform language), why would you want them to make it a third time?
What kind of response is this? I was asking if there was any technical evaluation on other languages. And D and Ada are not niche. They have been battle tested in critical software.
Swift had/has some problems in the language itself. It's not because of the niche nature of Swift that was the problem iirc.
I don't think this is the right response because certainly a meaningful discussion could've definitely taken place and given how they were already open to other languages which was the reason why they picked Swift in the first place.
I remember Andreas video where he talked about how people used rust in his codebase and they were so happy but later it became very difficult whereas they found with swift that it became easier to manage. That was the reason why they picked swift that time.
Certainly their goal wasn't to pick a popular language (because if that's what you want use python or JS) but rather a language that was relevant to what they were building.
So if D and Ada were relevant or not, that's the main point of discussion imo.
I've dabbled a bit in Ada, but it wouldn't be my choice either. It's still susceptible to memory errors. It's better behaved than C, but you still have to be careful. And the tooling isn't great, and there isn't a lot in terms of libraries. I think Ladybird also has aspirations to build their own OS, so portability could also be an issue.
Not the case with spark. But I understand it requires writing lot of things from scratch for browsers. But I don’t think portability will be an issue with Ada, it is cross platform.
However, this is where d shines. D has a mature ecosystem. Offers first class cpp abi and provides memory safety guarantees, which the blog mentioned as a primary factor. And d is similar to cpp, low barrier for cpp devs to pick up.
There's no dynamic memory allocation with (100%) Spark. That's really limiting. You can to write "unsafe" code, but that has the same problems as Ada.
Probably contributing reasons? I imagine over time they will have a lot more Rust contributors than D or Ada.
Unfortunately a really good question gets downvoted instead of causing a relevant discussion, as so often in recent HN. It would be really interesting to know, why Ada would not be considered for such a large project, especially now when the code is translated with LLMs, as you say. I was never really comfortable that they were going for the most recent C++ versions, since there are still too many differences and unimplemented parts which make cross-compiler compatibilty an issue. I hope that with Rust at least cross-compilation is possible, so that the resulting executable also runs on older systems, where the toolchain is not available.
Unfortunately some folks do get bit sensitive on rust, that can be off putting.
But what I wanted to know was about evaluation with other languages, because Andreas has written complex software.
His insight might become enriching as to shortcomings or other issues which developers not that high up in the chain, may not have encountered.
Ultimately, that will only help others to understand how to write better software or think about scalability.
I personally think that people might've framed it as use Ada/D over rust comment which might have the HN people who prefer rust to respond with downvotes.
I agree that, this might be wrong behaviour and I don't think its any fault of rust itself which itself could be a blanket statement imo. There's nuance in both sides of discussions.
Coming to the main point, I feel like the real reason could be that rust is this sort of equilibra that the world has reached for, especially security related projects. Whether good or bad, this means that using rust would definitely lead to more contributor resources and the zeal of rustaceans can definitely be used as well and also third party libraries developed in rust although that itself is becoming a problem nowadays from what I hear from people in here who use rust sometimes (ie. too many dependencies)
Rust does seem to be good enough for this use case. I think the question could be on what D/Ada (Might I also add Nim/V/Odin) will add further to the project but I honestly agree that a fruitful discussion b/w other languages would've been certainly beneficial to the project (imo) and at the very least would've been very interesting to read personally
> which might have the HN people who prefer rust to respond with downvotes.
This completely misses the purpose of the downvoting feature, which is not surprising, since upvoting seems no longer to indicate quality or truth of the comment neither.
> rust is this sort of equilibra that the world has reached for, especially security related projects
Which is amazing, since Rust only covers a fraction of safety/security concerns covered by Ada/SPARK. Of course this language has some legacy issues (e.g. the physical separation of interface and body in two separate files; we have better solutions today), but it is still in development and more robust than the C/C++ (and likely Rust) toolchain. And in the age of LLMs, robustness and features of a toolchain should matter more than the language syntax/semantics.
> Rust does seem to be good enough for this use case.
If you compare it to the very recend C++ implementations they are using, I tend to agree. But if you compare it to a much more mature technology like e.g. Ada, I have my doubts.
> We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now.
The article fails to explain why. What problems (besides the obvious) have been found in which "memory-safe languages" can help. Do these problems actually explain the need of adding complexity to a project like this by adding another language?
I guess AI will be involved which, at this early point in the project would make ladybird a lot less interested (at least to me).
> What problems (besides the obvious) have been found in which "memory-safe languages" can help.
Why isn't that enough?
Browsers are incredibly security-sensitive projects. Downloading untrusted code from the internet and executing is part of their intended functionality! If memory safety is needed anywhere it's in browsers.
Rust was pretty much created to help solve security issues in browsers: https://en.wikipedia.org/wiki/Rust_(programming_language)#20...
> besides the obvious
Well, what else is there besides the obvious? It's a browser.
Even Chrome has started to adopt Rust due to recurring memory vulnerabilities.... that's a big enough reason.
You don't want a browser with a bunch of RCEs that can be triggered by opening a web page...
You do want a browser with RCE, but you want it to keep the it sandboxed. The hard part is executing the code safely
I guess you will need to wait for their Feb 2026 update.
I know he doesn't make live coding videos anymore, but it'd be cool if Andreas showed off how this worked a little more. I'm curious how much he had to fix by hand (vs reprompting or spinning a different model or whatever).
You can checkout the pull requests related to LibJS: https://github.com/LadybirdBrowser/ladybird/pulls?q=is%3Apr+...
What happened? It’s been awhile since I checked in but it seems he doesn’t work on serenity and doesn’t live stream anymore (and is now into lifting weights)
He got his serenity and at the same time ladybird browser started getting somewhere, so he separated it out and went full on with it. From what I know, he was working on browsers before at Apple, so it was like he got ready to return
My intuition is that they will convert to zig again when it stables. If it is possible to do it using LLM in 2 weeks for rust, then it would be the same for zig, too.
While rust is nice on paper, writting complex software in it is mentally consuming. You can not do it for a long time.
If they are looking for a memory-safe language, why would they convert to Zig?
If they do, it could be because safety is a gradient and one variable among many in software development, albeit a very important one when it comes to browsers.
If it is this easy, surely the trend is Rust output being an intermediate pass of the LLM super compiler. A security subset if you will (like other kinds of optimization), it will move from Rust specs to some deeper level of analysis and output the final executable. Some brave souls will read the intermediate Rust output (just like people used to read the assembler output from compilers) but the LLM super compiler will just translate a detailed English like spec into final executables.
Do you seriously think LLMs will not just spam unsafe blocks in it like they do with any task ever?
If this means we will get an independent state-of-the-art browser engine, I'm all for it.
IMV Servo is going to be the independent state of the art browser
I have my doubts it'll ever be "finished". Servo gives strong vibes of a project that will avoid performance hacks, because they're not nice/state of the art code. I have no evidence, it's just the energy I've picked up from it
Any word on how much more memory safe the implementation is? If passing a previous test suite is the criteria for success, what has changed, really? Are there previous memory safety tests that went from failing to passing?
I am very interested to know if this time and energy spent actually improved memory safety.
Other engineers facing the same challenges want to know!
None at all, the generated AST and bytecode are stated to be identical
If the previous impl had known memory safety issues I'd imagine they'd fix them as a matter of priority. It's hard to test for memory safety issues you don't know about.
On the rust side, the question is how much `unsafe` they used (I would hope none at all, although they don't specify).
It is entirely possible a Rust port could have caught previously unknown memory safety issues. Furthermore, a Rust port that looks and feels like C++ may be peppered with unsafe calls to the point where the ROI on the port is greatly reduced.
I am not trying to dunk on the effort; quite the contrary. I am eager to hear more about the goals it originally set out to achieve.
You can look: https://github.com/LadybirdBrowser/ladybird/pull/8104/files?...
It seems like it is used mostly for FFI.
Interesting in context of that some time ago Andreas said that they failed on porting TypeScript compiler from TypeScript itself to Go lang by using LLMs and they went with manual port https://youtu.be/uMqx8NNT4xY?si=Vf1PyNkg3t6tmiPp&t=1423
I wonder what is gained by this port though, if the C++ codebase already employed modern approaches to memory management. It's entirely possible that the Rust version will perform worse too as compilers are less mature.
"modern approaches to memory management" aren't enough for complete memory safety.
Maybe, but it's certainly possible to write memory safe code in C++. It may be more or less difficult, but it isn't typically the ONLY objective of a project. C++ has other advantages too, such as seamless integration with C APIs and codebases, idiomatic OOP, and very mature compilers and libraries.
That's a pivot, iirc they wanted to swift (I'm very glad they didn't do that). It's cool to see something like claude be useful for large scale projects like that
Good step. It will bring many more contributors.
Using LibJS with servo, when?
Were there any immediate benefits of this conversion, e.g. reduced memory use or lower CPU utilization?
Likely the opposite, as safe Rust has some extra safety checks for things like array bounds.
Fuck me. This is wild. Sorry for the potty mouth.
I'm not here to troll the LLM-as-programmer haters, but Ladybird (and Rust!) is loved by HN, and this is a big win.How long until Ladybird begins to impact market dominance for Chrome and Firefox? My guess: Two years.
Note that Firefox doesn't have market dominance. It is under 5% market share. That said I imagine Firefox users to be the most likely to make the jump. However, the web is a minefield of corner cases. It's hard to believe it will be enough to make the browser largely useful enough to be a daily driver.
Why do you think Firefox users would be most likely to make the jump? The main reason I see people give for supporting Ladybird is challenging the dominance of the incumbents. That's not really a great reason to switch from Firefox because, as you note, it doesn't have any dominance. And there's also an argument that splitting the non-Chrome market into two only increases Chrome's dominance.
From what I can tell from HN, Brave seems to be popular with those users who hate Google but for whatever reason hate Mozilla even more, and I suspect those will be the most likely users to switch.
I don't get it, and I don't have a dog in the C/C++ vs. Rust race. Ladybird has ~1200 contributors with a predominance of C++ contributions, followed by HTML, and with "other" lying at 0.5%.
That's a lot of people contributing.
How many of them will be less willing to contribute in the future, and less productive when they do if a sizable portion is in Rust? Maybe there'll be more contributions and maybe there'll be less. I don't know. If you've managed to develop a community of 1200 developers who are willing to advance the project why upset the applecart?
There is a flock of people yelling around that they'd contribute if it was Rust, but won't touch C++
I must admit to being somewhat confused by the article's claim that Rust and C++ emit bytecode. To my knowledge, neither do (unless they're both targeting WASM?) - is there something I'm missing or is the author just using the wrong words?
EDIT: bramhaag pointed out the error of my ways. Thanks bramhaag!
By 'Rust compiler' and 'C++ compiler', they refer to the LibJS bytecode generator implemented in those languages. This is about the generated JS bytecode.
Yes, I re-read again, and I think you are correct. Thanks!
Thanks! I was confused about this as well.
They're referring to LibJS's bytecode (the internal instruction stream of Ladybird’s JS engine), not to Rust/CPP output formats.
This is sort of hilarious if you think about it. The Firefox browser is completely written in Rust. Now Ladybird is a "human-directed ai" Rust browser. Makes you wonder how much of the code the two browsers will share going forward given llm assisted autocompletes will pull from the same Rust Browser dataset.
Probably not much: the requirement is exact equivalence of program inputs to outputs, and as such the agents are performing very mechanical translation from the existing C++ code to Rust. Their prompts aren't "implement X browser component in rust", they're "translate this C++ code to Rust, with these extra details that you can't glean from the code itself."
I wonder where did you get the idea that Firefox was all Rust. Made me curious.
It's like only 10% of Firefox is rust.
Only a small portion of Firefox is written in Rust. Apparently some of the most performant and least buggy parts are those in Rust, but again, only parts like the CSS engine.
https://github.com/mozilla-firefox/firefox Rust isn't even mentioned in languages used.
Something of a culture clash here ain’t it, albeit an imbalanced one.
Oooh noooo I will have to fork it before it is too late!
This will be another bad decision just like with Swift. From what I heard, Rust is notoriously bad at letting people define their own structure and instead beats you up until you satisfy the borrow checker. I think it'll make development slow and unpleasant. There are people out there who enjoy that, but it's not a fit for when you need to deliver a really huge codebase in reasonable time. I remember Andreas mentioning he just wanted something like C++, but with a GC and D would be absolutely perfect for this job.
Maybe, but will they have to fight with borrow checker for doing some other than (the very OOP) DOM components? They'll obviously use both for a long time in the future, so more functional places can get Rust, while more OOP places can benefit from C++
Nobody uses D
This is like the "real world" argument. Nobody uses that "in the real world", except well people that do.
Well, I do!?!! It's even faster than zoomer langs like Odin. You should try it.
Zoomer is a good word btw. I love zoomers.
And? Does it work? Because it does. It's a lot closer to C++ and you literally need like a week to start being productive and it's insanely flexible as a language. Nobody uses Swift also, but the additional problem with Swift was that it's entirely Apple-centric.
> Nobody uses Swift also
Yep, it was also a weird, not entirely pragmatic choice, even if it was well justified technically and all-in-all rational. D would be the same.
Entirely Apple-centric?
Someone will be put down like a dog.
Great! I can't wait they totally ditch C++
I guess the ETA will pushed back by a few years then?
By 2 weeks so far ;-)
Probably not unless using Rust present some particular challenge for this type of project. But having eaten this proverbial apple they would probably use AI more and more assuming they have a budget and in this case being less rich than C++ might not mean much for productivity
What are Rust programmers to do now that LLMs can port code to Rust??
It reads like a joke without a punchline
Rejoice?
Guess it will never come out.
I wouldn't mind if one result of this was a writeup on what patterns/antipatterns are there when converting code and concepts that used to be very aligned with C++-style OOP, deep inheritance and all that jazz, to what feels natural in Rust, and how you can rephrase those concepts without loss in the substance of what you need to do.
I guess it's a long way off, since the LLM translation would need to be refactored into natural Rust first. But the value of it would be in that it's a real world project, and not a hypothetical "well, you could probably just...".
Sigh agents keep killing all the passion I have for programming. It can do things way faster than me, and better than me in some cases. Soon it will do everything better and faster than me.
> Soon it will do everything better and faster than me
There is no evidence of that coming from this post. The work was highly directly by an extremely skilled engineer. As he points out, it was small chunks. What chunks and in what order were his decision.
Is AI re-writing those chunks much faster than he could. Yes. Very much so. Is it doing it better? Probably not. So, it is mostly just faster when you are very specific about what it should do. In other words, it is not a competitor. It is a tool.
And the entire thing was constrained by a massive test suite. AI did not write that. It does not even understand why those tests are the way they are.
This is a long way from "AI, write me a JavaScript engine".
Id put it as a example of a carpenter preparing their material with a lathe and circular saw vs one working with a handsaw and chisel.
Both will get a skilled craftsman to the point where thie output is a quality piece of work. Using the autotoools to prepare the inputs allows velocity and consistency.
Main issue is the hype and skiddies who would say - feed this tree into a machine and get a cabinet.Producing non-detrministic outputs with the operator being unable to adjust requirements on the fly or even stray from patterns/designs that havent been trained yet.
The tools have limitiations and the operators as well , and the hype does adisservice to what would be establishing reasonable patterns of usage and best practices.
Is a migration from language X to Y or refactoring from pattern A to B really the kind of task that makes you look forward to your day when you wake up?
Personally my sweet spot for LLM usage is for such tasks, and they can do a much better job unpacking the prompt and getting it done quickly.
In fact, there's a few codebases at my workplace that are quite shit, and I'm looking forward to make my proposal to refactor these. Prior to LLMs, I'm sure I'd have been laughed off, but now it's much more practical to achieve this.
Right. I had a 100% manual hobby project that did a load of parametric CAD in Python. The problem with sharing this was either actively running a server, trying to port the stack to emscripten including OCCT, or rewriting in JS, something I am only vaguely experienced in.
In ~5 hours of prompting, coding, testing, tweaking, the STL outputs are 1:1 (having the original is essential for this) and it runs entirely locally once the browser has loaded.
I don’t pretend that I’m a frontend developer now but it’s the sort of thing that would have taken me at least days, probably longer if I took the time to learn how each piece worked/fitted together.
It's the opposite for me, most of the time it's first rough pass it generates is awful and if you don't have good taste and a solid background of years of experience programming you won't notice it and I keep having to tell it to steer into better design choices.
I'm not sure 25,000 lines translated in 2 weeks is "fast", for a naive translation between languages as similar as C++ and Rust (and Ladybird does modern RAII smart-pointer-y C++ which is VERY similar to Rust). You should easily be able to do 2000+ lines/day chunks.
Yeah, it also a lot that the person doing the translation is the lead developer of the project who is very familiar with the original version.
I imagine LLMs do help quite a bit for these language translation tasks though. Language translation (both human and programming) is one of the things they seem to be best at.
Agreed, however, I'm quite sure 25,000 lines translated in "multiple months" is very "slow", for a naive translation between languages as similar as C++ and Rust.
2000+ lines/day chunks are 10 days for 20+k lines...
I'm aware. What I meant is this is a reasonable output for a 1:1 translation by hand, without LLM use.
"I will never be a world class athlete, so I play for the love of the sport."
Helps me.
Not sure why you'd get that from this post, which says it required careful small prompts over the course of weeks.
In the hands of experienced devs, AI increases coding speed with minimal impact to quality. That's your differentiator.
Despite the many claims to the contrary, agents can't do anything better than a human yet. Faster, certainly, but the quality is always poor compared to what a human would produce. You aren't obsolete yet, brother.
Dunno, that probably doesn't hold for webapps with backend as they are typically complete garbage and LLMs (even local ones) would give you about the same result but in 1 hour.
Look into platforms like Workato, Boomi, or similar iPaaS products, unfortunely it feels like those of us that like coding have to be happy turning into architect roles, with AI as brick layers.
It automates both the fun and the boring parts equally well. Now the job is like opening a box of legos and they fall out and then auto-assemble themselves into whatever we want..
Rather like opening a box of legos and reading them the instruction sheet while they auto assemble based on what they understood. Then you re-read and clarify where the assembly went wrong. Many times, if needed.
I am unsure if I can rationally justify saying this, but I am left with disappointment and unease. Comparable to when a series I care about changes showrunner and jumps the shark.
Maybe you're part of an anti-cult-cult?
Would be as bad as being in a cult.
Hate to tell you this, but it's cults all the way down. Plato understood this, and his disdain for caves and wall-shadows, is really a disdain for cults. The thing is, over the last 2300 years, we have gotten really good at making our caves super cozy -- much cozier than the "real world" could ever be. Our wall-shadows have become theme parks, broadway theaters, VR headsets, youtube videos, books, entire cities even. In Plato's day, it made sense to question the cave, to be suspicious of it. But today, the cave is not just at parity with reality, it is superior to it (similar to how a video game is a precisely engineered experience, one that never has too little signal and never has too much noise, the perfect balance to keep you interested and engaged).
I'm no mind reader, and certainly no anthropologist, but I suspect that what separates humans from other (non extinct) animals, is that we compulsively seek caves that we can decorate with moving shadows and static symbols. We even found a series of prime numbers (sequences of dots, ". ... ..... .......") in a cave from the _ice age_. Mathematics before writing. We seek to project what we see with our mind's eye into the world itself, thereby making it communicable, shareable. Ever tell someone you had a dream, and they believed you? You just planted the seed for a cult, a shared cave. Even though you cannot photograph the dream, or offer any evidence that you can dream at all.
The industrial and scientific revolutions have distanced our consciousness from this idea, even as they enabled ever more perfect caves to manifest. Our vocabulary has become corrupted and unclear. We started using words like "reality", and "literally", and "truth", when we mean the exact opposite.
The conspiracy theorists and cultists, are just people who wandered into a new cave, with a different kind of fire, and differently curved walls, and they want to tell people from their old cave that they have found a way out of the cave into reality -- they do not yet realize (or do not want to accept), that they live in a network of caves, a network of different things in the same category.
During the early 2020s, we did a lot of talking about the disappearance of "consensus reality". This is scientific terminology mapped over the idea of caves and cults. You can tell, because the phrase is an oxymoron. It is not reality, if it requires consensus. It is fantasy, it is fiction, it is a dream. The cave has indeed become so widespread that we even _call_ it reality.
If you speak language, and read words, you are participating in a cult (we even call caves that had a kind of altar in the center a cult -- in Eurasia, there was a cave-cult called _the cult of the bear_, which had a bear skull placed in its center during the last ice age, and I would not be surprised if people spoke to it, with the help of hallucinogens). The only question is whether the cult is nourishing you or cannibalizing you.
To the person you are responding to (user ocd): your cave (ladybird, your hypothetical tv-series), no longer nourishes you like it once did. Maybe find a new cave, build a fire in it. Unlike a television series, you can fork a code base. You make it into the perfect cave, just for you. And if another person likes this cave, chooses to sit by the fire with you, well, now you have a cult.
Chatbot-translated code which is C++ foisted onto Rust? I will respectfully roll my eyes.
Ah, but I see they actually haven't done that to most of their code, so maybe it's just a bit of pandering to the hype and fashion.
I feel similar about the potential of this technique and have heard this from other C++ developers too.
Rust syntax is a PITA and investing a lot of effort in the language doesn’t seem worth the trouble for an experienced C++ developer, but with AI learning, porting and maintenance all become more accessible. It’s possible to integrate Rust in an existing codebase or write subparts of larger C++ projects in Rust where it makes sense.
I was recently involved in an AI porting effort, but using different languages and the results were fine. Validating and reviewing the code took longer than writing it.
Some time ago I was perma-banned from the Ladybird github repository. One can say it is warranted, or not (people have their own opinion; I completely disagree with their decision). Now that this has happened, I can speak more freely about Ladybird.
Naturally this will be somewhat critical, but I need to first put things into context. I do believe that we really need an alternative to Google dominating our digital life. So I don't object that we need alternatives; whether Ladybird will be an alternative, or not, will be shown in the future. Most assuredly we need competition as otherwise the Google empire moves forward like Darth Vader and the empire (but nowhere near as cool as that; I find Google boring and lame. Even skynet in Terminator was more fun than Google. Google just annoys the heck out of me, but back to the topic of browsers).
So with that out of the way ... Ladybird is kind of ... erratic.
Some time ago, perhaps two months or three, Andreas suddenly announced "Swift WILL BE THE FOREVER FUTURE! C++ sucks!!!". People back then were scratching heads. It was not clear why Swift is suddenly our saviour.
Ok, now we learn - "wait ... swift is NOT the future, but RUST is!!!". Ok ... more head-scratching. We are having a deja-vu moment here... but it gets stranger:
"We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story."
and then:
"I used Claude Code and Codex for the translation. This was human-directed, not autonomous code generation"
So ... the expertise will be with regards to ... relying on AI to autogenerate code in ... Rust.
I am not saying this is a 100% fail strategy, mind you. AI can generate useful code, we could see that. But I am beginning to have more and more doubts about the Ladybird project. Add to this the breakage of URLs that are used by thousands or million people world-wide (see the issues reported on the github tracker); or also the fact that, once you scale-up and more and more people use ladybird, will you be able to keep up with issue trackers? Will you ban more people?
In a way it is actually good that I am no longer allowed to make comments on their repository because I can now be a lot more critical and ask questions that the ladybird team will have to evaluate. Will ladybird blend? Will it succeed? Will it fail? Yes, it is way too early to make an evaluation, so we should evaluate in some months or so, perhaps end of this year. But I am pretty certain the criticism will increase, at the least the moment they decide to leave beta (or alpha or whatever model they use; they claimd they want a first working version in this year for Linux users, let's see whether that works).
i rememebr seeing interviews saying rust is not suited for this project because of recursion and dom tree. how they tested multiple languages and settled on swift. then they abandon swift and now they shift towards rust.
this entire project starts to look like "how am i feeling today?" rather than a serious project.
So Swift didn't turned out like they imagined and Rust is just the next best alternative to that failed vision using Swift.
So far this is the first and only shift
They were doing their own custom language before Swift.
didn't know
> The browser and libraries are all written in C++. (While our own memory-safe Jakt language is in heavy development, it’s not yet ready for use in Ladybird.)
https://awesomekling.github.io/Ladybird-a-new-cross-platform...
only thing I could find - has it been actually used in Ladybird after all?
From the link it seems that Ladybird architecture is very modular, in this case LibJS is one of the subsystems that has less external dependencies, said that they don't need to migrate everything, only the parts that makes sense.
Yes, i understand that in a personal project, but they have investors behind them.
They adopted Rust for LibJS, not the browser and its engine.
Completely ignoring the Rust aspect, I’m disappointed that two weeks were spent on something that isn’t getting Ladybird to a state where it can be used as a daily driver. Ladybird isn’t usable right now, and if it was usable, improving the memory safety would be a commendable goal. Right now I just feel like this is premature.
ÄNTLIGEN!
10x programmers become 100x with the power of AI. Not an unexpected outcome. But the world is going to suck for ordinary people. 10x programmers will gladly embrace this future become it empowers them more.
We have to accept this reality and act accordingly.
Yes you will downvote me. I have accepted this reality and will hack on my own projects in the woods or in a cave, on my own terms.
------ I wrote the following after a bit of thought:
It was with a heavy heart that I learned that the author of "Ladybird Browser" managed to convert the JavaScript compiler from C++ to Rush in 2 weeks, with the help of AI. It was a mix of awe and depression. 10x programmers leveraged AI to achieve a great feat in only 2 weeks, passing all tests. This was not a surprise to me as we all saw the writing on the wall a couple of years ago, but reality hit hard still. I'm a very average programmer, a very average person, and perhaps worse than the median in many perspectives. The gap between an ordinary people, with a 10X whatever, is getting much larger due to the evolution of tools. No, I do not believe AI can ever replace humans completely, at least not in the near future. But the point is, we the ordinary people are getting less and less relevant. The gate of professional work, the gate from which we drink satisfaction by knowing that many are using our work, is closing. I have no ill feeling towards any 10X programmers who is enjoying this. They are much better than me. They have earned it. They deserve it. And I deserve it, too, to have allowed myself to be mediocre. Being mediocre is a lesser evil then and now, but is a major sin in the future.
I soaked myself in "Crypto-zoologist" (Disco Elysium) to savor the moment. It is fine. Perhaps I will never get a professional job as a system programmer, and this is fine. I'll go into the woods, stay in a cave, and hack on my own projects, on my own terms. I do no care about the end products, and neither do I care whether people use them at all. Programming is a ritual to dispel the daemons from my soul, and I must keep doing it, until the last moment.
developers with good taste like Andreas Kling will be able to design entire OSes with coding agents
> design entire OSes with coding agents
They ported an existing project from CPP to Rust using AI because the porting would've been too tedious. I don't think they're planning on vibe coding PRs the way you're imagining.
He already did
Yeah, some weekends ago I tried writing a cross-platform browser without any Rust crates, this weekend I made my own self-hosted compile to Rust Clojure-like lisp, maybe next weekend attempting to create a OS that uses my language to run on bare-metal would actually be a challenge. Thanks for the inspiration :)
This comment raises an interesting question: Would Serenity OS have brought Andreas the same kind of serenity had it been developed with AI? Open candid question.
I don't think so because if I remember it correctly, Andreas suffered from alcoholism and serenity prayer helped him to go on the right path and iirc he honored that and created an os named serenityos.
God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.
(courage to change the things I can;):- I think that this line must've given Andreas the strength, the passion to make the project reality.
but if AI made the change. Would the line be changed to courage to prompt an all powerful entity to change the things I asked it to.
Would that give courage? Would that inspire confidence in oneself?
I have personally made many projects with LLM's (honestly I must admit that I am a teenager and so I have been sort of using it from the start)
and personally, I feel like there are some points of curiosity that I can be prideful of in my projects but there is still a sense of emptiness and I think I am not the only one who observes it as such.
I think in the world of AI hype, it takes true courage & passion to write by hand.
Obviously one tries to argue that AI is the next bytecode but that is false because of the non deterministic nature of AI but even that being said, I think I personally feel as if the people who write assembly are definitely likely to be more passionate of their craft than Nodejs (and I would consider myself a nodejs guy and there's still passion but still)
Coding was definitely a form of art/expression/sense-of-meaning for Mr Andreas during a time of struggle. To automate that might strip him of the joy derived from stroking brush on an empty canvas.
Honestly, I really don't know about AI the more I think about it so I will not pretend that I know a thing/two about AI. This message is just my opinion in the moment. Opinions change with time but my opinion right now is that coding by hand definitely is more meaningful than not if the purpose of the project is to derive meaning.
Cool project, but I'm a bit curious hearing how the rest of the project feels about this?
I'm not sure how I'd feel if I woke up and found a system I worked on had been translated into an another language I'm not neccessarily familiar with. And I'm not sure I'd want to fix an non-idiomatic "mess" just because it's been translated into a language I'm familiar with either (although I suspect they'll have no problem attracting rust developers).