I gave up on timecapsule because performance has gotten worse and worse year over year. I replaced it with a periodic rsync backup to a NAS that is in turn backed up in other ways
The upside is that it's dead simple when it comes to how the backup is stored. In 10 years time, having files in a filesystem will still work, but I imagine restoring an old time machine backup will require quite a bit of work
If you wanted to you could probably figure out how to do apfs snapshots before rsyncing
If you exclude pointless stuff like browser caches it's also pretty performant compared to timecapsule, and the transfer is properly encrypted
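For the curious, a minimal sketch of that kind of job (the paths, excludes, and NAS target are assumptions, not recommendations; on APFS, a `tmutil localsnapshot` beforehand could give a more consistent source):

```shell
#!/bin/sh
# Sketch: mirror a home directory with rsync, skipping cache churn.
# backup_home SRC DEST copies SRC into DEST; in real use DEST would be the
# NAS, e.g. backupuser@nas.local:/volume1/backups/$(hostname).
backup_home() {
  rsync --archive --delete \
    --exclude 'Library/Caches/' \
    --exclude '.Trash/' \
    "$1/" "$2/"
}
```

`--archive` preserves permissions and timestamps, `--delete` mirrors removals so the backup tracks the source.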
1. When I benchmarked it, AFP was significantly faster than SMB. Both with SMB2 and SMB3. Even when transport encryption was turned off.
2. On SMB2+, symlinks created by the client are not real symlinks. They're "Minshall+French" links which only look like symlinks to other SMB2+ clients. To the server and NFS mounts they look like flat files with the target path encoded in them.
3. It exposes a different precision for certain timestamps. Software that uses this metadata to decide whether a file needs to be updated will see almost every file as needing a resync.
It's been a year or two since I checked the status of these. The situation may have improved since last I looked.
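For what it's worth, on point 3 rsync has a real flag for tolerating coarse timestamp precision; a sketch (the one-second window is an assumption about the share's granularity):

```shell
#!/bin/sh
# Sketch: sync while treating mtimes within 1 second as equal, so files on a
# share exposing coarser timestamp precision aren't endlessly re-copied.
# -r recurse, -t preserve modification times.
sync_tolerant() {
  rsync -rt --modify-window=1 "$1/" "$2/"
}
```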
Yeah I recently migrated my NAS and took the opportunity to switch from AFP to SMB for my Time Machine backups. There were so many problems like the ones you describe that I gave up and went back to AFP. Looks like I'm going to be forced to spend a weekend with Claude figuring this out.
It's been more than a decade since they replaced AFP with SMB as the default protocol for file sharing, and they've been warning that AFP would be going away for years.
Yeah, but AFP still performs way better than SMB on the Mac for any fast networking, like 10GigE and faster. Apple's SMB stack is a disaster, and thoroughly unprofessional. NFS is faster too, but unfortunately the Finder, being the rat's nest of bugs it is, often has trouble with NFS shares.
macOS 26 still has a hard kernel panic if you try to mount an NFS share with krb5 auth but don’t have a valid Kerberos ticket. 100% reproducible.
Every OS update I try mounting with no ticket, get a panic, fill in the error reporting dialog with a nice “hope you had a nice holiday break!” message or whatever is seasonally appropriate, with the same simple steps to reproduce. It’s just kinda comical at this point.
My guess is kerberized NFS has absolutely zero users within Apple, and it’s likely hard to find an engineer there who even knows what Kerberos is anymore.
I used to work at Apple and I’d have filed a radar for it but now I’m just a customer so I’m powerless.
It's been a while since I worked at Apple, but back in the day the entire OS X Server team made extensive use of kerberized NFS shares for moving around large files...
...the last version of Server shipped in 2021 (and the last real version shipped almost a decade before that).
Did they ever work? No, seriously. I've had a couple of them, and the few times I really could have used them I discovered they were the worst backup solution I've ever had the misfortune to deal with: slow, very hard to use beyond their primary integration with the OS (which isn't good to begin with), no good way to keep an eye on how they're doing (what's actually backed up, whether it's still there), and performance worse than any hand-rolled solution I've ever used.
They never supported it properly in the first place and then it just meh'ed out of existence.
I hope "the new Apple" is going to take software seriously.
Where "new" in this case could be a NAS running Samba from 2011? Samba added official support for Time Machine much later, but I think it was possible on earlier versions with some extra steps.
That's when Samba gained official, easy-to-use support for Time Machine. I'm pretty sure it was possible long before then, IIRC by changing a setting on the Mac to allow selecting unsupported network volumes.
I don't recall when I stopped running netatalk on my NAS and switched to pure Samba, but I think it was before 2018.
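If memory serves, the Mac-side setting mentioned above was this historical defaults key (macOS-only and long obsolete; treat it as an unverified recollection):

```shell
# Let Time Machine select "unsupported" network volumes
# (pre-official-Samba-support era tweak)
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```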
SMB1 has major security issues, but even ignoring those (which a lot of people on private home networks shouldn't be too worried about), it's also slow as hell on macOS.
philosophically, it depends on who you are. If you're Sam Altman or Vitalik Buterin, yeah, your private home network should be considered to be under attack by hostiles trying to steal from you, but for the rest of us, the NSA isn't going to make an international incident trying to get at your Plex server.
Compared to other "changes" Apple usually makes, this one is realistic.
Dropping deprecated networking protocols that are worth upgrading away from (meaning: if your clients are already mostly newer macOS machines on Apple's stack, update your servers).
I just hope they won't break anything they don't need to break (which is usually the bigger concern), and that they won't drop other things that make sense to keep until they're transitioned properly (OpenGL, for example).
Although the Time Capsule is more than a decade old, it serves nicely with Time Machine (automatic backups). Sad to see that going away permanently on Apple Silicon.
"Dropping support for things just because they are old" is typical commercial software behavior. I can run the latest Linux kernel and still have access to an internal floppy disk drive if I wanted to, yet billion dollar companies can't seem to manage to support 10 year old stuff.
I still am sore from when I "upgraded" macOS and suddenly support for my 1080i TV was gone. Yesterday it worked fine, today it's gone. All because they can't be bothered to maintain a code path.
With closed source IP, every bit of support, from bug fixes, to feature requests, to compatibility fixes to integrate with newer mainline/foundational tooling, costs money.
With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
AFAIK, Linux has a policy that any change you make must not break existing kernel features, and if it does, you have to fix them yourself.
With that said, kernel maintainers have recently indicated that some unused subsystems are likely to be removed soon, as AI is now finding (real) security vulnerabilities in them that nobody is willing to fix.
> The economics make the reasoning obvious, though.
Looking through Apple’s financial statements, they theoretically could support these old systems. I’m not saying a cut doesn’t make sense, but just that economics-wise they could keep one guy for it
There's somewhere in the ballpark of 166,000 employees at Apple, just unfathomable scale [1]. It is not unreasonable to ask that someone specific is responsible for each particular small feature and ensuring it keeps working. Trying to apply an economic analysis to such a "free as in beer" operating system does not seem to work well. Consider the question of "how many small holes can you have in your wooden sailing ship"?
> With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
And that increasingly gets difficult to do. i386 support went down the drain in the kernel in 2012, i486 is probably going down the drain as well this year [1] and soon-ish another bunch of really really old stuff will go as well because it isn't maintained [2] - good luck finding someone still running IPX networks or ISDN hardware.
Ideally, at a certain point, you'd have some sort of upstream FLOSS project where you could let John Q. Public do that sort of low-level, maintenance-only stuff, while the proprietary "value adds" are closed source, until it becomes financially attractive to FLOSS them.
IIRC, that could exist for macOS in the form of Darwin.
It's my understanding that those are (mostly?) devices where they legitimately have reason to believe there are zero users. In particular, there's a pattern where someone will discover that Linux has a driver that hasn't actually worked for a long time, and nobody's complained, so then they remove it.
I'm not suggesting they keep it all... just ironic as a statement considering Linux is literally removing a bit lately... <= 486, the bus drivers for mice, etc.
I'm mostly okay with cleaning out a lot of legacy and unsupported devices. It may not be great for people who want to support really old hardware, but they're most likely stuck on older versions for other reasons anyway.
I don't think it is ironic, though; Linux isn't "Dropping support for things just because they are old", it's dropping unused things when they cause code quality problems. That's rather different than features being dropped because the vendor doesn't want to bother supporting them even though they still worked and have active users.
Features being dropped because nobody wants to support them is a prominent feature of free software. That's part of "no warranty". If it does bother you, you're supposed to step up to support it yourself, or pay someone to.
Okay, but that's the exact opposite of what we're discussing here? Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them. Meanwhile, macOS, developed as a commercial product and with a much weaker showing of open source or even source availability, is dropping features because Apple doesn't want to support them.
> Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them.
I disagree. They are dropping support because nobody is maintaining them. There may very well be people still using these features, but they haven't been motivated or aren't properly skilled to offer to maintain them going forward, and haven't motivated some other skilled person via payments.
Rather, the core difference is that Apple does not offer a way to have external people take over providing support.
If anybody would care to keep these drivers up, it would be easy to revive them as kernel modules. It's not that Linux is going to lose an upstream interface to publish events from a bus mouse.
Support for 486 is another thing, but, frankly speaking, running a modern Linux kernel on a 486 makes no sense, either from a practical or a preservationist/museum perspective.
Just this week we've seen Linux talking about dropping support for some older hardware precisely because attacks against it were becoming easier with LLMs.
Do you have a detailed source for this? I want to read more about it.
Because I noticed my old Core 2 Quad PC with an Nvidia 8600GT, which my parents use as their email and Facebook machine, doesn't boot with any Linux newer than kernel 6.1, even though I can get Windows 11 to boot on it.
So the myth that "Linux is great for old PCs" highly depends on what hardware you have.
Sounds like an Nvidia driver module issue more than anything else. If I had to guess, simply removing the Nvidia module should fix that and still get you video through one of the various fallback paths (nouveau, etc.)
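If that guess is right, blacklisting the proprietary module so nouveau can bind is usually enough; a sketch assuming a Debian/Ubuntu-style system:

```shell
# Stop the proprietary nvidia modules from loading; nouveau takes over on reboot.
printf 'blacklist nvidia\nblacklist nvidia_drm\n' | \
  sudo tee /etc/modprobe.d/blacklist-nvidia.conf
sudo update-initramfs -u   # rebuild the initramfs (Debian/Ubuntu assumption)
```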
Ok, what do you suggest? Every feature ever written should be supported in perpetuity even if 3 people are using it? Clearly you didn't think this through. Should 2026 computers have an ISA interface as well?
Supporting old hardware and software has a substantial cost that only grows exponentially. Companies exist to print money, not to cater to the smallest niches.
It would be great if they could support things, but I most definitely understand why they don't.
Really? Like actual internal floppy drives, and not just USB floppy drives (which even Windows still supports)?
I actually wouldn't expect macOS to support actual floppy drives since the OS's list of supported devices doesn't include any that shipped with floppy drives. The fact that I cannot install the latest macOS on any devices older than 2019 is a related, but separate problem.
In this case, what would internal floppy drive mean? The last Macs with floppy drives (I think Old World G3s?) used a custom Apple controller, integrated into the chipset, with a bespoke 20-pin cable.
A USB floppy drive behaves almost identically to a USB hard drive: yet another SCSI block device. The cost of keeping support for them is minimal.
This is very different from legacy PC floppy drive controllers which spoke a completely different protocol, which was very complex and full of footguns
Legacy floppy controllers also had various legacy features almost nobody used, like soft deletion of sectors (IBM added this in the 70s for use with primitive database systems), or attaching tape drives using the floppy interface (nowadays if you buy a brand new tape drive, the interface options are SAS or Fibre Channel)
> There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
> "Dropping support for things just because they are old" is typical commercial software behavior.
You are deluding yourself if you think open source folks are better. You can't compile and run a modern version of GCC on Solaris 10 on SPARC, for example. And we just had a story here last week about the removal of bus mouse support. It's only a mild exaggeration to say that lots of folks will check the commit activity on GitHub, and if a project doesn't have commits this week it should be banned from the internet and the universe.
Then you have the problem that many dev tools are not forward compatible. CMake is a huge issue: an Ubuntu system from 2020 has CMake on it, but it won't build many projects released in recent years because their CMake files require a newer version.
CMake is a bad example, you can build latest CMake and run it on Debian Jessie. It will work perfectly. CMake is the thing you can build on really old compilers.
Open source is better because if you need the device driver then you can step up to maintain it yourself. It doesn't mean someone else will magically do it for you. I've used devices with very obscure incantations to get some random person's hack to run on Linux that worked natively on Windows.
It may not be the easiest surgery in the world, but you can replace the hard drive in a Time Capsule. You'll probably want to replace the power supply too after this much time
Wasn't it capped at 3TB? Is the drive swappable to something bigger? They discontinued them in 2018; the Wi-Fi in them is old, and it's a single disk (no RAID). Better to just pick up a multi-drive NAS or use cloud backups. What we should be asking for is Time Machine backends for cloud providers.
It's not "officially" supported, but iFixit has a guide for swapping the drive on a time capsule. I used mine with a 4TB drive for years with no trouble.
My old trusty ReadyNAS should still work, I think... probably. It supports SMB for Time Machine and SMB3 generally. If it doesn't, I might finally be pushed onto a NAS that isn't discontinued.
From a risk assessment standpoint, I’ve seen my Time Machine backups corrupted much more frequently than I’ve experienced drive failure. Happened with both my Time Capsule and then my Synology RAID.
It’s a “nice to have” automatic backup, but not a primary backup destination for me.
"...if you have an Apple silicon Mac and AFP support is dropped from macOS 27, that would leave you unable to upgrade without replacing your network storage."
How big is this market? I'm not saying vibe code a product, but...
That "replacement" is not always full-on hardware.
I have colleagues who are running AFP on BSD for continuous backups on their systems, and they have to reconfigure something new to be able to continue backing up their systems.
One of my COVID projects was to set up a networked Time Machine backup on Raspberry Pi.
Every single one of the blogspam sites (lifehacker, howtogeek, etc.) told you to use AFP/HFS+/Netatalk. I had so many problems with this. Time Machine would work well the first few times and then slow to a crawl. If there was a power outage, look out. The whole thing would be corrupted. It wasn't the network. FTP and scp worked just fine.
Eventually I found one blog that told you how to do it with SMB and ext4. It was from that site that I learned how maligned AFP and HFS+ really are. SMB/ext4 worked like a charm. Six years later and not a single hiccup.
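For reference, the Samba side of such an SMB/ext4 Time Machine share looks roughly like this (the `fruit:*` options are real vfs_fruit settings available since Samba 4.8; the share name, path, and user are assumptions):

```ini
[timemachine]
    path = /srv/timemachine
    valid users = backup
    read only = no
    # vfs_fruit provides the Apple (AAPL) SMB extensions Time Machine expects
    vfs objects = catia fruit streams_xattr
    fruit:time machine = yes
    fruit:time machine max size = 1T
```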
They are compatible with netatalk though. The project split between version 2 and 3, but in recent releases they folded them back into a single thing. Current netatalk releases support all versions of AFP.
Relevant to the discussion is that the project comes with an AFP client as well. I have no experience with the client but I've used the Netatalk server for more than 15 years.
So I should have to e-waste my printer, scanner, and wireless card reader that only exist on my LAN, and that I connect to via a web interface just because… reasons?
On an unrelated note, I use Time Machine and I’m surprised at how unpolished, not to say downright buggy, all the animations are. They used to look magical, but now they are a mess of elements popping on and off and things moving and then vanishing the next frame and so on. It looks like they kept changing Finder and Time Machine didn’t keep up; they kept fixing the bare minimum to have it compile and nothing more.
Even the new app launcher takes 1-2 seconds to draw a bunch of icons. Scrolling is also choppy. This even happens on their newest machines. How is this possible in 2026?
If you have a legacy Time Capsule you'd rather not e-waste, you can try this out. Note that this is very much beta quality software, so don't expect it to work on all configurations.
My app launcher loads as soon as it's triggered (four fingers swiped in). There is a weird 5ms glitch on the zoom-in animation, but otherwise it loads within a few ms, and scrolling is smooth. I'm on an M2 MBA, macOS 26.3.1.
Edit: don't take this as me saying I like the current state of macOS. There are plenty of weird edge cases I wish they'd fix, but on the whole the OS works fine for me.
For me the launcher itself loads fast, but it takes 1-2 seconds to show the icons. And when I scroll down, it often doesn't draw the icons fast enough.
What "walled garden"? The Mac-only apps aside, what is there that you couldn't get on Windows (and most of it even on Linux), either the same thing or a zero-switch-cost subscription (it's not like you need to rebuy anything to go from Music to Spotify, for example)?
iCloud? You can use Google Drive or Dropbox or whatever MS calls theirs.
Apple Music? Pretty sure it plays at both.
Most major apps are cross platform (Adobe, Microsoft and such), or Electron based.
Syncing with your iPhone? You can do that from Windows and Linux as well. Airpods? Work with Android and Windows too.
You didn't read what I said. I said MacOS IS a monopoly in the Apple ecosystem.
Apple users dissatisfied with how MacOS is changing, as the one I was replying to, have nothing else to switch to without uprooting themselves out of the Apple ecosystem altogether, which most don't do but just put up with it.
The Mac isn’t a monopoly, but choices for desktop operating systems are indeed limited. I use macOS, Windows, and Linux on a regular basis. The only one that’s improving is the Linux ecosystem. I prefer macOS to Windows, but macOS is not as polished in 2026 as it was in 2016 or especially in the Snow Leopard era.
Originally, it was "solved" because computers were the only thing Apple sold. They couldn't afford a Lisa without successes like the Apple II.
Now, Apple's incentives have changed. The App Store alone makes multiple times more money in a year than annual Mac and iPad sales combined. The OSes for these products are decidedly back-burner so Apple can focus on expanding Apple TV's IP library and lobbying for Apple Pay. Ternus won't be your savior.
John Ternus says Apple has ‘so much’ opportunity to expand services
A couple of revisions in Time Machine was just fine.
The UI was cute and fun if you wanted an older revision of a single file (especially since you could see previews of the file as you warped backwards).
However, importantly, the snapshots were available in Finder itself so you could browse through the files you wanted and retrieve them.
The worst feature of Time Machine is how it takes over every single display you have. Even though it only shows content on one screen, it feels the need to completely black out the others.
Classic Apple engineering. I would guess there is technically a "single responsible individual" assigned to Time Machine, but they cover the whole product, so the UI component falls by the wayside as they work on other products or on the low-level portion.
The "quality" Apple delivers is by now a complete joke. It's been going downhill for over a decade, and it has never stopped.
It's like that because people are still buying. Even for the ridiculous prices Apple asks for.
So why would Apple actually care? They get away with this "quality", so from a business standpoint there is simply nothing that needs investments or even just attention.
It's a race to the bottom. Like everywhere else. That's simply how the system which people created works.
- 3rd-party devices are often unreliable. Not directly Apple's fault, but the lack of a certification process hurts
- SMB extensions: in order for an SMB server to support Time Machine, it must support Apple's AAPL extensions to SMB (my understanding of this may be a bit incorrect)
- Network device connecting is separate from Time Machine device connecting. This causes an inconsistent UX.
- Not possible to browse a backup. You can only view a file or folder's backups over time. In other words, you can scroll through time but you can't browse a single backup (a point in time). This requires 3rd-party tools like BackupLoupe
You can't turn it on without an external drive attached, even though it saves local backups. It works if you mount a disk image and then point TM to it with the CLI.
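The disk-image trick, sketched out (macOS-only commands; the size, name, and paths are assumptions):

```shell
# Create a growable sparse bundle, mount it, and point Time Machine at it.
hdiutil create -size 500g -type SPARSEBUNDLE -fs APFS \
  -volname "TM Backup" "$HOME/tm-backup.sparsebundle"
hdiutil attach "$HOME/tm-backup.sparsebundle"
sudo tmutil setdestination "/Volumes/TM Backup"
```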
It’s more tangential than unrelated. It’s how conversation naturally flows, and this is a discussion board. No need to fire up a new post.
On another tangential note: you’re insufferable. If you’re like this in the real world, I can’t imagine you’ve got many people wanting to hold a conversation for very long.
I’m reminded of that time 10 years ago when Apple rewrote parts of its networking code (discoveryd/mDNSResponder), and it caused so many issues they had to revert the change.
I originally added a different title: Apple is dropping AFP/TimeMachine support in macOS 27.
It seems like it somehow got overwritten with the original title of the post.
Nevertheless, knowing Apple so far, unless _some_ large-enterprise~y customer comes and objects, they will drop the support. We already know Intel support is being dropped. Why not clean up the rest of the things from the kernel and the userspace?
When I saw the headline I briefly allowed myself to hope that DNS settings would no longer be set universally (requiring manual intervention when switching networks if not using DHCP), but of course it's nothing useful, only "Apple is breaking stuff because they can".
>Apple made SMB its primary file-sharing protocol in OS X 10.9 Mavericks, over 12 years ago…
…and yet SMB support in macOS remains slow and buggy to this day. I tried all combinations of server-side settings and obscure plist tweaks to make SMB navigation and search work as fast as they do on my Linux machine out of box before giving up. It is very obviously not a priority for their services revenue, so there’s no incentive for fixing any of the long standing problems.
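For context, the "obscure plist tweaks" usually mean client-side SMB settings in /etc/nsmb.conf; a hypothetical sketch (the keys appear in the nsmb.conf man page, but the values and whether they help are assumptions that vary by server):

```ini
# /etc/nsmb.conf -- client-side SMB tuning on macOS (sketch, not a recommendation)
[default]
# Skip per-packet signing on a trusted LAN (trades integrity checks for speed)
signing_required=no
# Disable directory change notifications, which can stall Finder on big shares
notify_off=yes
```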
> SMB support in macOS remains slow and buggy to this day. I tried all combinations of server-side settings and obscure plist tweaks to make SMB navigation and search work as fast as they do on my Linux machine out of box before giving up. It is very obviously not a priority for their services revenue
That's where my thoughts went, too. I can usually make SMB "better" but not "great", and it's annoying to have to look up and apply the tweaks and still not have things optimal. Just in case, IIRC I find this the most useful:
I found something fun last week--- Apparently if you use Adobe tools, there is a sync plugin they install for finder that can cause big issues with SMB shares. Might help you if you have that!
Apple has its own implementation of SMB in macOS, and it's one of the worst out there: dropped connections, failure to re-establish connections automatically after sleep, and performance issues.
Why they didn't keep Samba (licensing, probably) is beyond me.
Yeah, can't remember the last time I even bothered with SMB because it's so buggy. Usually I don't need filesystem behavior, I'll just push/pull files over SSH.
I used to do that a lot in some old versions of OS X, but then MacFUSE got abandoned and picked up as osxfuse, then that broke then got fixed repeatedly with several Mac updates, and I gave up.
NFS works way better than SMB, but the Finder is not without its troubles. Sometimes it will take 10 minutes to display a folder for no discernible reason.
The Finder is really a horrible piece of sh*t software: slow as hell, doesn't provide the most basic information[1], and, of course, doesn't work properly when browsing network shares, whether SMB or NFS.
[1] Virtually all common file browsers (Windows Explorer, GNOME Nautilus, KDE Dolphin) display at all times the number of files in the current folder and their size, and the number of files selected and their size; also, all but the Finder have a "recent files" section that actually contains the latest files used, while the Finder displays a seemingly random selection of recent files, but never the most recently used ones.
With the exception of summed size of selected items, the Finder has all of that. Help yourself to the "View->Show Status Bar" menu option. Also, "View->Show View Options->Calculate All Sizes" to show storage size for directories.
You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.
AFP and Time Capsules add attack vectors to the OS, which can be targeted even when few users are actively using them. One dev could keep both basically functional, but to what end? User counts are already small, and people who aren't using them are still exposed by their mere existence.
Shrinking or removing code, in my experience, is one of the biggest single wins you can have in software development. Less to test, less to update, less to secure.
Yes, writing and maintaining less code is great for a developer. We can follow this to the logical extreme and marvel at how easy it is to write and maintain a program whose only function is to print "hello, world" to the console. Nevermind the users, what do they matter?
By the very nature of assigning development time to these antiquated features, you're assigning them away from other features, bug fixes, or requests that may have a larger user reach.
Development is a finite resource, the argument here is to allocate them to hard-to-secure, outmoded, replaced, technology instead of anything future relevant. It doesn't make sense.
The person was specifically suggesting hiring extra developers for maintenance. While I'm familiar with the concept that "nine women can't birth a baby in a month", I don't think that applies so much to maintenance of old code paths. Apple makes over $100b in net profit per year, a truly unfathomable amount of money, they can afford it, and I think not only can they afford it but that it would benefit them. Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale, software is write-once reproduce-a-million-times-for-free.
I have no doubt the bean counters have drawn up every kind of spreadsheet they can imagine trying to quantify it as being not worth it, but I don't think these kinds of quality of life things can be easily quantified, because each small thing maintained might only impact a small number of users but collectively, all of these kinds of small things add up to either a system with sharp corners that constantly papercuts the user (current Apple software), or one that is so seamless that it engenders customer loyalty for decades (old Apple software). This kind of shortsighted penny-pinching is how companies become a shell of their former selves, suffering a slow death-by-MBA.
My estimate is that your lower count of people who could still be using Time Capsule is off by a factor of 20, but we'll continue with the idea that Apple could justify hiring a single engineer to be assigned full-time on the TimeCapsule, starting today.
This hypothetical employee would:
- update the TimeCapsule firmware from using AFP to using a brand new SMBv3 implementation, including both porting and making it "fit" within the constraints of 2013 hardware.
- be designing and implementing a migration system for both the TimeCapsule and the Mac to move to using the new implementation
- be responsible for all security analysis, QA, and documentation for the firmware and migration system
They also need to get it done by the first macOS version that has AFP removed, which will land in developer preview in six weeks and need to be feature complete in about 17 weeks.
If Apple hires a new developer capable of doing that, I don't want them to relegate them to supporting 13 year old hardware. I want them improving things that the majority of users actually need.
And that is the core problem with this sort of argument. Even with infinite money or the infinite possibilities of open source contributions, the availability of talent is still _always_ finite.
> Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale; software is write-once, reproduce-a-million-times-for-free.
If Apple is known for anything, it's that they keep moving ahead with the operating system, even if it means leaving some users behind… and that goes back to the late 80's/early 90s when apps had to be "32-bit clean" [1] to run on System 7 and newer Motorola 68000 processors like the 68020, 68030, etc.
Some beloved apps didn't make the transition, and that happens with every technology transition: 68000 to PowerPC, then to Intel, and then to ARM. And of course, from classic Mac OS to OS X, then Mac OS X to macOS.
I've been active in user groups since the Apple II days; there's a cohort who mostly won't upgrade their hardware but complain bitterly that they lack certain features. Or they attempt these fragile and unreliable hacks to keep their old hardware and software running.
Usually, they're doing themselves more harm than good, especially if they're not technical.
Also, it's pretty unlikely recent college graduates would be able to tackle old C++ or Objective-C code written before they were born, in some cases, to keep something like AFP alive. Regardless of Apple’s financial success, it's not a good use of resources to keep a bespoke network protocol going that originated in 1985 that less than 1% of the installed base is actively using.
Is the code that Apple is removing support for open source? The Linux drivers could at least plausibly be picked up and used by someone who really wants to, so it doesn't seem to be a fair comparison
I'm still using my Time Capsule. I don't really trust the hard drive inside it, so I basically use it to connect to an SSD that I attached to it. Unfortunately Nest Wifi, which I use as a router, doesn't have any USB ports, unlike some cheaper routers. I know it will be gone after Tahoe, and I'm still not sure what I'm going to do about it. I mean, I don't really want a full-on NAS.
I mean, it's basically just a Time Machine backup plus a few older files that I don't want to keep on my main Mac.
It seems like any NAS would take way more space than I'd like. I suppose one alternative would be getting some kind of Beelink mini PC and setting up a proper home server, moving some of my side projects onto it and running Plex from it. The problem is that at current RAM prices it's a surprisingly expensive solution.
Changing out the network protocol used for local network backups isn't the same thing as getting rid of local network backups.
TFA:
> Apple made SMB its primary file-sharing protocol in OS X 10.9 Mavericks, over 12 years ago, and has repeatedly told us that support for its predecessor AFP will be removed in the future.
I don’t think they’re going to drop support for local backups any time soon. There are lots of enterprise customers relying on Time Machine who will never switch to iCloud. TM can also be configured via MDM settings and is a really common solution for Mac IT administrators, so it would take ages to deprecate it.
"There are a lot of enterprise customers using Xcode server". And poof, it's gone and there's now only the Xcode cloud service. It would not take ages. It would take a single release which no longer supports it. Complaints? Keep using the old one or subscribe.
I am fairly confident in saying that approximately zero enterprise customers used Xcode server. It was extremely limited and targeted at small shops which didn't see the need for a proper CI setup but had an extra machine sitting around to run builds on.
- For BigCo it's already a zero-sum deal; they use Xcode Cloud as a service, which runs on their cloud servers anyway (Google, Amazon, Azure, etc.)
- It was not a long-standing product. It was introduced somewhere around 2016-ish, if I remember correctly, and only lasted a few major releases. Easier to kill than an established one like Time Machine.
People have been asking for iCloud macOS backups since iCloud was introduced. It would be very popular. I'm not sure why Apple doesn't offer this, because it's easy revenue.
Because people will fill their iClouds. An important value proposition of iCloud is that customers pay for more space than they need. Time Machine grows to fill all available space.
They could sell a separate service for Time Machine backups. I'm not an Apple customer so I don't know if it makes sense, but they could charge X times the size of the last N days of backups plus Y times the number M of older snapshots kept.
I would have agreed if they hadn't put in the engineering effort to upgrade the backup disk image to APFS instead of HFS+. They wouldn't have done that if the plan was to deprecate it soon. (IIRC the next version of macOS is also dropping HFS+ support)
Also it's honestly really weird that they don't have iCloud backups for Macs yet. It seems like a no-brainer feature. I know I would easily switch to Apple over Backblaze as Backblaze's client is just terrible.
I've been working on improving an open source menubar app that wraps restic. Right now it's a bit rough around the edges, but my plan is to have a simple onboarding experience for various backend services like B2.
Over the weekend, I added a "Smart backups" feature that uses all the same directory exclusions as the Backblaze menubar app and Time Machine. This was the primary missing feature for me. It even generates and backs up your Brewfile...
The story of Time Machine is a tragedy: a revolutionary feature that made backups accessible to normal people, allowed to lie fallow for a decade or more until it's as annoying and unreliable as anything else. I now use Carbon Copy Cloner to avoid the TM headaches.
I never found it to be overly reliable. It was reliable... for a while. Then would silently fail/stop working, or just tell you that it had stopped working and that whatever you had in it was no longer accessible.
And then I went to Acronis True Image backing up to my Synology NAS, but that became unreliable too - oftentimes when I'd go to do a restore, the client would crash trying to read the catalog.
So, like you... CCC nightly to my Synology, with a Snapshot rotation on it - snapshot the previous night's backup at 8pm, and then kick off that night's backup at 11pm.
It was unreliable over SMB. Not surprising when you look at what it was doing. It would create a virtual drive on the share, map that and backup to it. There was too much going on for that to be reliable.
Yeah, you may be right. I have fond memories of it from around 2008, but those might be from the initial experience and not all the "you need to recreate your backup from scratch" errors that would crop up after a while.
> Next: macOS iCloud backups and the eventual deprecation of local Time Machine backups altogether. More services revenue!
The "new computer" out of box account creation and first sign in experience on both Windows 11 and MacOS are clearly designed to drive end users towards perpetual for life monthly recurring subscriptions for (Microsoft 365 Personal, OneDrive, iCloud storage, etc).
Imagine the difficulty for the ordinary non technical person (absolutely not a stereotypical HN reader) ever being able to stop paying for iCloud when they have 600GB+ of their family photos and videos and stuff backed up to it.
> Imagine the difficulty for the ordinary non technical person (absolutely not a stereotypical HN reader) ever being able to stop paying for iCloud when they have 600GB+ of their family photos and videos and stuff backed up to it.
To be fair, non technical folks get a lot of value from this scheme too. I can't imagine many of my relatives successfully juggling backups and external media in a way that would actually keep their content safe in case their phone is lost/stolen/destroyed.
Right now the monthly fees for this stuff are rather modest, but I could see a future where the dominant players lock out competitors and use their market position to raise prices significantly.
Ubiquiti is really taking up the slack in some areas Apple has abandoned.
I bought a UNAS-2 (and a couple of 12 TB IronWolf Pro drives) a few months ago when the "time capsule will not be supported in a future version of macOS" warning first appeared. It has been outstanding alongside the rest of my UniFi setup, and perfectly supports Time Machine backups. The UniFi Identity macOS app means my family's computers always stay authenticated/connected and my wife & kids don't have to do anything to make Time Machine just work.
If you're a power user who loves the Apple aesthetic and you already have a UniFi setup at home, you'll feel right at home switching from Time Capsule to a UNAS.
What format is the destination drive? My ideal is APFS clone backups to a remote drive, but I don't know if there are any network setups that support that, even though you can do it to a local drive.
Have you also tried using it to back up files from Linux and Windows machines? I was hoping for a good mixed backup solution, and I'm gathering that Ubiquiti would deliver here.
Also, why the 12TB IronWolf drives specifically? Personally I was always a fan of buying true enterprise drives (the ones designed for "online" or nearline storage), but sometimes specific models and sizes of random drives do very well in Backblaze's testing.
I don't have any Linux/Windows machines, but I've seen nothing that would dissuade me from using it when I eventually migrate my current laptop to Asahi Linux.
As for IronWolf Pro drives, I chose them because they seem to have similar longevity to enterprise drives with less noise (my equipment is in a closet under the stairs).
I was shocked years ago that the mac, famous for its early network peer discovery and zeroconf and all, couldn't present a list of SMB servers and shares despite that kind of function being around forever on every other platform in existence.
Must have been a lot of years ago since Samba was introduced in Jaguar (2002), and SMB replaced AFP as the default for file sharing as of Mavericks (2013).
Time Capsule has been unsupported since 2018 (last shipped 2013):
* https://en.wikipedia.org/wiki/AirPort_Time_Capsule
I think there's some population of folks that have been doing NAS TM backups over AFP, and they'll now have to switch to SMB.
I gave up on timecapsule because performance has gotten worse and worse year over year. I replaced it with a periodic rsync backup to a NAS that is in turn backed up in other ways
The upside is that it's dead simple when it comes to how the backup is stored. In 10 years time, having files in a filesystem will still work, but I imagine restoring an old time machine backup will require quite a bit of work
If you wanted to you could probably figure out how to do apfs snapshots before rsyncing
If you exclude pointless stuff like browser caches it's also pretty performant compared to timecapsule, and the transfer is properly encrypted
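For anyone curious what that looks like in practice, here's a minimal sketch of the exclusion approach. The paths and the exclusion list are just examples (a real run would target the NAS over SSH), demoed here against local temp directories:

```shell
#!/bin/sh
# Local temp dirs stand in for the real source and the NAS target.
mkdir -p /tmp/rsync_src/Library/Caches /tmp/rsync_dst
echo "important" > /tmp/rsync_src/document.txt
echo "cache junk" > /tmp/rsync_src/Library/Caches/blob

# -a preserves permissions/times; --delete mirrors removals on the target;
# the excludes skip cache-like data that only bloats the backup.
rsync -a --delete \
  --exclude 'Library/Caches/' \
  --exclude '.Trash/' \
  --exclude 'node_modules/' \
  /tmp/rsync_src/ /tmp/rsync_dst/
```

Pointing the destination at `user@nas:/backups/mac/` instead gives you the encrypted-in-transit behavior mentioned above, since rsync then runs over SSH.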
I still use AFP on my NAS for a few reasons:
1. When I benchmarked it, AFP was significantly faster than SMB. Both with SMB2 and SMB3. Even when transport encryption was turned off.
2. On SMB2+, symlinks created by the client are not real symlinks. They're "Minshall+French" links which only look like symlinks to other SMB2+ clients. To the server and NFS mounts they look like flat files with the target path encoded in them.
3. It exposes a different precision for certain timestamps. Software that uses this metadata to decide whether a file needs to be updated will see almost every file as needing a resync.
It's been a year or two since I checked the status of these. The situation may have improved since last I looked.
Yeah I recently migrated my NAS and took the opportunity to switch from AFP to SMB for my Time Machine backups. There were so many problems like the ones you describe that I gave up and went back to AFP. Looks like I'm going to be forced to spend a weekend with Claude figuring this out.
They discontinued sales in 2018, but continued to support Time Capsule backup over AFP through macOS 26 (Tahoe).
It's been more than a decade since they replaced AFP with SMB as the default protocol for file sharing, and they've been warning that AFP would be going away for years.
Yeah, but AFP still performs way better than SMB on a Mac for any fast networking, like 10GigE and faster. Apple's SMB stack is a disaster, and thoroughly unprofessional. NFS is faster too, but unfortunately the Finder, being the rat's nest of bugs it is, often has trouble with NFS shares.
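One commonly cited client-side tweak for Mac SMB throughput is disabling per-packet signing in `/etc/nsmb.conf`. Whether it helps, and whether the security trade-off is acceptable on your network, is situational, so treat this as an experiment rather than a recommendation:

```ini
# /etc/nsmb.conf -- applies to SMB mounts made by this Mac
[default]
# skip per-packet SMB signing, which is CPU-heavy on large transfers
signing_required=no
```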
macOS 26 still has a hard kernel panic if you try to mount an NFS share with krb5 auth but don’t have a valid Kerberos ticket. 100% reproducible.
Every OS update I try mounting with no ticket, get a panic, fill in the error reporting dialog with a nice “hope you had a nice holiday break!” message or whatever is seasonally appropriate, with the same simple steps to reproduce. It’s just kinda comical at this point.
My guess is kerberized NFS has absolutely zero users within Apple, and it’s likely hard to find an engineer there who even knows what Kerberos is anymore.
I used to work at Apple and I’d have filed a radar for it but now I’m just a customer so I’m powerless.
What's the panic?
It's been a while since I worked at Apple, but back in the day the entire OS X Server team made extensive use of kerberized NFS shares for moving around large files...
...the last version of Server shipped in 2021 (and the last real version shipped almost a decade before that).
Apple was still using Kerberos when I was there not that long ago.
IIRC I had some really nasty move/duplication issues with NFS the last time I tried it in Finder.app. (and the whole UID mess)
Did they ever work? No, seriously. I've had a couple of them and the few times I really could have used them I discovered that they represented the worst backup solution I've ever had the misfortune to deal with. Slow, very hard to use beyond their primary integration with the OS (which isn't good to begin with), there's really no good way to keep an eye on how they are doing (what's actually backed up, if it is still there) and the performance is worse than any hand rolled solution I've ever used.
They never supported it properly in the first place and then it just meh'ed out of existence.
I hope "the new Apple" is going to take software seriously.
Time Machine support is also dropping support over SMB1 so whatever new solution needs to support SMB2/3.
SMB2 came out with Vista and SMB3 with Windows 8, so they are not new protocols either.
That just ended up inadvertently reminding me, Windows Vista is actually almost old enough to be at the minimum legal drinking age in the US.
Windows 8 is nearly a decade and a half old as well.
Time really does fly.
Where "new" in this case could be a NAS running Samba from 2011? Samba added official support for Time Machine much later, but I think it was possible on earlier versions with some extra steps.
Samba 4.8 from 2018:
* https://www.samba.org/samba/history/samba-4.8.0.html ("vfs_fruit")
* https://wiki.samba.org/index.php/Configure_Samba_to_Work_Bet...
That's when Samba gained official easy to use support for being used with Time Machine. I'm pretty sure it was possible long before then, IIRC by changing a setting on the Mac to allow selecting unsupported network volumes.
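For reference, the vfs_fruit setup amounts to a short share section in smb.conf; the share name and path here are examples, and the max-size option needs a slightly newer Samba than 4.8:

```ini
[timemachine]
   path = /srv/timemachine
   read only = no
   ; fruit provides the Apple extensions Time Machine expects
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   ; optional cap so backups can't fill the disk (Samba 4.9+)
   fruit:time machine max size = 1T
```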
I don't recall when I stopped running netatalk on my NAS and switched to pure Samba, but I think it was before 2018.
I only meant new as in someone currently owns a Time Capsule and has to replace it with something "new" that supports newer SMB versions.
I've added support for Samba 4 (running SMB3) to the Time Capsule so it can work with modern macOS: https://github.com/jamesyc/TimeCapsuleSMB
SMB1 has major security issues, but even ignoring those (which a lot of people on private home networks shouldn't be too worried about), it's also slow as hell on macOS.
> people on private home networks shouldn't be too worried about
philosophically I would beg to differ about any premise assuming we can trust the castle and moat model. Even on home networks.
philosophically, it depends on who you are. If you're Sam Altman or Vitalik Buterin, yeah, your private home network should be considered to be under attack by hostiles trying to steal from you, but for the rest of us, the NSA isn't going to make an international incident trying to get at your Plex server.
For the rest of us, we have IoT devices and guests' malware-filled devices constantly probing the internal network.
For those that are interested: I've managed to build Samba 4 and get it running on a Apple Time Capsule https://github.com/jamesyc/TimeCapsuleSMB
Compared to other 'changes' Apple usually makes, this one is realistic: dropping a deprecated networking protocol that's worth upgrading away from (meaning, if you already have newer macOS clients on a mostly-Apple stack, update your servers).
I just hope they won't break anything they don't need to break (which is usually the bigger concern), and that they won't drop other things that make sense to keep until they're properly transitioned (OpenGL, for example).
Although the Time Capsule is more than a decade old, it serves nicely with Time Machine (automatic backups). Sad to see that going away permanently on Apple Silicon.
"Dropping support for things just because they are old" is typical commercial software behavior. I can run the latest Linux kernel and still have access to an internal floppy disk drive if I wanted to, yet billion dollar companies can't seem to manage to support 10 year old stuff.
I still am sore from when I "upgraded" macOS and suddenly support for my 1080i TV was gone. Yesterday it worked fine, today it's gone. All because they can't be bothered to maintain a code path.
The economics make the reasoning obvious, though.
With closed source IP, every bit of support, from bug fixes, to feature requests, to compatibility fixes to integrate with newer mainline/foundational tooling, costs money.
With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
AFAIK, Linux has a policy that any change you make must not break existing kernel features, and if it does, you have to fix them yourself.
With that said, kernel maintainers have recently indicated that some unused subsystems are likely to be removed soon, as AI is now finding (real) security vulnerabilities in them that nobody is willing to fix.
> The economics make the reasoning obvious, though.
Looking through Apple’s financial statements, they theoretically could support these old systems. I’m not saying a cut doesn’t make sense, but just that economics-wise they could keep one guy for it
There's somewhere in the ballpark of 166,000 employees at Apple, just unfathomable scale [1]. It is not unreasonable to ask that someone specific is responsible for each particular small feature and ensuring it keeps working. Trying to apply an economic analysis to such a "free as in beer" operating system does not seem to work well. Consider the question of "how many small holes can you have in your wooden sailing ship"?
[1] https://stockanalysis.com/stocks/aapl/employees/
Not that it impacts your argument significantly, but for the sake of completeness, Apple employs a huge number of retail employees.
Yes. A more useful number would be how many employees are working on macOS specifically. Hard to find a definitive number for that.
Less than 1% of that number. Of course this is hard to actually count properly since there is a lot of shared work across platforms.
It’s not unreasonable to ask but they can and are saying “no”.
> With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
And that increasingly gets difficult to do. i386 support went down the drain in the kernel in 2012, i486 is probably going down the drain as well this year [1] and soon-ish another bunch of really really old stuff will go as well because it isn't maintained [2] - good luck finding someone still running IPX networks or ISDN hardware.
[1] https://www.theregister.com/2026/04/06/patch_to_end_i486_sup...
[2] https://lwn.net/Articles/1068928/
Ideally, at a certain point, you'd have some sort of upstream FLOSS project where you could let John Q. Public do that sort of low-level, maintenance-only stuff, while the proprietary "value adds" are closed source, until it becomes financially attractive to FLOSS them.
IIRC, that could exist for MacOS in the form of Darwin.
The economics make the reasoning obvious, though
These arguments fall apart when you remember that Apple has several trillion dollars at hand. It's not some shoestring startup.
Ironic, considering Linux is dropping a LOT of old devices from 7.1
It's my understanding that those are (mostly?) devices where they legitimately have reason to believe there are zero users. In particular, there's a pattern where someone will discover that Linux has a driver that hasn't actually worked for a long time, and nobody's complained, so then they remove it.
I'm not suggesting they keep it all... just ironic as a statement considering Linux is literally removing a bit lately... <= 486, the bus drivers for mice, etc.
I'm mostly okay with cleaning out a lot of legacy and unsupported devices. It may not be great for people who want to support really old hardware, but they're most likely stuck on older versions for other reasons anyway.
I don't think it is ironic, though; Linux isn't "Dropping support for things just because they are old", it's dropping unused things when they cause code quality problems. That's rather different than features being dropped because the vendor doesn't want to bother supporting them even though they still worked and have active users.
Features being dropped because nobody wants to support them is a prominent feature of free software. That's part of "no warranty". If it bothers you, you're supposed to step up and support it yourself, or pay someone to.
Okay, but that's the exact opposite of what we're discussing here? Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them. Meanwhile, macOS, developed as a commercial product and with a much weaker showing of open source or even source availability, is dropping features because Apple doesn't want to support them.
> Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them.
I disagree. They are dropping support because nobody is maintaining them. There may very well be people still using these features, but they haven't been motivated or aren't properly skilled to offer to maintain them going forward, and haven't motivated some other skilled person via payments.
Rather, the core difference is that Apple does not offer a way to have external people take over providing support.
If anybody would care to keep these drivers up, it would be easy to revive them as kernel modules. It's not that Linux is going to lose an upstream interface to publish events from a bus mouse.
Support for the 486 is another thing, but frankly speaking, running a modern Linux kernel on a 486 makes no sense, either from a practical or a preservationist/museum perspective.
Absolutely--Linux is by no means perfect.
What is the age of the 486SX code vs the code paths Apple is removing right now?
Just this week we've seen Linux talking about dropping support for some older hardware precisely because attacks against it were becoming easier with LLMs.
Do you have a detailed source for this? I want to read more about it.
Because I noticed my old Core 2 Quad PC with Nvidia 8600GT that my parents use as their email and Facebook machine, doesn't boot with any linux newer than Kernel 6.1 even though I can get Windows 11 to boot on it.
So the myth around "Linux is great for old PCs", highly depends on what HW you have.
> even though I can get Windows 11 to boot on it
But by modifying it right? Because the core 2 does not support SSE4.2
Sounds like an Nvidia driver module issue more than anything else. If I had to guess, simply removing the Nvidia module should fix that and still get you video through one of the various fallback paths (nouveau, etc.)
OK, so what do you suggest? That every feature ever written should be supported in perpetuity even if 3 people are using it? Clearly you didn't think this through. Should 2026 computers have an ISA interface as well?
Supporting old hardware and software has a substantial cost that only grows exponentially. Companies exist to print money, not to cater to the smallest niches.
It would be great if they could support things, but I most definitely understand why they don't.
macOS Tahoe still has floppy drive support.
Really? Like actual internal floppy drives, and not just USB floppy drives (which even Windows still supports)?
I actually wouldn't expect macOS to support actual floppy drives since the OS's list of supported devices doesn't include any that shipped with floppy drives. The fact that I cannot install the latest macOS on any devices older than 2019 is a related, but separate problem.
In this case, what would internal floppy drive mean? The last Macs with floppy drives (I think Old World G3s?) used a custom Apple controller, integrated into the chipset, with a bespoke 20-pin cable.
Even on the old world G3s, Mac OS X never had floppy drive support. There was a driver someone had ported from BSD you could install.
Yes! And Zip Disk support. I have an app that has to detect different external media types and have a pile of old drives that work just fine.
USB floppy drives indeed.
A USB floppy drive behaves almost identically to a USB hard drive: yet another SCSI block device. The cost of keeping support for them is minimal.
This is very different from legacy PC floppy drive controllers, which spoke a completely different protocol that was very complex and full of footguns.
Legacy floppy controllers also had various legacy features almost nobody used, like soft deletion of sectors (IBM added this in the 70s for use with primitive database systems), or attaching tape drives using the floppy interface (nowadays if you buy a brand new tape drive, the interface options are SAS or Fibre Channel)
And soon I won't be able to run old 32bit binaries with the latest Linux Kernel. We all move on.
Umm no?
> There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
https://lwn.net/Articles/1035727/
> "Dropping support for things just because they are old" is typical commercial software behavior.
You are deluding yourself if you think open source folks are better. You can't compile and run a modern version of GCC on Solaris 10 on SPARC, for example. And we just had a story here last week about the removal of bus mouse support. It's only a mild exaggeration to say that lots of folks will check the commit activity on GitHub, and if a project doesn't have commits this week, it should be banned from the internet and the universe.
Then you have the problem that many dev tools are not forward compatible. CMake is a huge issue: an Ubuntu system from 2020 has CMake on it, but it won't build anything that uses CMake and was released in recent years, because the CMake files are incompatible.
CMake is a bad example, you can build latest CMake and run it on Debian Jessie. It will work perfectly. CMake is the thing you can build on really old compilers.
Open source is better because as long as you have a single developer caring to maintain the device, it will still be there.
Bus mouse support isn't removed because it's old but because it's been broken since 2015 and nobody noticed.
Open source is better because if you need the device driver then you can step up to maintain it yourself. It doesn't mean someone else will magically do it for you. I've used devices with very obscure incantations to get some random person's hack to run on Linux that worked natively on Windows.
Given the MTBF of disks, I wouldn't risk doing backups on a device discontinued in 2018.
It may not be the easiest surgery in the world, but you can replace the hard drive in a Time Capsule. You'll probably want to replace the power supply too after this much time.
Disks can be replaced.
Wasn't it capped at 3 TB? Is the drive swappable to something bigger? They discontinued them in 2018, the Wi-Fi in them is old, and it's a single disk (no RAID)... better to just pick up a multi-drive NAS or use cloud backups. What we should be asking for is Time Machine backends for cloud providers.
It's not "officially" supported, but iFixit has a guide for swapping the drive on a time capsule. I used mine with a 4TB drive for years with no trouble.
Sure, but still just a single drive.
My old trusty ReadyNAS should still work, I think... probably. It supports SMB for Time Machine and SMB3 generally. If it doesn't, I might finally be pushed onto a NAS that isn't discontinued.
I had an early ReadyNAS that was a champ for years. I wonder if the fact that it was based on SPARC had anything to do with its longevity.
From a risk assessment standpoint, I’ve seen my Time Machine backups corrupted much more frequently than I’ve experienced drive failure. Happened with both my Time Capsule and then my Synology RAID.
It’s a “nice to have” automatic backup, but not a primary backup destination for me.
"...if you have an Apple silicon Mac and AFP support is dropped from macOS 27, that would leave you unable to upgrade without replacing your network storage."
How big is this market? I'm not saying vibe code a product, but...
That "replacement" is not always full-on hardware.
I have colleagues who are running AFP on BSD for continuous backups on their systems, and they have to reconfigure something new to be able to continue backing up their systems.
I use this for networked Time Machine backups for multiple Macs in my household. Works just as well over tailscale VPN.
https://wiki.archlinux.org/title/Netatalk
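For context, the Netatalk 3 side of this is a short afp.conf; the path and share name below are examples, and `mimic model` is just the optional setting that makes the share show up with a Time Capsule icon:

```ini
; /usr/local/etc/afp.conf (location varies by distro)
[Global]
mimic model = TimeCapsule6,106

[Time Machine]
path = /srv/timemachine
time machine = yes
```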
One of my COVID projects was to set up a networked Time Machine backup on Raspberry Pi.
Every single one of the blogspam sites (lifehacker, howtogeek, etc.) told you to use AFP/HFS+/Netatalk. I had so many problems with this. Time Machine would work well the first few times and then slow to a crawl. If there was a power outage, look out. The whole thing would be corrupted. It wasn't the network. FTP and scp worked just fine.
Eventually I found one blog that told you how to do it with SMB and ext4. It was from that site that I learned how maligned AFP and HFS+ are. SMB/ext4 worked like a charm. Six years later and not a single hiccup.
Also works for System 7 based Macintoshes. In case you got frozen in a glacier in 1991.
Nah, classic Macintosh OSes aren't compatible with modern AFP.
They are compatible with netatalk though. The project split between version 2 and 3, but in recent releases they folded them back into a single thing. Current netatalk releases support all versions of AFP.
> That "replacement" is not always full-on hardware
Oh, I was thinking only of software. Apple dropping AFP in the OS doesn't mean it can't work at all.
I believe the only supported mode is Samba now.
Netatalk has been around for like 25 years: https://github.com/Netatalk/Netatalk
Relevant to the discussion is that the project comes with an AFP client as well. I have no experience with the client but I've used the Netatalk server for more than 15 years.
I've already built it: https://github.com/jamesyc/TimeCapsuleSMB
This runs Samba 4 on the Apple Time Capsule.
> will require connections to certain servers to be made using at least TLS 1.2
Seriously, no-one should still be using 1.1 since ... 5 years ago? It's not even the 1.2 -> 1.3 previous upgrade problems we're talking about.
Longer than that, even. A similar requirement for iOS apps was in the cards 10 years ago. https://developer.apple.com/news/?id=12212016b
(Yes, this article is about an extension of the deadline. I don't remember what happened after that.)
Yes this one seems unambiguously a good idea
So I should have to e-waste my printer, scanner, and wireless card reader that only exist on my LAN, and that I connect to via a web interface just because… reasons?
If you read the article and the linked documentation, you'll see that those things aren't in the list of what this change applies to.
https://support.apple.com/en-us/126655
On an unrelated note, I use Time Machine and I’m surprised at how unpolished, not to say downright buggy, all the animations are. They used to look magical, but now they are a mess of elements popping on and off and things moving and then vanishing the next frame and so on. It looks like they kept changing Finder and Time Machine didn’t keep up; they kept fixing the bare minimum to have it compile and nothing more.
Even the new app launcher. It takes 1-2 seconds to draw a bunch of icons. Scrolling is also choppy. This even happens on their newest machines. How is this possible in 2026?
We put a supercomputer in a laptop just so the OS could struggle to draw a grid of icons. Peak modern engineering.
Apple hardware team looking at Apple software team: You guys, everything OK over there?
I just did the work of the software team for them:
I got Samba 4 working on Apple Time Capsules: https://github.com/jamesyc/TimeCapsuleSMB
If you have a legacy Time Capsule you'd rather not e-waste, you can try this out. Note that this is very much beta quality software, so don't expect it to work on all configurations.
My app launcher loads as soon as it's triggered (4-finger swipe in). There is a weird 5ms glitch on the zoom-in animation, but otherwise it loads within a few ms, and scrolling is smooth. I'm on an M2 MBA, macOS 26.3.1.
Edit, but don't take this as me saying I like the current state of macOS. There are plenty of weird edge cases I wish they'd fix, but on the whole the OS works fine for me.
For me the launcher itself loads fast, but it takes 1-2 seconds to show the icons. And when I scroll down it often doesn't draw the icons fast enough.
My app launcher loads fine as well, but sometimes (a few times a week) it just doesn't find any apps at all. Or only some of them.
It isn't even centered on my monitor, looks like an intern wrote it.
>How is this possible in 2026?
Enshittification. When you're an ecosystem monopoly, people are forced to buy your shit no matter how bad it gets.
Macs are nowhere near a monopoly.
I would (grudgingly) accept this argument for iOS, but for Mac OS it doesn't make any sense.
If you want to keep your shiny Apple stuff you're effectively trapped. Their walled garden approach works extremely well…
What "walled garden"? The Mac-only apps aside, what's that that you couldn't get on Windows (and most even on Linux), either the same thing, or a zero-switch-cost subscription (it's not like you need to rebuy something to go from Music to Spotify for exampe).
iCloud? You can use Google Drive or Dropbox or whatever MS calls theirs. Apple Music? Pretty sure it plays on both.
Most major apps are cross platform (Adobe, Microsoft and such), or Electron based.
Syncing with your iPhone? You can do that from Windows and Linux as well. Airpods? Work with Android and Windows too.
And so on.
>Macs are nowhere near a monopoly.
You didn't read what I said. I said MacOS IS a monopoly in the Apple ecosystem.
Apple users dissatisfied with how MacOS is changing, as the one I was replying to, have nothing else to switch to without uprooting themselves out of the Apple ecosystem altogether, which most don't do but just put up with it.
The Mac isn’t a monopoly, but choices for desktop operating systems are indeed limited. I use macOS, Windows, and Linux on a regular basis. The only one that’s improving is the Linux ecosystem. I prefer macOS to Windows, but macOS is not as polished in 2026 as it was in 2016 or especially in the Snow Leopard era.
Apple used to solve this through the ruthless application of good taste; we hope this returns with the new CEO
Originally, it was "solved" because computers were the only thing Apple sold. They couldn't afford a Lisa without successes like the Apple II.
Now, Apple's incentives have changed. The App Store alone makes multiple times more money in a year than the sum of annual Mac and iPad sales put together. The OSes for these products are decidedly back-burner so Apple can focus on expanding AppleTV's IP library and lobby for Apple Pay. Ternus won't be your savior.
https://9to5mac.com/2026/04/27/john-ternus-says-apple-has-so...

Even ignoring the lack of polish, the animations make it very hard to actually use Time Machine.
A couple of revisions in Time Machine was just fine.
The UI was cute and fun if you wanted an older revision of a single file (especially since you could see previews of the file as you warped backwards).
However, importantly, the snapshots were available in Finder itself so you could browse through the files you wanted and retrieve them.
The worst feature of Time Machine is how it takes over every single display you have. Even though it only shows content on one screen, it feels the need to completely black out the others.
I don’t know what kind of time machines you’ve been using, but typically everything changes outside all the portholes when you time travel.
skeuomorphism is back, boys!
Damn, I can't reply to the girls comment, but it's back for them too :P
What about girls?
Classic Apple engineering. I would guess there is technically a "single responsible individual" assigned to Time Machine, but it covers the whole product, so the UI component falls by the wayside as they work on other products or the low-level portion.
The "quality" Apple delivers is by now a complete joke. It's going south since over a decade, and this never stopped.
It's like that because people are still buying. Even for the ridiculous prices Apple asks for.
So why would Apple actually care? They get away with this "quality", so from a business standpoint there is simply nothing that needs investments or even just attention.
It's a race to the bottom. Like everywhere else. That's simply how the system which people created works.
I stopped using it because the interface was wretched and it didn't need to be cutesy. Rsync found its way back into the tool belt.
I wonder if support for DIY backup tools isn't being prioritized because a future iCloud monthly subscription will eventually be pushed.
future iCloud monthly subscription?
I've been paying for iCloud storage since I don't know when.
Other issues with Time Machine:
- Very slow, even on an M4.
- 3rd party devices are often unreliable. Not directly Apple's fault, but the lack of certification process hurts
- SMB extensions: in order for an SMB server to support Time Machine, it must support Apple's AAPL extensions to SMB (my understanding of this may be a bit incorrect)
- Network device connecting is separate from Time Machine device connecting. This causes an inconsistent UX.
- Not possible to browse a backup. You can only view a single file or folder's history over time. In other words, you can scroll through time but you can't browse a single backup (point in time). This requires 3rd-party tools like BackupLoupe
You can't turn it on without an external drive attached, even though it saves local backups. It works if you mount a disk image and then point TM to it with the CLI.
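A rough sketch of that workaround, from memory — the size, paths, and volume name below are placeholders, so treat this as an unverified outline rather than a recipe:

```
# Create a sparse bundle that can grow up to 500 GB (placeholder size/path)
hdiutil create -size 500g -type SPARSEBUNDLE -fs APFS -volname Backups ~/Backups.sparsebundle

# Mount it so it appears under /Volumes/Backups
hdiutil attach ~/Backups.sparsebundle

# Point Time Machine at the mounted volume
sudo tmutil setdestination /Volumes/Backups
```

The catch is that the image has to be re-attached after every reboot before Time Machine can back up to it.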
On an unrelated note
If you know it's unrelated, why try to derail this discussion? Why not start another? What's the point?
Could it be that you only posted this in an active thread so it would get the most eyeballs, instead of being judged on its own merits?
It’s more tangential than unrelated. It’s how conversation naturally flows, and this is a discussion board. No need to fire up a new post.
On another tangential note: you’re insufferable. If you’re like this in the real world, I can’t imagine you’ve got many people wanting to hold a conversation for very long.
> Could it be that you only posted this in an active thread so it would get the most eyeballs
How is this a criticism? Seems smart to me.
Makes sense since it hasn't been supported since 2018 lol
Are you thinking of Time Capsule? Time Machine is fully supported and I use it every day on Tahoe.
Yep, I misread.
I’m reminded of that time 10-years ago when Apple rewrote parts of its networking code (discovery/mDNSResponder), and it caused so many issues they had to revert the code.
https://news.ycombinator.com/item?id=9026192
https://www.macrumors.com/2015/06/30/apple-releases-os-x-10-...
They’re possibly dropping a protocol they’ve been saying they’d drop for years, and tightening connection validation.
This is nothing like the mDNS stuff.
Unless I'm mixing it up, I still remember this as the infamous "wifi update"
Finally, TLS 1.2 is baseline, after having been released 18 years ago.
Why is it that Apple products attract blogspam titles?
> Networking changes coming in macOS 27
And yet:
> This year, with just over six weeks to go before that first beta of macOS 27, we already have two warnings of what might be coming.
> It repeated those warnings with macOS Sequoia 15.5, but still hasn’t confirmed when AFP will be lost.
> Although Apple carefully avoids being too specific, it warns that this change could come “as early as the next major software release”,
I originally added a different title: Apple is dropping AFP/TimeMachine support in macOS 27.
It seems like it somehow got overwritten to the original title of the post.
Nevertheless, knowing Apple so far, unless _some_ large-enterprise~y customer comes and objects, they will drop the support. We already know Intel support is dropping. Why not clean up rest of the things from the kernel and the userspace?
I was also surprised by this. The post appears to contain next to no actual information.
The facts: Apple put a warning in macOS 15.5 that AFP support might be dropped in the future.
The claim: AFP support will be dropped in macOS 27.
I just do not see how you get from the facts to the claim. This is just complete speculation.
When I saw the headline I briefly allowed myself to hope that DNS settings would no longer be set universally (requiring manual intervention when switching networks if not using DHCP), but of course it's nothing useful, only "Apple is breaking stuff because they can".
>Apple made SMB its primary file-sharing protocol in OS X 10.9 Mavericks, over 12 years ago…
…and yet SMB support in macOS remains slow and buggy to this day. I tried all combinations of server-side settings and obscure plist tweaks to make SMB navigation and search work as fast as they do on my Linux machine out of box before giving up. It is very obviously not a priority for their services revenue, so there’s no incentive for fixing any of the long standing problems.
> SMB support in macOS remains slow and buggy to this day. I tried all combinations of server-side settings and obscure plist tweaks to make SMB navigation and search work as fast as they do on my Linux machine out of box before giving up. It is very obviously not a priority for their services revenue
That's where my thoughts went, too. I can usually make SMB "better" but not "great", but it's annoying to have to look the tweaks up and apply them, and still have things not optimal. Just in case, IIRC I find this the most useful:
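From memory — so take this as an unverified sketch rather than a guaranteed fix — the tweak I mean is disabling client-side SMB signing in /etc/nsmb.conf:

```
[default]
# Disable SMB packet signing; trades integrity checking for throughput
signing_required=no
```

Unmount and remount the share afterwards for it to take effect.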
But surely some of the other tweaks that LLMs suggest may help, too.

I found something fun last week: apparently, if you use Adobe tools, there is a sync plugin they install for Finder that can cause big issues with SMB shares. Might help you if you have that!
Would you have any more info? I have both: adobe synctool + issues with smb shares
Apple has their own implementation of SMB in macOS and it's one of the worst out there. Dropping connections, can't re-establish connections automatically after sleep, and performance issues.
Why they didn't keep Samba (licensing, probably) is beyond me.
> licensing, probably
Correct, Apple has dropped everything that switched to GPLv3 which includes newer versions of bash, samba, etc.
Yeah, can't remember the last time I even bothered with SMB because it's so buggy. Usually I don't need filesystem behavior, I'll just push/pull files over SSH.
I regret the difficulty of mounting an SSH connection as a filesystem. It requires FUSE and approving a kernel extension.
I used to do that a lot in some old versions of OS X, but then MacFUSE got abandoned and picked up as osxfuse, then that broke then got fixed repeatedly with several Mac updates, and I gave up.
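For reference, once macFUSE and sshfs are installed the mount itself is a one-liner — the host and paths below are placeholders:

```
# Mount the remote home directory at ~/mnt/server; reconnect after network drops
sshfs user@server.example.com:/home/user ~/mnt/server -o reconnect

# Unmount when done (macOS; on Linux use fusermount -u)
umount ~/mnt/server
```

The fragility has always been in the FUSE layer across macOS updates, not in the sshfs command itself.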
How is nfs on mac?
Not really equivalent, I know, but if smb is that bad I am curious about alternatives.
NFS works way better than SMB, but the Finder is not without its troubles. Sometimes it will take 10 minutes to display a folder for reasons, mostly.
The Finder is really a horrible piece of sh*t of software: slow as hell, doesn't provide the most basic information[1], and, of course, doesn't work properly when browsing network shares, either SMB or NFS.
[1] Virtually all common file browsers (Windows Explorer, GNOME Nautilus, KDE Dolphin) display at all times: the number of files in the current folder and their size, and the number of files selected and their size. Also, all but the Finder have a "recent files" section that actually contains the latest files used, while the Finder displays a seemingly random selection of recent files, but never the most recently used ones.
With the exception of summed size of selected items, the Finder has all of that. Help yourself to the "View->Show Status Bar" menu option. Also, "View->Show View Options->Calculate All Sizes" to show storage size for directories.
I can pull about 700MB/s off my NAS over a 10Gb link. I wouldn’t exactly call it slow.
In a corporate environment SMB3 on MacOS was lagging Windows and Linux big time (at least a few years ago when I tested).
How's the latency to your NAS? Are those single large files or many small files?
I think SMB is quite chatty -- if you have lots of small files, you can get quite slow.
That was SMBv1. Not SMB of today.
Still true for extended attributes, which Finder and Spotlight love to query.
...and don't even get me started on locking, if many people write to one file you're on borrowed time
Completely unrelated but I love the layout / blog format of eclecticlight.co
Can't they hire an extra dev per abandoned project to not abandon it?
You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.
AFP and Time Capsules add attack vectors to the OS, which can be targeted even when few users are actively using them. One dev could keep both basically functional, but to what end? User counts are already small, and people that aren't using them are still exposed by their mere existence.
Shrinking or removing code, in my experience, is one of the biggest single wins you can have in software development. Less to test, less to update, less to secure.
Yes, writing and maintaining less code is great for a developer. We can follow this to the logical extreme and marvel at how easy it is to write and maintain a program whose only function is to print "hello, world" to the console. Nevermind the users, what do they matter?
By the very nature of assigning development time to these antiquated features, you're assigning them away from other features, bug fixes, or requests that may have a larger user reach.
Development is a finite resource, the argument here is to allocate them to hard-to-secure, outmoded, replaced, technology instead of anything future relevant. It doesn't make sense.
The person was specifically suggesting hiring extra developers for maintenance. While I'm familiar with the concept that "nine women can't birth a baby in a month", I don't think that applies so much to maintenance of old code paths. Apple makes over $100b in net profit per year, a truly unfathomable amount of money, they can afford it, and I think not only can they afford it but that it would benefit them. Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale, software is write-once reproduce-a-million-times-for-free.
I have no doubt the bean counters have drawn up every kind of spreadsheet they can imagine trying to quantify it as being not worth it, but I don't think these kinds of quality of life things can be easily quantified, because each small thing maintained might only impact a small number of users but collectively, all of these kinds of small things add up to either a system with sharp corners that constantly papercuts the user (current Apple software), or one that is so seamless that it engenders customer loyalty for decades (old Apple software). This kind of shortsighted penny-pinching is how companies become a shell of their former selves, suffering a slow death-by-MBA.
My estimate is that your lower count of people who could still be using Time Capsule is off by a factor of 20, but we'll continue with the idea that Apple could justify hiring a single engineer to be assigned full-time on the TimeCapsule, starting today.
This hypothetical employee would:
- update the TimeCapsule firmware from using AFP to using a brand new SMBv3 implementation, including both porting and making it "fit" within the constraints of 2013 hardware.
- be designing and implementing a migration system for both the TimeCapsule and the Mac to move to using the new implementation
- be responsible for all security analysis, QA, and documentation for the firmware and migration system
They also need to get it done by the first macOS version that has AFP removed, which will land in developer preview in six weeks and need to be feature complete in about 17 weeks.
If Apple hires a new developer capable of doing that, I don't want them to relegate them to supporting 13 year old hardware. I want them improving things that the majority of users actually need.
And that is the core problem with this sort of argument. Even with infinite money or the infinite possibilities of open source contributions, the availability of talent is still _always_ finite.
> Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale; software is write-once, reproduce-a-million-times-for-free.
If Apple is known for anything, it's that they keep moving ahead with the operating system, even if it means leaving some users behind… and that goes back to the late '80s/early '90s when apps had to be "32-bit clean" [1] to run on System 7 and newer Motorola 68000-family processors like the 68020, 68030, etc.
Some beloved apps didn't make the transition, and that happens with every technology transition: 68000 to PowerPC, then to Intel, and then to ARM. And of course, from Classic Mac OS to Mac OS X, then OS X, then macOS.
I've been active in user groups since the Apple II days; there's a cohort who mostly won't upgrade their hardware but complain bitterly that they lack certain features. Or they attempt these fragile and unreliable hacks to keep their old hardware and software running.
Usually, they're doing themselves more harm than good, especially if they're not technical.
Also, it's pretty unlikely recent college graduates would be able to tackle old C++ or Objective-C code written before they were born, in some cases, to keep something like AFP alive. Regardless of Apple’s financial success, it's not a good use of resources to keep a bespoke network protocol going that originated in 1985 that less than 1% of the installed base is actively using.
[1]: https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_manageme...
> You greatly under-estimate how much work it is to maintain old code, particularly to maintain in securely.
cf Linux removing old network drivers this week for the same reason (without the hand-wringing that this Apple announcement is getting!)
Is the code that Apple is removing support for open source? The Linux drivers could at least plausibly be picked up and used by someone who really wants to, so it doesn't seem to be a fair comparison
The AFP protocol was deprecated in 2013. The AFP server was removed in Big Sur, so over five years ago. This is removal of AFP client support.
Apple's source is not public, but the protocol is still fully documented if someone wanted to create a new client and server. https://developer.apple.com/library/archive/documentation/Ne...
However, they'd be better off just creating a driver and server around the open source Netatalk implementation.
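For anyone going that route, a minimal netatalk 3 afp.conf advertising a Time Machine share looks roughly like this — the share name and path are placeholders, and this is a sketch rather than a complete config:

```
[Global]
; Mimic a Time Capsule so macOS offers the share as a backup target
mimic model = TimeCapsule6,106

[Time Machine]
path = /srv/timemachine
time machine = yes
```

Netatalk handles the Bonjour advertisement itself, so the share shows up in Time Machine's destination picker without extra configuration.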
I'm still using my Time Capsule. I don't really trust the hard drive inside it, so I basically use it to connect to an SSD that I attached to it. Unfortunately, Nest Wifi, which I use as a router, doesn't have any USB ports, unlike some cheaper routers. I know that it will be gone after Tahoe, and I'm still not sure what I'm going to do about it. I mean, I don't really want a full-on NAS.
It's basically just a Time Machine backup plus a few older files that I don't want to keep on my main Mac.
It seems like any NAS would take up way more space than I'd like. I suppose one alternative would be getting some kind of Beelink PC and setting up a proper home server, moving some of my side projects there, running Plex from it. The problem is that at current RAM prices it's a surprisingly expensive solution.
This update broke my workflow! I use Netatalk [1] with AFP to share files between my Macintosh 512ke and MacBook via AppleTalk.
Look, my setup works for me. Just add an option to re-enable AFP [2].
1. https://github.com/Netatalk/netatalk
2. https://xkcd.com/1172/
Wouldn't the TimeCapsules still work over wired connections, just like any other hard drive, even if the networking AFP protocol support is dropped?
No, AFP is application layer. It doesn't matter how the device is connected at layers 1 or 2.
You could shuck the disk and use it directly, though. Then it's just a disk, not a time capsule.
Next: macOS iCloud backups and the eventual deprecation of local Time Machine backups altogether. More services revenue!
Changing out the network protocol used for local network backups isn't the same thing as getting rid of local network backups.
TFA:
> Apple made SMB its primary file-sharing protocol in OS X 10.9 Mavericks, over 12 years ago, and has repeatedly told us that support for its predecessor AFP will be removed in the future.
Hence "next". And by local I meant directly connected drives.
If the pattern continues, they'll announce deprecation this fall and remove the feature in 2039.
I don’t think they’re going to drop support for local backups any time soon. There are lots of enterprise customers relying on Time Machine who will never switch to iCloud. TM can also be configured via MDM settings and is a really common solution for Mac IT administrators, so it would take ages to deprecate it.
"There are a lot of enterprise customers using Xcode server". And poof, it's gone and there's now only the Xcode cloud service. It would not take ages. It would take a single release which no longer supports it. Complaints? Keep using the old one or subscribe.
I am fairly confident in saying that approximately zero enterprise customers used Xcode server. It was extremely limited and targeted at small shops which didn't see the need for a proper CI setup but had an extra machine sitting around to run builds on.
I think they switched to cloud because;
- For BigCo it's already a zero-sum deal; they use Xcode Cloud as a service, which runs on their servers anyway... (Google, Amazon, Azure, etc.)
- It was not a long-standing product. Introduced somewhere around 2016-ish, if I remember correctly, and it only lasted a few major releases. Easier to kill than an established one (e.g. Time Machine)
They switched the default protocol from AFP to SMB a long time ago.
They aren’t deprecating Time Machine. The old protocol is being removed.
The old protocol hasn’t worked well for a long time, at least in my experience
People have been asking for iCloud macOS backups since iCloud was introduced. It would be very popular. I'm not sure why Apple doesn't offer this, because it's easy revenue.
Because people will fill their iClouds. An important value proposition of iCloud is that customers pay for more space than they need. Time Machine grows to fill all available space.
They could sell a separate service for Time Machine backups. I'm not an Apple customer so I don't know if it makes sense, but they could make customers pay X times the last N days in the backup plus Y times a number M of snapshots in the past.
I wouldn't pay for it, so that's one data point.
I would, so that's a second data point.
I would have agreed if they hadn't put in the engineering effort to upgrade the backup disk image to APFS instead of HFS+. They wouldn't have done that if the plan was to deprecate it soon. (IIRC the next version of macOS is also dropping HFS+ support)
Also it's honestly really weird that they don't have iCloud backups for Macs yet. It seems like a no-brainer feature. I know I would easily switch to Apple over Backblaze as Backblaze's client is just terrible.
As long as you can migrate/recover your Mac from your TM backup, I guess that this scenario won't happen.
I like having control over my backups.
I've been working on improving an open source menubar that wraps restic. Right now it is a bit rough around the edges, but my plan is to have a simple onboarding experience for various backend services like B2.
Over the weekend, I added a "Smart backups" feature that excludes all the same directories that the Backblaze menubar app and Time Machine exclude. This was the primary missing feature for me. It even generates and backs up your Brewfile...
https://github.com/lookfirst/ResticScheduler
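Since it wraps restic, the rough manual equivalent of a run looks like this — the repository, bucket, and exclude paths are placeholders:

```
# Back up the home directory to a Backblaze B2 repository,
# skipping directories marked with CACHEDIR.TAG and the macOS cache folder
restic -r b2:my-bucket:/macbook backup "$HOME" \
    --exclude-caches \
    --exclude "$HOME/Library/Caches"
```

The app's value is mostly in scheduling this and maintaining the exclude list so you don't have to.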
The story of TimeMachine is a tragedy: a revolutionary feature that made backups accessible for normal people allowed to lie fallow for a decade or more until it's as annoying and unreliable as anything else. I now use Carbon Copy Cloner to avoid the TM headaches.
Good nudge to look into using CCC. Which folders do you back up? It seems slower than TM, so I'm thinking of backing up the home folder only.
I never found it to be overly reliable. It was reliable... for a while. Then would silently fail/stop working, or just tell you that it had stopped working and that whatever you had in it was no longer accessible.
And then I went to Acronis True Image backing up to my Synology NAS, but that became unreliable too - oftentimes when I'd go to do a restore, the client would crash trying to read the catalog.
So, like you... CCC nightly to my Synology, with a Snapshot rotation on it - snapshot the previous night's backup at 8pm, and then kick off that night's backup at 11pm.
It was unreliable over SMB. Not surprising when you look at what it was doing: it would create a virtual drive on the share, map that, and back up to it. There was too much going on for that to be reliable.
For me it was a key DB file inside the Photo library which Time Machine omitted from all backups and prevented me from restoring the library. Not fun.
Yeah, you may be right. I have fond memories of it from around 2008, but those might be from the initial experience and not all the "you need to recreate your backup from scratch" errors that would crop up after a while.
This is reflexive and ill-considered FUD. Be better.
also known as "prescient"
> Next: macOS iCloud backups and the eventual deprecation of local Time Machine backups altogether. More services revenue!
The "new computer" out of box account creation and first sign in experience on both Windows 11 and MacOS are clearly designed to drive end users towards perpetual for life monthly recurring subscriptions for (Microsoft 365 Personal, OneDrive, iCloud storage, etc).
Imagine the difficulty for the ordinary non technical person (absolutely not a stereotypical HN reader) ever being able to stop paying for iCloud when they have 600GB+ of their family photos and videos and stuff backed up to it.
> Imagine the difficulty for the ordinary non technical person (absolutely not a stereotypical HN reader) ever being able to stop paying for iCloud when they have 600GB+ of their family photos and videos and stuff backed up to it.
To be fair, non technical folks get a lot of value from this scheme too. I can't imagine many of my relatives successfully juggling backups and external media in a way that would actually keep their content safe in case their phone is lost/stolen/destroyed.
Right now the monthly fees for this stuff are rather modest, but I could see a future where the dominant players lock out competitors and use their market position to raise prices significantly.
Ubiquiti is really taking up the slack in some areas Apple has abandoned.
I bought a UNAS-2 (and a couple of 12 TB IronWolf Pro drives) a few months ago when the "time capsule will not be supported in a future version of macOS" warning first appeared. It has been outstanding alongside the rest of my UniFi setup, and perfectly supports Time Machine backups. The UniFi Identity macOS app means my family's computers always stay authenticated/connected and my wife & kids don't have to do anything to make Time Machine just work.
If you're a power user who loves the Apple aesthetic and you already have a UniFi setup at home, you'll feel right at home switching from Time Capsule to a UNAS.
What format is the destination drive? My ideal is APFS clone backups to a remote drive, but I don't know if there are any network setups that support that, even though you can do it to a local drive.
I was under the impression that's how SMB TimeMachine backups work currently
Have you tried it as a backup target for files from Linux and Windows machines too? I was hoping for a good mixed backup solution, and I'm guessing Ubiquiti would deliver here.
Also, why the 12TB IronWolf drives specifically? Personally I was always a fan of buying true enterprise drives (the ones designed for "online" or nearline storage), but sometimes specific models and sizes of random drives do very well in Backblaze's testing.
I don't have any Linux/Windows machines, but I've seen nothing that would dissuade me from using it when I eventually migrate my current laptop to Asahi Linux.
As for IronWolf Pro drives, I chose them because they seem to have similar longevity to enterprise drives with less noise (my equipment is in a closet under the stairs).
Does the mac still lack a SMB/CIFS browser?
I was shocked years ago that the mac, famous for its early network peer discovery and zeroconf and all, couldn't present a list of SMB servers and shares despite that kind of function being around forever on every other platform in existence.
macOS has a Network location in the sidebar that will show other SMB devices discovered on the network.
Must have been a lot of years ago since Samba was introduced in Jaguar (2002), and SMB replaced AFP as the default for file sharing as of Mavericks (2013).
It's had it since before version 10.4, though it wasn't fantastic, I'll give you that.