Though this outage may be more related to the copy.fail upgrade cycle, it reminds me of a thought I've had recently in respect of agents.
In the UK there's a phenomenon called "TV pickup" (https://en.wikipedia.org/wiki/TV_pickup): when a popular TV show hits an ad break, everyone gets up at the same time to boil a high-powered tea kettle. This causes a temporary surge in electricity demand and has led to real outages. It was a mystery at first, but the grid now accounts for it.
I suspect the global internet is facing an "agent pickup" problem, where significant changes (e.g., releases of new frontier models or new package versions) put unpredictable pressure on arbitrary infrastructure as millions of distributed agents act to address the change simultaneously.
In the US we have the Super Bowl Flush: https://medium.com/nycwater/the-big-flush-on-super-bowl-sund...
Well, that and the rush to upgrade for copy.fail.
Has Ubuntu published patches yet?
Yes, but I can currently only load the page about them via the Wayback Machine: https://web.archive.org/web/20260430191621/https://ubuntu.co...
We're at the stage where we blame AI for anything as a first reaction?
(Love the tv pickup story. I also thought of that, in other situations)
I wasn't blaming this issue on that in particular, just making a more general observation in line with the post. I'll make that clearer.
Indeed. It is far more likely to be the copy.fail issue.
While the timing with the copy.fail patches mentioned by a few comments here does seem suspicious, I have seen this happening repeatedly over the last few weeks: packages.ubuntu.com was hardly reachable on some days, causing apt-get to take forever to update the system. They have been struggling recently, it seems. Best of luck to the people having to deal with this mess on a holiday!
Tinfoil hat mode: a competitor wants to exploit copy.fail on some ubuntu servers, and is DDoSing canonical so that they can't update and thus patch the vuln
Double tinfoil hat mode: an attacker learned of my plan to finally update my personal computer out of 20.04 today and is DDoSing canonical so I can't do that and I remain vulnerable to the backdoors they've found.
The plot thickens...
If you can access AF_ALG on a server you don't need to do shenanigans like that. It's much easier to just find another bug and exploit that one instead.
The copy.fail website is very silly; it is not a special bug. If anyone gets compromised by that vuln, their node architecture was broken anyway, and patching copy.fail doesn't help.
In what way is it "not a special bug"? It's a publicly known exploit giving root access from RCE. Those cannot be a dime a dozen. I'm sure it's especially interesting for any shared hosting services which might be affected and whose patching could be delayed. I could find places running containerized services and exfiltrate secrets from parallel services, no?
What constitutes "special" for you, out of curiosity? Something chaining with a hypervisor exploit?
I thought copy.fail was a privilege escalation exploit: become root from a regular user? Am I missing something?
How would "node architecture" make people vulnerable to this?
You have to have shell access to a victim first right? Or am I missing something?
Seems reasonable to assume it's something to do with the recently publicized exploits. More likely, this could be an extortion attempt by criminals rather than a competitor.
Why a competitor? Criminals, secret services, nation-state adversaries...
s/competitor/intelligence services/
+1, it hasn't even been 24 hours and I already see these stupid CyberSec companies trying to squeeze themselves into this.
This seems to be pretty targeted, and with the services affected like livepatch and such this could indeed be an actor DDoSing to avoid patches rolling out for copy.fail
We are so broken as a society that DDoSing Ubuntu is now a thing.
Noticed it because snap didn't work. Snap has its own status page, just FYI: https://status.snapcraft.io/
Frustrating, because the Slack snap is broken, so every day you have to downgrade it, and I guess you can't do that without connectivity.
This might be the incentive I need to finally purge snap.
Snap recently got much more polished.
I used to have to find a script to purge excess old snaps that would fill up my hard drive. Now Ubuntu only keeps two versions of each snap.
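For reference, roughly what that kind of cleanup script looked like. This is only a minimal Python sketch, assuming snap is on PATH, that superseded revisions show up as "disabled" in the Notes column of "snap list --all", and that you can sudo:

    #!/usr/bin/env python3
    # Sketch of the sort of cleanup script described above: remove old
    # ("disabled") snap revisions that pile up on disk. Assumes superseded
    # revisions are flagged "disabled" in the Notes column of `snap list --all`
    # and that sudo (or running as root) is available.
    import subprocess

    def disabled_revisions():
        out = subprocess.run(
            ["snap", "list", "--all"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines()[1:]:              # skip the header row
            fields = line.split()
            if fields and "disabled" in fields[-1]:    # Notes column
                yield fields[0], fields[2]             # (name, revision)

    for name, rev in disabled_revisions():
        subprocess.run(
            ["sudo", "snap", "remove", name, f"--revision={rev}"],
            check=True,
        )

Nowadays I believe the equivalent knob is snapd's refresh.retain setting (something like "sudo snap set system refresh.retain=2"), which is presumably why stock Ubuntu only keeps two revisions around.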
I was wondering why the script didn't have to ever clean more than one version, even when I took longer between running updates.
Just move to flatpak; it's much nicer to deal with.
In my testing I find the exact reverse. I much prefer snap to flatpak.
I like to imagine it's returning a 500 error response asking you to email rhonda@ubuntu.com