xz-utils backdoored

I'm good.
 

Attachment: XZ Utils_mod.png
There's also a good chance you'll find this package installed on Arch, or more rarely even on a Mac (from brew), though it doesn't seem clear yet whether anything will actually trigger.

Not sure you're vulnerable if you don't have SSH open to the wild, either.

As a bonus, GitHub immediately obliterated the repo for it, which seems to be causing some havoc for some distros, like Nix.
 

The package was also available under Windows. I noticed the GitHub repo was taken down; it's the first time I've seen that happen to such a high-profile repo.
 
Already identified & fixed by the Arch Linux team.
https://archlinux.org/news/the-xz-package-has-been-backdoored/

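If anyone wants to double-check what their box actually has installed, here's a quick sketch - package names and managers differ per platform, and the backdoored upstream releases were 5.6.0 and 5.6.1:

# Report the installed xz / liblzma version (exact package names vary by distro).
xz --version
pacman -Qi xz          # Arch
dpkg -s liblzma5       # Debian/Ubuntu
brew info xz           # macOS via Homebrew

Anything reporting 5.6.0 or 5.6.1 should be rolled back or updated per your distro's advisory.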
 
The repo obliteration aspect is nothing more than evidence concealment. Now we can't track down who pushed what updates and actually installed the backdoor.
 
Microsoft incorporated the known-vulnerable version into its vcpkg C/C++ library manager after prompting by a user. Some suspect the users who advocated for updating to these specific versions were part of some coordinated attempt, while others caution against knee-jerk internet sleuthing and false positives.
 
Apparently the malicious actor in this instance was in touch with many distro/software devs/maintainers, trying to get the newer version of xz-utils added because of its 'great new features'.
 
Why would a commonly used protocol such as openssh need to be changed?

Well, admittedly I remember the time that one of the very core, stable packages, notably openssl, which was used to sign tons of stuff, turned out to have a low-key, backdoor-capable vulnerability (Heartbleed?) because it was rarely updated. That was the time everyone switched over to libressl for a little while where possible, as I recall, though I'm a little fuzzy on the details.

Still, this is a bit of a strange situation - targeting something with a rather narrow distribution - unless it's a targeted attack on a system or user likely to be running the very latest of everything, or some sort of positioning hoping to lie sleeper down the line; it's a bit unusual. Then again, maybe the intention was to let it marinate on systems that didn't update frequently, and the discoverer just happened to become aware thanks to a fortuitous set of circumstances and upended their entire plans. This is the kind of thing that, in another life a couple of decades ago, I would have been really interested to investigate, but my skills are a bit out of practice, I admit.
 
The repo obliteration aspect is nothing more than evidence concealment. Now we can't track down who pushed what updates and actually installed the backdoor.
I read an article about this yesterday that mentioned there have been two main contributors for the last few years - one guy who was the long-term primary dev, and a newish person who got commit privileges a year or two ago. Odds are it was the second one.

Edit: post #11 update 2 tracks with what I read.
 
I read an article about this yesterday that mentioned there have been two main contributors for the last few years - one guy who was the long-term primary dev, and a newish person who got commit privileges a year or two ago. Odds are it was the second one.

Sure seems like the second dude played the long con and here we are.

I guess this is probably a nation-state at work.
 
Well, admittedly I remember the time that one of the very core, stable packages, notably openssl, which was used to sign tons of stuff, turned out to have a low-key, backdoor-capable vulnerability (Heartbleed?) because it was rarely updated. That was the time everyone switched over to libressl for a little while where possible, as I recall, though I'm a little fuzzy on the details.

Yeah, my company had been putting off the update to openssl 1.0.2 because updating openssl is a giant PITA. But we finally did, because people were whining about TLS 1.2 and/or Diffie-Hellman ephemeral / perfect forward secrecy... and then a month later, Heartbleed, which wasn't in openssl 1.0.0. (Or maybe Heartbleed was in 1.0.0 and we were on 0.9.8; I dunno, OpenSSL was terrible at making updates that would fix the security bit and break everything else.) Thanks a lot. We didn't switch to libressl - too much work, not enough gain. But we did switch to running TLS in a separate, locked-down process from our webserver, so if there was a future issue with, say, read/write filesystem access, it wouldn't be able to do anything.

All that said, while clearly some of the 'let's update this dependency' messages were suspicious, there's at least one that seems like a normal update-all-the-things guy with bad timing.

Keeping everything up to date is a treadmill, and there's usually not a lot of benefit. It doesn't take much more work to audit a quarter of updates than a year of updates, and most software doesn't turn into a pumpkin in the meantime. The only issue is when there's an actual important fix, you've got to figure out how it applies to the version you're running if you're running something out of date.
 
Keeping everything up to date is a treadmill, and there's usually not a lot of benefit.
I had a customer that had an old unix-of-some-type system that they used as a database server for a couple of applications, including ours, and they were proud of the fact that they hadn't run an update in a decade. They were paranoid about an update breaking something. They got mad one day when they wanted to upgrade the database software and that forced them into adding an OS patch.
 
Thankfully this was caught essentially in beta, before reaching some Fortune 100 companies. Imagine the hundreds of billions in ransomware that could have hatched; this could have brought some very serious stuff down dead.

Months' worth of auditing to come.
 
Already identified & fixed by the Arch Linux team.
https://archlinux.org/news/the-xz-package-has-been-backdoored/

Regarding sshd authentication bypass/code execution
From the upstream report:

openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

Arch does not directly link openssh to liblzma, and thus this attack vector is not possible. You can confirm this by issuing the following command:

ldd "$(command -v sshd)"

However, out of an abundance of caution, we advise users to remove the malicious code from their system by upgrading either way. This is because other yet-to-be discovered methods to exploit the backdoor could exist.

Noice

This is why I don't use ARCH BTW.
Because they use stable versions of packages, and when there is a security issue with a specific version they issue an update and put out a news post with details about the vuln and what you can do to mitigate it (roll-back, or update, etc)?
 
You can confirm this by issuing the following command:

ldd "$(command -v sshd)"

Heads up, in case you weren't aware: dynamic linking is dark arts, and running ldd on an untrusted executable is actually dangerous. A malicious executable can get code execution if you examine it with ldd.
 
Heads up, in case you weren't aware: dynamic linking is dark arts, and running ldd on an untrusted executable is actually dangerous. A malicious executable can get code execution if you examine it with ldd.
Related:
https://news.ycombinator.com/item?id=36839800
https://catonmat.net/ldd-arbitrary-code-execution

So LD_TRACE_LOADED_OBJECTS=1 "$(command -v sshd)" might be safe to run instead? (Edit: no, that's essentially what ldd does. So if that was a problem before, it still is when run this way.)
 
Related:
https://news.ycombinator.com/item?id=36839800
https://catonmat.net/ldd-arbitrary-code-execution

So LD_TRACE_LOADED_OBJECTS=1 "$(command -v sshd)" might be safe to run instead? (Edit: no, that's essentially what ldd does. So if that was a problem before, it still is when run this way.)

Yeah, there's nm, which I think doesn't run the exe, but you'd need to fiddle with options to get similar output, and it doesn't examine the linked libraries for their own dependencies.

Otherwise, maybe you've got to run ldd in a sandbox?
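For what it's worth, a purely static look at the ELF dynamic section avoids executing anything at all. A rough sketch, assuming GNU binutils and a typical Linux layout (the libsystemd path in particular varies by distro):

# List the libraries sshd declares directly, without running the binary or its loader.
readelf -d "$(command -v sshd)" | grep NEEDED
# objdump shows the same information if readelf isn't handy.
objdump -p "$(command -v sshd)" | grep NEEDED
# This only covers direct dependencies, so check the indirect route separately,
# e.g. whether libsystemd pulls in liblzma (adjust the path for your distro).
readelf -d /lib/x86_64-linux-gnu/libsystemd.so.0 | grep -i lzma

It won't chase the whole dependency tree the way ldd does, but nothing gets executed in the process.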
 
"But open source is more secure because people spend all their waking hours reading over lines of code to make sure nothing malicious gets in"

Reality is, no they do not. Open source also allows the bad guys to find ways in, just as much as people can find holes and get them patched. But at least with open source, more eyes can dig into it when needed, versus waiting for MS or someone else to decide whether to patch something or not. I would have expected far more scrutiny of a repo being tied into distros by said teams...

This is why things are so insecure: people just tie into a GitHub repo, assume it is safe and clean because it is on GitHub, copy-pasta, and off they go...

But, in the end, SSH should not be accessible from the internet, and any critical or prod server should have outbound internet blocked altogether, with specific ACLs for anything that does need to hit the internet (webserver).
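As a rough illustration of that kind of lockdown - the addresses are placeholders from the documentation ranges, and a real ruleset needs its default policies and state handling thought through first:

# Allow SSH only from a trusted management network, drop it from everywhere else.
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
# Default-deny outbound, allowing only established sessions and the one
# destination the box legitimately needs to reach.
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -d 198.51.100.10 --dport 443 -j ACCEPT
iptables -P OUTPUT DROP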
 
But we did switch to running TLS in a separate, locked-down process from our webserver, so if there was a future issue with, say, read/write filesystem access, it wouldn't be able to do anything.


Run an NGINX (or similar) reverse proxy as the front end, do SSL offloading there, and leave all your servers behind it as plain HTTP - centralised management and control, if said apps/sites support being run that way. Went through this years back, ran a config like this, and it made life so easy for updates and patching, with testing being a breeze.
 
"But open source is more secure because people spend all their waking hours reading over lines of code to make sure nothing malicious gets in"

Reality is, no they do not. Open source also allows the bad guys to find ways in, just as much as people can find holes and get it patched. But at least being open source, more eyes can dig into it when needed vs waiting for MS or someone else to decide to patch something or not.

This is why things are so insecure because people just tie into a github repo and assuming it is safe and clean cause it is on github, copy pasta and off they go...

But, in the end, SSH should not be accessible on the internet, and any critical or prod server should have outbound internet blocked all together with specific ACL's for anything that does need to hit the internet (webserver)

The exploit was injected at build time via the build script; examining the source code wouldn't have highlighted any abnormalities - hence the reason only the tarball was affected and not the git source. Commits can be rolled back, and any commits by the malicious actor can be scrutinized for similar attacks. It's damn near impossible to hide in git.


https://vanichitkara.medium.com/xz-backdoor-is-open-source-software-really-secure-e926cbfb53d5

On the other hand, the vulnerability would have never been discovered had XZ utils not been open-source. The backdoor got traced back to the source, and the timeline unfolded only because the commits and PRs were visible to the community, and they had the means to dig deep to get to the root cause. Had XZ utils been closed-source, Andres might have just shrugged and accepted the 500ms delay as a "new feature" with no means to investigate further and blow the whistle. Only because XZ utils was open-sourced did we all get to know the intricate details of the whole ordeal.
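That tarball-versus-git gap is something anyone can check for a project they depend on. A rough sketch with placeholder names (the xz repo itself was offline at the time, so example.org/project, v1.2.3, and the file names are hypothetical):

# Extract the release tarball and the matching git tag side by side, then diff them.
mkdir -p /tmp/from-git /tmp/from-tarball
git clone https://example.org/project.git
git -C project archive --prefix=project-1.2.3/ v1.2.3 | tar -x -C /tmp/from-git
tar -xf project-1.2.3.tar.gz -C /tmp/from-tarball
diff -ru /tmp/from-git/project-1.2.3 /tmp/from-tarball/project-1.2.3
# Autotools tarballs legitimately ship generated files (configure, Makefile.in, etc.)
# that aren't in git, so the interesting finds are unexpected build-script changes.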
 
On the other hand, the vulnerability would have never been discovered had XZ utils not been open-source.

That seems quite the strange speculation.

Open source also allows the bad guys to find ways in, just as much as people can find holes and get them patched.
I do not think that is the number one issue (everything is open source if you love disassembly...); it is more that, in this case, it was a malicious actor.

People hire (if they were not from day one a malicious state or company entity) people with a strong open source commit history who are able to push code that purposely introduces a backdoor to gain access; we can assume China, Russia, Israel, the Americans, etc. all have open source contributors with good track records on the payroll.
 
That seems quite the strange speculation.

That's a quote taken out of context, since they go on to explain their reasons why. I think discovered (in time) is what the article was specifically getting at.

Another article elaborating on the vulnerabilities and strengths of OSS in such a scenario (linked via Lemmy, as you need an account on 'X', formerly Twitter, to read the original article):

https://lemmy.ml/post/13861351
 
That's a quote taken out of context, since they go on to explain their reasons why. I think discovered (in time) is what the article was specifically getting at.
I think the rest of the article you posted explains the position really well and contradicts it (the 'may'). If it was a rogue employee in a company who tried to create a backdoor for a state or ransomware group, etc., the company's code review process could have caught it, the same person could have caught it the same way and told them, and there are many other ways.
 
I think the rest of the article you posted explains the position really well and contradicts it (the 'may'). If it was a rogue employee in a company who tried to create a backdoor for a state or ransomware group, etc., the company's code review process could have caught it, the same person could have caught it the same way and told them, and there are many other ways.

Let's not forget the point I highlighted earlier:

The exploit was injected at build time via the build script; examining the source code wouldn't have highlighted any abnormalities - hence the reason only the tarball was affected and not the git source. Commits can be rolled back, and any commits by the malicious actor can be scrutinized for similar attacks. It's damn near impossible to hide in git.

There's little doubt the ability to readily go back through commits and audit the code is the larger part of what saved the day in this example. Without that ability, the individual that found the exploit would have just shrugged off such a minuscule impact to performance and moved on.
 
A company with a good code review process would probably have caught it. But if they had just one or two people working on it (same as this case) then it may have gone unnoticed, or have been intentionally ignored. Then the only way to have discovered it would be to decompile, or strace, and look for strange behavior.
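For the strace route, something like the following is the general idea - a sketch only, with illustrative flags and paths; run it against a throwaway instance on a spare port, not a production daemon:

# Start a disposable sshd under strace and log what it opens, executes and connects to.
strace -f -e trace=openat,execve,connect -o /tmp/sshd.trace /usr/sbin/sshd -D -p 2222
# After poking at it with a client, look for anything sshd has no business doing.
grep -E 'execve|connect' /tmp/sshd.trace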
 
A company with a good code review process would probably have caught it. But if they had just one or two people working on it (same as this case) then it may have gone unnoticed, or have been intentionally ignored. Then the only way to have discovered it would be to decompile, or strace, and look for strange behavior.

You may not have seen my edit:

There's little doubt the ability to readily go back through commits and audit the code is the larger part of what saved the day in this example. Without that ability, the individual that found the exploit would have just shrugged off such a minuscule impact to performance and moved on.
 
There's little doubt the ability to readily go back through commits and audit the code is the larger part of what saved the day in this example. Without that ability, the individual that found the exploit would have just shrugged off such a minuscule impact to performance and moved on.
The company that made the compiler (say Intel, Nvidia, Microsoft, etc.) would have had the ability to look at their commit history and catch the rogue employee; the same goes for the companies that use the compiler.

You may not have seen my edit:

It just comes down to the company making that compiler, and it is possible that it is harder for people to infiltrate those companies and take the risk of doing cyber-attack stuff than it is to do it by building a good reputation in open source from an anonymous account; the same goes for the process and motivation to catch them.
 
Run an NGINX (or similar) reverse proxy as the front end, do SSL offloading there, and leave all your servers behind it as plain HTTP - centralised management and control, if said apps/sites support being run that way. Went through this years back, ran a config like this, and it made life so easy for updates and patching, with testing being a breeze.

NGINX is too much code for this. We went with stud; the Varnish folks have since restarted development of stud as hitch.
 
The company that made the compiler (say Intel, Nvidia, Microsoft, etc.) would have had the ability to look at their commit history and catch the rogue employee; the same goes for the companies that use the compiler.

That's a bit of an assumption. The reality is: We really have no idea how proprietary software development progresses behind the scenes, or how efficient it is for others to view commit history.

It just comes down to the company making that compiler, and it is possible that it is harder for people to infiltrate those companies and take the risk of doing cyber-attack stuff than it is to do it by building a good reputation in open source from an anonymous account; the same goes for the process and motivation to catch them.

Stating it's harder for malicious actors to infiltrate the proprietary software development chain is a bit of a stretch, especially when there's little doubt there are fewer eyes inspecting the code. The reality is, if proprietary code were compromised by a malicious actor, the possibility exists that we'd never find out about it.
 