> About the name: The subdomain was called onyx, a reference to the Pokémon Onix (a Pokémon made of multiple boulders, fitting for a multi-node architecture). It was an informal codename chosen by the engineer. It had no connection whatsoever to Fivecast ONYX, an unrelated 3rd party commercial product previously used by ICE. We understand this coincidence caused confusion, and we address it further below.
Yeah, I'd sorta second that, actually. I can't "judge" everything they say in the blog post, but some things I definitely recognize as "bad-faith".
Datadog RUM (browser-intake-datadoghq.com) - real-time user monitoring. every click, every page load - on a FedRAMP platform processing PII and biometrics.
Well, duh, yes, Datadog does have those capabilities. That doesn't mean you use all of them just because you use RUM in general. We also use Datadog and RUM. But we also use filtering, including filtering out the known PII sources we have in our specific (non-FedRAMP) case; we don't have full session recording enabled, for example, and we only sample.
Yet there's no mention of that in the post. They just assume that PII must be getting sent from a FedRAMP site to Datadog, with no proof of what data actually gets sent.
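For readers unfamiliar with the kind of filtering being described: Datadog's browser RUM SDK is JavaScript and exposes a `beforeSend` callback that can redact or drop events before they leave the page. The sketch below only illustrates that logic in Python; the field names and the "/identity/" route are invented for illustration, not taken from any real config.

```python
# Sketch of a beforeSend-style filter. All field names and routes here are
# hypothetical; the real Datadog SDK callback is JavaScript.

PII_KEYS = {"email", "ssn", "document_number"}

def scrub_event(event: dict) -> bool:
    """Return False to drop the event entirely, True to keep it after redaction."""
    url = event.get("view", {}).get("url", "")
    # Drop events from views known to contain PII.
    if "/identity/" in url:
        return False
    # Redact known PII attributes instead of sending them.
    context = event.get("context", {})
    for key in PII_KEYS & context.keys():
        context[key] = "[REDACTED]"
    return True
```

In the real SDK, a function like this would be passed as `beforeSend` in `datadogRum.init`, alongside a `sessionSampleRate` for the sampling the parent comment mentions.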
I agree; I didn't want to editorialize too much as I think the writeup stands on its own.
My takeaway was that in this case, even an author with a clear and extreme bias against this sort of thing could find only unfortunately-common bad practices rather than deeply nefarious intent. Of course, this is just the front-end code, but this just looks like a KYC platform to me. Most of the secondary reports on this write-up seem to completely ignore section 0x13 and jump to the specific conclusions the author does not draw.
The fact that we've created a system where Discord need and want a KYC platform is a different and quite strange thing, but the KYC platform itself just looks like what it says on the tin.
Any time you interact with the financial services industry in a meaningful way, they are doing almost exactly all of these checks on you. It is mandated by law, and they're overseen by FINTRAC in Canada and FinCEN in the US.
When you applied for a bank account for your freelancing business (or startup idea), some people googled you, looked for PEPs (politically exposed persons) in your family, stored photos of your IDs and probably even printed them off, and sent everything in a nice package to some "risk" department. Who knows how that department is handling your data.
The only difference is that Persona is trying to put a front-end on it and selling the process as a SaaS. Look up "KYC/KYB saas" and you'll find hundreds of businesses doing this (including, of course, Persona).
edit: I want to emphasize that this isn't restricted to just business banking. Poor wording on my part. Lots of industries are legally mandated to conduct KYC/IDV. Notaries do it in home sales, your stock brokerage is doing it, employers in regulated industries do it to everyone on payroll. The list is very long. Unfortunately...
The government should take on responsibility for KYC imo, instead of letting 100 vendors come up with their own solutions. But that would probably have some nasty externalities.
There is more than “unique web design” causing reading issues with that article. For one, the all-lowercase text, as well as the arcane keywords and organization. Not to mention the autoplay music. I have communicated this to the author and they shrugged it off.
>> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Yes, most of us have read the rule. And I wasn't complaining in my comment; I was telling the author why their submission was getting complaints and being flagged.
Stomping your feet and insisting it doesn't matter when people are telling you your article is slightly unreadable really doesn't make you or your article worth investing time in, no matter how good it is.
Have a quirky website, fine. But if you have important information you want to be taken seriously, maybe consolidate that information into a more accessible format. Otherwise people will tell you so AND go elsewhere.
Some of the most interesting authors in tech on the internet have just absolute awful websites. Blinking animations everywhere, weird sounds, "cute" little javascript animations like it's 1999 again.
the last time the website was submitted, over half the comments talked about website design instead of the actual content. we can probably skip doing it again.
different people have different tastes. people complain about boring websites, people complain about websites with animations or colors. the only guarantee is that the conversation isn't interesting.
if you are on the side that doesnt like music, animations, whatever, i recommend a combination of noscript and using reader mode.
This is my problem with it. Put in a mute button if you're going to do this, otherwise it's just user hostile. No problem with stylized websites and fun animations.
Why not use your main account to post this, unless you mean it was submitted less than 4 days ago when your account was created? Genuinely curious what benefit a fresh account gives you here?
>unless you mean it was submitted less than 4 days ago
maybe you are unaware, but you can browse HN without an account, and you can browse previous submissions (years back, even!). it's not like i can only see posts made in the last 4 days.
amazing comment from a 13 year old account. really embodying the spirit of the HN guidelines. thanks for the warm welcome.
so, what exactly are you basing your accusation on?
was it me saying "use noscript and reader mode" or maybe "people have different opinions"? or just by nature of having created an account after you created yours?
this sort of accusation is what will drive HN to be a shit community to participate in. just accuse anyone you slightly disagree with of being a bot/ai.
i'm not even sure what your issue, or rezonant's issue, with me even is! all i said was that different people have different opinions, and you two are crawling up my ass about it. let's hope we never have to talk to each other about anything slightly important.
I find your surefooted statements about what Hacker News is, will become, or ever was to be worth little, seeing as you have only just become a part of this community.
Might I recommend leaving the snark at home and approaching your interactions here with good faith instead of acting like you're the authoritative arbiter of community interactions?
>Might I recommend leaving the snark at home and approaching your interactions here with good faith
the highest irony, coming from someone who literally cross-examined me and then insulted me rather than engaging with my comments at all! can you seriously not see how ridiculous that is?
somehow i am the one who needs to approach interactions in good faith?
lmao. no. you dont get good faith from me. if you want good faith, dont start your conversations with an interrogation and follow it with an insult.
Your original post was written as if you were a long-time community member with an investment in how the community works, yet it came from a brand-new green account. Something you probably don't know if you haven't spent much time here: it is quite common to use a green alt to post comments you don't want associated with your main profile. It is so common and accepted, in fact, that most users doing so just call the account throwawayXYZ, as in that exact pattern. So I'm used to seeing this.
But it didn't make sense why you would do that for this particular post, especially while seeming to assert some kind of authority and familiarity with the cultural norms of the HN community. So I asked you about it, not aiming to negate or reduce your point but rather to learn about why you would use an alt account.
Instead of simply replying that you were in fact new and this was not an alt, that you were a long-time lurker and a first-time poster, you jumped down my throat and the throats of everyone else involved in this thread, being toxic and abrasive in every way possible. Both of my posts here, which you think are at odds with each other, are in service of encouraging you to check yourself, because you are being unreasonable; they are not an insult demanding a clap-back. "You must be great at parties" should prompt you to step back and think about how passers-by would read your behavior and demeanor, because I'm giving you a social hint that a lot of people will find that behavior off-putting. It is not a prompt to grow more vitriolic and upset, and to continue the behavior I'm trying to point out will not serve you well in a social space like this.
Everyone should read this comment, it does a really eloquent job explaining the situation.
The fundamental thing to understand is this: The things you hear about that people make $500k for on the gray market and the things that you see people make $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.
Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.
Developing a vulnerability into a full chain requires a huge amount of risk - not weird crimey bitcoin in a back alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If they hit the same bug and win their $20k, the party gambling on the $200k full chain is back to square one.
Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.
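The time-value gamble described above can be put in back-of-envelope terms. The payouts below are the comment's own example figures; the scooping probability and hours are invented for illustration:

```python
# Back-of-envelope sketch of the time-value risk of full-chain development.
# p_scooped and hours_chain are assumptions, not figures from the thread.
bounty_payout = 20_000   # quick root-cause bug report to a bounty program
chain_payout = 200_000   # reliable, hard-to-detect full chain on the gray market
p_scooped = 0.5          # assumed chance a bounty hunter burns the bug first
hours_chain = 400        # assumed time to weaponize into a reliable chain

expected_chain = chain_payout * (1 - p_scooped)
print(expected_chain)    # 100000.0
print(expected_chain / hours_chain)  # effective $/hour under these assumptions
```

Under these made-up numbers the chain still wins in expectation, but the variance is brutal: a single duplicate bounty report zeroes out hundreds of hours of work.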
Why is it that the USA doesn't have its own bug bounty program for non-DoD systems? Like, sure, they have a bounty for vulns in govt systems. But why not accept vulns for any system, and offer to pay more than anyone else? It would give them a competitive advantage (offensive and defensive) over every other nation. End one experimental weapons program (or whatever garbage the DoD spends its obscene budget on) and suddenly we're not cyber-sucky anymore.
I think you are confusing bug bounty programs with espionage and cyber warfare. The USA definitely accepts vulnerabilities for any system (or at least target systems), paying good money for them if it's an attack chain, which gives them the competitive edge you mention. They have at least one military organization dedicated to this exact thing (USCYBERCOM), and realistically other orgs as well, including in the intelligence community.
There are no bug bounties on "any" system because bug bounties are part of programs to fix bugs, not exploit them. They therefore have bug bounties for their own systems, as those are the ones they would be interested in improving. What you described, which they definitely do, is cyber espionage, and those bugs are submitted through different channels than a bug bounty.
But that's the thing, I think they specifically need a non-IC program. If I'm a white-hat, grey-hat, or a somewhat cagey black-hat, I'm not gonna reach out to a shadowy organization with a penchant for extrajudicial surveillance, torture & killing to make $50k on a bug. Sure, you can try your hand at selling them an exploit that won't get revealed. But if only you and The Company know about the bug, and it could mean the upside in a potential war (or just a feather in an agency head's cap), why would The Company keep you alive and able to talk about it? OTOH, if the program you're reporting to doesn't have a track record of illegal activity, personally I'd feel a lot safer reporting there. And ideally their mission would be to patch the bug and not hold onto it. But we get to patch first, so it's still our advantage.
Because collecting and gatekeeping vulns so you can attack other countries is bad manners.
If you look up some of the Snowden testimonies, it's implied that the USA at least had access to some 0-days in the past, but nobody admitted to it, because it's just bad national politics.
Even if the USA is doing dog-shit politics now, openly admitting to collecting cyber-weapons (instead of doing it silently) is just an open invitation to condemnation.
From being in the trenches a couple of decades ago, they do. They just don't disclose after they pay the bounty. They keep them to themselves. I knew one guy (~2010?) making good money just selling exploits (to a 3-letter agency) that disabled the tally lamps on webcams so the cams could be enabled without alerting the subject.
Even though I agree with the conclusion with respect to pricing, I don't think this comment is generally accurate.
Most* valuable exploits can be sold on the gray market - not via some bootleg forum with cryptocurrency scammers or in a shadowy back alley for a briefcase full of cash, but for a simple, taxed, legal consulting fee to a forensics or spyware vendor, or to a government agency in a vendor-shaped trenchcoat, just like any other software consulting income.
The risk isn't arrest or scam, it's investment and time-value risk. Getting a bug bounty only requires (generally) that a bug can pass for real; get a crash dump with your magic value in a good looking place, submit, and you're done.
Selling an exploit chain on the gray market generally requires that the exploit chain be reliable, useful, and difficult to detect. This is orders of magnitude more difficult and is extremely high-risk work not because of some "shady" reason, but because there's a nonzero chance that the bug doesn't actually become useful or the vendor patches it before payout.
The things you see people make $500k for on the gray market and the things you see people make $20k for in a bounty program are completely different deliverables even if the root cause / CVE turns out to be the same.
*: For some definition of most, obviously there is an extant "true" crappy cryptocurrency forum black market for exploits but it's not very lucrative or high-skill compared to the "gray market;" these places are a dumping ground for exploits which are useful only for crime and/or for people who have difficulty doing even mildly legitimate business (widely sanctioned, off the grid due to personal history, etc etc.)
I see that someone linked an old tptacek comment about this topic which per the usual explains things more eloquently, so I'll link it again here too: https://news.ycombinator.com/item?id=43025038
The lack of CUDA support on AMD is absolutely not that AMD "couldn't" (although I certainly won't deny that their software has generally been lacking), it's clearly a strategic decision.
Supporting CUDA on AMD would only build a bigger moat for NVidia; there's no reason to cede the entire GPU programming environment to a competitor and indeed, this was a good gamble; as time goes on CUDA has become less and less essential or relevant.
Also, if you want a practical path towards drop-in replacing CUDA, you want ZLUDA; this project is interesting and kind of cool but the limitation to a C subset and no replacement libraries (BLAS, DNN, etc.) makes it not particularly useful in comparison.
They've already ceded the entire GPU programming environment to their competitor. CUDA is as relevant as it always has been.
The primary competitors are Google's TPU which are programmed using JAX and Cerebras which has an unrivaled hardware advantage.
If you insist on a hobbyist-accessible underdog, you'd go with Tenstorrent, not AMD. AMD is only interesting if you've already been buying Blackwells by the pallet and you're okay with building your own inference engine in-house for a handful of models.
Even disregarding CUDA, NVidia has had like 80% of the gaming market for years without any signs of this budging any time soon.
When it comes to GPUs, AMD just has the vibe of a company that basically shrugged and gave up. It's a shame because some competition would be amazing in this environment.
Nvidia has a sprawling APU family in the Tegra series of ARM APUs, that span machines from the original Jetson boards and the Nintendo Switch all the way to the GB10 that powers the DGX Spark and the robotics-targeted Thor.
The CPUs in their SOCs were not up to snuff for a non-portable game console until very recently. They used (and largely still do I believe) off the shelf ARM Cortex designs. The SOC fabric is their own, but the cores are standard.
In performance even the aging Zen2 would demolish the best Tegra you could get at the time.
You should note that the Switch, the only major handheld console for the last 10 years, is the only one using a Tegra.
And from everything I've heard, Nvidia is a garbage hardware partner who you absolutely don't want to base your entire business on, because they will screw you. The consoles all use custom AMD SoCs; if you're going to that deep a level of partnering, you'd want a partner who isn't out to stab you.
There has been a rumor that some OEMs will be releasing gaming-oriented laptops with the Nvidia N1X Arm CPU plus some form of 5070-5080-ballpark GPU; obviously not x86 Windows, so it would be pushing the latest compatibility layer.
PlayStation and Xbox are two extremely low-margin, high volume customers. Winning their bid means shipping the most units of the cheapest hardware, which AMD is very good at.
Agreed on ZLUDA being the practical choice. This project is more impressive as a "build a GPU compiler from scratch" exercise than as something you'd actually use for ML workloads. The custom instruction encoding without LLVM is genuinely cool though, even if the C subset limitation makes it a non-starter for most real CUDA codebases.
ZLUDA doesn't have full coverage though, and that means only a subset of CUDA codebases can be ported successfully; they've focused on 80/20 coverage for core math.
Completely different layer; tinygrad is a library for performing specific math ops (tensor, nn), this is a compiler for general CUDA C code.
If your needs can be expressed as tensor operations or neural network stuff that tinygrad supports, might as well use that (or one of the ten billion other higher order tensor libs).
Claude is doing the decompilation here, right? Has this been compared against using a traditional decompiler with Claude in the loop to improve decompilation and ensure matched results? I would think that Claude’s training data would include a lot more pseudo-C <-> C knowledge than MIPS assembler from GCC 2.7 and C pairs, and even if the traditional decompiler was kind of bad at N64 it would be more efficient to fix bad decompiler C than assembler.
It's wild to me that they wouldn't try this first. Feeding the asm directly into the model seems like intentionally ignoring a huge amount of work that has gone in traditional decompilation. What LLMs excel at (names, context, searching in high-dimensional space, making shit up) is very different from, e.g. coming up with an actual AST with infix expressions that represents asm code.
I've been doing some decompilation with Ghidra. Unfortunately, it's of a C++ game, which Ghidra isn't really great at. And thus Claude gets a bit confused about it all too. But all in all: it does work, and I've been able to reconstruct a ton of things already.
One of the other PhD students in my department has an NDSS 2026 paper about combining the strengths of both LLMs and traditional decompilers! https://lukedramko.github.io/files/idioms.pdf
Not Claude, but there are open-weight LLMs trained specifically on Ghidra decomp and tested on their ability to help reverse engineers make sense of it:
Agree. IDA is surely the “primary” tool for anything that runs on an OS on a common arch, but once you get into embedded Ghidra is heavily used for serious work and once you get to heavily automation based scenarios or obscure microarchitectures it’s the best solution and certainly a “serious” product used by “real” REs.
https://vmfunc.re/blog/persona
I definitely recommend reading this primary source before drawing conclusions about the code as most of the secondary reporting is quite low quality.