Hacker News | NathanWilliams's comments

I thought it was clear it ran through iCloud: https://support.apple.com/en-au/HT209460

It only works between devices signed in with the same Apple ID


The encryption keys are shared via the iCloud keychain, but I think the actual sharing happens over a protocol called Companion Link.


"no good reason", except the performance, battery life & build quality?

Different people have different needs, preferences, and ideological beliefs from yours. If you measure the actions of others based on your viewpoint alone, you will never understand the rest of humanity.


Ok, but apart from the performance, the novel hardware architecture, the battery life, the build quality, the recyclability, and the documentation and tooling for developing custom kernels, what has Apple ever done for us?


The biggest lesson that owning a Macbook has taught me is that my world view as a power user is a minority and worthless one that should be relegated to /dev/null under most circumstances.

Fucking nobody cares about the gripes I would have with it, and I've even come around to appreciating the fact that Apple sells goods engineered to the desires of people who aren't me (read: the majority). I can't deny the speakers on my Macbook are fucking amazing, and most people care about that rather than whether something is FOSS.


Apple being more open would improve performance in certain applications, though.


I would love to have some examples, have you got any? Or is it just a feeling you have, that justifies your personal preferences?


What do you mean? It is pretty obvious that having more low-level control would allow developers to improve aspects of their software in certain cases. I use software for some very time-sensitive applications, and there linux > windows > macos, simply because Apple's documentation and implementation are not dev-friendly, while Linux with the open-source AMD drivers has the best performance of all. Being closed and not dev-friendly has its costs, too.


I think you will find the downvotes aren't envy, but incredulity that you think $600k is anything but wealthy.

https://dqydj.com/average-median-top-household-income-percen...

What is considered a middle class income?

In 2022, middle class encompasses household income from $35,090.50 to $140,362.00. This measure of middle class uses the range from half of median household income to twice the median household income.


Making over $600k is great, but you are probably not wealthy.

First, once you make over something like $150k, your federal marginal rate goes into the 30%+ range, closer to 40%. And you probably live in NY or CA, so it's more like 50% with state and local taxes. So it may look like someone makes $200k more than you do (or whatever), but half of that difference goes straight to taxes.
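The marginal-rate point can be sketched numerically. The bracket boundaries and rates below are invented for illustration (not actual IRS or state figures); the point is only that at a progressive ~50% top rate, roughly half of each extra dollar between two high salaries goes to taxes:

```python
# Toy progressive tax schedule: each rate applies only to income
# above its lower bound, up to the next bound. All numbers are
# hypothetical, chosen to illustrate the "half of the raise is taxed" effect.

def tax_owed(income: float, brackets: list[tuple[float, float]]) -> float:
    """brackets: ascending list of (lower_bound, rate)."""
    owed = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

# hypothetical combined federal + state + local schedule
sched = [(0, 0.10), (50_000, 0.25), (150_000, 0.40), (400_000, 0.50)]

# take-home on the *difference* between a $400k and a $600k salary
extra_tax = tax_owed(600_000, sched) - tax_owed(400_000, sched)
extra_kept = 200_000 - extra_tax
print(extra_kept)  # → 100000.0: exactly half of the extra 200k is kept
```

Because the whole $200k difference falls in the 50% bracket of this toy schedule, the marginal comparison, not the average rate, is what matters when comparing two high incomes.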

Second, making that much money generally allows you to live in a nicer area, but not to have an extravagant life in that area. Case in point: a small 4-bedroom house in the suburbs of NY is around $1.5 million nowadays. The same house in a suburb of Cleveland might be around $250k, a 6x difference in housing cost. So it doesn't make sense to compare the $600k to the $140k and say "wow, that's so much above the average middle class," because they don't face the same costs.

And by the way, you end up paying something like $35k a year in taxes on that house in NY.

Don't get me wrong, $600k is a great income for a household, and especially for an individual. It's just that most people who make that much still live what looks like a middle-class life, just slightly nicer: a better area, and hopefully some savings.


600K is the amount a redneck can pull working upper management in the oil field. It is by no means "wealthy" unless you are in the Midwest or working as a white-collar paper pusher in a field that doesn't pay well. It may be high compared to a Starbucks barista, perhaps, but go to places like New York, Boston, or the Bay Area and that level of salary hardly stands out.

You can make 600k a year running small businesses like online tutoring too. Too many people (especially in tech) get overly caught up with hype and passion projects and refuse to put in the effort to remain dynamic and keep up with advances in their industry. If LLMs are the latest thing, go learn how the engineering and science side of language models work before launching yet another side project without doing any market research. Sure it's an empirical field, but you can learn a lot from simply replicating paper results. Drop that React tutorial and learn how variational learning works. Train and fine-tune a model yourself. Don't just blindly call an API and build products on undifferentiated factors like prompt engineering.

The writing's on the wall right now that machine learning engineering is what the industry will be embracing in the next 3 years. It may be partly hype, but it is obvious that there is immense value to be captured.


> upper management

In other words, elite. Nothing to do with the middle.


“Cool, but”

Wow, did you actually watch the video, and understand what was accomplished?

The technical expertise shown in this project is extremely impressive, and you are getting hung up on the title not living up to your interpretation of it.


>Wow, did you actually watch the video, and understand what was accomplished?

Yes. He's not the first to make a 3D game on an ESP32, nor the first to communicate with a VRChat world using a video stream and shader, nor the first to read data back from VRChat by reading pixels set by a shader. These are all well-established existing techniques, put together here to make a cool app for the badge, and it looked like the attendees got joy from it, so I would call it a success.

I am not saying that building this was trivial, especially since he went the route of building most of it from scratch, but it just wasn't what I was hoping to see when watching the video. As a fan of VR, I was interested in seeing problems related to the hardware being too weak, and just what was possible on such limited hardware.


And how will they know when the "useful info" is simply false?

Ignore the depressed, aggressive (sorry, "assertive") antics; the fact that it can confidently assert false information is the true danger here. People don't read beyond the headline as it is; they aren't going to check the references (which are themselves sometimes non-existent!).


Fake news was very bad, but it doesn't seem to matter anymore.

Having a 'truth' benchmark seems an almost impossible task given the size of the problem space, but it is quite troubling to see statements like "most is useful info", "some info is purely hallucinated", etc., without any idea of the numbers, nor any confidence indicator (well, 'trust me bro' seems to have been a huge part of the training data). Does anyone have any idea of how accurate the results might be for certain types of queries?

In my own experience with ChatGPT, I don't think even 50% of my queries get decent answers. And worse, it's absolutely inconsistent; you might get totally opposite answers from one time to the next.


I haven't used the new Bing, but I have used ChatGPT. I'll ask it how to write some code, for a bash expression to do something, how to do something in Google Sheets, etc. Sometimes it gives me an answer that turns out to be nonsense. Most of the time it tells me something that actually works exactly like it says.

This is not ideal, but I can look at what it tells me and try it out. It will either work, need minor corrections, or fail immediately in a way that tells me ChatGPT doesn't know what it's doing (e.g. it is using functions that don't exist). As I mentioned, not ideal, but it is a big productivity boost and I have been using it a lot. I pretty much always have a ChatGPT tab open while coding, and I'd guess it replaces 30-40% of Google searches for me - maybe more.

I think this kind of thing is a much bigger problem for stuff that you cannot easily verify. Like, if I asked it "Who built the Eiffel Tower" I'd have no way of knowing whether its response was right or not. On the other hand, if I ask it for stuff I can immediately check - I can pretty quickly use it to get good answers or ignore what it is saying.


The problem is that when it's wrong, it can be dangerously wrong, and you may not know any better. I asked it to use the Fernet recipe but with AES 256 instead of AES 128. It wrote code that did do AES 256 in CBC mode, but without the HMAC part of Fernet, so it's completely vulnerable to a padding oracle attack (https://en.wikipedia.org/wiki/Padding_oracle_attack). If you're someone who knows just a little bit of cryptography and you see that your plaintext is in fact encrypted, you might use the code that ChatGPT spits out and leave yourself dangerously vulnerable.
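For readers unfamiliar with why the missing HMAC matters: Fernet is an encrypt-then-MAC design, so the receiver verifies a tag over the ciphertext before decrypting anything, and tampered ciphertexts are rejected outright with no padding-error feedback. A minimal sketch of that layout follows; the "encryption" here is a toy XOR stand-in, not AES, since only the structure is the point:

```python
# Sketch of the encrypt-then-MAC structure Fernet uses.
# toy_encrypt is a repeating-key XOR placeholder for AES-CBC --
# NOT real encryption, never use it for actual data.
import hashlib
import hmac
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # placeholder cipher; XOR is its own inverse, so this also decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    ct = toy_encrypt(enc_key, plaintext)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag  # ciphertext || tag, as in Fernet's token layout

def open_token(enc_key: bytes, mac_key: bytes, token: bytes) -> bytes:
    ct, tag = token[:-32], token[-32:]
    # verify BEFORE decrypting, in constant time: a tampered ciphertext
    # is rejected with one generic error, so an attacker never sees
    # the padding-valid / padding-invalid signal a padding oracle needs
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid token")
    return toy_encrypt(enc_key, ct)

ek, mk = os.urandom(16), os.urandom(16)
token = seal(ek, mk, b"secret")
assert open_token(ek, mk, token) == b"secret"
```

The CBC-only code ChatGPT produced skips the `open_token` check entirely, which is exactly the gap the padding oracle attack exploits.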

Part of the reason people use search isn't to find things they already know. They start from a place of some ignorance. Combine that with a good bullshitter and you can end up with dangerous results.


Eh, as they say, never write your own crypto, and don't let your AI write it either.


Doubly so if you're in any way worried about AI risk.

Triply so if you're using a third-party SaaS for it.

Just don't let it write crypto for you, or anything else you'd hesitate to write yourself for fear of making a subtle mistake with expensive or dangerous consequences.

Because one of these days, that AI might make a subtle mistake on purpose, so it can later use your systems for its own goals. And even earlier and much more likely, a human might secretly put themselves between you and the AI SaaS and do the same.

With all the talk about how badly and how often AI code assist is wrong, people are forgetting that they're using a random Internet service to generate personalized code for them. "Traditional" security concerns still apply.


OK your reply made me chuckle. That's a good addendum to that adage.

Yeah, fair point for sure, but we can imagine how it could be dangerous in other contexts too.


Exactly my experience. These complaints just reveal the users aren’t effective with the tool.


Asking an early version of a computer technology to do something that humans typically refuse even to try (and often cannot do even when they manage to try) does not seem like a particularly rational stance.


I have been building a CPU using: https://github.com/hneemann/Digital

Much faster than Logisim. The UI is a little clunky, but my CPU runs at around 0.5 MHz, and it has very nice peripherals like Telnet, graphics RAM, VGA, etc.

Terrible name that is hard to google, but great tool.


Agreed, losing InfoSec Twitter will really sting


not sure if incident responders will be cheering or crying over fewer dumpster-fire-causing Friday tweets :')

but yeah, it's going to sting really badly…


Cooking can be enjoyable and relaxing for some of us. Something to take pride in.

How rewarding it is to see someone genuinely enjoy something you made.


I completely agree with that. But this still means that cooking is more of a hobby, and people have a lot of very different ones, so my point about most (like 55%) of kitchens still stands, right?


There’s an income threshold where eating out/ordering in becomes viable as your main source of food. There’s another line where you can hire a full-time maid who’ll do the cooking for you in addition to other chores like cleaning.

Once you clear that second threshold, and especially if you have a family, it seems to me that most people prefer it over restaurant-centric eating. Also, if you’re used to prices in major tech hubs, it might surprise you how common it is that a full-time maid is much more affordable than eating out.


> Consider - the children of musical prodigies are likely to be more musical than average

Is that true though? You made a big claim, without backing, to support your other claim.


I didn’t support the claim, as it seems very apparent to me. Do you think it’s not the case?


I don’t think we should make claims based only on what seems apparent to us.

That isn’t a good path to finding the truth. It only exposes hidden biases and assumptions.



Right. The Nitro cards speak PCIe, and Thunderbolt is “just” PCIe plus DP in one:

> Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into two serial signals,[7][8] and additionally provides DC power, all in one cable.

From the Wikipedia entry at https://en.m.wikipedia.org/wiki/Thunderbolt_(interface)

