Hacker News | Crespyl's comments

Is it possible that the iPhone filters are weaker due to FaceID requirements? I seem to recall that FaceID (and similar systems, like Windows Hello) depend on IR to get a more 3D map of the face, so it'd make sense that they want to be more sensitive in that range.

Laptops aren't generally being used in the same areas as cars though, so you wouldn't expect to see as many cases involving Windows Hello compatible laptops/cameras.


That wouldn't make sense on the back of the phone.

Possibly. Some models of iPhone use LiDAR for AR tooling, such as the Measure app.

Allowing the owner of the device root access doesn't necessarily break the security model. It just means that the user can grant additional privileges to specific apps the owner has decided to trust. Every other app still has to abide by the restrictions.

The fact that Android complains, and tells any app that asks, when the owner actually, you know, owns the device they paid for is an implementation detail.

A Linux distribution that adopts an Android style security model could easily still provide the owner root access while locking down less trusted apps in such a way that the apps can't know or care whether the device is rooted.


IMHO, I should be able to install the OS I want on the hardware I paid for. What should be illegal is technically preventing me from installing a different OS, because I paid for that hardware and I should own it.

But that does not mean that all OSes should be open source. I think it's fine for iOS to be proprietary, but there should be enough information for someone to write an entire alternative OS that runs on an iPhone. I think it should be illegal to prevent that (is it called tivoization?).

All that to say, I don't believe that having root on my Android system is a right. But being able to install a system that gives me root should be one. If that system exists, that is.


All this is just "Games haven't(/can't) had their 'Citizen Kane'" all over again. What are you expecting? What would a "Lord of the Rings" of gaming need to do to be "real art" in your (the general you, I'm not really trying to call you out specifically) eyes?

When someone watches a movie, or engages with any other art form, are they "transformed"?

Games are certainly a unique art form, but I reject the idea that they are somehow unable to produce a "shared cultural vocabulary", or that the experience of playing a game can't be discussed at just as rich a level as, say, the experience of watching a movie or listening to a piece of music. Ultimately, to fully engage in a dialogue about a work of art, you need to experience that work in its intended form; this should be obviously true of music, movies, painting, and games. But to set games apart as somehow less able to be fully discussed is nonsense.


> reject the idea that they are somehow unable to produce a "shared cultural vocabulary"

Anyone who witnessed a playtesting session with someone who never played video games before knows that there's a tremendous amount of shared cultural vocabulary there already.


I've had this happen a handful of times with my Frame TV and Steam Deck, though it's inconsistent for some reason. It's pretty cool when it works.

The Deck can pretty consistently turn the TV on from standby(/picture mode) and grab the input, but if the TV is completely off (black screen) CEC doesn't work anymore.


I would guess it's referring to the Erlang VM: https://en.wikipedia.org/wiki/BEAM_(Erlang_virtual_machine)


this is correct :)


Hey, I've also used and loved Orpie!

I'm not extremely familiar with any of the ML family, but Eric Lippert had a blog series I followed for a while in which he was writing a Z-Machine in OCaml: https://ericlippert.com/2016/02/01/west-of-house/ I followed along but in Rust for a while, though I think he paused the project at some point and I lost steam.

I learned more about Rust (which, IIRC, was first implemented in OCaml) than I did about OCaml, but it's always seemed like a nice language.


Lippert started doing that blog series as part of his learning journey when he got hired at Facebook to write OCaml. Just a fun historical fact.


Isn't the Deck x86 though?


IIUC, it comes down to simplifying playback/subtitle rendering to the lowest common denominator among the various western streaming platforms.

The good/old subtitles in the ASS format required a more complex playback system than what Netflix/Hulu (and maybe Blu-ray players) currently offer. This could be worked around by burning the subs into the video stream, but then you need to keep separate copies of your (large) video files for each subtitled language.

That doesn't seem like it'd be such a huge problem to me, but what do I know?

The post does a good job explaining the effective monopoly system at play that prevents real competition to provide any pressure to improve or maintain the prior quality.


It's an N×M×K problem: n_videos × bitrates × formats.

Assuming each video at its largest bitrate is, say, 2 GB, and assuming S3 is $0.025/GB, that's a nickel per month, or roughly $0.60/yr for that video.

Next up are the reduced bitrates: assume you go from 2 GB to 1 GB and finally 500 MB. Round up and you're at $1/video/yr.

Now duplicate it to AV1 and MP4, and multiply that by English, French, and Spanish (oh, and let's say Japanese and Chinese too for good measure).

So a single 2 GB video goes from $1/yr to $10/yr, and that's without doing "the dumb simple thing" for subtitles, which would basically 4x your cost over "commodity subtitling services".
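The back-of-the-envelope math above can be sketched out; the prices, ladder sizes, and format/language lists are just the illustrative assumptions from this comment, not real Crunchyroll numbers:

```python
# Hypothetical storage-cost sketch using the figures from the comment above.
# Assumptions: $0.025/GB-month (the quoted S3 price), a bitrate ladder of
# 2 GB + 1 GB + 0.5 GB per encode, two formats, and five burned-in-sub languages.

PRICE_PER_GB_MONTH = 0.025
SIZES_GB = [2.0, 1.0, 0.5]                   # bitrate ladder per encode
FORMATS = ["AV1", "MP4"]                     # duplicated container/codec variants
LANGUAGES = ["en", "fr", "es", "ja", "zh"]   # burn-in subs force one encode per language

def yearly_cost_single_encode() -> float:
    # One format, one language, full bitrate ladder
    return sum(SIZES_GB) * PRICE_PER_GB_MONTH * 12

def yearly_cost_full_matrix() -> float:
    # Duplicate the ladder across every format x language combination
    return yearly_cost_single_encode() * len(FORMATS) * len(LANGUAGES)

print(f"single encode: ${yearly_cost_single_encode():.2f}/yr")  # $1.05/yr
print(f"full matrix:   ${yearly_cost_full_matrix():.2f}/yr")    # $10.50/yr
```

Which lands on roughly the $1/yr vs. $10/yr figures above.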

Or "simplify, simplify, simplify": you reduce costs (cha-ching!) and become compatible for syndication or redistribution (cha-ching!)

... and they would have gotten away with it too if it weren't for those meddling kids!


S3 is >100x more expensive than hosting it yourself. You shouldn't argue about things being expensive using cloud prices.


Except ASS streams really aren't that big and don't have to be stored with each encode. They can just be in a separate file. And this is how Crunchyroll used to serve them: before they moved to hardware DRM, you could just download all of the separate sub tracks.

You don't need to multiply anything here, except the number of sub streams. One is ASS, the other the primitive standards Netflix and other services use.


Someone else was saying they maintained burn-in subs for devices that didn't handle ASS renderers. Even without accounting for the burn-in versions, using non-standard subs still bumps them off of commodity subtitling services and limits distribution/syndication.

Edit: and to the peer comment regarding S3 vs self host: regardless of 10x cloud cost, it's still 10x volume. Where 1TB local would do, now you need 10TB (10x the cost).


The labor of converting ASS to TTML is there, yeah. But the factors are n_videos × languages × 2 formats. And considering this is pretty compressible text (34 MB -> 4 MB for a completely bonkers sub track that includes animations, animated fonts, and otherwise transformed text), I can't imagine that hosting costs more than their analysis suggests.


The bigger LLMs have generally figured out this specific problem.


Feature branches that have been cleaned up and peer-reviewed/CI-tested, at least in the last few places I worked.

Every so often this still means that devs working on a feature will need to rebase back on the latest version of the shared branch, but if your code is reasonably modular and your project management doesn't have people overlapping too much this shouldn't be terribly painful.

