
It's a truly abysmal experience compared to what else is out there (my experience is with Rust, Python, and TS inside VS Code).

Autocomplete is often hopeless and doesn't help you with the simplest things, like picking from multiple initializers or switching a function call to a different signature.

The project setup is this mysterious Xcode project file, instead of a standardized YAML file or something that anyone can modify and understand.

I have to say provisioning has improved a lot. I remember back in 2008, it was really a pain to get anything working.

This is not necessarily about Xcode, but maybe it should be: screenshots for your app. They need screenshots for 454 device types, and there's zero automation in their own tooling.

The layout is also very inflexible: they dictate a couple of panels and that's how you _must_ use them. That's unlike any other modern IDE.


Brother, maybe make a new username...

To believe that is … quite something


Right. GPT is a glorified keyboard prediction, and people should treat it as such. I don’t get it when people get mad at the output.


Or, stop starting wars


Quite sure it's done to speed up the Camera app and reduce the time to first photo. The camera module needs a few tenths of a second to boot up, so it makes sense to start that process at the earliest indication of user interaction. A touch-down is a good indication, even if the user ends up swiping instead of lifting their finger. The same thing happens on the lock screen: if you hold your finger on the lock screen and move one pixel to the left, the camera module starts up even if you don't complete the swipe-to-camera gesture.
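Roughly, I picture the gesture handler kicking off the capture session on touch-down, something like this sketch (class and method names are made up for illustration; this is obviously not Apple's actual code):

    import UIKit
    import AVFoundation

    // Illustrative sketch only: begin warming the capture pipeline on touch-down,
    // before the user finishes the tap or swipe. Names here are hypothetical.
    final class CameraWarmupViewController: UIViewController {
        private let session = AVCaptureSession()
        private var isWarmingUp = false

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            super.touchesBegan(touches, with: event)
            warmUpCameraIfNeeded()  // touch-down is an early hint of intent
        }

        private func warmUpCameraIfNeeded() {
            guard !isWarmingUp else { return }
            isWarmingUp = true
            DispatchQueue.global(qos: .userInitiated).async { [session] in
                session.beginConfiguration()
                if let device = AVCaptureDevice.default(for: .video),
                   let input = try? AVCaptureDeviceInput(device: device),
                   session.canAddInput(input) {
                    session.addInput(input)
                }
                session.commitConfiguration()
                session.startRunning()  // sensor boots while the UI transition is still in flight
            }
        }
    }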


Wouldn't surprise me either. I know a guy who worked at Apple on iOS perf and the one time he was telling me about it years ago, it was "camera app doesn't start fast enough, so we reworked memory management". Apple really cares about the camera.


We should sue Apple for this: their Camera app gets an unfair advantage here compared to third-party camera apps.


Yup, all the gimmicks I have to do in my app to distract users from the camera loading...


No thanks, the time from locked to first capture is already too long on my 15 pro


The point of the suit would be for the camera to operate faster in all apps.


Yeah, makes total sense why they'd do it, but in my case it was increasing "alert fatigue" (why is my camera on?) and so I moved it.


I bet this is in the new version 26. That version is such garbage and I regret updating. 95% of the time, when I open the phone, it doesn't unlock with my face and I have to enter the PIN. Sometimes I can't take photos either. In the browser, when I touch the address field nothing happens, and I can go on and on and on. Just leave the shit as is, people. It's like if I have a screwdriver in my workshop and every other month, when I come back to use it, you change some bullshit so I have to operate it slightly differently. Fuck that.


No, I can confirm that this camera behavior also happens on iOS 16. But I agree that iOS and macOS 26 are the worst things Apple has made in a long time.


Also happens on iOS 18


I think ChatGPT has a similar feature. I was amazed at how the reply starts coming in literally the moment I press Enter. As far as I can tell, that is only possible if all the previous tokens I submitted have already been processed. So when I actually submit the message, it only needs to update the inner state by one more token.

i.e. I think it's sending my message to the server continuously, and updating the GPU state with each token (chunk of text) that comes in.

Or maybe their setup is just that good and doesn't actually need any tricks or optimizations? Either way, that's very impressive.
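If it works the way I suspect, the client side could be as simple as debouncing the draft and POSTing it as the user types. A rough sketch; the endpoint and payload shape here are invented for illustration and almost certainly not what ChatGPT actually uses:

    import Foundation

    // Speculative sketch: send the partially typed message to a hypothetical
    // prefill endpoint so the server can process those tokens before submit.
    // The URL and payload are made up for illustration.
    final class DraftStreamer {
        private let prefillURL = URL(string: "https://chat.example.com/prefill")!
        private var pending: Task<Void, Never>?

        func userTyped(_ draft: String) {
            pending?.cancel()  // only the latest draft matters
            pending = Task {
                try? await Task.sleep(nanoseconds: 150_000_000)  // debounce ~150 ms of typing
                guard !Task.isCancelled else { return }
                var request = URLRequest(url: prefillURL)
                request.httpMethod = "POST"
                request.setValue("application/json", forHTTPHeaderField: "Content-Type")
                request.httpBody = try? JSONEncoder().encode(["draft": draft])
                _ = try? await URLSession.shared.data(for: request)  // server can prefill its state
            }
        }
    }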


The 'flash' / no- or low-thinking versions of those models are crazy fast. We often receive the full response (not just the first token) in less than a second via the API.


Support systems often do this: they stream the message, so agents can already see what you're typing. I know a few banking apps that do this.


> I think it's sending my message to the server continuously

It is, at least for the first message when starting a new chat. If you open the network tools and type, you can see the text being sent to the servers on every keystroke.

Source: spending too much time analysing ChatGPT's network calls in order to keep using the mini models on a free account.


IIRC, Apple has a patent from years ago for keeping the camera module in a semi-active mode when the phone isn't entirely idle, to make starting it faster.


> which isn't unsurprising

There has to be an easier combination of words for conveying the same thing.


Great stuff.

The layout-shifting animation on the page makes this very hard to read.


Are they as accessible as GUIs, though? (Genuine question.)

UI libraries have a lot of features that let people with disabilities “read” and interact with the screen in efficient ways.


TUI tools are generally as accessible as the terminal on which they run.

GUI apps are much trickier. They require that the developer implement integration with accessibility frameworks (which vary depending on X11/Wayland) or use a toolkit which does this.


GUI kits like AppKit or GTK have built-in accessibility support: standard components (input fields, dropdown boxes) and a view hierarchy that interact with accessibility tools for free. It's the main upside of a GUI.

TUIs are tricky.

I think TUI accessibility generally involves rereading the screen on changes (going by macOS VoiceOver). The screen reader can optimize this if you use the terminal cursor (moving it with ANSI sequences, as in the sketch below) or stick to simple line-based output, but practically zero TUIs do this. You'd have to put a lot of thought into making your TUI screen-reader friendly compared to a GUI.
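For example, parking the real cursor on the focused row takes only a couple of escape sequences. A toy sketch, not taken from any particular TUI library:

    import Foundation

    // Toy sketch: redraw a list and leave the terminal cursor on the focused row,
    // so a cursor-tracking screen reader announces the right line.
    func moveCursor(row: Int, column: Int) {
        print("\u{1B}[\(row);\(column)H", terminator: "")  // CSI row;col H (1-based)
    }

    func redraw(items: [String], focusedIndex: Int) {
        print("\u{1B}[2J", terminator: "")  // clear the screen
        for (i, item) in items.enumerated() {
            moveCursor(row: i + 1, column: 1)
            print(i == focusedIndex ? "> \(item)" : "  \(item)", terminator: "")
        }
        moveCursor(row: focusedIndex + 1, column: 1)  // park the cursor on the focused row
        fflush(stdout)  // push the frame out immediately
    }

    redraw(items: ["Inbox", "Drafts", "Sent"], focusedIndex: 1)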

The thing going for you when you build a TUI is that people are used to bad accessibility, so they don't expect you to solve the whole ecosystem. Kind of like how international keyboards don't work in terminal apps because the terminal emulator doesn't send raw key scans.


How are TUI tools just as accessible as the terminal? Take a visually simple program like neomutt or vim. How does a vision-impaired user understand the TUI's layout? E.g. the splits and statusbar in vim, or the q:Quit d:Del... labels at the top of neomutt. It seems to me that, because a TUI only provides the abstraction of raw glyphs, any accessibility is built on hopes and dreams. More complicated TUIs like htop or glances seem like they would be utterly hopeless.

When it comes to GUIs, you have a higher level of abstraction than a grid of glyphs. By using a GUI toolkit with these abstractions, you can get accessibility (relatively) for free.

Open to having my mind changed though.


Accessibility is a great thing to have and strive for, but it cannot be the number one design principle.

Imagine if everything around us were designed for blind people.


I suspect blind people imagine that a lot.

The idea is to design for all (or as many as feasible); it's not a binary either/or.


You cannot design a lot of TUIs for everyone. Should we abandon TUIs entirely?


Not necessarily designed for, but accessible to.

Additionally, in sysadmin work, blind users are not just some random group; the ability to work without one's eyes is central to the command-line interface. You could always, in theory, get by with just a keyboard and a TTS engine that reads out the output. It's all based on the STDIO abstractions, which are just string streams, completely compatible with and accessible to blind, and even deaf, users. (Unlike GUIs.)


I dreaded the thought of scrolling down because I knew I was going to stumble upon his face.


The only thing his statement was missing was "thank you for your attention to this matter."

