It's a truly abysmal experience compared to what else is out there (my experience is with Rust, Python, and TS inside VS Code).
Often the autocomplete is hopeless and doesn't help you with the simplest things, like picking from multiple initializers or switching to a different signature of a function call.
The project setup is this mystery Xcode project file, instead of a standardized YAML file or something that anyone can modify and understand.
I have to say provisioning has improved a lot. I remember back in 2008, it was really a pain to get anything working.
This is not necessarily about Xcode, but maybe it should be: screenshots for your app. They require screenshots for 454 device types, and there's zero automation in their own tooling.
The layout is also very inflexible: they dictate a couple of panels, and that's how you _must_ use them. That's unlike any other modern IDE.
Quite sure it's done to speed up the Camera app and reduce the time to first photo. The camera module takes a few tenths of a second to boot, so it makes sense to start that process at the earliest indication of user interaction.
In this case, a touch-down is a good indication, even if the user ends up swiping instead of completing the touch-up.
The same thing happens on the lock screen: if you hold your finger on it and move even one pixel to the left, the camera module starts up, even if you don't finish the swipe-to-camera gesture.
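A minimal sketch of what that pre-warming pattern could look like in an app's own UIKit code; the view class and dispatch queue here are hypothetical, and Apple's actual lock-screen implementation is private:

```swift
import UIKit
import AVFoundation

// Hypothetical view illustrating the pre-warm pattern: start booting the
// capture session on touch-down, long before the gesture completes.
// (Session inputs/outputs are omitted for brevity.)
final class CameraLaunchView: UIView {
    private let session = AVCaptureSession()
    private let warmUpQueue = DispatchQueue(label: "camera.warmup")

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        // Touch-down is the earliest signal of intent; start the camera now
        // so it's ready by the time the tap or swipe finishes.
        warmUpQueue.async { [session] in
            if !session.isRunning { session.startRunning() }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesCancelled(touches, with: event)
        // If the user swiped away instead, tear the session back down.
        warmUpQueue.async { [session] in
            if session.isRunning { session.stopRunning() }
        }
    }
}
```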
Wouldn't surprise me either. I know a guy who worked at Apple on iOS perf, and the one time he told me about it, years ago, it was "the camera app doesn't start fast enough, so we reworked memory management." Apple really cares about the camera.
I bet this is in the new version 26. That version is so garbage and I regret updating. 95% of the time, when I open the phone, it doesn't unlock with my face and I have to enter the PIN. Sometimes I can't take photos either. In the browser, when I touch the address field, nothing happens, and I can go on and on and on. Just leave the shit as is, people. It's like if I have a screwdriver in my workshop and every other month, when I come back to use it, you've changed some bullshit so I have to operate it slightly differently. Fuck that.
I think ChatGPT has a similar feature. I was amazed at how the reply starts coming in literally the moment I press enter. As far as I can tell, that's only possible if all the tokens I typed have already been processed, so when I actually submit the message, it only needs to update the model's internal state by one more token.
i.e. I think it's sending my message to the server continuously and updating the GPU state with each token (chunk of text) as it comes in.
Or maybe their setup is just that good and doesn't actually need any tricks or optimizations? Either way, it's very impressive.
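A sketch of what the client side of that trick could look like. The /prefill and /submit endpoints, the host, and the session scheme are all hypothetical (OpenAI's actual protocol isn't public), but they illustrate the incremental-prefill idea:

```swift
import Foundation

// Hypothetical client that streams each newly typed chunk to the server as
// the user types, so the prompt is already prefilled when they hit enter.
struct PrefillClient {
    let baseURL = URL(string: "https://example.com/api")!  // placeholder host
    let sessionID: String

    // Called on every text change; sends only the new chunk. The server
    // would tokenize it and extend the cached attention state for this session.
    func send(chunk: String) async throws {
        var request = URLRequest(url: baseURL.appendingPathComponent("prefill"))
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(["session": sessionID, "chunk": chunk])
        _ = try await URLSession.shared.data(for: request)
    }

    // Called on enter: almost everything has been processed already, so only
    // the final token(s) remain and generation can begin immediately.
    func submit() async throws {
        var request = URLRequest(url: baseURL.appendingPathComponent("submit"))
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(["session": sessionID])
        _ = try await URLSession.shared.data(for: request)
    }
}
```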
The 'flash' / no- or low-thinking versions of those models are crazy fast. We often receive the full response (not just the first token) in less than a second via the API.
> I think it's sending my message to the server continuously
It is, at least for the first message when starting a new chat. If you open the network tools and type, you can see the text being sent to the servers on every keystroke.
Source: spending too much time analysing the network calls in ChatGPT to keep using the mini models on a free account.
IIRC, Apple has a patent from years ago for keeping the camera module in a semi-active mode whenever the phone isn't entirely idle, to make starting it faster.
TUI tools are generally as accessible as the terminal on which they run.
GUI apps are much trickier. They require the developer to implement integration with accessibility frameworks (which vary between X11 and Wayland) or to use a toolkit that does this for them.
GUI toolkits like AppKit or GTK have accessibility built in: standard components (input fields, dropdown boxes) and the view hierarchy interact with accessibility tools for free. It's one of the main upsides of a GUI.
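To illustrate (a sketch, not a complete app): in AppKit a standard control is accessible out of the box, while a custom view stays invisible to VoiceOver until you describe it via the NSAccessibility overrides. DialView here is a made-up example:

```swift
import AppKit

// A standard control ships with accessibility already wired up: VoiceOver
// knows this is a text field with a placeholder, no extra code needed.
let nameField = NSTextField(string: "")
nameField.placeholderString = "Name"

// A custom view, by contrast, is invisible to assistive tech until the
// developer describes it explicitly.
final class DialView: NSView {
    var value: Double = 0.5

    override func isAccessibilityElement() -> Bool { true }
    override func accessibilityRole() -> NSAccessibility.Role? { .slider }
    override func accessibilityLabel() -> String? { "Volume" }
    override func accessibilityValue() -> Any? { value }
}
```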
TUIs are tricky.
I think TUI accessibility generally involves re-reading the screen on changes (going by macOS VoiceOver). A TUI can make that easier by moving the real terminal cursor (with ANSI sequences) or by sticking to simple line-based output, but pretty much zero TUIs do this. You'd have to put a lot of thought into making your TUI screen-reader friendly, compared to a GUI.
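A sketch of those two techniques, assuming a VT100-compatible terminal (the function names are made up):

```swift
import Foundation

// 1. Simple line-based output: append lines instead of redrawing the whole
//    screen, so a screen reader only has to announce what's new.
func appendStatus(_ line: String) {
    print(line)
}

// 2. Full-screen TUIs: move the *real* terminal cursor to the cell you just
//    changed (ANSI CUP sequence: ESC [ row ; col H), so cursor-tracking
//    screen readers like VoiceOver know where the update happened.
func update(row: Int, col: Int, text: String) {
    print("\u{1B}[\(row);\(col)H\(text)", terminator: "")
    fflush(stdout)
}
```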
The one thing going for you when you build a TUI is that people are used to bad accessibility, so they don't expect you to solve the whole ecosystem. Kind of like how international keyboards don't work in terminal apps, because the terminal emulator doesn't send raw key scans.
How are TUI tools just as accessible as the terminal? Take a visually simple program like neomutt or vim. How does a vision-impaired user understand the TUI's layout, e.g. the splits and statusbar in vim, or the q:Quit d:Del... labels at the top of neomutt? Because the TUI only provides the abstraction of raw glyphs, any accessibility is built on hopes and dreams. More complicated TUIs like htop or glances seem utterly hopeless.
When it comes to GUIs, you have a higher level of abstraction than a grid of glyphs. By using a GUI toolkit with these abstractions, you get accessibility (relatively) for free.
Additionally, in sysadmin work, blind users are not just some random group; the ability to work without one's eyes is central to the command-line interface. In theory you could always get by with just a keyboard and a TTS that reads out the output. It's all based on the STDIO abstractions, which are just string streams, completely compatible with and accessible to blind, and even deaf, users (unlike GUIs).