
The Voyager Company is truly worthy of study if you are at all interested in a vision for hypermedia before the internet.

- Collected media https://the-next.eliterature.org/collections/2

- A catalog introducing this software to a print audience https://archive.org/details/voyager-360-catalog/mode/2upa



Every multimedia CD for Windows in the mid-to-late '90s was like that. Encarta was the pinnacle, for obvious reasons, among these Macromedia Director games/tools with courses, 360° views of Paris before Google even existed, and of course the Cryo Omni 3D games, soon playable under ScummVM.

Has anyone tried asking Microsoft if they’re willing to put Encarta (or at least Collier’s Encyclopedia https://en.wikipedia.org/wiki/Collier's_Encyclopedia which it was developed from) under a Creative Commons license? They don’t seem to be doing much else with it, they’ve presumably written it off financially by now, and they’d get some goodwill out of it.

ScummVM (daily build) can run tons of Macromedia Director-based games and software.

Ooh, that's nice. But I'm really hoping for the Encarta/Collier's material to be free to legally distribute and adapt.

Would also recommend https://types.kitlangton.com/ as a companion — sometimes many examples can illustrate the point more succinctly than text.


Both ProseMirror and the newer version of CodeMirror have a pretty elegant solution to this: each modification to the document is modeled as a step that keeps track of indices instead of node/text identities, and uses a data structure called a "position map" that the buffered steps can be mapped through to create steps with updated positions, which are then applied to your document.

In practice, it works quite well. Here's more info:

https://marijnhaverbeke.nl/blog/collaborative-editing.html

https://marijnhaverbeke.nl/blog/collaborative-editing-cm.htm...
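The core idea can be sketched in a few lines. This is a hypothetical minimal version, not ProseMirror's actual API (the real library's `StepMap` also handles association/bias and multi-range changes): each step's map records a change as a start position plus deleted and inserted sizes, and buffered positions are mapped through it.

```typescript
// Hypothetical minimal position map (sketch, not the real ProseMirror API):
// a change replaces `delCount` units starting at `start` with `insCount` units.
type StepMap = { start: number; delCount: number; insCount: number };

// Map a document position through one step's map.
function mapPos(pos: number, m: StepMap): number {
  if (pos <= m.start) return pos; // before the change: unaffected
  if (pos >= m.start + m.delCount) // after the change: shift by the size delta
    return pos + m.insCount - m.delCount;
  return m.start + m.insCount; // inside the deleted range: clamp past the insertion
}

// A buffered step that targeted position 10, rebased over an
// insertion of 3 characters at position 4:
const insertAt4: StepMap = { start: 4, delCount: 0, insCount: 3 };
console.log(mapPos(10, insertAt4)); // 13
console.log(mapPos(3, insertAt4)); // 3 (before the edit, unchanged)
```

To rebase a whole buffer, each queued step is mapped through the maps of every step that was applied since it was created, then applied to the current document.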


GitHub's https://primer.style/product/getting-started/ does a good job of creating a cohesive design language that works just as well for server-rendered Ruby views as for "upgrading" parts of the UI to React when you need more interactivity. That's a constraint I wish more of the Rails ecosystem's design tooling would attempt to solve for.


You're not alone. There's been an effort to transparently update the UI to a React implementation over the past year or two, and while I understand the benefits of that approach, it has introduced some flakiness in moving away from the server-rendered pjax/html-pipeline/simple-web-components approach that was so cohesive and battle-tested over the decade before it.


The convention on desktop and mobile is not to have a submission at all. If you click one of these "mercury" buttons, you can always have up-to-date state.

Forms and submissions are mostly a web convention. I too think that's more natural, but there are a lot of existing contexts, like settings, where the expectation is that making a selection doesn't require an explicit "save" step.


That's a bad excuse. There are plenty of widgets that need to deal with illegal states in an auto-saving form. For example, every text input that expects a numeric value needs to allow an empty string.

If you don't want to deal with validation logic in your app, you could just disable the last checkbox.
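The numeric-input case above can be sketched concretely. This is a hypothetical helper (the name `shouldSave` and its shape are my own, not from any library): an auto-saving field has to tolerate the transient empty string while the user is mid-edit, and only persist once the raw text parses to a valid number.

```typescript
// Hypothetical sketch: decide whether an auto-saving numeric field
// should persist the current raw input, or wait for a valid value.
function shouldSave(raw: string): number | null {
  if (raw.trim() === "") return null; // illegal-but-transient: keep editing, don't save
  const n = Number(raw);
  return Number.isFinite(n) ? n : null; // persist only fully valid values
}

console.log(shouldSave("")); // null (user is mid-edit)
console.log(shouldSave("42")); // 42 (safe to auto-save)
console.log(shouldSave("4x")); // null (not yet a number)
```

The point is that the "illegal" state lives in the widget, not in the saved model: the form auto-saves only when the input leaves that transient state.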


I like your last point.

Also, in practice this state does not happen in “apps”. Before the user reaches the screen, you must have set a default, and if you have a default, “deselect all” will eventually revert to it.


I'm not very excited about the product itself, but if you are at all interested in spatial computing, it's worth reading through Apple's design and developer resources:

Designing for visionOS - https://developer.apple.com/design/human-interface-guideline...

Inputs: Eyes - https://developer.apple.com/design/human-interface-guideline...

Principles of spatial design - https://developer.apple.com/videos/play/wwdc2023/10072/

Create accessible spatial experiences - https://developer.apple.com/videos/play/wwdc2023/10034/

While I don't think there will be a mass adoption by people willing to put on goggles throughout the day, it's clear that a lot of Apple's ecosystem is being directed toward environmental and situational computing, and the SDK backs that up. Using gaze detection to focus on more than one device in a room, surfacing certain interactions in specific rooms, and low-lag screen mirroring from devices are all pretty high-cost investments that are likely to find uses in other products. I look forward to what kinds of "continuity" type features this tech introduces.


If you have one of the newer magsafe iPhones, you can attach one of these to the wall (front camera) or the back of your monitor (back cameras):

https://www.shopmoment.com/products/wall-mount-for-magsafe/s...


33 years, if you go by the patent. Lots more fun facts in Tim Hunkin's excellent Secret Life of Machines:

https://www.exploratorium.edu/ronh/SLOM/0301-The_FAX_Machine...


Tim Hunkin is recently making new videos!

https://www.youtube.com/c/timhunkin1


For instance, with Diagram Maker, application developers can enhance the experience for end customers by enabling them to intuitively and visually build cloud resources required by cloud services such as Workflow Engines (AWS Step Functions) or Infrastructure as Code (AWS CloudFormation) to get the relationships and hierarchies working.

Does this mean that Step Functions and CloudFormation will adopt this library? Both already have similar visualizations.

