Hacker News | Duckton's comments

> It's 100% the brand and nothing more.

What is your basis for saying that?

> Nobody cares if you "experienced" something in the comfort of your own home.

By that rationale nobody would play video games.

I believe it comes down to the experience. The experience of VR is simply not good enough to attract people.


Is it though? Can’t web3 be, for example, patient health data in a ledger? Who says it has to be tied to a monetary value?


Because the security underpinning blockchain is either "spend a bunch of expensive electricity and get extra tokens" (proof of work) or "lock up some tokens for a chance to get more" (proof of stake).

These tokens have to be worth something in order for the security to functionally exist. You can't separate the monetary side from the security, because the monetary side incentivizes the security. And security is the only thing blockchain adds.


I don't completely agree with that. Yes the network must be secured that way, and the native token must be worth real money, but that's not something applications built on top necessarily need to worry about or interface with. You can build applications that do not use the native token (outside of transaction fees) and that doesn't affect the app's security.


I do not want my health record to be both public and immutable.

Not wanting it to be public should be obvious. Immutable though, what if I need to change my name/gender to match reality?

It's not enough to just update the value, because the old value still exists on the blockchain. That's just another method to find my deadname and use it for harassment.
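The append-only property behind that concern is easy to demonstrate with a toy ledger: an "update" is just a new entry, and every earlier entry stays readable forever. A sketch (hypothetical types, not any real chain's data model):

```go
package main

import "fmt"

// Entry is one record in an append-only ledger. Nothing is ever
// overwritten, so "updating" a value leaves the old one behind.
type Entry struct {
	Key, Value string
}

type Ledger struct{ entries []Entry }

func (l *Ledger) Append(key, value string) {
	l.entries = append(l.entries, Entry{key, value})
}

// Latest returns the current value for a key.
func (l *Ledger) Latest(key string) string {
	for i := len(l.entries) - 1; i >= 0; i-- {
		if l.entries[i].Key == key {
			return l.entries[i].Value
		}
	}
	return ""
}

// History shows the problem: every previous value is still there
// for anyone who can read the chain.
func (l *Ledger) History(key string) (vals []string) {
	for _, e := range l.entries {
		if e.Key == key {
			vals = append(vals, e.Value)
		}
	}
	return
}

func main() {
	var l Ledger
	l.Append("name", "old name")
	l.Append("name", "new name")
	fmt.Println(l.Latest("name"))  // new name
	fmt.Println(l.History("name")) // [old name new name]
}
```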


So I am not an expert in blockchain or web3; I've only completed a few courses implementing a blockchain in golang, and I work in the digital health industry.

But the points you raise are exactly the issues I've been thinking could be solved with web3. I am imagining using it to give the patient control of who has read access (to what and when), who can add data, etc.

I.e. give full transparency and control to the patient, instead of the current situation where a patient's data is spread across different systems and you don't know what it actually says, beyond what a doctor tells you.


Could you go into how blockchain might solve those problems? Because it seems to create them instead.


I can try, but to be totally clear this is only an idea that I have in my head, and is far from fully formed.

But as far as I understand, it should be possible for a user on a blockchain to have their set of data encrypted in the ledger. It should also be possible to implement a sort of permission scheme.

So instead of relying on Epic Systems and other EHR vendors that control your data and might have incentives not to share it with other systems, one could imagine an EHR system based on a blockchain. The patient can then grant permission to, say, a hospital to read certain data from the ledger. This could be scoped to what is necessary in the context of their visit or procedure. After the visit to the hospital, the patient has full transparency to read what data has been added to their own records.

Anyway, I am not capable of giving a full technical solution, since I have not thought it fully through and am not nearly knowledgeable enough to actually know. So I might be very wrong in my assumptions, and would gladly be told otherwise if that is the case.

Then there's the whole issue of how you get existing systems like Epic to integrate with said "blockchain EHR".

Edit: This might be of interest: https://journals.sagepub.com/doi/full/10.1177/14604582198663...


The question is not just how do you get existing systems to integrate, it's how do you ensure that everyone uses the same blockchain? Technological solutions don't magically force anyone to agree on things and interoperate.


Yes exactly, that was actually what I meant.


I think that's the killer feature personally (I work in the space). It has uses for censorship resistant communication and things like identity. Not sure if I see the use case in health data though.


AFAIK all transactions on a blockchain require spending some associated currency (e.g. "gas fees"). For investors, I think the killer feature is being able to make money off of every transaction involving identity and health data.


Yes, the gas fees would be very expensive in the current state (think hundreds to thousands of dollars for a regular appointment, tens of thousands for an x-ray). There would have to be some benefit for healthcare providers to pay that, and I don't see what that is.


Maybe not the healthcare provider, but a service that allows the patient to be in control of their health records: control who can read and write their own data. And is it not up to the implementor of the blockchain to specify how difficult it is to calculate a new block, thereby lowering the "gas price"?


I think a system like that is a great idea, I'm just not sure how a blockchain helps. The things that make a blockchain interesting (uncensorable, immutable etc) aren't that important here, and with a public blockchain you still need a whole separate system that's doing access checking (maybe your doctor has a key that decrypts the onchain data). To me that system of sharing data with providers, and giving them credentials to access the data is the difficult part, and blockchain doesn't help.


Well, I do think those are exactly the things that are important. You'd want uncensorable to give transparency to the patient. E.g. a hospital can't add a record without you knowing about it, say one you might not want an insurance company to have access to later.

Immutable, as a patient you would want to know exactly what your data looks like at any given time. Again insurance is a good example.

If the blockchain is private, could it not be part of the implementation that does the access checking? Can't part of the ledger be unencrypted while other parts are not?

It might be wishful thinking. It's just an idea I have floating in my head, as an actually useful real world implementation for a blockchain.

I shared the link in another comment, but you might find it interesting as well: https://journals.sagepub.com/doi/full/10.1177/14604582198663...


Can you give some examples on why you think Windows XP is better than macOS and also how macOS is crap?


- Finder

- Applications not actually being terminated when clicking on the close icon

- lagging / unreliable context menu opening with middle-index-finger on apps in dock bar

- the concept of installing something by moving it from an icon on the left to an icon on the right

- or when you can't start apps due to connectivity issues

- app removal is totally opaque and sometimes requires downloading a custom uninstallation tool (Adobe Creative Cloud, for example)


Huh? These just sound like grievances that a user used to Windows will have when moving from Windows to macOS, but at that point it's about what you're used to, not what is inherently bad about the design.

Explorer is much worse (drive letters? I still can't really understand Windows file systems to this day). Sending an app to Applications/wastebin for install/uninstall is (arguably) more visually intuitive for a layperson than an install/uninstall script where most people just click "Next" without reading any of the instructions. The concept of applications having multiple windows is an OS-level thing to get used to.


Windows conflates windows and applications, macOS doesn’t. It’s a mental model thing, I personally like the Mac version better — closing the last window doesn’t have a special case behaviour, and it plays nicer with things like Spotify or Discord that you want running continuously and don’t want to close the whole application inadvertently.

Not sure what you mean about lagging context menu?


I find finder more productive than explorer.

I love the fact that clicking the close icon of a window doesn’t terminate the application

Haven’t experienced this, not sure what you are referring to.

Also something I like. The fact that an installation is just moving an executable is, to me, superior to an entire process with regedit and whatnot.

As I said somewhere below, it might just come down to what one is used to and not objective facts.


> I love the fact that clicking the close icon of a window doesn’t terminate the application

what is the difference then between closing and minimizing?


I minimise when I want to get the content, e.g. a VS Code workspace, out of the way to retrieve it later.

Close is when I am done with that particular window/workspace.

Command+Q is when I’m done working in VS code entirely.


> what is the difference then between closing and minimizing?

Closing is putting away, as in “I don’t think I’ll need it in the near future”, and minimising is putting aside as in “I’ll probably come back to this in 5 minutes”. The minimised window is not cluttering the screen but still accessible from the dock and list of open windows in its application. This is not related to the problem you claimed to have with an application being still open without having a single window.


>app removal is totally opaque and sometimes requires to download a custom uninstallation tool (adobe creative cloud f.x.)

I mean, maybe, but you're comparing it with Windows that never had anything other than custom uninstallers that leave garbage all over your file system. Windows is 10x worse here.


- There is both a three-finger-click action and a "hard" click action on the touchpad, but neither can be set to do a "middle click". You have to buy an app in order to be able to middle click with your mouse! (To open links in a new tab or close tabs)

- Trying to tweak small problems like the one above often leads to things that look like a great solution, but 9 out of 10 are GitHub repos that have not been updated in 10 years and don't work anymore

- The recommended way of using only your external display (if you still want to use the keyboard and touchpad) is to mirror the displays and then set the screen brightness to zero

But when I was on Windows I was even more unhappy. I wish Linux had first-class support by more apps.


MacOS and its applications relegate far too much functionality to hidden "power tools." Often it's impossible to know what is clickable in macOS.

Windows XP features were easy to discover. Scroll bars were not hidden. Buttons looked like buttons.


Command vs. ALT/CTRL bindings are vastly superior in Windows. The MacOS method of using the Command button is different for the sake of being different, not for any actual productive reason.

I had the first MacBook Retina and used it for years at a company where it made sense to do so; when I handed it in and left for my own startup life, I was open to either OS (couldn't use Linux as the daily driver since my industry uses a lot of Windows-only programs), and Windows was just far more productive to use on a regular basis. The only thing I miss is Final Cut Pro, and Sublime Text to some degree (VS Code has been an adequate replacement).


LOL.

Not at all. Mac keyboard shortcuts are explicitly more reachable than its Windows counterparts. Try reaching alt+f4 versus command+w/q. Also macOS incorporates more keyboard shortcuts than any other OS.

You will need to elaborate more on this one.


> The MacOS method of using the Command button is different for the sake of being different, not for any actual productive reason.

I’m actually pretty sure the Mac’s Command key predates ctrl being used for this purpose.


It’s funny how different it can be for different people.

I switch regularly between macOS and Fedora, and have Windows on an SSD for gaming. I agree on the Command vs. CTRL bindings thing. It’s annoying when switching between the two systems.

I recently wanted to pick up Unity, and decided to try it on Windows. I have to say, as strongly as you find macOS annoying, I find Windows annoying. E.g. system settings: for some reason when I have to change something, it takes me ages to navigate through the UI to find what I’m looking for. But maybe it’s all just personal preference and what one is used to.


> Command vs. ALT/CTRL bindings are vastly superior in Windows. The MacOS method of using the Command button is different for the sake of being different, not for any actual productive reason.

Command is much more accessible as part of a shortcut than control. Also, alt is much more useful as a composition key than as a pseudo-control key. And seriously, who in their right mind believes that things like alt-F4 are a good idea? The way Windows shortcuts work is particularly idiosyncratic and makes sense only as an historical oddity from way back when DOS had to coexist with Windows.


I think you overestimate how long CocoaPods will be used; all third-party libraries I’ve used in the last 6-12 months support SPM.


Our app is in objC and we’re not planning a rewrite anytime soon.


Pseudo-counterpoint: SPM support for the Firebase iOS SDK is in beta, and was released 2 weeks ago. Before this, you couldn't use SPM. The README recommended (defaulted to) CocoaPods, with "experimental instructions for Carthage". Therefore CocoaPods is likely used by most iOS apps with Firebase.


And none of the ones I use support SPM.


So apparently a lot of them do support SPM even when it doesn't say so in the README. Confusing and annoying. Out of a project with 10 dependencies, I thought only 4 supported SPM from reading the READMEs, but in the end only one (which was my own...) didn't have SPM support.


Coinbase is also down, but that’s hardly a surprise


Have you tried navigating to `about:profiles`? That’s how I switch between profiles. Works just fine IMO.


Precisely. People thinking this is to create one UI for both platforms are missing the point.

Just like you can have an app for iPhone, iPad and Apple Watch, all with their own UI to a greater or lesser degree, you will now be able to include a macOS build as well.


Shared text and UI controls and libraries could be useful, but we need to consider the "one UI" angle given the title of the article, "Apple Plans Combined iPhone, iPad & Mac Apps to Create One User Experience". That just screams Marketing Department Overreach to me, and I've had arguments about this exact issue with sales and marketing people before.

The "one user experience" idea is a fallacy because the physical interfaces are so much different. There are definitely overlaps with typical phones, computers or TVs, but developers should embrace and accentuate the the differences, not try to force everything to the lowest common denominator.

For example, an iPhone has limited text input because of a small screen where the keyboard takes up almost half the space. However, it does have a great camera and a shit ton of useful sensors that no computer or TV ever will have.


It's not like how apps get made is part of Apple's product marketing, if anything it's their developer outreach.

My money is on this being a developer-focused update to fix the Cocoa Touch/AppKit dichotomy, which will make it easier to develop a Mac version of your software alongside the iOS, watchOS, and tvOS versions. Spinning it as "combined iPhone and Mac apps" sounds like a misunderstanding on the reporter's part.

iOS has gotten a really large developer following with a ton of great apps. Not a lot of that has spilled back to the Mac side of things, but if the basic toolkit were compatible it would be easier for iOS devs to make the jump.


I think it's more likely iOS apps will run in a sandboxed emulator on the Mac. Xcode already does this, and it would be trivial to build and bundle an emulated app for macOS with the standard iOS build.

I can't see real unification happening. The platforms are just too different - not just physically, in terms of interface modality and available hardware, but in terms of design culture. For the most part, iOS apps have very little in common with Mac apps - and that will continue to be true even if Apple releases a series of Tablet Macs as the next evolution of the iPad Pro.

Going the other way makes even less sense. All the big content creation apps are monsters with hundreds of menu options and settings. There is zero chance of being able to port a functionally equivalent version to a device with a touch UI, a much smaller screen, and limited performance.

I hope this isn't based on a fantasy of being able to make everything look and work like Photos or Apple's office clone - because that will mean dumbed down apps on the Mac, and a total loss of faith in the Mac among professionals and power users.


> I think it's more likely iOS apps will run in a sandboxed emulator on the Mac. Xcode already does this, and it would trivial to build and bundle an emulated app for MacOS with the standard iOS build.

I would bet a large pile of money against that happening. Apple has been pretty adamant that iOS is for the touch interface and macOS is for the mouse/trackpad. They know an iOS UI doesn’t work well on a Mac.

If they wanted to do Microsoft style “universal apps” they would have done it years ago.


Xcode doesn't use an emulator. When you build for the simulator it's building an x86_64 binary that runs natively.

Also, to your other point (UI integration), my assumption is that you'll need to make a different UI for macOS than you do for iPhones (you have to do the same for iPads, IIRC, even if you're just creating a larger version of the existing iPhone UI).

So if you have a simple way of handling it, you end up with all of your core logic being shared between all versions, and multiple versions of the UI that you just hook up to your existing controllers.


But there's minimal non-trivial shared core logic, because the apps live in a different user space and do different things with different end goals.

You can port a Mac DAW to iOS - e.g. Cubase to Cubasis - but it's no longer the same product. Most of the features that make the desktop version so useful in a professional context are lost in translation, because iOS doesn't have the resources to support them.

So what do you gain by trying to merge development, except extra work and possibly extra confusion?


> That just screams Marketing Department Overreach to me,

Yeah, and doesn't sound like how Apple has done things in the past. More likely this is Reporter Doesn't Understand Technical Decision Overreach to me....and it worked ("made you look!")


Did a bunch of people let go from the Windows Phone 10 effort land at Apple?


It's not "one UI" as much as not reimplementing parts of the UI that don't change between macOS and iPad. Simplenote, for example, has a two-pane view, with a list of notes on the left and the selected note on the right, with a toolbar above the note. Don't reimplement the entire UI, implement only the diffs.

Like responsive web design, where you implement only the diffs between the phone and PC layouts of your site, while reusing things that don't change.


Despite the title of the article, I do not think this is about designing "one user experience" between mobile and desktop.

I think it's about consolidating the UIKIt and Cocoa frameworks, making it easier for developers to share code between platforms, but not necessarily creating a UI that's the same for both.


I get one result


> I think attempting to use Swift on a Linux server would be a big nuisance

I beg to differ. Look at Vapor, Kitura & Perfect. I know Foundation is missing implementations for Linux, but it is not something that makes it a big nuisance IMO.

You can quickly have a setup with Swift on Linux running a simple CRUD app.


Yes it is possible and IBM is the one pushing for it.

However, it is still light years behind JEE and Spring in features, including parity with existing JDBC drivers.

Also, besides Instruments on OS X, there are no comparable performance monitoring tools like VisualVM, Mission Control and many others from the JDK vendors.


But as far as I know, Instruments uses tools already available in lldb, clang and dtrace. So building a UI that displays the collected data should be more than possible.

