
OIDC (and the rest of the OAuth umbrella of stuff) is one category where every time I have to work with the protocols I think "there must be a less confusing way" and then have a failure of imagination for a simpler way to accomplish the same thing. I think it's because the protocols are conceptually simple, but the cryptographic parts, especially the PKI parts, make them intricate to understand exactly who is attesting or validating exactly what.


What’s insane to me is that it feels like nobody has actually managed to make this easy for developers yet. If they have, I don’t know about them.

I would normally consider myself pretty competent, but I stood up my first fully featured website recently with logins and such, and it took me about 2 days of work to get AWS Cognito working (using their recommended USER_SRP_AUTH). That’s not including 3rd party login functionality from Google and friends.

Their documentation and UX is piss-poor unless you’re willing to onboard your entire project to Amplify and enter npm-hell, which I wasn’t. It’s almost like they don’t even want your business.

I looked into using Auth0 instead and it didn’t seem to be any easier. Better docs, seems like they’re actually written by someone who both understands the auth problem domain and how to explain it to those that don’t, but still complex.

Yet when I finally got everything working, it seemed like the kind of thing you could easily package into an off-the-shelf product. It's just that existing products don't do it. Like why the fuck is there a guide explaining how to write a lambda to convert access codes to refresh tokens and persist them via cookies? That should be part of the Cognito platform!

Honestly thinking of just starting my own auth SAAS with blackjack and hookers


Don't judge the identity server space by Cognito, I beg of you. There are a lot of other players out there (I work for FusionAuth, one of them) who are working to make this easier.

Most have not been abandoned the way Cognito has. (Funny video on the topic: https://www.youtube.com/watch?v=x70EypnAH1Y .)

I don't know why Cognito hasn't seen more improvement. From the outside, it seems like CIAM would be worth investing in as a cloud provider. Say what you will about Azure and GCP, they both have CIAM platforms that see more love than Cognito (Azure AD B2C, Firebase).

> What’s insane to me is that it feels like nobody has actually managed to make this easy for developers yet. If they have, I don’t know about them.

There are definitely folks making it easier to add login/logout to applications (I see some of them pop up in sibling comments, and we are working on that at FusionAuth as well). But some of these are component libraries to proprietary SaaS applications. In this case you lose some of the power and standardization of OIDC. That works great for some use cases and not so good for others. The nice thing about OIDC is that almost everyone works with it (or with SAML). Certainly more than proprietary session based authentication providers.

I will tell you that as we are trying to make authentication simpler at FusionAuth, we have customers coming to us with pretty complicated use cases around federation, scale, automation, permissions and more. It's a balance to try to appeal to the developer who just wants authentication to work as well as the sophisticated customer who has these complex needs.


> It's a balance to try to appeal to the developer who just wants authentication to work as well as the sophisticated customer who has these complex needs.

This sounds like there should be two solutions, one for the simpler case and one for the complex case, rather than trying to make one solution work for all use cases.


I'd advocate that the right "simple" solution is a platform/framework specific auth solution (as I wrote here: https://news.ycombinator.com/item?id=38872704 ).

Those handle a lot of the simple use cases and some of the complex ones. They have the virtue of being well-tested and integrated with your development platform. You can deploy in one step and maintain one database. Which is great till it isn't.

What platform libraries like Devise, etc, don't offer is user data denormalization and isolation, which is useful when you have 2 or more applications with users. Then you want to look at auth servers (self-hosted or SaaS).

But you want the auth server experience to be as simple as possible, which is what we're working towards.

My two cents.


My thoughts exactly. I want a simple auth solution that doesn’t push me towards a full batteries-included platform like Firebase and Amplify, nor a highly-configurable/complex “you can do everything auth!” platform. It’s ok if it’s a little opinionated, as long as it serves my use case of “adding logins and SSO to my website” up to 90% of the problem instead of 50% like what’s out there now.

It seems like an underserved market.


This is pretty much exactly what we built at WorkOS. (I work there.)

Check it out: https://workos.com/

And our Show HN launch a few years ago: https://news.ycombinator.com/item?id=22607402


Have you tried WorkOS? (I work there.)

Makes it super easy to add SAML/SCIM to your app. https://workos.com/

We also recently launched https://www.authkit.com/


> "I work there"

You don't just work there, aren't you the founder? :)

https://news.ycombinator.com/item?id=22607030


Yep!


Not yet, I’ll give it a look next time I hack on my site. “Stripe for auth” is exactly what I’m looking for, and I know I still have a lot of auth head bashing left before I ship.

I’ll say though, my personal “customer demographic” ATM is more along the lines of someone who wants to get working user signups and auth and then never think about it again - so mentioning SAML/OIDC building blocks is a bit of a turn off for me. The reason is that I’m a solo dev trying to ship a browser-based multiplayer game, which I assign a low (maybe 5%) probability of ever becoming something with multiple people working on/turning into a real business - so I need auth, but would prefer to spend as much time as possible on the game itself, and don’t have anybody to farm the work out to.

But I’m happy to give workos a shot to see if it makes my life easier.


WorkOS is pretty tailored to folks building B2B apps where individuals will later be part of a team. (Think Dropbox, Figma, Asana, etc.)

It's less of a fit for B2C products where user identity won't ever be associated with a company (like ecommerce, a game, or a dating app).

The reason is that B2C apps actually have pretty different needs in terms of user identity. For example, most consumer apps will optimize for faster/higher conversion during signup and less security.

But if WorkOS works for your use case, then you should definitely use it. Our free tier includes 1,000,000 MAUs, which is significantly higher than Auth0/Clerk/Stytch/etc. which start charging you around 10,000.


Disclosure: I work for FusionAuth, an auth provider with a free community option.

If I were in your shoes I'd probably use a library built into whatever framework you are using. Auth servers are powerful but are another architectural component you have to manage (even if it is a SaaS, there's still config to manage).

Not sure what you are building it in, but if I were building it in rails, I'd use devise. If JS, maybe nextauth or passport.js.

When you do this you have to accept certain risks (what if your user data gets breached, what if you want to add more functionality) but based on the little you've shared, I think a local solution is perfectly fine.


I had a look at this recently and the pricing was pretty wild. Am I right in understanding that the connection charge is effectively per organisation?


Yes, per organization. SAML/SCIM have no user limits.

Hosted AuthKit is free up to 1,000,000 MAUs.

https://workos.com/pricing


How does AuthKit compare to Auth0? Any major differences?

Also what if you have an existing email-based account system which works fine - can you use AuthKit to add additional sign in methods like social without replacing your existing system?


The open-source nature of AuthKit is pretty different. You can build your own complete custom UI with the React components. Or build your own components from scratch and still use the WorkOS backend.

Outside of that, it's pretty much a drop-in replacement for Auth0. We also have more features, like native SCIM provisioning and a streaming events API to keep your app's database in sync.


WorkOS looks interesting from a features perspective, but pricing based on the number of connected organisations is so high that most SMBs (my clients) can't afford it.


Our customers typically just bundle our pricing within their own team/enterprise plan and pass through the cost. IT admins even within SMB orgs are happy to pay a couple hundred dollars a month more for the enhanced security of SAML auth. And small teams realistically don't need SAML, so you can add a minimum requirement on the number of "seats" (assuming that's how you bill).


Fair points. But SAML doesn't cost much incrementally for each added customer org, yet it enables an SMB to simplify account lifecycle management. Important from a security perspective of course. Most SAAS do put it as an "enterprise" feature but it's a barrier to SMB security best practices.

A more complex yet rational model would be a small incremental fee per user under SAML.


I thought so too and we actually tried that first. After talking to about a hundred customers, I heard them resoundingly prefer per-org pricing because the flat cost is predictable within their own deal structure. I think the reason is that user counts can vary dramatically and b2b saas businesses are primarily driven/measured by the number of customers, not end users.


I use Supabase just for auth (use AWS for everything else) and it was incredibly simple. The only issue is that their docs for my niche use-case were slightly out of date, but it still only took me maybe 30 minutes total.


I hand-wrote the largest OIDC deployment in the world, after experimenting with other libraries. It is awful. Do not use OpenID, do not use OIDC.


Do you have a recommendation on what to use instead?


For complex cases, use SSO providers and service-to-service connectors that hide the underlying protocol from you. If you must manage auth in a more custom way, use things like Azure Active Directory or other competitors. They probably use some OpenID or OIDC under the hood, but the vast majority of software products shouldn’t actually need to implement the protocols directly.

For simple cases, plain old TLS should be enough, ideally with short lived client certs.

It’s a bit like “don’t roll your own crypto” advice. Don’t roll your own auth.


Ok, this feels like different advice, to me. It isn't to not use these, it is not to be the one implementing them? That is a lot easier to understand. I've been using AWS Cognito to get basic stuff up and running and it hasn't been too bad, I don't think. Have to convince people to not punch holes in things, but so far I have not been too turned off from things.


> It isn't to not use these, it is not to be the one implementing them?

More or less. The complexity comes from having to solve the edge cases, so it’s helpful to be one level of abstraction higher where your code is closer to your conceptual space.


My recent experience setting up AWS Cognito (not through Amplify) was pretty rough. I think vanilla Cognito doesn’t do a very good job of delivering you something that actually works out of the box with no footguns - you still have to handroll a lot of stuff.


On the AuthN side, it seems to be... fine? For AuthZ, things are not surprisingly outsourced heavily to the application side. I'm not clear on how I would want that to be any different, all told. Last thing I, personally, want to deal with is an annotation style setup to control who can do what. I am luckily working with something where we can have pretty easy definitions on who can do what.

I would love to hear more of the foot guns, though. Not trying to deny they exist.


> For AuthZ, things are not surprisingly outsourced heavily to the application side.

There are some newer startups working on extracting and centralizing AuthZ functionality. Ones I'm aware of:

* permit.io

* cerbos.dev

* Oso

I'm sure there are more.


I think this hits the same points I brought up in https://news.ycombinator.com/item?id=38873614. I do not claim that these should never be used. In fact, I would go farther and say in many cases this sort of thing should be used.


I think this is mostly my own ignorance and inexperience working with AuthN, but I had a harder time than expected just figuring out how to add basic log in and session management to my website. I spent a long time reading all the official Cognito docs getting nowhere. Eventually I started searching on the web and finally found two guides that actually managed to explain what I was looking for: [0], [1].

My philosophy toward authn right now is to never have to worry about security at all, so I want to completely minimize any personal responsibility towards managing passwords and tokens, first by outsourcing it as much as I can to products like Cognito, and failing that, by following best practices. My gripe with Cognito, as someone who doesn’t know much about auth and would prefer to learn as little as possible (I just want to add logins to my site!), is that it doesn’t give you an understandable API or user flow or best practices for implementing what I’d consider to be a “happy path” use case, unless you use Amplify. So if you’re someone like me who is learning as they go, there are tons of footguns and mistakes you can easily expose yourself to.

As an example: it’s not obvious that using their hosted UI with a redirect, for USER_SRP_AUTH, should point to a backend endpoint hosted/managed by you that converts access codes to tokens and performs a second redirect back to your actual site. You could easily do the wrong thing and redirect back to your main site with the access code still in the URL params, and then issue a call from the webclient that converts that code to tokens (which is terribly insecure, as it opens up an exploit: a user could share that URL, not knowing that the access code in the URL params is sensitive and could allow others to sign into their account). In fact, that entire exploit/antipattern was never even mentioned anywhere in any docs I found, but it would be extremely easy to accidentally introduce by naively using Cognito.

[0] https://aws.amazon.com/blogs/security/reduce-risk-by-impleme...

[1] https://dev.to/jinlianwang/user-authentication-through-autho...
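The backend half of the flow described above (code in, hardened cookies out) is small once you see it. A sketch, assuming hypothetical config values (the domain, client ID, redirect URI, and cookie names below are stand-ins, not anyone's real setup); Cognito's hosted UI token exchange goes against `POST {domain}/oauth2/token`:

```typescript
// Hypothetical config; substitute your own user pool domain, app client ID,
// and the callback URL registered with your auth provider.
const COGNITO_DOMAIN = "https://auth.mainsite.tld";
const CLIENT_ID = "example-app-client-id";
const REDIRECT_URI = "https://login.mainsite.tld/callback";

// The hosted UI redirects to the backend with ?code=...; the backend (never
// the browser) exchanges it at the token endpoint with this form body.
function buildTokenRequest(code: string): { url: string; body: string } {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    code,
  });
  return { url: `${COGNITO_DOMAIN}/oauth2/token`, body: body.toString() };
}

// Hand the tokens back as hardened cookies on the redirect to the main site,
// so they never appear in a URL the user might copy.
function buildSetCookieHeaders(tokens: {
  access_token: string;
  refresh_token: string;
}): string[] {
  const flags = "HttpOnly; Secure; SameSite=Lax; Path=/";
  return [
    `access_token=${tokens.access_token}; ${flags}`,
    `refresh_token=${tokens.refresh_token}; ${flags}`,
  ];
}
```

The point of the `HttpOnly; Secure; SameSite` flags is exactly the footgun above: the browser carries the tokens automatically but page scripts and copied URLs never see them.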


I confess I am far less worried about access tokens leaking to end users than I probably should be. Assuming folks are validating their audiences on tokens, I don't see as much danger on the implicit workflow.

I'm also less clear on how the extra redirect there helps? If you are dependent on the user's client machine to follow the redirect anyway, they can still get middled, right? Compromised client doesn't follow the "code" redirect and instead directly calls to your oauth endpoint to get tokens. Since this is the "code" path, they can even get a session token that they can then start using on their own? Or do you lock down your oauth endpoints such that they can't be called? (Or is there more I'm mistakenly ignoring?)


The specific vulnerability I’m mentioning is if the user manually copies their post-redirect url (with access code in url params) and shares it with someone else. Specifically “hey check out this cool game!” (I’m making a game), sends a link, not knowing that nonsense after the site URL contains sensitive info that shouldn’t be shared. And then some savvy user, or bot, hijacks their account.

The extra redirect converts login.mainsite.url/?code=foo to mainsite.url with the code converted to tokens passed back via cookies. That way it’s much harder for a user to leak account details accidentally. In this auth flow, Cognito hands off the login by redirecting to foo.bar/?code=baz which could leak baz if baz gets shared.

My tokens’ cookies themselves are same-site only/https only and not directly accessible, so they’re protected against XSS AFAICT. AFAIK the only MITM security risk, once I got this working properly, is if something on the user’s network sniffs and leaks URL params to my login endpoint (TLS does encrypt the URL path and query string, so a passive sniffer only sees the hostname) or injects arbitrary code into my backend (in which case almost everything is compromised anyway).

I’m new to this auth stuff so I might be missing something, but I was surprised at the subtle security risk of Cognito’s default redirect behavior once I noticed it.


Ah, I think I see. The concern is the web app not clearing the access token from the URL that a user accidentally shares? That or maybe URL logs of where a user has accessed would leak an access_token?

This makes sense, and I think is compelling enough. The "code" is protected by some complicated effort in Cognito to make the code single use. (Right?)

Thinking of my hypothetical, I don't think there is any real protection from a compromised client. This is data that you want to give to the user, and you have to do that through the client. But the redirect has to be followed by the user's client, right?

To that end, you are probably still fine doing the code to token exchange using the web browser directly? Just not through the address bar, and instead with a post to the oauth endpoint. You can set the cookie locally, but no need to have another webpage involved.


I guess it depends on what you mean by a compromised client/ how it’s compromised. The auth flow is:

* mainsite.tld checks if user is unauthenticated/uses expired tokens. If so, redirect to Cognito UI hosted in a subdomain (auth.mainsite.tld) but managed by Cognito.

* Cognito UI prompts user for username/email and password. Potentially also MFA. Handles password reset. Eventually also handles signup.

* On successful sign in, Cognito redirects to my login endpoint with the access code in url params (login.mainsite.tld/?code=foo).

* My login endpoint extracts the access code and talks to Cognito again to exchange it for tokens. Returns tokens via cookies in a response that redirects to my main site (mainsite.tld). (This is what prevents the user from accidentally sharing their access code via URL params copied out of their browser address bar, which could happen if I had instead done this in the browser.)

* The main site now has working credentials; if the credentials go missing (because user cleared cookies) or expire (indicated in currently-unimplemented response when they interact with my authz/game server) they’ll be redirected back to the same Cognito UI.

I do not have control over how Cognito spits out the access code (URL parameters) with this flow; still, this flow is preferable to most others, as at no point whatsoever am I responsible for managing user passwords, yet unlike a lot of new auth solutions that accomplish the same thing, users still actually have the option to sign in with passwords. What I do have control over is which redirect addresses are allowed out of Cognito, so afaict a compromised client (something bad that points to my login) can only redirect to my login endpoint, which only redirects to my main site. There is no way to stop a compromised client (like a malicious browser and unsuspecting user) from doing bad things with the code or tokens, but the same is true of anything entered into a browser ever, so that’s not a problem worth caring about.

But maybe I misunderstand (because I’m new to webdev too lol): what you’re suggesting in that last paragraph might be possible if I can reliably get the browser to hide the access code url param from the address bar/history. I just didn’t know how to do that from the browser without a redirect or reload. Even if that’s possible I’d still consider it a pretty glaring footgun, because while (hypothetically) possible it’s not necessarily obvious.


I think the catch there is that your "login endpoint" is still relying on the user's browser to get the code. The cognito endpoint returns a redirect to the user, and it is on them to follow it. So, the "code=foo" is visible to the user. If the user wants, they can try to prevent following the redirect and use that code directly.

That is, between each of your bullet points, there is a request by the user's browser. You do a request to the cognito hosted UI, it returns a code to the browser through a redirect to a webpage that is in its "allowed list." The idea is that your "allowed list" includes a "login endpoint," but in all cases the code goes back to the user and it is on their browser to send that to the specified page.

I'm asserting that you can have javascript in the main web app that can use the "fetch" api in the browser to exchange a code for a token. That mostly hides it from accidental disclosure. And it makes it so that you don't have to have a special HTTP endpoint with another redirect in there setting cookies. (I'm assuming you'd set local storage or cookies with the fetch data.)

Right? Does that make sense?


Yes, the user can still share their access code if they really want to. That’s like them sharing their password.

What I’m trying to prevent, while adhering to general authN best practices, is a user accidentally or unknowingly sharing their access code because they copied the address in their browser bar/history and sent it to someone. If they jump through hoops to share it there is nothing I can do to stop it. But the default Cognito footgun I’m mentioning is that the code ends up in their browser window in a way that could be easily copy and pasted without them knowing why they shouldn’t do that.


Makes sense, I think.

I don't think you need another endpoint that will respond with cookie commands?

On your page, the one that got the "?code=foo" payload, you can use javascript on your site and make another call to the backend to get the tokens. The same javascript code should clear the URL so that a naive copy/paste doesn't get it.

This is in contrast to having another server side endpoint that can set cookies on another http redirect response to the user. One that has to be in the same domain as your application, for the cookie to set correctly.

This will leak the "code=foo" in any access logs surrounding the user. But that is already in the user's history and already happened. That is why Cognito goes out of its way to make "foo" one-time use.
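The fetch-based variant described above can be sketched as follows, assuming a hypothetical `/token` backend endpoint that does the actual code exchange and sets the cookie on its response. The URL-scrubbing part is what defeats the naive copy/paste:

```typescript
// Strip the one-time ?code= (and ?state=) params from a URL so a copy of the
// address bar, or a browser-history entry shared later, can't leak them.
function scrubAuthParams(pageUrl: string): string {
  const u = new URL(pageUrl);
  u.searchParams.delete("code");
  u.searchParams.delete("state");
  return u.toString();
}

// In the browser (sketch only, not executed here): exchange the code via
// fetch, then rewrite the visible URL without a redirect or reload.
//
//   const code = new URL(location.href).searchParams.get("code");
//   await fetch("/token", { method: "POST", body: JSON.stringify({ code }) });
//   history.replaceState(null, "", scrubAuthParams(location.href));
```

`history.replaceState` rewrites both the address bar and the current history entry in place, which is what makes this work without bouncing through a second server-side endpoint.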


The specific challenge with authz in the app layer is that different apps can have different access models with varying complexity, especially the more granular you get (e.g. implementing fine grained access to specific objects/resources - like Google Docs).

Personally, I think a rebac (relationship/graph based) approach works best for apps because permissions in applications are mostly relational and/or hierarchical (levels of groups). There are authz systems out there such as Warrant https://warrant.dev/ (I'm a founder) in which you can define a custom access model as a schema and enforce it in your app.


My concerns there are usually that data duplication for various reasons makes a ton of sense in an application. Replicating the permissions system throughout all of this duplication is usually tough, even if you do know the schema well.

Worse, though, often times applications are learning their schema as they go. This is the key benefit of "schemaless" approaches. Anything that adds friction to a schema in the system is likely to get shaken off due to slowing the teams down.

I do agree that resource approaches are the best. I try and boil it down to flat access lists for resources based on ID. Any application call that uses an ID gets checked against access lists for that ID.

I will fully grant that, if you are building a system where you do know the schema very well, then this changes.

Pulling this back to Open ID and friends, I am growing rather disillusioned with the "scope" tag on access_tokens to control this. I love the idea of being able to scope down access. I do not like the idea of leaning on that, too heavily.
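The "flat access lists for resources based on ID" approach above can be as small as this. A minimal sketch with illustrative names (no particular library assumed):

```typescript
type Action = "read" | "write";

// resourceId -> userId -> set of allowed actions
const acl = new Map<string, Map<string, Set<Action>>>();

function grant(resourceId: string, userId: string, action: Action): void {
  if (!acl.has(resourceId)) acl.set(resourceId, new Map());
  const users = acl.get(resourceId)!;
  if (!users.has(userId)) users.set(userId, new Set());
  users.get(userId)!.add(action);
}

// Every application call that touches an ID goes through this check.
function canAccess(resourceId: string, userId: string, action: Action): boolean {
  return acl.get(resourceId)?.get(userId)?.has(action) ?? false;
}
```

Because the check keys only on IDs, it survives schema churn: new resource types need no new permission model, just new entries in the same flat list.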


HTTP basic auth, TLS with client certs.


Those things don't do what OIDC does?


They do them with much less complexity than OIDC.


They absolutely do not and also introduce a significant amount of overhead with respect to key/certificate management.


And security (basic auth is as good as sending clear text passwords).


> sending clear text passwords

Which is totally fine to do over HTTPS.


Passwords need to be sent both with the request and to the requestor. I think GP is referring to sending credentials to the service making the request.

It is far better to give service XYZ a time-bound and scope limited token to perform a request than a user's username and password.
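Concretely, the checks service XYZ runs on such a token, as opposed to replaying a username and password, look something like this. The claim names follow common JWT usage and the function is illustrative, not from any specific library (real code would also verify the token's signature first):

```typescript
// Typical claims in a time-bound, scope-limited access token.
interface Claims {
  sub: string;   // the user the token acts on behalf of
  aud: string;   // the service the token was minted for
  exp: number;   // expiry, in seconds since the epoch
  scope: string; // space-separated granted scopes
}

function isTokenUsable(
  claims: Claims,
  expectedAud: string,
  neededScope: string,
  nowSec: number
): boolean {
  return (
    claims.aud === expectedAud && // reject tokens minted for other services
    claims.exp > nowSec &&        // reject expired tokens
    claims.scope.split(" ").includes(neededScope) // reject out-of-scope use
  );
}
```

A leaked token fails all three gates quickly: it expires on its own, it can't be replayed against a different audience, and it can't be escalated beyond its scopes — none of which is true of a leaked password.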


Isn't Google moving toward phasing out TLS client certs in chrome/chromium?


Do you have any source for that? I can't find anything online about this, but that would effectively kill browser mTLS.


Chromium removed support for generating TLS Client Certs within chrome in 2016 [0] and ever since then it has gotten harder and harder to use mTLS in Chrome/Chromium. Ten years ago it wasn't a great UX, but now it isn't even obvious how to use it. The impression I've gotten is that Chrome isn't interested in mTLS.

[0]: https://groups.google.com/a/chromium.org/g/blink-dev/c/z_qEp...


The crypto isn't complicated. What makes it complicated is the 10,000 different use cases they want the solution to work for, rather than one solution per use case, and a loose coupling interface for all of them.


It’s not because of cryptography, but because of abusing technologies not made for interactive stateful applications (e.g. HTTP, HTML) for interactive stateful applications.


I used to think this, but I’ve also worked on authentication/authorization in contexts where you’re not constrained to HTTP requests, and it doesn’t really get any less complex once federated identities, finer grained access control features, and revocation enter the mix.

Sure, trying to do everything through JWTs and cookies makes things harder, but reasoning about attestations by a user from one federated identity provider that a service from a different federated identity provider should be able to query specific data on behalf of the user is messy no matter where you do it, and every medium sized enterprise has that problem somewhere in the IT stack. At that point JWT is just another serialization format for passing attested data around.


I don't see how the display layer (HTML, CSS) or transport layer (http, TLS) are not suitable for mildly interactive stateful applications. All the state is on the server, except a few cookies.

What they are "abusing" is the fact that the same browser, under user's control, may have access to many sites which don't by default trust each other. The user can attest that they should.


No, it can indeed be vastly simpler to handle auth. But you have to consider many complex aspects to provide a simple solution:

- What are your real use cases (authentication, authorization, delegation?)

- What is your threat model? (avoiding silly mistakes, preventing corporate espionage, defending against targeted attacks require very very different solutions)

- How to integrate into your ecosystem (tech stack, actors, layers..)

Then you might be able to remove some constraints. You might not need authorization delegation, stateless and readable json tokens.

But often it's easier to not think too much about it and just use "an industry proven standard", and that is oauth2 and OIDC:

a large auth umbrella to avoid looking at the sun.


If you're writing software to authenticate users, the protocol is huge and complicated. That's why there are full prebuilt containers and SaaS authentication services that solve this problem. There are entire server implementations you can extend, but with tools like Zitadel and Keycloak ready to be configured and deployed for all manner of use cases, I don't see why you would.

If you're just authenticating your client app against a server, it's pretty easy (all you need is two tokens and a URL for most libraries). With some web servers (Apache, Caddy, the paid version of nginx) you can put that config in a location block and have it deal with the entire auth flow, so all your application needs to do is take the REMOTE_USER header or call /whoami to find out who the user is logged in as.
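When the web server fronts the whole auth flow like that, the application side shrinks to reading a trusted header. A sketch — the `Remote-User` header name is an assumption (it varies by server and module; check your server's docs), and this is only safe if the app is unreachable except through the proxy and the proxy strips any client-supplied copy of the header:

```typescript
// Extract the authenticated user injected by the reverse proxy.
// Header names arrive lowercased in most Node-style header maps.
function userFromProxyHeaders(
  headers: Record<string, string | undefined>
): string | null {
  return headers["remote-user"] ?? null;
}
```

A handler would then branch on `userFromProxyHeaders(req.headers)` being null versus a username, and never touch the OIDC protocol itself.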

Doing auth correctly is just hard. Personally, I treat it like I treat dates/times: use something someone else made, unless you have a particularly weird use case that nobody else supports.



