Hacker News | OsmanDKitay's comments

The key distinction is that I'm not trying to reinvent the API-format wheel; I'm building a new steering wheel for a new kind of driver: the AI agent. The actual "poorly reinvented wheel" today is screen scraping and brittle DOM manipulation, which is how most agents are forced to interact with websites. AURA is a direct answer to that chaos.

A spec like JSON:API is excellent for structuring the payload of a single, known API endpoint. AURA operates at a higher level of abstraction: site-wide capability discovery and state management.

An agent doesn't just need to know how to format a request; it first needs to know what is possible and whether it can do it right now. That is the problem JSON:API isn't designed to solve.

For example, when an agent first visits a site, it fetches the aura.json manifest and sees the login and list_posts capabilities. The create_post capability isn't even visible to it. After the agent successfully uses the login capability, the server's next response includes the dynamic AURA-State header, which now advertises that create_post and logout are available. This state-aware, dynamic map of available actions, which changes with the user's context, is the core of the protocol. It's a formal language for the if-then logic of a user interface, built on established standards like RFC 6570 URI templates, so it rests on a stable foundation. It's not another way to format JSON; it's a way for a website to declare its interactive grammar to the web.
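To make that flow concrete, here is a minimal sketch of how an agent might track its available capabilities from the AURA-State header. The comma-separated header format shown here is my assumption for illustration, not a normative wire format:

```javascript
// Parse a hypothetical comma-separated AURA-State header value
// into a set of capability names the agent may currently use.
function parseAuraState(headerValue) {
  return new Set(
    headerValue.split(",").map((name) => name.trim()).filter(Boolean)
  );
}

// Before login, the server advertises only the public capabilities.
const beforeLogin = parseAuraState("login, list_posts");
console.log(beforeLogin.has("create_post")); // false

// After a successful login, the next response's AURA-State header
// unlocks the authenticated capabilities.
const afterLogin = parseAuraState("list_posts, create_post, logout");
console.log(afterLogin.has("create_post")); // true
```

The agent never guesses: it re-reads the header on every response and its picture of "what I can do" updates automatically.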


The way I see it, the choice for websites is quickly becoming not "ads vs. no ads" but "getting scraped for free by AI companies vs. offering a clean, managed front door." Right now, when an AI scrapes a site, it bypasses the entire user experience that site owners have carefully designed to guide visitors and keep them engaged. The AI just extracts the data and leaves, completely ignoring the layout, the related content, and every other part of the site's strategy.

AURA is a way for a site to manage that interaction, and this opens up a couple of possibilities. For paid services, the aura.json manifest can define which capabilities require payment. Your AI agent, connected to your payment method, could then pay for an API call on your behalf to complete a task. But perhaps more interestingly, for ad-supported sites AURA enables a new kind of advertising. Since the AI's request contains precise user intent (e.g. searching for flights to London), the API response can include a highly relevant, structured ad object right alongside the data. That's an ad delivered at the peak moment of user intent, which is far more valuable than a simple banner impression. It gives control back to the site owner to build a business model that actually works in a world full of AI agents.
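As an illustrative sketch (every field name here is my assumption, not part of any spec), an intent-matched ad object could ride alongside the data like this:

```javascript
// Build a hypothetical AURA-style response that pairs result data
// with a structured ad matched to the user's stated intent.
function buildResponse(results, intent, adInventory) {
  // Pick the first ad whose keywords overlap the intent string.
  const ad = adInventory.find((a) =>
    a.keywords.some((k) => intent.toLowerCase().includes(k))
  );
  return {
    data: results,
    ad: ad ? { headline: ad.headline, url: ad.url, sponsored: true } : null,
  };
}

const response = buildResponse(
  [{ flight: "LHR-1234", price: 120 }],
  "flights to London",
  [{ keywords: ["london"], headline: "Hotels near Heathrow", url: "https://example.com/ad" }]
);
console.log(response.ad.headline); // "Hotels near Heathrow"
```

Because the ad is a separate, labeled object rather than markup mixed into content, the agent can surface it honestly instead of accidentally treating it as data.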


Fair point. If a website's manifest is wrong, any agent trying to use it will just get an error. The agent will learn not to trust that site, and the site owner ends up making their own features unusable. The real incentive is to be useful: websites that publish an honest manifest are the ones that will actually work with agents.


…that’s the point


If someone's goal is simply to break the interaction with agents, they can certainly do that by publishing a broken manifest. But that's no different from how the web works today: there will always be a divide between those who build useful services and those who act in bad faith. The solution isn't in the protocol itself but in the community that uses it. Just as we now have services that warn us about malicious websites, a healthy AURA ecosystem would naturally lead to open, community-run reputation services. An agent could check a site's aura.json against such a service before trusting it. If a manifest is found to be intentionally misleading, the site's reputation would drop, and agents would collectively learn to ignore it. The system corrects itself through shared information. Trust has to be earned, not just declared in a file.


I chose a single site-wide /.well-known/aura.json manifest primarily for agent efficiency. The goal is for an agent to understand a website's entire capability map with a single, predictable request, rather than having to crawl and parse individual pages to piece that map together. AURA is conceptually similar to robots.txt: one file that defines the rules and possibilities for the whole site.

However, you're right that context is critical for individual URLs. I handle that with the dynamic AURA-State HTTP header. While the manifest is the static map of everything possible, the AURA-State header in each response tells the agent what's available right now for that specific page or state (e.g. "you are logged in, so create_post is now available").

So I get the best of both worlds: efficient site-wide discovery and dynamic, state-aware execution.
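A small sketch of how those two layers combine (the object shapes are illustrative assumptions): the manifest tells the agent everything that exists, and the per-response header tells it what is live right now.

```javascript
// Intersect the static manifest (all declared capabilities) with the
// hypothetical comma-separated AURA-State header (currently available ones).
function actionableNow(manifestCapabilities, stateHeader) {
  const current = new Set(stateHeader.split(",").map((s) => s.trim()));
  // Only capabilities both declared in the manifest and advertised
  // in the current state are safe to invoke.
  return Object.keys(manifestCapabilities).filter((name) => current.has(name));
}

const manifest = { login: {}, list_posts: {}, create_post: {} };
console.log(actionableNow(manifest, "login, list_posts"));
// → ["login", "list_posts"]
```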


The Reddit example gets right to the heart of it: when a platform loses control over its data and how it makes money, it just shuts the door on open access.

And that's really what AURA is trying to address. If we don't figure something out, the web could easily end up as a handful of big AI sites, with all the smaller, independent sites fading away.

The goal with AURA is to give control back to the people running the websites. It's not about blocking AI but about giving sites a clear, standard way to say "here's how you can work with me meaningfully". That means an agent can do something specific and useful without costly, aimless scraping, and it lets site owners build cool new features just for AIs.

And you're right to worry about malicious manifests, but that's a trust problem. A site that lies in its AURA file would get a bad reputation fast, just like a phishing site does now.

At the end of the day, AURA is a bet that we can build an open, capability-based web where site creators can join the AI revolution on their own terms. Time will tell if it's the right technical answer, but it's a conversation I think we absolutely need to have to keep the web diverse and creative.


That's a really sharp observation, and you're right: it's the core challenge for the web as we know it. For any site that depends on ads, hiding the UI is a complete non-starter. The key is that AURA isn't really about creating another reader mode. Its goal is to fundamentally separate a website's functions from its visual presentation, paving the way for a truly intent-based web.

Think of it this way: instead of an AI trying to find and click the "post comment" button, it can just use the post_comment capability the site offers through a clean API. While this seems to sidestep the ad model, it actually enables a more direct one. A site could specify that a particular action, say a premium translation feature, requires a small payment or an API key. It's a way to get paid for the actual value you provide, not just for ad views.
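A hypothetical sketch of what invoking a declared capability could look like. The capability shape and endpoint are my assumptions for illustration; since AURA builds on RFC 6570 URI templates, a simple level-1 expansion is enough here:

```javascript
// Minimal RFC 6570 level-1 expansion: replace {var} with encoded values.
function expandTemplate(template, vars) {
  return template.replace(/\{(\w+)\}/g, (_, name) =>
    encodeURIComponent(vars[name] ?? "")
  );
}

// Turn a declared capability plus arguments into a plain HTTP request
// description, with no DOM or UI involved.
function buildCapabilityRequest(capability, vars, body) {
  return {
    method: capability.method,
    url: expandTemplate(capability.endpoint, vars),
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

const postComment = { method: "POST", endpoint: "/api/posts/{postId}/comments" };
const req = buildCapabilityRequest(postComment, { postId: "42" }, { text: "Nice post!" });
console.log(req.method, req.url); // POST /api/posts/42/comments
```

The request survives any redesign of the page, because nothing in it depends on the page.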

This could even change how search works: imagine a search engine that indexes what sites can do, not just what they say. Your personal AI could then find and execute the "book a flight" capability from multiple airlines to find the best deal for you, all without you ever loading a webpage. It's a different way of thinking about the web's economy, moving from attention to action.


The main difference: llms.txt is for reading content, while AURA is for performing actions.

llms.txt tells an AI, "here is a clean, simple version of this page for you to read." AURA tells an AI, "here is a capability called create_post; it needs a title and content, and you can use it by POSTing to the /api/posts endpoint."

llms.txt is for reading; AURA is for doing.

You also asked why you need npm install. You don't! To add AURA to your own website, you only need to create one static file: public/.well-known/aura.json. That's it. It's just as simple as creating an llms.txt file.
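For a sense of scale, a minimal aura.json might look something like this. The exact schema is illustrative: the field names below are my sketch based on the capability descriptions above, not a normative spec.

```json
{
  "protocol": "aura",
  "version": "1.0",
  "capabilities": {
    "list_posts": {
      "method": "GET",
      "endpoint": "/api/posts"
    },
    "create_post": {
      "method": "POST",
      "endpoint": "/api/posts",
      "params": {
        "title": "string",
        "content": "string"
      },
      "requires_auth": true
    }
  }
}
```

One static file, servable from any host, with no build step.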

The npm install command in my project is for people who want to download and run my full reference implementation (the example server and client). It is not a requirement for the protocol itself.

And for your last question: AURA is new, so no other websites are using it yet.


You made good points. AURA is more like a sitemap for actions than robots.txt. I just wanted to make AURA much simpler, using only plain JSON and HTTP; I think the Semantic Web was too complex for people to use.

So why is now a good time for this idea? Because of AI agents. Today, big companies spend a lot of money on web scraping, and their tools break all the time. That gives them a real reason to want a better way.

And for your last question, can't AI just read the page? Yes, it can, but it's slow and it breaks easily. It's the difference between using a clean API versus guessing where to click on a page. AURA is just a way for a website to offer that clean, API-like path. It's faster for everyone and doesn't break when something small like a button's color changes. Thanks for the feedback.


The current way agents interact with the web is a problem. My view isn't that we should "cater" to them but that we should define the terms of engagement before they define them for us. Right now they're scraping, which gives site owners zero control. AURA is an attempt to hand control back to the site owner through a clear aura.json manifest. It's about consent. As for using AI to build it: I believe you have to deeply understand a technology to help steer it.


The comparison to OpenAPI is the main thing to address, and you're right to ask why it isn't enough.

OpenAPI is fantastic for describing a static API for a developer to read. But the web is more than that: it's a dynamic, stateful environment built for human interaction. The current trend of forcing AI agents to navigate this human-centric web with screen scraping and DOM manipulation is brittle and, I believe, unsustainable. It's like sending a robot into a grocery store to read the label on every single can instead of just asking the manager for the inventory list.

This is where AURA tries to be different in two key ways.

Control and permission, not just documentation: AURA is designed from the website owner's perspective. It's a way for a site to say, "This is my property, and here are the explicit rules for how an automated agent can interact with it." The aura.json file is a handshake, a declaration of consent. It gives control back to the site owner.

Statefulness (this is the big one): an OpenAPI spec is stateless. It can't tell an agent what it can do right now based on its current context. That is what the AURA-State header solves. Before you log in, AURA-State might only show you the list_posts and login capabilities. After you successfully call login, the very next response from the server includes a new AURA-State header that unlocks capabilities like create_post and update_profile. The agent discovers its new powers dynamically. This state management is core to the protocol and has no real parallel in OpenAPI.

You're right to be skeptical, and as I said in my post, maybe AURA isn't the final answer. But I strongly believe the web needs a native, capability-aware layer for the coming wave of AI agents. The current path of brute-force interaction feels like it will break the open, human-centric web we've all built.


