Hacker News | adangert's comments

And so the downfall of Anthropic starts; OpenAI has had this in the bag the whole time. Anthropic is a poor imitation of Sam's master plan, it was over before it even started. Money grubbers, the lot of 'em!

It's generally easier to start typing your street address (maybe 4-5 characters), see an autofill suggestion, and click your address; all the information, including the ZIP, is then filled out.

If you started with just the ZIP, you'd still have to type your street address afterwards.


Let me reiterate some points for people here:

Income and revenue sources always, inevitably, and without fail, determine behavior.


I think your theory might be missing an extremely relevant and timely counterexample?


I will repeat here again the same comment I made when they posted their constitution:

The largest predictor of behavior within a company, and of that company's products in the long run, is its funding sources and income streams, which are conveniently left out of their "constitution". Mostly a waste of effort on their part.


Anthropic (for the Superbowl) made ads about not having ads. They cannot be trusted either.


Advertisements can be ironic; I don't think marketing is the basis I use to judge a company's integrity.


The largest predictor of behavior within a company, and of that company's products in the long run, is its funding sources and income streams (Anthropic will probably become ad-supported in no time flat), which are conveniently left out of this "constitution". Mostly a waste of effort on their part.


I'm not sure Anthropic will become ad-supported; the vast bulk of their revenue is B2B. OpenAI have an enormous non-paying consumer userbase who are draining them of cash, so in their case ads make a lot more sense.


While true, irrelevant.

This isn't Anthropic PBC's constitution, it's Claude's constitution: the models themselves, not the company. Its purpose is training the models' behaviours and aligning them with what the company wants the models to demonstrate and to avoid.


Conway's law seems apt here. The behavior of Claude will mirror the behavior and structure of Anthropic. If Anthropic deems one revenue source higher than another, Claude's behavior will optimize towards that regardless of what was published here.

What a company or employee "wants" and how a company is funded are usually diametrically opposed, the latter always taking precedence. Don't be evil!


Yes, but that is a different level of issue. To analogise in two different ways, first it's like, sure, Microsoft can be ordered by the US government to spy on people and to backdoor crypto. Absolutely, 100%, and most world governments are probably now asking themselves what to do about that. But what you said was kinda like someone saying of Microsoft:

  In the long run autocratic governments spying on their citizens will backdoor all crypto (Microsoft will probably concede to such an order in no time flat), which is conveniently left out in this "unit test". Mostly a waste of effort on their part.
Or if that doesn't suit you: yes, sure, there's a large flashing sign on the motorway warning of an accident 50 miles ahead of you, and if you do nothing this will absolutely cause you problems, but that doesn't make the lane markings you're currently following a "waste of effort".

Also, as published work, they're showing everyone else, including open weights providers, things which may benefit us with those models.

Unfortunately, I say "may" rather than "will", because if you put in a different constitution you could almost certainly get a model that has the AI equivalent of a "moral compass" tuned to support anything from anarchy to totalitarianism, from mafia to self-policing, and similarly for all the other axes people care about. With a separate version of the totalitarianism/mafia/etc. variants for each specific group that wants to seek power, c.f. how Grok was saying Musk is best at everything no matter how nonsensical the comparison was.

But that's also a different question. The original alignment problem is "at all", which we seem to be making progress with; once we've properly solved "at all" then we have the ability to experience the problem of "aligned with whom?"


Is there any official/semi-official info so far about product placement in the current generation of LLMs? I mean, even for coding agents there are tons of services a model can recommend and be proficient in using (thanks to deliberate training).


OpenAI are testing ads in the free tier of ChatGPT, but they state that the actual LLM responses won't include advertising or product placement [0].

[0]: https://openai.com/index/our-approach-to-advertising-and-exp...


Awful, awful, awful. Ads lead to anti-consumer behavior, undermine free-market competition, turn capitalism into a pay-to-win game, resemble a cancer, incentivize the creation of extremely harmful platforms (such as slop-filled TikTok), destroy existing companies such as Facebook, and generally harm society by every measure; the cons outweigh any and all pros. Advertising transforms your product into a sticky candy-box trap for unassuming visitors while your actual customers become the advertising industry; you become, as is so well said, the product. Adopting ads should be taken as a step towards harming your consumer base so you can vampirically extract attention from it indefinitely and forever, as your company slowly becomes a skeletal drain on all of humanity. There is no such thing as a "good" ad.


Dutch Mao (mentioned briefly in the wiki) is a variant where everyone comes up with a hidden rule before the game begins, and is one of my favorite games of all time.

Followed closely by Eleusis, the master of inductive-reasoning card games and a brilliant Zendo-like experience: https://en.wikipedia.org/wiki/Eleusis_(card_game)


The main argument I hear against banning all ads is that it would hurt small businesses. A better solution might be to ban all ads for companies making above X amount per year, or even better: create systems where users pay for ads themselves; then the incentives would switch to favor consumers.

In any case, totally agree, ad companies are out of control, I'm hoping more Kagi like services start appearing soon.


Banning companies above a certain size still allows an unhappy medium where only "small businesses" BUY the same horrible ads, and we drop one or two Army or IBM ads from the lineup.


Not everything has to be black and white, there is middle ground for improvement. I'm not sure anyone loves the same MegaCorp™ ad plastered all over buildings, highways and stoplights.

The size, depth, and reach of the advertising industry are a direct result of the amount of money injected into it. The current ad industry is effective, awful, and anti-competitive, and at this point resembles a cancer more than its intended purpose of providing useful information.


No, because small businesses aren't hiring ad agencies who spent years studying psychology in order to manipulate people into doing what the company wants, not what the person wants. This is very much an issue of scale.


That market is made when you ban "large companies" from making ads.


Deshittification is directly related to profit motives, VC dollars, and providing a service or good that overwhelmingly exceeds any hope of making substantial ROI in the future. None of that was shown in any of the above promotional materials; your company and product are tarnishing and devaluing the term. Congrats on the achievement. We'll continue to look for another word that has not been captured.

