Ucalegon's comments

Marketing is marketing; nothing about it was ever about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing in the AI space right now, not to mention the use of 'AGI' within the space as it stands currently.

Sure, but there are plenty of cases where a deceptive name has been considered enough to at least warrant an investigation: https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

I'm not saying anything is going to happen; ARM Holdings has a lot more money and lawyers than Long Blockchain did. I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.


That would not hold up considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean that they stand for the same thing. Marketing being cheeky is not a crime.

It's not "being cheeky". They know that the holy grail for AI is AGI. They know that people are going to see the acronym AGI and assume Artificial General Intelligence. They know that people aren't going to read the full article.

This isn't just a crass joke or a pun; it's outright deception. I'm not a lawyer, and maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.


Of course they did it on purpose, but that's not illegal. They are not at fault for individuals not reading what the acronym stands for, and the intent they place within the press release is very, very clear. They are not obligated or liable for others' lack of due diligence.

Leaders in the email security space have been seeing this for a while now [0]; this is not new. The problem is that protecting consumer mailboxes outside of Gmail isn't cost effective, since most people do not actually pay for their consumer mailbox and compromised accounts do not actually hurt the providers. It is going to be interesting to see how this plays out in the consumer space as the complexity of the problem continues to grow while the technology used to stop it stays stuck in the early 2010s.

[0] https://siliconangle.com/2023/12/19/new-report-warns-rise-ai...


With various websites planning to introduce micro-transactions to read their content, maybe end users should start charging for email delivery.

You want to send me an email? Please give me $1 first, and if I don't like your content I can, without notice, change that number to $50 per email.
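
To make the mechanics concrete, here's a toy sketch (entirely hypothetical; no such system exists today, and every name and number in it is made up) of how a mailbox-side "postage" check could work:

    # Toy per-sender postage check a mail filter could run before delivery.
    # All identifiers and prices here are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Mailbox:
        default_price: float = 1.00      # dollars a stranger owes per email
        sender_prices: dict = field(default_factory=dict)  # per-sender overrides
        balances: dict = field(default_factory=dict)       # prepaid credit

        def set_price(self, sender: str, price: float) -> None:
            # The owner can reprice a nuisance sender at any time.
            self.sender_prices[sender] = price

        def accept(self, sender: str) -> bool:
            price = self.sender_prices.get(sender, self.default_price)
            if self.balances.get(sender, 0.0) >= price:
                self.balances[sender] -= price
                return True              # deliver the message
            return False                 # bounce: payment required

    box = Mailbox()
    box.balances["stranger@example.com"] = 1.00
    print(box.accept("stranger@example.com"))   # True: first email costs $1
    box.set_price("stranger@example.com", 50.0)
    print(box.accept("stranger@example.com"))   # False: repriced without notice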


I agree, and I think the answer is that what used to be free, and is now infected with all sorts of enshittification, will have to be paid for to be useful.

I pay for email via Fastmail and don't really have a spam problem. I think this addresses your point above: an effective spam filter takes money, and free email doesn't generate money.

I pay for search via Kagi, don't see all those crappy Google Ads and actually get useful search.

I can see the other services (socials, messaging) moving to a paid model to solve the same issues.


The problem is the cat is already out of the bag on the technology. Anyone can go over to Hugging Face, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that, or stop other organizations from releasing fully open-weight/open-training-data models on permissive licenses, which let individuals modify those models as they see fit. Sam wishes he had control over that, but he doesn't, nor will he ever.

[0] https://huggingface.co/docs/transformers/index
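
For concreteness, a minimal sketch of what that cookbook-style workflow looks like with the transformers Trainer API; the base model and dataset here are arbitrary stand-ins, not anything specific to the linked docs:

    # Minimal fine-tuning sketch using the Hugging Face Trainer API.
    # "distilgpt2" and wikitext are placeholder choices; swap in any
    # open-weight model and your own corpus.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "distilgpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Load a public corpus and drop empty lines before tokenizing.
    dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
    dataset = dataset.filter(lambda x: x["text"].strip() != "")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # fine-tunes the model; weights land in ./out

None of that is exotic, which is the point: the barrier to entry really is that low.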


I'm thinking mainly of whether they manage to get some kind of regulation that makes open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close, I don't know for sure. And how long will Chinese companies keep giving out their open-source models? Lots of unknowns.

I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that outperforms existing, VC-backed (including by Sam himself) frontier-model wrappers. It runs on sub-$500 consumer hardware and provides 100% accurate work product, which those frontier-model wrappers cannot do. The fact that a two-man team in a backwater flyover town can pull that off speaks to how badly out of the bag the tech is. Where the money is going to be isn't in building the biggest models possible with all of the data; it's going to be in building models that solve specific problems and can run affordably within enterprise environments, built on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] https://www.gartner.com/en/articles/domain-specific-language...


The thing about that is the benefits (saving a couple of minutes a day and not having to click to different windows where the information is stored) are apparent and immediate, whereas the harms associated with losing most, if not all, of your privacy and security aren't felt in the same immediate way, so the dopamine of the positive effects completely overwhelms. It is hard for many people to weigh costs against benefits in situations that are so one-sided on the immediacy spectrum.

>Nobody actually gives a shit, about anything.

That's the case until there is the threat of discovery. The real issue is whether the PE firm bought the company for the value of the IP and treated any damages awarded as part of the 'cost of business', which is why liability needs to be extended to the persons who make that decision, not just the corporate entity.


A lot of that comes down to the costs associated with not being compliant and/or the requirements of existing contracts/insurance policies, where having FTEs dedicated to compliance is a requirement. Compliance might not be hard for the person or people managing the program; however, it might seem difficult or complex to the FTEs who have to build to those standards if they do not have a security or governance background.

It all depends on where the AI is running. The problem with the idea is that the majority of Windows boxes where it would be running do not have the bare-metal hardware to support local models, so it would run in the cloud, with all of the privacy/security issues associated with that. It would be neat, given MSFT's footprint, to develop small models running locally, with user transparency around actions, but that doesn't align with MSFT's core objectives.

AFAIK the existing Copilot features always use the NPU and do not fall back to the cloud. Given that Windows 12 will require an NPU, I don't see why it would fall back either.

This is true only for Copilot+ features. The issue MSFT faces, especially as it pushes Copilot EVERYWHERE, is that the majority of the hardware running Windows does not, and will not, have the NPU required for 12, nor is there the consumer purchasing power to upgrade to hardware with an NPU. This is a reality that MSFT just does not seem to want to deal with as it pushes the technology onto consumers, because the push isn't based on the reality of the install base; rather, it's an attempt to justify their strategic investment in AI in the B2C space without doing the proper product-market-fit work to support it.

Five-star comment

That's when you reach out to your insurer and ask their requirements per the policy, and/or whether there are any contractual obligations associated with the requirements which might touch indemnity/SLAs. If there are, then it is critical; if not, then it's the classic conversation of cost vs. risk mitigation/tolerance.


That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." It's the same thing as with nuclear energy in the 1950s: imagining what could be without realizing that those potentials are not achievable given the limitations of the technology. Not engaging with those limitations realistically is what hampers the growth, and thus the development, in the long term.

Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.


What was underestimated in the long term with nuclear power? I like nuclear power but I don't see what long-term effects were underestimated by people in the 50s.


I guess an example would be the short-term "a pocket nuclear reactor in every car powering our commute to work" vs. the long-term change "nuclear power powering vast datacenters that do most of the work for us".


They just are not going to provide insurance to companies that use AI, because the liability costs are not worth it to them since they cannot actually calculate the risks; it is already happening [0]. It's the one thing that a lot of the evangelists of using AI for entire products have come to realize, or they aren't actually dealing with B2B scenarios where indemnity comes into play. That, or they are lying to insurance companies and their customers, which is a... choice.

[0] https://futurism.com/future-society/insurance-cyber-risk-ai

