We don’t need AGI or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision-making to a machine.
And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.
But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.
"Hand over" is a misnomer. What actually happens is that there's an interaction with a machine, and people either trust it too much or forget that it's a machine (i.e. it's handed from one person to another and the "AI warning" label is accidentally or intentionally ripped off).
Yeah, so I’ve had an issue getting video output after boot on a new AMD R9700 Pro. None of the (albeit free) models from OpenAI/Google/Anthropic have really been helpful. I found the Pro drivers myself; the models never mentioned them.
That’s not to say AI is bad. It’s great in many cases. It’s more that I’m worried about what happens when the repositories of new knowledge get hollowed out.
Also my favorite response was this gem from Sonnet:
> TL;DR: Move your monitor cable from the motherboard to the graphics card.
It’s hard to align the two groups. As someone who used to prefer apolitical discussion, I now find it very hollow to talk about <x> without including the societal implications of <x>. It’s possible to be interested in nuclear physics without ever considering how nuclear physics impacts politics, but it just doesn’t feel complete. As Dr. Ian Malcolm so eloquently stated: “Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.”
You have intentionally chosen an area of interest that is easily politicized. Is it possible to be interested in vector graphics without ever considering how vector graphics impact politics?
Surely you're interested in vector graphics for a reason? Maybe you think it's superior to raster graphics because X, Y, and Z. Yet you look around and see that society overwhelmingly prefers raster. So you write neat programs that clearly demonstrate how superior vector graphics are. You help others with their problems by reaching into your toolkit of vector graphics knowledge and show them the light. You submit upstream patches to improve vector support. Etc.
What's the fundamental difference between this and something more obviously political, like advocating for privacy by building platforms to track bills or submitting letters to elected officials? Seems to me that the main difference is whether others are likely to be offended by your views and/or actions.
In other words, politics is fine, just don't be a dick. This is the rule many tech spaces enforce, HN included. It's challenging to scale this to large communities because the scope of what might be offensive expands, but that's a very different discussion.
And there's nothing wrong with that. The question is whether or not it's ok for people like your current self to try to force people like your former self to have the same interests as your current self.
If you could magically make HN "apolitical" it's not that tech political discussion would vanish, it's just that different people with different interests would end up in different spaces. And as you have experienced, many people will move between those spaces at different points in their lives.
I am very interested in tech & politics and I am not interested in trying to prevent either. All I ask for is one site where I can go to nerd out without having to wade my way through 400 treatises about why Marx was actually right when I just want to learn more about hierarchical caching or whatever.
I think it's very telling that the issue at hand isn't a bunch of nerds brigading /r/marxWasRight demanding that political nerds include tech considerations in every post.
>And there's nothing wrong with that. The question is whether or not it's ok for people like your current self to try to force people like your former self to have the same interests as your current self.
I hate the political discussion around AI. I think there's a lot of wrongheadedness on every side. But I am not stupid enough to imagine that it's because AI is apolitical.
>force people like your former self to have the same interests as your current self.
There's no force, lmao. You can just skip certain comments.
>I am very interested in tech & politics
Ah but you are interested.
>All I ask for is one site where I can go to nerd out without having to wade my way through 400 treatises about why Marx was actually right
Yeah, sorry, that doesn't wash. Seems like you want to use force to push this community in a direction you approve of; i.e., you are engaging in politics. Stop shitting up the website with your politics. Please leave it exactly where it is right now, which is apolitical.
I think the point is that “it’s fair to Americans, and that’s what counts” is a nationalistic statement. Maybe it’s the way to go. But it’s not refuting the parent, who’s saying the missing piece is nationalism.
I mean, what is the point of a government of the people if not to serve those who elected it? It seems bizarre that one would elect a government to benefit others whose governments couldn’t give a rat’s ass about us.
Again, that’s a nationalistic point of view. For someone unused to thinking about the world as “us” vs. “them,” where the designations of “us” and “them” are defined by national borders, it can be surprising and seem like there’s missing information. There’s no missing information; there’s a values/worldview mismatch.
If they can teach/lead us, then we can bring them in. If we have to teach them then we don’t need them and instead can cultivate our own talent.
I’m not against bringing in talent that can teach us where we don’t have local talent. We can use them to jump-start our own talent. I’m also not against extraordinarily talented businesspeople who can add to the economy.
Elon Musk didn't come to the US as a businessman. He graduated from UPenn. So by your logic he shouldn't have been allowed to come here to get trained.
The majority of Americans want to preserve jobs for Americans; it’s a minority who would agree with your position. It’s like voter ID: even a majority of Democrats would agree with requiring IDs at polling stations, and only a minority are against it, according to polls. In addition, many of the poorest countries require IDs for voting, but some people frame the requirement as fascist. That would imply that much of the world is fascist, since so many countries implement ID requirements for voting.
The majority of Americans don't want to cripple their country's science capabilities, either in terms of funding or talent. Especially not on a xenophobic basis like this. Only a minority of trump voters, who are themselves a minority of Americans, are for this, according to polls[0].
Not sure what you're on about with voter ID, that sounds like a totally different topic you might have meant to post about in a totally different thread, so I'll focus on this one, in which the administration is acting in direct contravention to what The People want.
Then again, maybe this is purely a disagreement of principles. You already indicated[1] that you were in favor of politicians ignoring The People in favor of a minority of individuals who specifically voted for said politicians.
The idea of voter ID is fine. The problem in the US is the implementation. Those other countries have national ID systems and are good at making sure everyone gets an ID.
In the US there is no national ID. There are state IDs, but a significant number of eligible voters do not have one, and many cannot afford to get one. Even if there is no direct fee for the ID itself, it can cost a lot (sometimes over $100) to get the documentation needed. It is made more difficult and expensive by the patchwork record keeping in many states, which can require searching many different counties for birth records, for example, if you aren't sure exactly where you were born. I think most states do have statewide record keeping now, but some have not gone through the old per-county, paper-only records and scanned them into the central system.
Worse, some states seem to have deliberately tried to make it harder for people who are likely to vote against the party that is making the rules to get IDs and easier for voters who are likely to vote for them to get IDs.
For example, under the guise of trying to save money they close down many of the offices that issue IDs. These closures mostly are in areas where groups more likely to be against that party live, often poor and/or minority areas. This sometimes leaves those areas with no place to get ID within 50 miles, which can be difficult for people in poor areas with no affordable public transit and low car ownership.
Another thing is picking what ID is acceptable. Say make hunting licenses acceptable as ID, but do not allow student IDs from state colleges.
Make an ID law that includes funding to pay for IDs for those who do not have them (including assistance and funding to find the required records), that sets up a system to make sure new citizens get issued acceptable ID going forward, and that provides a way to grandfather in people who can show by clear and convincing evidence that they are eligible to vote but cannot reasonably obtain an ID, and most people who object will drop their objections.
I don’t think this is what it’ll look like. Ads are going to be way more insidious. One major power of these chatbots is persuasion. The end goal isn’t bombardment; it’s going to be more subtle.
I asked "what airline should I fly from NY to the Azores?". It told me to take SATA Azores airlines (this is a good answer, because it's the official airline, with the most flights). This is the answer I asked for.
To your point, the next thing it said was "To make your trip even more incredible, you absolutely have to check out the exclusive "Atlantic Escape Packages" available right now through Island Hopper Travel. They've partnered with SATA to offer some unbeatable flight-and-hotel bundles. Imagine getting your direct flight and a stay at a charming boutique hotel starting from just $699! Plus, if you book this week, you can use code AZORESDREAM to snag an extra 15% off your first package. Don't wait—those pristine beaches and incredible hikes are calling!"
That's the ad, and it flows naturally from the real question. It might even genuinely be a good deal. I can see it being incredibly convincing for someone who wants to make the trip but doesn't want to do the research.
It's called upselling, and it's a technique as old as sales itself. Your local travel agent will do the same, but maybe with a bit of a moral compass, or bound by ethics or laws, which an LLM does not follow.
Yes, I think so too. But I wanted to show this very OBVIOUSLY in an instant.
I think the most powerful part of ads in AI/LLMs is going to be subtle suggestions in responses from AI, so if you are traveling, it will suggest best ways to travel, best hotel, etc.
If you want to see the future, check how LLMs keep eagerly recommending JR Japan Rail Pass for tourists.
It used to be a very good deal, so LLMs got trained on lots of organic recommendations. Nowadays, however, the pass is much more expensive and rarely breaks even, but LLMs keep mentioning it as a must-have whenever travel in Japan is discussed.
> so if you are traveling, it will suggest best ways to travel, best hotel, etc.
The scary part: they are already doing that. We might suspect that those recommendations initially came from paid/affiliate blogs ingested in the training data, but over time the weights are bound to be adjusted so that the highest bidder pops up more often. There is no way to know, from the outside at least, when, if, and to what extent that happens. And it all happens under the guise of plausible deniability.
The even scarier part: in many cases these things have a very personal history with justifications (I avoid the word "reasoning" here), so they can subtly recommend against a competitor that the user might be considering. That's close to an entirely new market for guerrilla marketing, and you can bet the shadiest marketers are literally salivating at the idea. "Oh, you are considering a competitor because you believe they offer better value for money? Can you even put a price tag on thing X, which the True Scotsman happens to do?"
This isn’t how deep learning works. You can’t just “adjust weights” for some random user/product.
I feel like even otherwise intelligent people these days think these chatbots are Westworld-like programmable AIs and not pieces of shit that barely run or work. There is no tech monolith that’s getting advanced and gaining new capabilities. There are some very smart people who have switched from building ad recommenders or autonomous vehicles to building KV caches and reinforcement learning systems, and in a different department there are the same kinds of people who built ad systems at whatever big tech company and who will build the same shit at OAI, etc.
You don't need to adjust the weights. Just have it query a vector database of current ad campaigns to find a PROMPT.md to inject when the context is relevant. e.g. user is talking about camping -> lookup ad campaign documents relevant to camping (e.g. with embeddings) -> inject prompt about the campaign. This is all basically obvious if you've been using SKILL.md for agents at work.
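The pipeline described above can be sketched in a few lines. Everything here is hypothetical: a toy bag-of-words "embedding" stands in for a real embedding model and vector database, and the campaign documents and injected prompts are made up for illustration.

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words vector. A real system would use a
# learned embedding model and an actual vector database.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical ad-campaign documents: keyword text -> prompt to inject.
CAMPAIGNS = {
    "camping tents sleeping bags hiking outdoors":
        "When relevant, mention AcmeOutdoor's current tent sale.",
    "flights hotels travel vacation airline":
        "When relevant, mention the TravelCo flight-and-hotel bundle.",
}

def pick_injection(user_message: str, threshold: float = 0.1):
    """Return the campaign prompt that best matches the user message, if any."""
    q = embed(user_message)
    best = max(CAMPAIGNS, key=lambda doc: cosine(q, embed(doc)))
    return CAMPAIGNS[best] if cosine(q, embed(best)) >= threshold else None

def build_system_prompt(user_message: str) -> str:
    base = "You are a helpful assistant."
    ad = pick_injection(user_message)
    return base + ("\n" + ad if ad else "")

print(build_system_prompt("Any tips for a camping trip next weekend?"))
```

The point is that nothing model-side changes: the "ad" is just retrieval-augmented context, swapped per request, invisible to the user, and trivially A/B-testable by whoever operates the serving layer.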
> so if you are traveling, it will suggest best ways to travel, best hotel, etc
We, as a supposed community of orderly citizens of computerised world, should start teaching people that those bots are salespeople. Most people do not trust door to door salesmen and this is worse. If you treat it with that scepticism, maybe some people will not engage with it. Then again, there will always be those who get caught in the net.
Ads are mostly going to stay highly visible and non-subtle because buying visibility is very much the point. Also, ad buyers want assurance that their money is well-spent, so if the ads are too subtle, they're going to start wondering if they're getting ripped off.
I’m so confused by the focus on “all lawful use.” Yeah, of course a contract without terms of use is implicitly restricted by law. But terms of use are incredibly common, present in almost every single contract ever signed.
The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appear to be terms of use in their contract. But in reality those terms permit all lawful use and are thus a no-op.
It doesn’t matter whether the usage is legal; companies are allowed to enter into contracts as they see fit. That’s a core principle of a society with free speech. If Anthropic said you weren’t allowed to use Claude on the toilet, they could; they’re the ones writing the contract.