Hacker News | Fraterkes's comments

I'll say something positive here as a European: the number of diverse groups that I'd have assumed would be broadly culturally aligned with Trump, but that have shown some form of resistance or pretty vehement disagreement with this administration over the last year, suggests to me that there is a degree of widespread (kind of bipartisan) idealism in the US that's pretty unique in the West.

Americans individually are probably the most optimistic people in the world. The optimism might be myopically fixed on getting a promotion or winning the lottery or breaking the plate spinning world record. But if you don’t have some big project or self improvement scheme then many people (and most traditionally successful people) will give you a wide berth. People without big dreams might as well have already kicked the bucket.

Regardless of the government, this culture is infectious. I think Nike's famous tagline "Just do it" probably describes America better than any anthem or crusty document.


At the core of "Havana Syndrome" lies the idea that Cuba and/or Russia have managed to develop energy weapons so advanced that the American military command won't even entertain the thought of them existing. I'll let you draw your own conclusions.

> won't even entertain the thought of them existing

Careful, it's also possible that they have thought very hard about such things, and they've decided that revealing what they know would lose them a technological edge.

In other words, what if the CIA/DOD already knows there's a class of devices which could explain the problems, and the denial is about maintaining secrecy over their own operational capabilities?

Imagine something similar in the 1980s: "This tragic mid-air collision was obviously caused by faulty radar or gross pilot error by at least one of the two military planes... Our brightest minds have looked very hard at the problem and there is no such thing as a 'stealth' airplane which doesn't show up on radar."


Or, going back further: having cracked the Enigma encryption, the Allies had to let Allied ships continue to be sunk and soldiers to die, because acting otherwise would have revealed that Enigma had been broken, which would have led to an even greater loss of life.

The assumption with these weapons was that they would require too much energy to be portable enough to be undetectable in all of these circumstances (at least based on other reporting on the subject).

If the device doesn't require a lot of power, then it's entirely possible that American military commanders and research leadership would miss it.

Add to that an incentive to avoid helping the victims from a cost and overhead perspective, and you get a big ol' mess.


>At the core of "Havana Syndrome" lies the idea that Cuba and/or Russia have managed to develop energy weapons so advanced that the American military command won't even entertain the thought of them existing.

I just don't think that's true at all. The answer could easily be that Cuba and Russia have developed energy weapons that we only know about from classified sources and therefore cannot discuss their existence.


Sure, if you think the intelligence community is better at physics than the physics community.

There is precedent for this. IC satellite optics were years ahead of commercial ones. Same with cryptography: GCHQ invented asymmetric encryption years before the academic world and kept it secret. I wouldn't be surprised if they know a few advanced things about quantum computing that IBM hasn't figured out yet.

I would be surprised; I've written about this before. A common error is to think that, say, NSA mathematicians have access to everything in the "free world" plus their own classified results, while those on the outside don't have access to what's on the inside.

The reason this is an error is that research is an interactive process. Spooks can read papers, but they can't freely discuss things with outside researchers: not what they themselves are working on, of course, but even talking to outsiders about what the outsiders are working on can be risky, since it can accidentally reveal a lot.

Secrecy cripples research. Even in areas where the TLAs hoover up the majority of graduates (was apparently true of math a while ago), they often fall behind.

There was a time when the TLAs could just call people like Claude Shannon, demand he work for them and never tell anyone about what he was working on, and he would say OK. That time is long gone. They don't have that goodwill any longer, and the price of isolation for a researcher has only gotten worse as communication has improved.


I also don't really see this as necessary -- was the physics community attempting to create energy weapons?

There's also the chance it's not a weapon, but something that mistakenly turned into a weapon when it was tested on live subjects.

I don't think randomly attacking embassy staff (iirc, not everyone affected was CIA - some were just desk people) makes sense for anyone to do, but trying to listen in on them and fucking up sounds right up their (or our) alley.


This was the point I made in another comment here. My bet is the US deployed the weapon and accidentally sickened their own people. So of course they play stupid and deny that any such tech could exist.

Though the Russians have been very clever in the past stumping the US: https://en.wikipedia.org/wiki/The_Thing_(listening_device)


Or that we have them too…

Any source for that external physical cause? Ideally by a publication/source that a skeptic like me won't just dismiss?

I feel corny being so positive about a megacompany, but I bought my first MacBook Air half a year ago after a life of PCs, and it has been genuinely surprising to use something made by a huge company that is consistently better than I expected.

I have a macbook air from 2022 and it is easily the "best" computer I have ever owned.

It's portable. It has a great keyboard, screen, and battery life. No fans or overheating. No issues with the operating system or installing software I need.

I can even use it for some lighter software development directly, and for everything else I can ssh back to a beefier machine.

If I weren't already so happy with this macbook air, I would be ecstatic for the neo.


Same. I got the 2024 15" MacBook Air when Costco had it for $849.00.

Hadn't purchased a new laptop since my college scholarship one, decades ago. This machine continues to make an immediate impression. The entire thing is thinner than just the bottom of my college Core Duo. It also lasts 8x longer on battery.

I just use mine as a tertiary machine (i.e. bedtime reading/podcast), but if you ever want to run the machine hard long-term, you can use 1mm thermal pads between the heatsink and bottom of external case (and then it'll never throttle).


> if you ever want to run the machine hard long-term, you can use 1mm thermal pads between the heatsink and bottom of external case (and then it'll never throttle).

That will spread the heat to the battery and degrade it much faster.


The inverse is true:

This removes heat from the internal compartment (where the logic-board heat sink and battery cohabitate [0]) by transferring it outside via conduction through the case. There is no detectable heat increase (to the touch); consider the relative sizes of the thermal masses (processor vs. the entire metal case).

[0] See <https://www.youtube.com/watch?v=jXY9tCBpf48&t=188> — thermal pad placement goes between four central screws (above processor)

As a thought experiment: how would ejecting heat from the inside increase the battery's temperature?


Nice hardware (except for the reflective screen). The software is okay but fiddly; it often fights the user and bugs out surprisingly often.

The best computer, but with the worst software (well maybe Windoze is even worse these days). If you could run Linux on them, without compromises, it would be perfect.

It's not that bad really. Windows was always a bit flaky and crashy, and Linux has a job running the software I use. I'll grant you that Apple can be a bit control-freaky about what you do with your own machine (getting rid of 32-bit support annoyed me), but nothing's perfect.

Same here. I've been buying Airs ever since they came out and they always exceed my expectations. I use them as primary dev machines.

Same. Equally comfortable on Windows, Mac and Linux, but almost all my new hardware choices for the last 25-plus years have been from Apple. The old Macs don't really die, even as I replace them with faster models, so my house is slowly becoming an Apple/Mac museum, starting with a Mac 512k, Mac CI and Mac LC, and so on, right down to a trash can Mac in the mix, and then to M series Macs. All CPU generations from Apple: 6502 (Apple ][), 68000, 68040 (NeXT), PPC, ARM (Newton, iDevices), Intel and M series. Can't get myself to throw/give/sell them away.

Coming to terms with two uncomfortable truths: I'm a hoarder, and an unapologetically incorrigible Apple fanboi.


Is there any reason to not just switch to 1-based indexing if we could? Seems like 0-based indexing really exacerbates off-by-one errors without much benefit


I'm not sure what that has to do with the article, but anyway: https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...

That said, I'm not sure how 1-based indexing would solve off-by-one errors. They naturally come from the fencepost problem, i.e. the fact that sometimes we use indices to indicate elements and sometimes to indicate the boundaries between them. Mixing the two in our reasoning ultimately results in off-by-one issues.
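The fencepost problem can be made concrete in a tiny sketch (Python here, purely as illustration): n boundaries delimit n - 1 sections, so code that conflates "number of posts" with "number of sections" is off by exactly one, regardless of the index base.

```python
# Posts at these positions delimit the fence.
posts = [0, 10, 20, 30]

# Counting elements vs. counting the gaps between them differs by one:
num_posts = len(posts)        # indices that name elements: 4 posts
num_sections = num_posts - 1  # indices that name boundaries: 3 sections
```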


This is an article that (among other things) talks about off-by-one errors being caused by mixing up index and count (and having to remember to subtract 1 when converting between the two). That's what it has to do with it.


If you always use half-open intervals, you never have to subtract 1 from anything.

With half-open intervals, the count of elements is the difference between the interval bounds, adjacent intervals share 1 bound and merging 2 adjacent intervals preserves the extreme bounds.

Any programming problem is simplified when 0-based indexing together with half-open intervals are always used, without exceptions.

The fact that most programmers have been taught when young to use 1-based ordinal numbers and closed intervals is a mental handicap, but normally it is easy to get rid of this, like also getting rid of the mental handicap of having learned to use decimal numbers, when there is no reason to ever use them instead of binary numbers.
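The properties claimed above are easy to check mechanically. A minimal sketch (Python, illustrative only) of half-open intervals [lo, hi):

```python
def count(lo, hi):
    """Number of elements in the half-open interval [lo, hi)."""
    return hi - lo  # just the difference of the bounds, no +1/-1

def merge(a, b):
    """Merge two adjacent half-open intervals (b starts where a ends)."""
    assert a[1] == b[0], "adjacent intervals share exactly one bound"
    return (a[0], b[1])  # merging preserves the extreme bounds

items = list(range(10))
assert count(3, 7) == len(items[3:7])   # count matches the slice length
assert merge((0, 4), (4, 9)) == (0, 9)  # shared bound disappears cleanly
assert count(5, 5) == 0                 # empty interval is not a special case
```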


I must have missed that part, my bad


When accessing individual elements, 0-based and 1-based indexing are basically equally usable (up to personal preference). But this changes for other operations!

For example, consider how to specify the index at which to insert in a string. With 0-based indexing, appending is str.insert(str.length(), ...). With 1-based indexing, appending is str.insert(str.length() + 1, ...).

Similarly, for substr()-like operations, 0-based indexing with ranges specified by inclusive start and exclusive end works very nicely, without needing any +1/-1 adjustments. Languages with 1-based indexing tend to use inclusive-end for substr()-like operations instead, but that means empty substrings are now odd special cases. When writing something like a text editor, where such operations happen frequently, it's the 1-based codebase that ends up with many more +1/-1 adjustments than an editor written with 0-based indexing.
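Python's slicing happens to follow exactly this convention (0-based, inclusive start, exclusive end), so the cases described above fall out without adjustments; a quick illustration:

```python
s = "hello"

# Appending means inserting at position len(s): no +1 needed.
appended = s[:len(s)] + "!"        # "hello!"

# substr-style ranges: inclusive start, exclusive end.
middle = s[1:4]                    # "ell": 4 - 1 == 3 characters
empty = s[2:2]                     # "": the empty substring is not a special case
```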


This is a matter of opinion.

My opinion is that 1-based indexing really exacerbates off-by-one errors, besides requiring a more complex and more bug-prone implementation in compilers: with 1-based indexing, the compiler must create and use, transparently to the programmer, pointers that do not point at the intended object but at an invalid location just before it, which must never be dereferenced. This is why 1-based indexing was easier in languages without pointers, like the original FORTRAN, and would have been harder in languages that expose pointers, like C, the difficulty being in not exposing the internal representation of pointers to the programmer.

Off-by-one errors are caused by mixing conventions for expressing indices and ranges.

If you always use a consistent convention, e.g. 0-based indexing together with half-open intervals, where the count of elements equals the difference between the interval bounds, there is no opportunity to make off-by-one errors.


I would bet that in the opposite circumstance you'd say the same thing:

"Is there any reason to not just switch to 0-based indexing if we could? Seems like 1-based indexing really exacerbates off-by-one errors without much benefit"

The problem is that humans make off-by-one errors and not that we're using the wrong indexing system.


No indexing system is perfect, but one can be better than another. Being able to do array[array.length()] to get the last item is more concise and less error-prone than having to subtract 1 every time.

Programming languages are filled with tiny design choices that don’t completely prevent mistakes (that would be impossible) but do make them less likely.


Having to use something like array[length] to get the last element demonstrates a defect of that programming language.

There are better programming languages, where you do not need to do what you say.

Some languages, like Ada, have special array attributes for accessing the first and the last elements.

Other languages, like Icon, allow the use of both non-negative indices and of negative indices, where non-negative indices access the array from its first element towards its last element, while negative indices access the array from its last element towards its first element.
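Python supports a similar dual scheme to the one described for Icon (the details differ: Icon numbers positions from 1, Python from 0): non-negative indices count from the front and negative indices count from the back, which gives last-element access without an explicit length-minus-one.

```python
a = [10, 20, 30, 40]

first = a[0]    # first element, counting from the front
last = a[-1]    # last element, counting from the back

# The negative index is sugar for the explicit subtraction:
assert a[-1] == a[len(a) - 1]
```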

I consider that your solution, i.e. using array[length] instead of array[length-1], is much worse. While it scores a point for simplifying this particular expression, it loses points by making other expressions more complex.

There are a lot of better programming languages than the few that due to historical accidents happen to be popular today.

It is sad that the designers of most of the languages that attempt today to replace C and C++ have not done due diligence by studying the history of programming languages before designing a new programming language. Had they done that, they could have avoided repeating the same mistakes of the languages with which they want to compete.


array[array.length()] is nonsense if the array is empty.

You should prefer a language, like Rust, in which [T]::last is Option<&T> -- that is, we can ask for a reference to the last item, but there might not be one and so we're encouraged to do something about that.

IMNSHO The pit of success you're looking for is best dug with such features and not via fiddling with the index scheme.


If your design works better in one scenario, that usually means it works worse in other scenarios; you've just shuffled the garbage around.


Fundamentally, CPUs use 0-based addresses. That's unavoidable.

We can't choose to switch to 1-based indexing - either we use 0-based everywhere, or a mixture of 0-based and 1-based. Given the prevalence of off-by-one errors, I think the most important thing is to be consistent.


Because it is not how computers work. It doesn't matter much for high-level languages like Lua, where you rarely manipulate raw bytes and pointers, but in systems programming languages like Zig, it matters.

To use the terminology from the article, with 0-based indexing, offset = index * node_size. If it were 1-based, you would have offset = (index - 1) * node_size.

And it became a convention even for high level languages, because no matter what you prefer, inconsistency is even worse. An interesting case is Perl, which, in classic Perl fashion, lets you choose by setting the $[ variable. Most people, even Perl programmers consider it a terrible feature and 0-based indexing is used by default.
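The offset arithmetic can be shown directly on a flat byte buffer. A sketch (Python, purely illustrative; the 4-byte "node" layout is made up for the example):

```python
import struct

node_size = 4  # each node is one 32-bit little-endian integer (hypothetical layout)
buf = struct.pack("<4i", 100, 200, 300, 400)

def get_node(index):
    """0-based: the byte offset is simply index * node_size."""
    offset = index * node_size
    return struct.unpack_from("<i", buf, offset)[0]

def get_node_1based(index):
    """1-based: every access pays a -1 before the multiply."""
    return get_node(index - 1)
```

With 0-based indexing the index maps onto the hardware's addressing directly; the 1-based version must subtract 1 on every access (get_node(0) and get_node_1based(1) both return the first node).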


> Is there any reason to not just switch to 1-based indexing if we could? Seems like 0-based indexing really exacerbates off-by-one errors without much benefit

You'd just get a different set of off-by-one errors with 1-based indexing.


1-based indexing doesn’t work well as soon as you have a start offset within a sequence, from which you want to index. Then the first element is startIndex + 0, not startIndex + 1. 0-based indexing generalizes better in that way.
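That generalization is visible whenever you index relative to a window. A small sketch (Python, illustrative):

```python
data = list(range(100, 110))  # [100, 101, ..., 109]

start = 3     # a window beginning at index 3
k = 2         # offset of the element we want within the window

# 0-based: element k of the window is uniformly data[start + k].
elem = data[start + k]        # 105

# The first element of the window is data[start + 0], i.e. data[start];
# a "+1" convention would break down at exactly this point.
first_in_window = data[start + 0]  # 103
```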


You say "seems like", can you argue/show/prove this?


I think that many off-by-one errors are caused by common situations where people can mistakenly mix up index and count. You could eliminate a (small) set of those situations with 1-based indexing: accessing items from the end of arrays/lists.


And in turn you'd introduce off by one errors when people confuse the new 1-based indexes with offsets (which are inherently 0-based).

So yeah, no. People smarter than you have thought about this before.


The idea of languages "stealing" ideas from each other is not something anyone building a language cares about. I'll just charitably assume you've completely misinterpreted something he said.


I've seen a hundred ai-generated things, and they are rarely interesting.

Not because the tools are insufficient, it's just that the kind of person that can't even stomach the charmed life of being a programmer will rarely be able to stomach the dull and hard work of actually being creative.

Why should someone be interested in your creations? In what part of your new frictionless life would you have picked up something that sets you apart from a million other vibe-coders?


> stomach the dull and hard work of actually being creative

This strikes me as the opposite of what I experience when I say I'm "feeling creative", then everything comes easy. At least in the context of programming, making music, doing 3D animation and some other topics. If it's "dull and hard work" it's because I'm not feeling "creative" at all, when "creative mode" is on in my brain, there is nothing that feels neither dull nor hard. Maybe it works differently for others.


What sets you apart from millions of manual programmers?


I've been a professional programmer for 8+ years now. I've stomached that life. I've made things people used and paid for.

If I can do that typing one line at a time, I can do it _way_ faster with AI.


You may be mistaking some AI-assisted dev work for non-AI work, because it doesn't have telltales.


Many AI generated web sites have a “look” and it’s not just all the emojis.


I've loved using Godot more and more, and it's been very informative as the first big OSS project where I'm closely following the development / proposals / devchat. I don't agree with many of the points made by people downthread: I use C# almost exclusively, and while it's been awkward (and clearly not a "priority"), it's pretty seamless to use once you set up some stuff (though it certainly helps if you keep much of your logic in C# and mostly use Godot as a frontend, since crossing the boundary is kinda awkward and slow).

Having said that, I do agree that Godot has a bit of complicated identity: it is at once geared towards being a good first programming experience, and a general purpose replacement for stuff like Unity.

I'd prefer a focus on the second part, there's a huge industry of game-devs right now, and providing them with the stability of a solid, free, transparent engine would be a great service.


I'm in the same position, I use C# both because that sort of syntax is more familiar to me, but also because it just seems better as a language (in terms of both code structure and performance).

There's a lot of downplaying of the advantages of C# in the Godot community, seemingly more so by people who are amateur game devs/programmers, who perhaps just don't need those advantages for their particular kind of game.


I am a C# dev by day and love working with it. I miss interfaces, Linq, and the nicer pattern matching features of C# when using GDScript, but overall GDScript is quite adequate for what it needs to do and the game dev loop feels faster when using it. They can interop as well without too much friction, so if you have the .NET version of Godot, it can have some code in C# where (if?) you need it and other code in GDScript when you don’t.


Hats off to your son too, I'd say.


You don’t think that there could be purely organic reasons why content showing US hypocrisy might be immensely popular in South America?


> TikTok users can't upload anti-ICE videos.

I am responding to the fact US TikTok does not show videos of an armored vehicle driving through a crowd of protesters standing in front of it like the lone man in Tiananmen Square. They are being removed.

This ability to control what information TikTok users are presented with is the reason TikTok was originally banned in the United States.

I am being objective in discussing how TikTok is being used as a propaganda tool, whether or not I personally agree with China influencing people in South America, and whether or not what the United States government is doing to protestors is good or bad. I'm not putting a value on it. I'm pointing out that when I'm in South America and someone links a video in a text message, and I start to doomscroll, after a while I will start to be shown videos of the United States government committing violence against Spanish-speaking people.

> might be immensely popular in South America

Objectively, the current United States regime was hugely popular in Spanish-speaking countries, as it was in Spanish-speaking Florida. Up until a couple of months ago, people would tell me how much they support and admire the current regime in the United States. That has changed recently, which likely has to do with the content they receive via TikTok, which is controlled by the Chinese government; that control is why it was banned in the United States. After the sale, it is not surprising that the United States is using it the way it accused the Chinese of using it.

