> AI could repeat this pattern at a larger scale — generating faster results within the existing paradigm, while the structural conditions for disruptive science remain unchanged or worsen.
Worsen. LLMs lose and mix data in the statistical "compression" that builds their vector-space model. Over time, successive feedback of model output into training will be analogous to making a JPEG from a JPEG that was itself made from another JPEG, a lossy "Gaussian" loop.
Those faster (but worse) results will degrade genuinely valuable data and science at a rate that statistically discards well-done science on a regular, systematic basis.
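To make the feedback-loop intuition concrete, here is a toy sketch (my own illustration, not a claim about any particular LLM): the simplest possible "model", a Gaussian refit to samples of its own previous output, loses spread generation after generation, the distributional analogue of re-saving a JPEG of a JPEG.

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data drawn from a unit Gaussian.
data = [random.gauss(0.0, 1.0) for _ in range(20)]

stdevs = []
for generation in range(500):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    stdevs.append(sigma)
    # Each new "model" is trained only on the previous model's output:
    # the fit-and-resample loop loses tail information on every pass
    # and never sees fresh data again.
    data = [random.gauss(mu, sigma) for _ in range(20)]

print(f"std of generation 0:   {stdevs[0]:.3f}")
print(f"std of generation 499: {stdevs[-1]:.3g}")
```

Rerun it with other seeds: the fitted standard deviation drifts toward zero, because each lossy pass discards information and nothing replenishes it.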
Epstein's friends who had appointments in the WTC on 9/11 mysteriously canceled last minute or had other excuses.
- Lutnick, whose entire firm Cantor Fitzgerald was obliterated, was taking his son to school after a last-minute "argument with his wife".
- Sarah Ferguson was "stuck in traffic" and late.
- Michael Jackson "overslept".
There is another one whom I don't remember. Maybe the Bayesians here can calculate the probability, using a control group of all the well-connected people who miraculously survived. There aren't that many.
Epstein's and Maxwell's other friends were, of course, connected to funding Palantir.
Linux has always been a system where the existence of malware was ignored, especially on the desktop, in contrast to other OSes (tooling included). But for the last couple of years, slow movements to correct this colossal mistake can be observed (I observe them).
Whether this is the best way to do it or not, I won't get into. I simply welcome most of the advances on this matter in Linux, given that past absence of concern, keeping my fingers crossed that the needed tooling arrives in time (ten years behind Windows, I think).
So the security, um, hole here is that someone has unauthorized access to your machine. It's not related to X11. If you run untrusted code, that's it... who cares about X11?
Ideally one wants to detect malware as early as possible, and to restrict what it can do from the beginning, until it is noticed.
In this case Wayland, deliberately or not, is more restrictive than X11 regarding access to the screen and keyboard.
I know, I know, the community's reply will be a couple more downvotes and "that already existed", "you could use bla bla bla", and this is how Linux ends up ten years (minimum) behind Windows in tooling for this matter ¯\_(ツ)_/¯
Fukushima was designed to be built on a hill 30-35 meters above the ocean, but someone decided it would be cheaper to build it at sea level to reduce water-pumping costs, others approved it, and much later, a decade before the disaster, when all reactors in Japan were asked to reinforce their safety measures, those in charge of Fukushima chose to ignore the request, again, pushing for extensions year after year until it all blew up. Decades of bad decisions with a strong smell of corruption.
I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering. Eventually everything fails; time is generally against a design engineer.
Caveating that I'm not really sure it was even an out-of-design event, but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure. If you hit a design with things it wasn't designed to handle then it may reasonably fail because of that.
[0] https://en.wikipedia.org/wiki/Megatsunami homework for the interested, it is cool stuff. Japan has seen some quite large waves, 57 meters seems to be the record in recent history.
In Japan they have the "Tsunami Stones" [0] across the coast, memorials to remind future generations of the highest point the water reached.
It was negligent to build a nuclear plant at sea level; it was just a plant waiting to be flooded, and they had ten years to design protections for exactly that case after being asked to reinforce safety measures (along with the other Japanese plants). But I can imagine that those who were supposed to put up the money were not very collaborative (I doubt the people responsible even learned the lesson).
Whether it was a Swiss cheese model or not I won't get into (note that the grandparent and I are different users); their negligence breaks any logic we could apply without introducing the variable of corruption behind those decades of bad decisions.
> It was negligent to build a nuclear plant at sea level; it was just a plant waiting to be flooded,
So why did they build it there? It isn't a gentleman in a clown hat hitting himself on the head with a rubber mallet, they had a reason. These things are always trade-offs.
Maybe if they'd built it up on the hill there'd have been an earthquake, a landslide, then the plant slides into the sea and gets waterlogged. I dunno. If we're talking about things without clearly defined bounds of risk tolerance, that is the sort of scenario that can be brought up. You're talking about negligence, but you aren't saying what tolerances this plant was built with, what you want it to be built to, or what the trade-offs you want made are going to be. Once you start getting into those details it becomes a lot less obvious that Fukushima is even a bad thing (probably is; the tech is pretty old and we wouldn't build a plant that way any more, is my understanding). It isn't possible to just demand that engineers prevent all bad outcomes; reality is too messy. It isn't negligent if there are reasonable design constraints and then something outside the design considerations happens and causes a failure, is the theoretical point I'm bringing up. It is just bad luck.
The whole affair seems pretty responsible from where I sit, a long way away. Fukushima is possibly the gentlest engineering disaster ever to enter the canon. It is much better than a major dam or bridge failure, for example, and again, assuming the event that caused the whole thing was unexpected, not even evidence of bad management. Most engineering failures involve a chain of horrific choices that leave the reader with tears in their eyes, not just a fairly mild "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb". And bear in mind we're scouring the world for the worst nuclear disaster in the 21st century.
> "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb"
This is a bit of a wild understatement. (1) The tsunami was by no means wild, as multiple posts here have referenced, and (2) the incident resulted in a number of significant injuries, not counting the deaths involved in the evacuation. And those deaths very much count - you can't hand-wave away the consequences of the evacuation on the basis of hindsight that the evacuation was larger than the final outcome necessitated.
> And those deaths very much count - you can't hand-wave away the consequences
I don't. If it is what it looks like, the government officials that ordered/organised the evacuations should be harshly censured and the next time evacuation orders should be more risk-based and executed in a safer way. What little I've gleaned suggests an appalling situation where a bunch of presumably old people were forced from their homes to their deaths. The main thing keeping me quiet on the topic is I don't speak Japanese and I don't really know what happened in detail there.
<< The Fukushima Daiichi Nuclear Power Plant construction was based on the seismological knowledge of more than 40 years ago. As research continued over the years, researchers repeatedly pointed out the high possibility of tsunami levels reaching beyond the assumptions made at the time of construction, as well as the possibility of reactor core damage in the case of such a tsunami. However, TEPCO downplayed this danger. Their countermeasures were insufficient, with no safety margin.>>
<< By 2006, NISA and TEPCO shared information on the possibility of a station blackout occurring at the Fukushima Daiichi plant should tsunami levels reach the site. They also shared an awareness of the risk of potential reactor core damage from a breakdown of sea water pumps if the magnitude of a tsunami striking the plant turned out to be greater than the assessment made by the Japan Society of Civil Engineers.>>
Even leaving aside that they ignored the original placement in order to reduce costs, using seismological reports biased to their convenience, TEPCO knew the plant was at risk; they were warned repeatedly that it was at risk. And the supposed regulator, NISA [0], conveniently closed its eyes (conveniently for some).
<< TEPCO was clearly aware of the danger of an accident. It was pointed out to them many times since 2002 that there was a high possibility that a tsunami would be larger than had been postulated, and that such a tsunami would easily cause core damage.>>
From the other URL I posted (I updated it with a cached URL; I hadn't noticed the article was deleted):
<< there appear to have been deficiencies in tsunami modeling procedures, resulting in an insufficient margin of safety at Fukushima Daiichi. A nuclear power plant built on a slope by the sea must be designed so that it is not damaged as a tsunami runs up the slope.>>
The EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima; this was not the gentlest of gestures toward Europeans. Japanese citizens also received their dose, while the most vulnerable among them were recruited by the Yakuza to clean up the zone.
No, I'm just trusting that you'll be honest about what it is saying. I don't need to read a report to persuade myself that a 40 year old plant was designed based on the best available knowledge of 40 years ago. That seems like something of a given. I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.
You're not saying what tolerances you want them to design to. We both agree that there are scenarios that can and might happen. Obviously it is possible for a tsunami to take out buildings built near the shore in Japan, so it doesn't surprise me that people raised it as a risk. A lot of buildings got taken out that day. That doesn't obviously suggest negligence to me; obviously a lot of people were happy living with the risk.
> EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima
Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.
> I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.
You didn't read the report or search for information about the matter, but I have no problem repeating it for you.
General Electric's design originally placed the plant 30-35 meters above the ocean. Instead, TEPCO modified that design and built at (almost) sea level, resorting to studies convenient to their purpose, cheaper, in one of the most tsunami-prone countries, with a history of waves reaching 20-30 meters. When those conveniently chosen studies were no longer justifiable, as deeper studies finally refuted them, they decided to simply keep ignoring all the warnings and requests to reinforce safety. They knew the nuclear plant was in danger; they always knew it. General Electric did not design for 30-35 meters above the ocean by coincidence. And all this happened with a supposed regulator conveniently closing its eyes across those years, ignoring even pipes with fissures.
Well, this obviously suggests negligence to me. Decades of bad decisions with a strong smell of corruption.
> You're not saying what tolerances you want them to design to.
What about tolerances sufficient to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami, exactly what happened after they ignored the warnings and requests to reinforce safety?
> Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.
Keep the sarcasm for other places, if you don't mind. It is not a mere "gentlest engineering disaster" when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere "gentlest engineering disaster" when you have to force vulnerable people to go to ground zero to move contaminated land and water.
> What about tolerances sufficient to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami, exactly what happened after they ignored the warnings and requests to reinforce safety?
I wasn't going to reply but that seems like it moves the conversation forward; so why not?
It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal. The design is still fundamentally vulnerable. Moving the reactor up 35m still leaves it vulnerable to a large enough tsunami and a big enough earthquake.
If your solution is moving the site uphill, then your design goal should be talking in terms of a 1 in X year event. If you want the risk completely mitigated then in this case it isn't relevant where the site is since the obvious way to achieve that design goal is just build something that doesn't fail when flooded. Coincidentally that seems to be the approach that the newer generation designs use - change how the cooling works so that it can't melt down in any reasonable circumstances, tsunami or otherwise.
I will note that there is a reading of your comment where you want the design to be able to tolerate this specific event. I'm ignoring that reading as unreasonable since it requires hindsight, but in the unlikely event that is what you meant then just pretend I didn't reply.
> Keep the sarcasm for other places, if you don't mind. It is not a mere "gentlest engineering disaster" when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere "gentlest engineering disaster" when you have to force vulnerable people to go to ground zero to move contaminated land and water.
Which one do you think was gentler and a story of similar popularity as Fukushima? It is pretty usual to have multiple people actually die and it be the engineer's responsibility once something becomes international news. Even something as basic as a port explosion usually has a number of missing people in addition to a chunk of city being taken out. To anchor this in reality, Fukushima at a class 7 meltdown might have done less damage than a coal plant in normal operation. Coal plants aren't pretty places and air pollution is nasty, nasty stuff.
> It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal.
My goal? My solution? My design!? You must be kidding now:
- GE's original design: 30-35 meters above the sea.
- Warnings to reinforce safety over a decade.
- Tsunami at Fukushima's nuclear plant: 15 meters above the sea.
> I wasn't going to reply but that seems like it moves the conversation forward; so why not?
Forward to... nothing, it seems. You just replied with hypotheticals as if the event hadn't happened, and as if it had been impossible to avoid, with some kind of dissociative reflections that go beyond cynicism. I'm the one who is not going to reply.
> Caveating that I'm not really sure it was even an out-of-design event but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure.
This is not how safe systems are designed and operated. Safety is not a one-time item, it is a process. All safety-critical systems receive attention throughout their operating lives to identify and mitigate potential safety risks. Throughout history, many safety-critical systems have received significant changes during their operating lives as a result of newly-discovered threats or recognition that threats identified during the initial design were not adequately addressed. Many (if not most) commercial aircraft have required significant modifications to address problems that were not understood at the time they were initially built and certified. Likewise, nuclear power plants in many countries have received major modifications over the years to address potential safety issues that were not understood or properly modeled at the time of their design. Sometimes, this process determines that there is no safe way to continue operation - usually that there is no economically viable way to mitigate the potential failure mode - and the system is simply shut down. This has happened to a few aircraft over the years, as well as several nuclear power plants (in many cases justified, in others not so much).
Fukushima existed in just such a system, and that the disaster occurred was the result of failures throughout the system, not a one-off failure at the design stage.
> I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering.
I think you are missing the point. Obviously it is possible that a tsunami higher than any possible design threshold could occur (it is, after all, possible that an asteroid will strike in the Pacific and kick up a wave of debris that wipes everything off the home islands). However, the tsunami that struck Fukushima Daiichi was no higher than a number of tsunamis that were recorded in Japan within the last century. The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon. This was not a case of "a bigger wave is always possible"; it was a case where the design, operation, and supervision were wrong, and known (by some) to be so prior to the accident.
> The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon.
Not much of a swiss cheese failure then though. The failure is just that they committed hard to an assumption that was wrong.
My point is that unless it is actually an example of multiple failures lining up then this is a bad example of a swiss-cheese model. Seems to be an example of a tsunami hitting a plant that wasn't designed to cope with it. And a plant with owners who were committed to not designing against that tsunami despite being told that it could happen. It is a one-hole cheese if the plant was performing as it was designed to. The stance was that if a certain scenario eventuated then the plant was expected to fail and that is what happened.
In a Swiss cheese failure there are supposed to be a number of independent or semi-independent controls in different systems that all fail, leading to an outcome. This is just that they explicitly chose not to prepare for a certain outcome. Not a lot of systems failing; it even seems like a pretty reasonable place to draw the line for failure if we look at the outcomes. Expensive, unlikely, not much actual harm done to people, and likely to be forgotten in a few decades.
Plus, related (storage): you do not want to put hydroelectric generation on reservoirs meant for the population's water supply, or you could find out one summer that the reservoirs are empty, the water having been used to generate electricity, or even as an inertial stabilizer for renewables.
This is the moment when the news reads "There's a drought because it isn't raining" and similar excuses, when in reality your five years of water reserves have been reduced to half, or one third, because electricity production was prioritized over the population's real water demand.
I mean, hydroelectric needs at least two levels of reservoirs: one to generate electricity (or even a dedicated two-level reservoir pair with water pumps for this), and the next one, absolutely untouchable by the electric companies, dedicated to water storage for the population and agriculture, the classic more-than-five-years reserve, for real.
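A toy water-balance sketch of that failure mode (all figures invented purely for illustration): a reservoir sized for five years of municipal demand, inflow that on average just covers that demand, and an extra net draw when the same water is also run through turbines.

```python
# All quantities are in "years of municipal demand" for simplicity.
CAPACITY = 5.0          # reservoir size: five years of municipal demand
INFLOW = 1.0            # average inflow per year
MUNICIPAL_DEMAND = 1.0  # consumption per year
GENERATION_DRAW = 0.5   # extra net loss per year when used for power

def years_of_reserve(generation_draw: float, years: int = 5) -> float:
    """Simulate `years` of operation and return the remaining reserve."""
    level = CAPACITY
    for _ in range(years):
        level += INFLOW - MUNICIPAL_DEMAND - generation_draw
        level = max(0.0, min(CAPACITY, level))  # can't go negative or overflow
    return level

print(years_of_reserve(0.0))              # water-supply only: stays at 5.0
print(years_of_reserve(GENERATION_DRAW))  # dual use: down to 2.5
```

With inflow that only covers consumption, any sustained extra draw for generation comes straight out of the strategic reserve, which is exactly the "five years reduced to half" scenario.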
Not only the US. In the updated post [1] that was deleted from Reddit [2], it is noted that three firms are confirmed to be operating for Meta in both EU and US jurisdictions:
- Trilligent (APCO Worldwide subsidiary). EU role: EUR 680K for the AI Act, DMA, DSA. US connection: APCO offices in DC; a Meta VP calls them "integrated members of our Meta team".
- White & Case LLP. EU role: EUR 50-100K, digital markets/services. US connection: lead international outside counsel, 70+ lawyer team.
- FTI Consulting Belgium. EU role: EUR 10-25K. US connection: subsidiary of FTI Consulting Inc (NYSE: FCN, HQ Washington DC).
This sounds like the mere tip of the iceberg, as it is noted that they maintain two separate networks with no overlap (their age-verification lobbying goes through local specialists with no international footprint).
Trilligent (APCO Worldwide subsidiary), clients for closed financial year, Jan 2024 - Dec 2024,
- meta platforms ireland limited and its various subsidiaries, 50'000€ - 99'999€: EU Green Deal, EU AI Act, the European strategy for a better internet for kids (BIK+), online safety.
- verifymy limited (age verification business), 0€ - 10'000€: Digital Services Act; eIDAS Regulation; Strategy for a better Internet for kids (BIK+); EU Artificial Intelligence Act; General Data Protection Regulation.
- user rights gmbh, 0€ - 10'000€: Digital Services Act.
> The vendor becomes a strategic chokepoint, and there's no precedent for how that plays out in a peer conflict.
If you turn commercial infrastructure into a military tool, you put it in the front rows of the target list to be dismantled in case of conflict.
Given the large number of Starlink satellites, you would inevitably have to use their own space debris to dismantle them, which would render LEO inoperable (for centuries). With that, you lose the agility those satellites were providing.
You would therefore be forcing the use of military satellites placed in higher orbits (lower resolution, fewer of them, more fuel use, slower), and also forcing military airplanes and drones to fly over your territory (exposure).
It would not be debris produced by an ordinary decoupling, which loses speed as atmospheric drag bleeds off its orbital energy and therefore progressively falls to Earth and deorbits.
It would be a cluster of successive collisions in a short period of time. With each collision, each destroyed satellite would produce hundreds of thousands of micro-fragments at increased speeds, which would carry them into different orbits.
The micro-fragments in the lower reaches of LEO would be slowed by the atmosphere within months to a few years; the ones in the higher reaches of LEO would take decades to centuries, and as they lose speed they descend through lower orbits and sweep across each new orbital region (like a net/mesh), their kinetic energy still able to destroy or damage whatever they cross.
If it were done, it would be like a planned Kessler syndrome event, and LEO is currently saturated with satellites.
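Some back-of-envelope numbers on why that debris is so destructive (the gravitational parameter and Earth radius are standard constants; the 550 km altitude and the 1-gram fragment are just illustrative choices):

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit(altitude_m: float) -> tuple[float, float]:
    """Return (orbital speed in m/s, period in s) for a circular orbit."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)       # vis-viva for a circular orbit
    period = 2.0 * math.pi * r / v
    return v, period

# A typical Starlink shell sits around 550 km altitude.
v, period = circular_orbit(550e3)
print(f"orbital speed: {v/1000:.2f} km/s, period: {period/60:.1f} min")

# Kinetic energy of a 1-gram fragment at orbital speed; relative speeds
# between crossing orbits can be even higher than this.
fragment_kj = 0.5 * 0.001 * v**2 / 1000.0
print(f"a 1 g fragment carries ~{fragment_kj:.0f} kJ")
```

That is roughly 7.6 km/s and tens of kilojoules per gram, comparable to several grams of TNT detonating on impact, which is why even micro-fragments remain lethal to anything they cross until drag finally deorbits them.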
Several people at my work do use LLMs for this in code, commit messages, and even on Slack. It may not be everyone or even a majority but it is something that some people legitimately do.
While many here are saying "who cares about your spelling and grammar," they have not been the people whose poor English gets them flagged as being somehow less intelligent or credible. Half the problem with LLMs is that they speak eloquently and we use that as a signal of someone's intelligence and trustworthiness. For someone who is otherwise intelligent but doesn't know English well this can be a major setback.
That is not a good idea. To deal with LLMs one needs knowledge of the topic of the query; otherwise one will not be able to detect the errors in their output. The test is easy: if after one to three queries you do not detect any errors, you are done for; you are reading the output in passive mode.
The self-learning path also requires cultivating an intuition that comes from searching and reading technical documentation that an LLM will not give you, among other things.
Anyway, I observe how the other user's warning about this got downvoted and critiqued. I expect the same, and leave this thread with peace of mind, subscribing to that warning as a message to the OP.