axilmar's comments | Hacker News

Amen.

Reading the comments from top to bottom, the above is exactly what I wanted to write.


What if there are hidden properties that affect each particle of a pair after they are split, in predetermined ways? That would certainly create correlations that seem to be created on the fly, but they would simply be the result of the inner workings of particles that don't communicate.

For example, suppose we have two balls, each containing a spin mechanism that makes it spin in relation to, let's say, magnetic north, and we throw one of them one way and the other the other way... and later we discover that their spins are somehow correlated.

That is not action at a distance, that is the result of the inner mechanism of each ball.

Why couldn't a similar thing happen with particles?


Those kinds of mechanisms can't create the kind of correlations that are observed in real particles, so we know that something else is going on.

You can play around with this by trying to design the pair of devices that were described in my second link (https://news.ycombinator.com/item?id=35905284).

To recap, you want to design a pair of devices that each have 3 buttons labeled A, B, and C, a red LED, a green LED, and a counter. The counter starts at 1000. When you press any one of the buttons one of the LEDs flashes and the counter decrements. When the counter reaches 0 the device stops responding.

You should also specify a way that if the devices are brought together the pair of them can be reset.

You can specify any kind of non-quantum hardware you want in the devices. As much computing as you need, as much RAM and ROM and disk as you want, and physical sensors. Include clocks if you need to. You can include true random number generators. It doesn't have to be limited to current technology--it just has to be limited to known physics and not use quantum entanglement.

What you need to achieve with that hardware and whatever algorithms you specify is:

1. Suppose someone has used one of the devices, and recorded the results of a very large number of interactions.

Suppose that a statistician is given a list of 5-tuples (P, F, n, R, t) of those interactions with one of the devices, where P is which button was pressed, F is which LED flashed, n is the value on the counter when the button was pressed, and R is how many times the device has been reset (i.e., R = 0 the first 1000 times the device is used, then when it and the other device are reset R = 1 for the next 1000 uses and so on), and finally t is the time at which the button was pressed.

It should not be possible using any known statistical test on that list of 5-tuples for the statistician to distinguish the device from a device whose algorithm is simply:

  if any_button_pressed():
    r = uniform_true_random_from_0_to_1()
    if r < 0.5:
      flash(GREEN)
    else:
      flash(RED)
2. If the lists of 5-tuples from both devices are matched up by n and R, we should find that (1) if the same button was pressed on both, the same color LED flashed on both, (2) if B was pressed on one and A or C on the other, then 85.355% of the time the same color flashed on both, and (3) if A was pressed on one and C on the other, then 50% of the time the same color flashed.
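As an aside on where those percentages come from (my reading of the numbers, not something stated in the challenge): they match the standard quantum prediction for an entangled spin-1/2 pair prepared so that equal settings always agree, with the three buttons corresponding to measurement directions 0, 45, and 90 degrees. The probability that both sides flash the same color is cos^2(delta/2) for settings delta degrees apart:

    import math

    # Chance both devices flash the same colour when the chosen settings
    # are `delta_degrees` apart (quantum prediction, assuming the angle
    # assignment described above).
    def p_same(delta_degrees):
        return math.cos(math.radians(delta_degrees) / 2) ** 2

    print(p_same(0))   # 1.0       -> same button pressed on both
    print(p_same(45))  # 0.8535... -> B vs A, or B vs C
    print(p_same(90))  # 0.5       -> A vs C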

A couple things to note.

1. The above has to hold even if the users take the devices very far apart from each other before they start pressing buttons.

In particular the users might choose to take the devices so far apart before they start pressing buttons that each has finished their run of 1000 before any possible communications from their device could reach the other.

2. The users might wait a long time before starting a run of 1000, and they might wait a long time between presses within a run.

3. The users are determining when to press independently, so you can't count on them alternating. You can't even count on them overlapping: one might do all 1000 presses before the other user starts.

4. The users might use a true random number generator to determine which buttons to press.


Of course the math checks out, it is correct, but in my opinion time is not a dimension, it's the 'refresh rate' of matter.

To me, there are only 3 dimensions, those of space.

That does not mean the relativity math is useless. On the contrary, what it describes is real and we can experimentally verify it.

But that does not mean we can 'move' through time the way we 'move' through space. That's why time is not a dimension.


> time is not a dimension, it's the 'refresh rate' of matter.

Exactly. Time is just a very useful fudge to describe change. If nothing changes, there's been no time. If something changes, there has been time.

A dimension is just a useful number that you can operate on. You can have a physics where the fourth dimension is how blue something is, and the fifth dimension is how good Mary thinks it tastes.


How does your refresh-rate time account for time slowing down for things moving fast (relative to you), regardless of the spatial direction they are moving in?

Spacetime simplifies many things. For example, in that framing nothing is ever at rest and nothing ever travels at a different speed: the speed of everything is the same, it's just that things spatially at rest have all their speed in the direction of time. Accelerating something in a spatial direction is rotating (mathematically) its motion away from the time direction, into some spatial direction. This requires energy, so the time direction is the lowest-energy one, but to rotate away from it you need to put in energy. If you want to rotate it to 45 deg you need infinite energy.
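A quick numerical illustration of that "same speed for everything" framing (my own sketch, not the parent's): the four-velocity of any object has constant Minkowski magnitude c, and boosting only rotates that fixed-length vector from the time axis toward a spatial axis.

    import math

    c = 299_792_458.0  # m/s

    # Time and space components of the four-velocity for an object moving
    # at ordinary speed v along one spatial axis.
    def four_velocity(v):
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        return gamma * c, gamma * v  # (time component, space component)

    for v in (0.0, 0.5 * c, 0.99 * c):
        u_t, u_x = four_velocity(v)
        # The Minkowski "length" sqrt(u_t^2 - u_x^2) stays equal to c
        # (up to rounding), however far the vector is tilted.
        print(v / c, math.sqrt(u_t ** 2 - u_x ** 2) / c)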


The refresh rate doesn't slow down, but the speed at which things change does, at high speeds.


I don't believe refresh rate captures the transmission of time onto other objects; it would only capture individual time. They have to sync up somehow, and gravity affects it, so it's not just matter, it's the impression of matter onto the fabric of space.


Space has no 'fabric'. Space is not a physical entity, but a set of coordinates.


"refresh rate" implies discrete steps, whereas not only we haven't discovered such (planck time is not it), but also we have no idea how a transition between different refresh rate would look like...


To me, dimensions are just more columns in a table. Seeing in 10 dimensions? No prob.


> Victor can prepare a pair of quantum particles in a special state known as an entangled state. In this state, the outcomes of Alice's and Bob's measurements are not just random but are correlated in a way that defies any classical explanation based on local hidden variables.

What if there are no hidden properties per particle, but the combination of specific property values of the particles allows for breaking Bell's inequality?

I.e., what we call 'entanglement' might not be 'action at a distance', but simply the effect of the interaction of the properties of the two particles as they are generated.

For example, if we have two billiard balls which are really close together, and we hit them with a third ball simultaneously, their spins will be correlated when we measure them for both balls (without taking into account other factors, e.g. friction, tilting of the table, etc.). Wouldn't that break Bell's inequality as well? The spins of the two balls will be correlated.


"their spin will be correlated" - in this case the billiard's spin is a per-ball property that is set before they are sent to Alice and Bob, and happens to be correlated. You can simulate this in the Python code, but you will not be able to break the Bell inequality like that. This is similar to the dice example I give, where the objects sent to Alice and Bob are random from their perspective (since the dice roll happens with Victor), and correlated.

In general, classical correlation cannot break the Bell inequalities [assuming no peeking, ie. no action-at-a-distance in the measurement devices]. To be clear, I didn't prove this in the article; the approach the article takes is "here is some code, play around with it to get a feeling for why".
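If it helps, here is the shape of the kind of local model you can experiment with (this is only an illustrative sketch of mine, not the article's actual code): each pair carries pre-set answers for every setting, fixed before the halves separate and identical on both halves, so equal settings always agree.

    import random

    SETTINGS = ["A", "B", "C"]

    # Each half carries the same pre-set answer for every possible setting.
    def make_pair():
        answers = {s: random.choice(["RED", "GREEN"]) for s in SETTINGS}
        return dict(answers), dict(answers)

    # Fraction of runs where the two halves give the same answer for the
    # chosen pair of settings.
    def match_rate(s1, s2, trials=100_000):
        same = 0
        for _ in range(trials):
            alice, bob = make_pair()
            same += alice[s1] == bob[s2]
        return same / trials

    # With this particular (uniform, independent) assignment you'll see
    # roughly 0.5 for each cross-setting pair; try other assignments.
    print(match_rate("A", "B"), match_rate("B", "C"), match_rate("A", "C"))

However you fill in the pre-set answers, if A and C disagree then A must disagree with B or B must disagree with C, so P(A != C) <= P(A != B) + P(B != C). Quantum mechanics predicts disagreement rates for suitably chosen settings (e.g. spin directions 0, 45, and 90 degrees apart) that violate that bound, which is why no assignment of pre-set answers can reproduce them.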

Hope this helps.


> In general, classical correlation cannot break the Bell inequalities [assuming no peeking, ie. no action-at-a-distance in the measurement devices].

What if the particles have properties that mutate their state after they are sent to Alice and Bob?

Suppose, in the billiards example, that I put a small device into the balls that changes the spin of the ball to some predefined value.

Wouldn't that break the Bell inequalities without action at a distance?

The reason for the breaking would be that the state of the balls would be modified after they are sent to Alice and Bob. It would look like action at a distance without being 'action at a distance'.


It doesn't matter when the state of the ball changes (when Victor sends them, on the way, or when it's measured). You can play around with this in the Python code, where it shows up as "it doesn't matter in which function you put that line of code; the functions are called one after the other". The functions in question are generate_composite(), split(), and the 4 measure_X_Y(), called from bell_experiment().


My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc.) in place, i.e. new values are overwritten onto old values in memory, invoked at each simulation step.

In Haskell, how is that handled? Do I have to recreate the list of entities with their changes at every simulation step? Does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

Please don't respond with 'use the IO monad' or 'better use another language because Haskell is not up for the task'. I want an actual answer. I've asked this question in the past in this and some other forums and never got a straight answer.

If you reply with 'use the IO monad' or something similar, can you please say whether whatever you propose allows for in-place update of values? It's important to know, for performance reasons. I wouldn't want to start simulations in a language that requires me to reconstruct every object at every simulation step.

I am asking for this because the answer to 'why Haskell' has always been for me 'why not Haskell: because I write simulations and performance is of concern to me'.


I'm not sure why you say not to respond with 'use the IO monad' because that's exactly how you'd do it! As an example, here's some code that updates elements of a vector.

    import Data.Vector.Unboxed.Mutable
    
    import Data.Foldable (for_)
    import Prelude hiding (foldr, read, replicate)
    
    -- ghci> main
    -- [0,0,0,0,0,0,0,0,0,0]
    -- [0,5,10,15,20,25,30,35,40,45]
    main = do
      v <- replicate 10 0
    
      printVector v
    
      for_ [1 .. 5] $ \_ -> do
        for_ [0 .. 9] $ \i -> do
          v_i <- read v i
          write v i (v_i + i)
    
      printVector v
    
    printVector :: (Show a, Unbox a) => MVector RealWorld a -> IO ()
    printVector v = do
      list <- foldr (:) [] v
      print list
It does roughly the same as this Python:

    # python /tmp/test28.py
    # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]
    def main():
        v = [0] * 10
    
        print(v)
    
        for _ in range(5):
            for i in range(10):
                v_i = v[i]
                v[i] = v_i + i
    
    
        print(v)
    
    if __name__ == '__main__': main()


I have a rather niche theory that many Hindley-Milner type inference tutorials written by Haskellers insist on teaching the error-prone, slow details of algorithm W because otherwise the authors would need to commit to a way to do destructive unification (as implied by algorithm J) that doesn't attract pedantic criticism from other Haskellers.

For me, I stopped trying to learn Haskell because I couldn't quite make the jump from writing trivial (but neat) little self-contained programs to writing larger, more involved, programs. You seem to need to buy into a contorted way of mentally modelling the problem domain that doesn't quite pay off in the ways advertised to you by Haskell's proponents (as arguments against contrary approaches tend to be hyperbolic). I'm all for persistent data structures, avoiding global state, monadic style, etc. but I find that OCaml is a simpler, pragmatic, vehicle for these ideas without being forced to bend over backwards at every hurdle for limited benefit.


> In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc.) in place, i.e. new values are overwritten onto old values in memory, invoked at each simulation step.

> In Haskell, how is that handled? Do I have to recreate the list of entities with their changes at every simulation step? Does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

You don't _have to_ recreate the list each time, but that's probably where I'd suggest starting. GHC is optimized for these kinds of patterns, and in many cases it'll compile your code to something that does in-place updates for you, while letting you write pure functions that return a new list. Even when it can't, the runtime is designed for these kinds of small allocations and updates, and the performance is much better than what you'd get with that kind of code in another language.

If you decided that you really did need in-place updates, then there are a few options. Instead of storing a vector of values (if you are thinking about performance you probably want vectors instead of lists), you can store a vector of references that can be updated. IO is one way to do that (with IORefs) but you can also get "internal mutability" using STRefs. ST is great because it lets you write a function that uses mutable memory but still looks like a pure function to the callers because it guarantees that the impure stuff is only visible inside of the pure function. If you need concurrency, you might use STM and store them as MVars. Ultimately all of these options are different variations on "Store a list of pointers, rather than a list of values".

There are various other optimizations you could do too. For example, you can use unboxed mutable vectors to avoid having to do a bunch of pointer chasing. You can use GHC primitives to eke out even better performance. In the best-case scenario I've seen programs like this written in Haskell be competitive with Java (after the warmup period), and you can keep the memory utilization pretty low. You probably won't get something that's competitive with C unless you are writing extremely optimized code, and at that point most of the time I'd suggest just writing the critical bits in C and using the FFI to link that into your program.


You... don't. You have to rely on compiler optimizations to get good performance.

Monads are more-or-less syntax sugar. They give you a structure that allows these optimizations more easily, and also make the code more readable sometimes.

But in your example, update returns a new copy of the state, and you map it over a list for each step. The compiler tries to optimize that into in-place mutation.

IMO, having to rely so much on optimization is one of the weak points of the language.


You do, and you'll have to do destructive updates within either the ST or IO monad, using their respective single-variable or array types. It looks roundabouty, but it does do the thing you want and it is fast.

ST and IO are "libraries" though, in the sense that they are not special parts of the language, but appear like any other types.


Fast immutable data structures don't rely on compiler optimizations. They just exist lol.


An example of how to use the IO monad for simulations: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... It's one of the nicer-to-read ones I've seen. Still terrible, IMO.


I mean, Haskell has mutable vectors[1]. You can mutate them in place either in the IO monad or in the ST monad. They fundamentally work the same way as mutable data structures in any other garbage collected language.

When I worked on a relatively simple simulation in Haskell, that's exactly what I did: the individual entities were immutable, but the state of the system was stored in a mutable vector and updated in place. The actual "loop" of the simulation was a stream[2] of events, which is what managed the actual IO effect.

My favorite aspect of designing the system in Haskell was that I could separate out the core logic of the simulation which could mutate the state on each event from observers which could only read the state on events. This separation between logic and pure metrics made the code much easier to maintain, especially since most of the business needs and complexity ended up being in the metrics rather than the core simulation dynamics. (Not to say that this would always be the case, that's just what happened for this specific supply chain domain.)

Looking back, if I were going to write a more complex performance-sensitive simulation, I'd probably end up with state stored in a bunch of different mutable arrays, which sounds a lot like an ECS. Doing that with base Haskell would be really awkward, but luckily Haskell is expressive enough that you can build a legitimately nice interface on top of the low-level mutable code. I haven't used it but I imagine that's exactly what apecs[3] does, and that's where I'd start if I were writing a similar sort of simulation today, but, who knows, sometimes it's straight-up faster to write your own abstractions instead...

[1]: https://hackage.haskell.org/package/vector-0.13.1.0/docs/Dat...

[2]: https://hackage.haskell.org/package/streaming

[3]: https://hackage.haskell.org/package/apecs


apecs is really nice! It's not without its issues, but it really is a sweet library. And some of its issues are arguably issues with ECS in general rather than with apecs itself.


In your imperative language, imagine this:

    World simulation(Stream<Event> events, World world) =>
       events.IsComplete
           ? world
           : simulation(applyEventToWorld(events.Head, world), events.Tail);

    World applyEventToWorld(Event event, World world) =>
       // .. create a new World using the immutable inputs
That takes the first event that arrives, transforms the World, then recursively calls itself with the remaining events and the transformed World. This is the most pure way of doing what you ask. Recursion is the best way to 'mutate', without using mutable structures.

However, there are real mutation constructs, like IORef [1]. It will do actual in-place (atomic) mutation if you really want in-place updates. It requires the IO monad.

[1] https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...


> does Haskell have a special construct that allows for values to be overwritten

Yes and no.

No, the language doesn't have a special construct. Yes, there are all kinds of mutable values for different usage patterns and restrictions.

Most likely you end up with mutable containers with some space reserved for entity state.

You can start by putting `IORef EntityState` as a field and letting `update` write there. Or multiple fields for state sub-parts that mutate at different rates. The next step is putting all entity state into big blobs of data and letting entities keep an index to their stuff inside that big blob. If your entities are a mishmash of data, then there's `apecs`, an ECS library that will do it in an AoS way. It can even do concurrent updates in STM if you need that.

Going further, there's the `massiv` library with an integrated task supervisor, and `repa`/`accelerate`, which can produce even faster kernels. Finally, you can have your happy Haskell glue code and offload all the difficult work to the GPU with `vulkan` compute.


> ECS library that will do it in AoS way

TLAs aren't my forte. It's SoA of course.


> My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

The same way games do it. The whole world, one frame at a time. If you are simulating objects affected by gravity, you do not recalculate the position of each item in-place before moving onto the next item. You figure out all the new accelerations, velocities and positions, and then apply them all.


I don't understand why you hate the IO monad so much. I mean, I've seen very large codebases doing web apps where almost everything is inside the IO monad. It's not as "clean" and doesn't follow best practices, but it still gets the job done and is convenient. Having pervasive access to IO is just the norm in all other languages, so it's not even a drawback.

But let's put that aside. You can instead use the ST monad (not to be confused with the State monad) and get the same performance benefit of in-place update of values.


Use the ST monad? :)


Well what kind of values and how many updates? You might have to call an external library to get decent performance, like you would use NumPy in Python. This might be of interest: https://www.acceleratehs.org/


You can use apecs, a pretty-fast Haskell ECS for those sorts of things.


"Pretty fast".. relatively speaking, considering that it's in an immutable, garbage collected language. Still woefully slow compared to anything else out there(say, bevy? which incidentally works similarly to apecs) and mostly practically unusable if the goal is to actually create a real product.

Want to just have fun? Sure.


You can create a real product with apecs lol. It is not going to be what blocks an indie game written in Haskell, for instance. And you could totally use it to write simulations for stuff too.

Also from the apecs README:

> Fast - Performance is competitive with Rust ECS libraries (see benchmark results below)

Sounds like that "woefully slow" judgment of yours wasn't based on any real experience but rather just your opinion?


Here is a good middle ground between using microservices and a monolith: create your product as a series of libraries that can be used to build either a microservices-based product or a monolith product.

In this way, you can turn a monolith into microservices very easily, since the core of your code will already be there.
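A minimal sketch of what that can look like (all the names below are made up for illustration): the business logic lives in a plain library, and the monolith and the microservice are both thin wrappers around it.

    # orders_lib.py -- the shared library; no deployment assumptions
    def total_price(items):
        return sum(qty * price for qty, price in items)

    # Monolith: just call the library in-process.
    def handle_checkout_monolith(items):
        return {"total": total_price(items)}

    # Microservice: the same library behind a thin HTTP wrapper
    # (Flask is used here purely as an example).
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.post("/checkout")
    def handle_checkout_service():
        items = [(i["qty"], i["price"]) for i in request.json["items"]]
        return jsonify(total=total_price(items))

Whether the library ends up behind an HTTP boundary or linked into one process then becomes a deployment decision rather than a rewrite.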


The Settings app is a control panel though.


It's like modern cars with plastic covers on the engine bay to make it look like a "modern engine", and underneath it's all the same one designed in Japan in the '80s.


Mini*


This story makes me wonder if it's better to use Ada/SPARK than Rust, from a safety perspective.


It's a shame, really, that Namco didn't want to pay a lot of money for the rights to Ms. Pac-Man.

I don't understand their approach.

They clearly have a lot more money at their disposal than AtGames. Why not buy the second most significant character in the Pac-Man franchise?


Requirements always exist in a project, even if they are not written.

And how could they not exist, since the only reason a program is written is because of some need...

And since requirements always do exist, they had better be in written form, so that they can be referenced, tested, discussed, etc.

In previous projects, JIRA was used. In the current project, an internal tool is used, pretty much similar to JIRA.

In some older projects, a simple Word document was used and an Excel sheet for tracking progress.

Writing a requirement should be simple enough: it shall either define things, or describe processes and their side effects.

It is better to have a dedicated person write them, because the wording and style of requirements should be consistent, and the person who writes them will gradually become more and more experienced at writing better requirements.

Having good requirements makes it a lot easier for developers to write the required programs.

