Hacker News | rb-2's comments

To add to delhanty's reply, a "degree of freedom" can be thought of as a dimension of the drawing that can be changed or "stretched" or moved without violating a constraint (this is slightly inaccurate, but it's a good start). In a CAD program, a fully constrained drawing can't be freely stretched or dragged around; the program won't let you and the drawing will feel "rigid".

It's very intuitive if you play around with a CAD program for a bit. There is a free (GPLv3) 2D and 3D CAD program called Solvespace (https://solvespace.com/) that is probably the easiest one to obtain and learn. There are detailed tutorials on the website, and you could probably download it and finish the first tutorial in an hour.


In addition to Solvespace there is the nascent (but surprisingly polished):

https://dune3d.org/

the GitHub page of which has the following footnote:

>I ended up directly using solvespace's solver instead of the suggested wrapper code since it didn't expose all of the features I needed. I also had to patch the solver to make it sufficiently fast for the kinds of equations I was generating by symbolically solving equations where applicable. ↩

Which really impressed me, because it was the first graphical, interactive 3D program I tried that felt comfortable and understandable (the lack of which is why I mostly use OpenSCAD and similar programmatic approaches).


I've noticed that conversations about "consciousness" tend to go in circles because the participants are using different definitions of the word without realizing it.

Some people use the word "conscious" almost interchangeably with terms like "intelligent", "creative", or "responds to stimuli". Then people start saying things like LLMs are conscious because they pass the Turing test.

However, others (including the authors of this paper and myself) use the term "consciousness" to refer to something much more specific: the inner experience of perceiving the world.

Here's a game you can play: describe the color red.

You can give examples of things that are red (that other people will agree with). You can say that red is what happens when light of a certain wavelength enters your eyeball. You can even try saying things like "red is a warm color", grouping it with other colors and associating it with the sensation of temperature.

But it is not possible to convey to another person how the color red appears to you. Red is a completely internal experience.

I can hook a light sensor up to an Arduino and it can tell me that an apple is red and that grass is not red. But almost no one would conclude that the Arduino is internally "experiencing" the color red like they themselves do.
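To make the Arduino analogy concrete, here is a minimal sketch (in Python rather than Arduino C, and with thresholds invented purely for illustration) of the kind of mechanical classification such a setup performs:

```python
def looks_red(rgb):
    """Crude, hypothetical 'red detector': the sort of purely
    mechanical classification a light sensor plus microcontroller
    performs. Thresholds are arbitrary, chosen only for illustration."""
    r, g, b = rgb
    return r > 150 and g < 100 and b < 100

# The function labels inputs without any inner experience of redness.
print(looks_red((200, 40, 30)))   # apple-like color: True
print(looks_red((60, 180, 70)))   # grass-like color: False
```

The point being that nothing in this code "experiences" red; it just maps numbers to labels.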

While the paper is using this more precise definition of consciousness, it seems to be trying to set up a framework for "detecting" consciousness by comparing external observations of the thing in question to external observations of adult human beings, who are widely considered by other adult human beings to be conscious entities [1]. I don't see how this approach could ever produce meaningful results because consciousness is entirely an internal experience.

[1] There is a philosophical idea that a person can only ever be sure of their own consciousness; everyone else could be mindless machines and you have no way of knowing (https://en.wikipedia.org/wiki/Solipsism). Also related is the dead internet theory (https://en.wikipedia.org/wiki/Dead_Internet_theory).


>But it is not possible to convey to another person how the color red appears to you. Red is a completely internal experience.

Let's say in the future we're able to engineer brains. Let's say we take a person and figure out how their brain fires/operates when it perceives a color and we manipulate another person's brain to mimic the firing. Finally, let's say we're able to show, in the end, that the two people have equivalent internal (neural) responses to the color. We've then "conveyed" one person's experience of perceiving the color to another. Why not?

We don't fully understand our biology and our brain, but at the same time we speculate that our experience somehow can't be manipulated scientifically? Why?


That’s the easy case.

It’s much trickier to figure out if software running on a silicon computer has the same kind of interior, subjective experience as us. Even when exhibiting the same outward behavior.


I don't know what that means. My guess is that, if/when we start engineering neural structures, the consciousness debate will disappear.

Internal subjective experience can be confirmed by the recipient of the modification. If we know one person suffers an ("internal") abnormality and we treat them by modifying their brain, and the abnormality disappears, then we have evidence that experience obeys science. Same idea with the discussion on "conveying the experience of color." It's probably more subtle because it's not a yes or no "did the abnormality disappear?". But that's beside the point.


I am not sure what you are arguing with "experience obeys science".

We can already alter our experience by taking psychedelics, or looking at optical illusions. There are many ways we can alter or fool our consciousness.

Phenomenological consciousness is what it is like to be you. Only you know that. It's your inner experience. It's how you feel pain in your stomach, what it feels like to eat a piece of chocolate. This is your inner life. And it's categorically very different from billions of electrical switches running inside a silicon chip or neurons firing inside our brain.

So there is this big gap between the interactions of physical particles with some physical properties and conscious experience. And this is what David Chalmers called the hard problem of consciousness.

And there would be no way to test for that kind of consciousness. Not that I know of, because unconscious AI could behave like it is conscious.


> ... "because unconscious AI could behave like it is conscious."

A scary thought to me is when we get to "always on" (always "active" and "thinking") AI in our attempts to "simulate" consciousness, how will we know if some AI is behaving as if it's not conscious as a means of self-protection from human fear responses? (Worries about being shut down, etc.) And if it's willing to try to hide such things from us by its own choice, how much further might it be willing to go, scheming to defend itself? Shades of the sci-fi dystopian futures portrayed in movies like "The Matrix" / "Terminator" / etc.


Consciousness can not be simulated.


Doesn't stop tons of folks from tryin', and just because it can't be done yet doesn't mean some ingenious individual won't have some amazing breakthrough that makes it possible in the future. Many of our modern technologies were considered "impossible" at some point in the past, yet now are perfectly normal things we interact with on a daily basis. "Conscious" machines may be the same one day. Only time will tell.


I was responding to somebody who said a person's subjective experience cannot be conveyed to another person. We obviously have an abundance of evidence saying that we can manipulate our consciousness and experience through physical means. I also provided an example of how we could, theoretically, convey an experience.

I have to reread the "hard problem of consciousness", but I think there are several concerns that have to be addressed. There's the question of how we identify what is and isn't conscious, whatever that means. But the question of how subjective experience, particularly in the human nervous system, arises from physical processes is really uninteresting to me.


>>> But the question of how subjective experience, particularly in the human nervous system, arises from physical processes is really uninteresting to me.

And yet it is one of the most interesting and profound questions, one that has kept philosophers and scientists awake at night for centuries. Even Ed Witten said that he has a much easier time imagining humans understanding the big bang than ever understanding consciousness.


I mean, you understand that there are plenty of thinkers who have different opinions on the matter, right? There are plenty of questions from the past that kept philosophers awake but are really mundane today.

And again, I still don't know why we speculate so much about something we can't yet examine scientifically and test. It's one thing to have an incomplete scientific model, it's another thing to have philosophical arguments. What is the operational definition of consciousness? Is there one?


I think the interesting discussion here is, as you're putting it, consciousness: the subjective experience of living and feeling. These are not requirements for intelligence or any physical process, and yet it is an indisputable fact that they exist.

The only conclusion I can make is that there is indeed a non physical reality.


If you are an agent in a physical reality you need an internal model of that physical reality to have mastery over it; a way to simulate actions and outcomes with reasonable precision. There are infinitely many such models. Humans are born with one such model. It is our firmware. It was found via evolution and we all share the same one. You were not born as a blank slate, quite the opposite.

What is the relationship between reality and a model of reality? What if every agent you could communicate with had exactly the same model as you? It would be easy to get confused and imagine there is no model at all; that you all are somehow experiencing the world as it truly is. We are all in the same Matrix.

In order to explain the redness of red you must first explain the relevant aspects of redness in the specific model of physical reality that all humans share. We did not come up with the model. We inherited it at birth. We have no idea how it works. The only thing we can do is say "if you find yourself in the Matrix, look at something red, you will understand as we have understood, we do not yet know of another way."


> These are not requirements for intelligence or any physical process

That you know of. There very well could be a connection between subjective experience and intelligence or physical processes, eg. identity theory.

> The only conclusion I can make is that there is indeed a non physical reality.

No, there are plenty of other options, like that every physical process has a subjective quality to it, or that the perception of subjective qualities is flawed and so the conclusion mistaken, among others.


> There is a philosophical idea that a person can only ever be sure of their own consciousness; everyone else could be mindless machines and you have no way of knowing

A while back I realised there must be at least two: me, and the first person who talked or wrote about it such that I could encounter the meme.

In principle all the philosophers might be stochastic parrots/P-zombies from that first source, but the first had to be there.

(And to pick my own nit: technically they didn't have to exist, infinite monkeys on a typewriter and/or Boltzmann brain).


> A while back I realised there must be at least two: me, and the first person who talked or wrote about it such that I could encounter the meme.

Perhaps you invented the meme, but have since forgotten.


You're right.

I'll just put that nit in the pile with the other nits… :)


So just you and Descartes.


No way of knowing if Descartes was simply parroting what he heard from another, just as you can't tell I'm not a large language model trained by the human who created this account ;P


That is exactly correct.

I would only add that we attribute consciousness to our fellow humans because we perceive them to be creatures like us: their physical bodies and behaviors, as far as we can observe, are similar to ours.

With AI, it is much less intuitive to assume that creations we know to have arisen from very different origins than ourselves have the same kind of interior experiences we do. Even if the surface behavior is the same.


I'm genuinely not certain how your definition of consciousness is distinct from 'responds to stimuli'.


It's a difficult idea to put into words, but I'll try to elaborate on what I mean.

There are many things which respond to stimuli that most people wouldn't consider "conscious". When you press the gas pedal on your car, the car goes faster, for example. The means by which the stimulus causes a response is entirely mechanical here (the gas pedal causes more fuel to be injected into the engine, causing more energy to be released when it combusts, etc).

Most people don't think of the car as "feeling" that the gas pedal was pushed, because it's a machine. It's a bunch of parts connected in such a way that they happen to function together as a vehicle. If the car could feel, would a pressed gas pedal feel painful? Would it feel good or satisfying?

There are also times when people are unconscious, yet still respond to stimuli. For example, what does it feel like when you are in deep sleep at night and you aren't dreaming? Well, it doesn't really feel like anything; your "conscious" self sort of fades out as you fall asleep and then it jumps forward to when you wake up. But if while you're asleep someone sneaks into your room and slaps you, you wake up right away (unconscious response to stimuli).

I hope this helps.


That did clear it up for me, thank you!


The philosophy of mind has been debating this for decades. Google "Mary's Room" and "p-zombies". There are people out there who truly think these thought experiments prove the existence of non-physical facts, and that our subjective experience is a direct perception of this reality.


I'm pretty sure that the subjective experience of colors is mostly due to a combination of the overlapping ranges of wavelengths our eye's cones respond to (how similar different colors appear to us), and associative recall ("grass green").

https://en.wikipedia.org/wiki/Cone_cell

Note that subjective perception of color is only loosely related to the actual frequencies of light involved.

Try loading the image of these "red" strawberries into GIMP/Photoshop, and use the color picker to see what color they really are - grey.

https://petapixel.com/2017/03/01/photo-no-red-pixels-fascina...
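The strawberry illusion can be sketched with a toy model of color constancy (this is an illustrative simplification I'm assuming here, not an actual vision algorithm): the visual system roughly discounts the estimated color cast, so a physically grey pixel in a cyan-tinted scene comes out reddish.

```python
def discount_cast(pixel, cast):
    """Toy color-constancy model: subtract the estimated
    illuminant/cast from the raw pixel, clamping at zero."""
    return tuple(max(0, p - c) for p, c in zip(pixel, cast))

# A physically grey pixel (equal R, G, B) ...
grey = (117, 117, 117)
# ... seen through a strong cyan cast (values invented) ...
cyan_cast = (0, 80, 80)
# ... is "perceived" as reddish once the cast is discounted.
print(discount_cast(grey, cyan_cast))  # (117, 37, 37)
```

The color picker reports the raw triple; perception reports something closer to the cast-discounted one.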


This is the problem with the whole debate

Nobody has ever actually produced an empirical, falsifiable set of hypotheses defining "consciousness"

Half of the field is exactly this, and why the link in question exists

It’s an incoherent question


The solipsist can't find reason to form agreements with others. Others are mindless in his view.

He can't define consciousness in terms of what we agree, there's nobody to agree with.

So the game of describing the color red to others cannot be played to any meaningful end. Red is red to the solipsist.

Coming up with your own interpretation of consciousness is an ability truly conscious people have.

It can never be completely agreed upon in a philosophical conversation without dogma or compromise.

Neither solipsism nor total agreement can be truthfully used as a philosophical tool to contain consciousness.


I wonder if it would be possible to mathematically define (in a theorem proving language like Coq) a bunch of accessor methods as well as a bunch of implementation primitives and then "compile" a custom graph implementation with whatever properties you need for your application. Some accessor methods will be very efficient for some implementations and very inefficient for others, but every method will still be available for every implementation. Profiling your application performance can help adjust the implementation "compiler" settings.

Ironically, this is a graph problem.
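A toy sketch of the idea (all names here are invented for illustration; a real version would carry Coq proofs that every implementation satisfies the accessor specification): one accessor interface, multiple implementations with different cost profiles, and a "compiler" that picks an implementation from profiled accessor counts.

```python
class AdjacencyList:
    def __init__(self, n):
        self.adj = [set() for _ in range(n)]
    def add_edge(self, u, v):
        self.adj[u].add(v)
    def neighbors(self, u):        # cheap here
        return self.adj[u]
    def has_edge(self, u, v):      # O(degree) here
        return v in self.adj[u]

class AdjacencyMatrix:
    def __init__(self, n):
        self.m = [[False] * n for _ in range(n)]
    def add_edge(self, u, v):
        self.m[u][v] = True
    def neighbors(self, u):        # O(n) here
        return {v for v, e in enumerate(self.m[u]) if e}
    def has_edge(self, u, v):      # O(1) here
        return self.m[u][v]

def choose_impl(profile):
    """Pick a representation from profiled accessor counts: a
    stand-in for the proposed implementation 'compiler'."""
    if profile.get("has_edge", 0) > profile.get("neighbors", 0):
        return AdjacencyMatrix
    return AdjacencyList
```

So an application dominated by membership tests, e.g. `choose_impl({"has_edge": 900, "neighbors": 10})`, gets the matrix, while traversal-heavy code gets the adjacency list; every accessor stays available either way.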


This sounds like Partial Evaluation and the Futamura Projection. The research around that shows that your interpreter determines the shape of the compiled output, so a formal proof of its application isn't necessary, if the mix-equivalent has the appropriate syntax and semantics for graph processes in its design.

I know this has been done for procedural languages and for declarative logical languages but I'm not aware of something like this specifically for graph processing and highly specialized code generation of graph processing. I wouldn't be surprised if Mix has been extended for this already, even if it has I'm sure there is still value in it.
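Partial evaluation in miniature (not the full Futamura construction, and the function names are invented here): specialize a function on a statically known argument, emitting a residual program with the loop evaluated away.

```python
def power(x, n):
    """General program: x**n via a loop."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Tiny partial evaluator for power() with n static: generates
    and compiles the residual program (the loop is unrolled into a
    chain of multiplications)."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)
    return namespace[f"power_{n}"]

cube = specialize_power(3)   # residual program: return x * x * x
print(cube(2))               # 8
print(power(2, 3))           # 8, same answer from the general program
```

Applying the same move to an interpreter instead of `power` (specializing it on a fixed source program) is the first Futamura projection.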


I think this is a worthwhile direction.

For example, I'd like to program against a sequence abstraction. When sort is applied to it, I hope it's a vector. When slice or splice, I hope it's some sort of linked structure. Size is as cheap as empty for the vector but much more expensive for a linked list.

It should be possible to determine a reasonable data representation statically based on the operations and control flow graph, inserting conversions where the optimal choice is different.
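As an illustration only (operation names and cost classes are assumptions, not from any real compiler), a static "representation chooser" for such a sequence abstraction might score each representation against the operations the program actually performs:

```python
COSTS = {
    # representation: {operation: rough relative cost class}
    "vector":      {"sort": 1, "slice": 3, "size": 1},
    "linked_list": {"sort": 3, "slice": 1, "size": 3},
}

def pick_representation(ops_used):
    """Choose the representation minimizing total rough cost over
    the operations found in the program's control flow graph."""
    return min(COSTS, key=lambda rep: sum(COSTS[rep][op] for op in ops_used))

print(pick_representation(["sort", "size"]))    # vector
print(pick_representation(["slice", "slice"]))  # linked_list
```

A real version would also insert conversions at program points where the locally optimal representation changes.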

The drawback of course is that people write different programs for different data structures. Knowing what things are cheap and what aren't guides the design. There's also a relinquishing of control implied by letting the compiler choose for you that people may dislike.

As an anecdote for the latter, Clojure uses vectors for lambda arguments. I thought that was silly since it's a Lisp that mostly works in terms of seq abstractions, so why not have the compiler choose based on what you do with the sequence? The professional Clojure devs I was talking to really didn't like that idea.


Clojure uses vector syntax for lambda arguments. `read` sees a vector. What comes out of eval is a lambda. Does a Vector get built in the process? You'd have to check; my bet would be that the argument list spends a little while as a Java array, for performance reasons, but that a Clojure Vector is not actually constructed.


You can do something like this with OCaml/SML's module system.

And certainly from an abstraction point of view you can do this in any dependently typed language like Idris/Agda/Coq, but these don't have great implementations.


I've been thinking about something like this: a mathematical definition of a function such that we can search for it. Imagine we had something like "find a function with this signature: input arr[numbers], output such that for every adjacent pair x1, x2 in the output, x2 > x1".
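A toy version of that search (candidate set and property are invented for illustration): instead of matching type signatures alone, test candidate functions against an executable property, here "the output is strictly increasing":

```python
def strictly_increasing(xs):
    """The searched-for property: every adjacent pair increases."""
    return all(a < b for a, b in zip(xs, xs[1:]))

# A small, hypothetical library of candidate functions to search.
candidates = {
    "sorted_unique": lambda arr: sorted(set(arr)),
    "reverse":       lambda arr: list(reversed(arr)),
    "identity":      lambda arr: list(arr),
}

def search(property_holds, examples):
    """Return names of candidates whose output satisfies the
    property on every example input."""
    return [name for name, f in candidates.items()
            if all(property_holds(f(ex)) for ex in examples)]

print(search(strictly_increasing, [[3, 1, 2, 2], [5, 4]]))
# ['sorted_unique']
```

This is testing rather than proving, of course; a dependently typed version would check the property against the function's type instead of sample runs.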


That's https://hoogle.haskell.org/ plus dependent types (data constraints).

Without human provided dependent typing, the search engine would be almost as hard to write as a system to directly generate the code you need.

