How do you approach Heidegger's or Derrida's arguments that 'language is complicated, something magical must be going on'? Will you allow for no qualitatively different meaning to language than probabilistic associations of sounds with objects?
I can't speak to the OP, but extraordinary claims require extraordinary evidence; we have yet to observe anything in the natural world where "something magical [really turned out to be] going on", so this really is an extraordinary claim. It has been several years since I was current with the linguistics literature, but as far as I know no extraordinary evidence has been produced.
I don't think that requires that we view language solely as a probabilistic map between sounds and objects. All sorts of emergent behavior appears "magical" at first glance.
I did not reference linguistics literature, but philosophy literature (a distinction worth making because they are approaching the problem at different levels). I've never been good at the 'language problem' elevator speech, but how is it possible that we can capture the full power of language while using language to describe it? Language is a technology that allows for the production of concepts like 'probability' or 'apple'. Language as a tool is so fundamental to cognition that it becomes difficult to separate the two. Heidegger's Being and Time explores that line of thinking, and Derrida loves to play the game of "well that means you can't say anything!" Both have provided me with insight into what "magic" is referencing here (rather than your fairly pithy absence-of-evidence reference).
You're asking if I believe in qualia, and the answer is no. There are firing neurons and that's it. The great variety of ways in which neurons can fire, and the great variety of experiences that shape how neurons fire, combine to form an exquisite set of possible firing patterns (this is literally what makes me me and you you), but ultimately, to misuse Gertrude Stein's famous phrase, 'there is no there there.'
I don't believe I referenced qualia. The problem of language is an emergent phenomenon just as the utility of language is. I don't believe in subjective essentialism either.
It wasn't at all clear that that is what he was asking you. And you need to qualify the sense in which you "don't believe" in qualia. You don't believe that consciousness has phenomenal properties? Qualia certainly exist in some sense.
From what it sounds like, you are just dismissing compelling philosophical issues because it frustrates your beliefs.
Your observation is so vague and general as to be rather meaningless. Almost every physical theory is described by an underlying mapping between inputs and outputs.
The interesting point is the expressive power of your model: to take an example I am somewhat familiar with, current large-vocabulary speech recognizers have millions of parameters. They work relatively well, but they are very difficult to interpret, and it is hard to see how they help us understand how speech recognition actually works in our brain.
To make a somewhat flawed analogy: every Turing-complete language is equivalent, but the machine code of a very large project is not very interesting if you want to understand it, while it is mostly enough if you just want to use it.
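The analogy can be made concrete in a few lines of Python: the same function viewed as readable source versus as low-level bytecode. The two representations are computationally equivalent, but only one is useful for understanding. (This is an illustrative sketch, not taken from the article.)

```python
import dis

def double_all(xs):
    """Readable, high-level description of the computation."""
    return [2 * x for x in xs]

# Both views compute the same thing...
print(double_all([1, 2, 3]))  # [2, 4, 6]

# ...but the "machine code" view is far harder to learn anything from.
dis.dis(double_all)
```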
Do you have any reason to suspect this isn't how the brain works? Maybe language isn't a small set of high-level rules. Why should we suspect it to be? The probabilistic models seem to be very similar to how real people actually learn informal language. Formal languages of course have high-level rules, and these are well modelled algorithmically.
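For anyone unfamiliar with what "probabilistic model" means here, a minimal sketch is a bigram model: it "learns" purely from co-occurrence counts, with no built-in grammar. The toy corpus below is made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies and normalize them to probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
model = train_bigrams(corpus)
# 'cat' and 'dog' come out as the most likely words after 'the',
# purely from counting, with no rules of syntax anywhere.
print(model["the"])
```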
I don't particularly have any reason to believe one way or the other. Certainly, the probabilistic models for language are created "out of the blue" without any attempt to model how humans learn languages.
That is EXACTLY the stance that Chomsky is challenging here. Don't get me wrong, I think the probabilistic view has truth value and is a powerful predictive tool. However, I don't think it captures the phenomenological properties of the act of language based cognition. See my cousin reply referencing Heidegger.
Chomsky claims language is built-in. So probabilistic associations are the exact opposite of his claims. Interestingly, there have been some very good baby studies that show babies inherently know statistics needed to learn probabilistic associations. You can show them a couple different color balls going into a box, then take out a bunch of the rare color that went in, and they will register more surprise even long before they can even talk. Things like that. Chomsky's assertions are, well, what you would expect from someone that old. They didn't understand neurons and biology and genetics so well then, so yay, magic things are possible!
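The statistical intuition behind those ball-and-box baby studies can be sketched as a hypergeometric calculation: a sample dominated by the rare color is far less probable, so an ideal statistical learner should register more surprise. The box and sample sizes below are illustrative, not taken from any actual study.

```python
from math import comb

def sample_probability(n_red, n_white, drawn_red, drawn_white):
    """Hypergeometric probability of drawing exactly this sample
    from the box, without replacement."""
    total = n_red + n_white
    drawn = drawn_red + drawn_white
    return (comb(n_red, drawn_red) * comb(n_white, drawn_white)
            / comb(total, drawn))

# Box: 70 red balls, 5 white. Drawing mostly the rare color is
# vastly less likely, hence more "surprising" to an ideal learner.
p_expected = sample_probability(70, 5, 4, 1)  # mostly the common color
p_surprise = sample_probability(70, 5, 1, 4)  # mostly the rare color
print(p_expected, p_surprise)
```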
In general he does. In this article he doesn't talk about that; he talks about approaches to AI.
> there have been some very good baby studies that show babies inherently know statistics needed to learn probabilistic associations.
Very good. Can we identify how that works, and then build a robot that has the same mechanism in a more efficient way than simply simulating a brain at the molecular level? That is his argument here.
> They didn't understand neurons and biology and genetics so well then, so yay, magic things are possible!
So where were you 4-5 decades ago, when he proposed his theory, to propose a better one?
The idea that meaning consists of associations is extremely primitive. It works for concrete nouns and verbs, but it quickly fails as things get more complex. Language is used to refer to abstract things, imaginary things, counterfactual situations, etc. And even if you do arrive at a series of concepts using associations, you have to understand how they are supposed to combine, even for completely novel sentences. In all these cases, there's arguably nothing there to associate with. I can't answer your question (I think no one can), but we can conclude that meaning is more than associations.
Isn't that Chomsky's argument here (in this article, not in his approach to linguistics in general)? -- That it is a good idea to try to find a better understanding of how the internal mechanisms work so we can build or simulate it better. You do that with carefully constructed experiments not with just observing inputs and outputs and training a neural network or a Markov model with it.
Please argue that there is nothing to associate with.
Why is a real observation from your senses more privileged inside your brain than a random well-formed value produced by a (hypothetical) random-number-generator neuron?
I argued that if you only have associations with previous experiences, you won't be able to deal with novel input. Ergo, you need more than just associations (synthesis, imagination, counterfactual reasoning, etc.).
As to your second question, I don't see how it relates to my argument, but I'll answer anyway. If you're comparing an observation to a random number, you're looking at the observation qua value, in which case it has the same status. If, however, you look at the level of interpretation (what it means in your brain), the observation has a complex set of relations with the rest of your brain and gives rise to a perception, whereas the random number value is just noise that has to be tolerated by the brain.
Saying everything is just probabilistic associations is like saying everything is made from quanta of energy and thus every higher level concept or model is useless -- just simulate the quarks and you are set. Not only that -- simulate it by recording and observing the energy patterns going in and coming out of a black box.
Yes, you can get some things to work, and some to work well, but the idea is that perhaps there is a better model that describes the mechanism or the encoding of meaning. That's what Chomsky is trying to say in this particular article. Stopping at a brute-force approach is a fine engineering choice, but that doesn't mean everyone should stop there; it is still worth trying to find a better model, if only to gain an understanding.
It's skyhooking. Given enough time and enough compute power, we should be able to completely simulate it. It may not be economically feasible, and it may take a long time.