
Lol. We do not live in a meritocracy; you do not become a CEO on mere talent.

(I realize you were being sarcastic, but misinterpreting you enabled me to make my point elegantly)


Well, I am sure that he is a CEO in the hearts of the Chinese programmers he hired. :)


It does rather depend on how you define "meritocracy":

http://www.theguardian.com/politics/2001/jun/29/comment


I agree, the way that the system reasons should be configurable by the system itself. Of course you have to give it some fundamental operations (ultimately we operate within the mappings of the universe), but these should be as low level as possible, allowing the system to build its own composite structures with which to reason.


While I think the hypergraph is a great data structure, and I agree that graph rewrite rules, themselves written as graphs, have a lovely symmetry, this approach feels too low level to me. Sure, you can have graphs representing both data and process a la "To Dissect a Mockingbird", but it feels that this interesting interplay of data and process is sitting too high up in the hierarchy of structure. I think you want this happening at a very low level, giving more space for emergence (whatever that may be).

There is not enough symmetry; I think the correct solution to this problem is going to look obvious. I don't think this is an engineering problem; I think it is a radical re-imagining of what intelligence is. My hope is in reservoir computing.

I think what we need to do is a mashup of reward-modulated Hebbian learning and reservoir-style techniques. We need to take a gigantic sledgehammer and smash apart the incoming stream of data, spray it as far across the space as possible, and then linearly compose the pieces to construct something that looks right. Combine this with Hebbian learning so that those mutated, fragmented pieces of the incoming object which are useful for the purpose of the device are made more likely to occur within the network.

So you need a structure where it is possible to enhance the probability of some perturbation of the data through a global learning rule. Then you need a way of bringing those pieces together to reconstruct either the object itself or an object of use. And you need lots of it: billions of active processing elements and trillions of sparse connections.
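A minimal echo-state-style sketch of the idea in Python/NumPy (all the sizes and constants here are illustrative toys, not anything tuned): a fixed random sparse reservoir smashes the input apart across a high-dimensional space, and only a linear readout is trained to compose the pieces back together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random recurrent reservoir: it "smashes apart" the input stream.
# Only the linear readout below is ever trained.
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rng.random((n_res, n_res)) < 0.05                 # sparse connections
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius < 1

def run(u):
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t:t + 1])
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.arange(500) * 0.2)
X = run(u[:-1])          # reservoir states, one row per time step
y = u[1:]                # next-step targets

# "Linearly compose the pieces": ridge-regression readout.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
mse = np.mean((pred - y) ** 2)
```

The reward-modulated Hebbian part isn't shown here; this only covers the "spray it across the space, then linearly recompose" half.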

Just my rant; perhaps it will spark something in a mind elsewhere. Just passing on the pieces of the puzzle that I have smashed apart in my head. Perhaps we need to flip this all around: the patterns come from the world, they take root in our heads and use the substrate to evolve, before passing out into the world again. What a GAI needs to do is provide a place for these patterns to take root and evolve according to the GAI-specific objective function...


I wonder if someday in the future, when AI is commonplace, people will implement these early attempts at GAI on better hardware.

Kind of like what we have done with the Babbage difference engine.


Why is it important to have both n-type and p-type transistors in the same circuit? Why not just use p-type carbon nanotube transistors?


It has to do with small amounts of leakage current at the device level. If you use just one type of transistor (p- or n-type), then when you switch from on to off or off to on you will lose more current than if you use both types in your circuit. Sum this across billions of transistors and the equation starts to make sense.
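A back-of-the-envelope calculation shows why summing across billions of transistors matters. Every number below is an illustrative assumption, not a figure for any real process:

```python
# Static (leakage) power: P = N * I_leak * V_dd, summed over every device.
# All values below are made-up illustrations, not real process data.
n_transistors = 10e9   # ~10 billion transistors on a large modern chip
v_dd = 0.8             # assumed supply voltage, volts
i_leak = 10e-9         # assumed 10 nA leakage per device

p_static = n_transistors * i_leak * v_dd
print(f"{p_static:.1f} W of static power")  # 80.0 W
```

Even a tiny per-device leak becomes tens of watts at chip scale, which is why logic families that cut leakage (like CMOS) win.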


Noise rejection is another important advantage of CMOS. I don't know enough about these materials to know whether it applies here or not.


Can you explain the importance of this leakage? Power-wise, the electricity cost makes sense, but are there other reasons to want to avoid transistor leakage?

Disclaimer: not a hardware guy, but am utterly fascinated; oohs and aahs may spontaneously result.


I'll speak to this a bit...

Transistor logical state, typically logical 1 or logical 0, is represented physically by positive and negative charges, respectively. Therefore, we change the device's logical output between 1 and 0 by switching the output charge between positive and negative.

However, to change the charge of a piece of metal from positive to negative (or neutral to negative... it doesn't matter because we're worried about relative change), you need to physically move electrons (charge) on or off the metal. This movement of charge is real 'work' if you're unable to fully recover the charge when the device changes state. This results in spent energy.

Leakage current is important to consider mainly for power and heat transfer. As we require devices to increase computing performance and decrease power consumption, it is important to use techniques that economically reduce power. As power consumption of a chip increases, the spent energy of a toggling transistor becomes heat that needs to be removed in order to ensure proper functionality (in the worst case, the chip burns itself up). Therefore, proper thermal simulation and validation must be made at the system level (depending on what type of designer you are).
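The "real work" of moving charge described above is the familiar CV² switching energy. A toy calculation, where every value is an assumed illustration rather than real silicon data:

```python
# Energy to charge a node capacitance C to voltage V is E = C * V**2 / 2
# per transition; chip-wide dynamic power is roughly
# P = alpha * N * C * V**2 * f. All numbers are illustrative assumptions.
c_node = 1e-15    # assumed 1 fF capacitance per switched node
v_dd = 0.8        # assumed supply voltage, volts
f = 3e9           # assumed 3 GHz clock
alpha = 0.05      # assumed activity factor (fraction toggling per cycle)
n_nodes = 5e8     # assumed number of switched nodes

e_per_toggle = 0.5 * c_node * v_dd**2
p_dynamic = alpha * n_nodes * c_node * v_dd**2 * f
print(e_per_toggle, p_dynamic)
```

Each toggle spends a sub-femtojoule, but at gigahertz rates across hundreds of millions of nodes it adds up to tens of watts of heat that must be removed.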


Dwolb's explanation isn't quite clear, and I tried to type an explanation as well, but the wiki page has a vastly better explanation as to the advantages of CMOS over NMOS.

http://en.wikipedia.org/wiki/NMOS_logic


Yeah, I lost all interest in the story after seeing this.


The "Pentagon Wars" scene reminds me strongly of software development in many companies I have worked for.


I agree. I am not convinced that such a system is better. But I don't think the opinion expressed is infantile. Angry yes, difficult to convincingly argue, sure. But wrong? Who knows...


I often wonder if there is some notion of a basis of computation in mathematics. You can do stuff in binary or trinary; what about further-out systems? What about working with functions/mappings which take more than two inputs? What can be said about the expressive power of these different ways of computing? Anyone know where I should be looking for this kind of stuff?
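For the number-base part of the question, one narrow, concrete measure is radix economy: the number of digits needed to write N in base b, weighted by the b symbols each digit can take. A quick sketch (an illustration of that one measure, not the deeper notion I'm after):

```python
import math

def radix_economy(base, n):
    """Cost of representing n in a given base: digit count times the
    number of distinct symbols each digit position can hold."""
    digits = math.floor(math.log(n, base)) + 1
    return base * digits

# Ternary edges out binary on this measure; base 10 is far worse.
n = 10**6 + 1
for base in (2, 3, 10):
    print(base, radix_economy(base, n))
```

The classic result is that the optimal integer base under this measure is 3 (the real-valued optimum is e), which is one small formal sense in which bases differ in "expressive efficiency" even though they are all computationally equivalent.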


Describing computation in mathematical terms is arguably the core of Computer Science as a subject, particularly the "Theory of Computation":

http://en.wikipedia.org/wiki/Theory_of_computation

If you are interested in "functions/mappings" then you can look at Lambda Calculus and work your way right up to modern functional programming languages:

http://en.wikipedia.org/wiki/Lambda_calculus
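As a tiny taste of the lambda calculus, here are Church numerals sketched in Python (my own illustration, not taken from the wiki page): numbers are encoded purely as functions, where the numeral n means "apply f, n times".

```python
# Church numerals: arithmetic built from nothing but functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)
six = mul(two)(three)
print(to_int(six))  # 6
```

Everything here is just single-argument functions, which is the whole point: the lambda calculus shows that functions alone are enough to express all of computation.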


I'm reasonably well versed in these topics, but I found them unsatisfying; they don't capture the essence for me. I don't really know what I'm looking for; I just know I haven't seen it yet.


For what it's worth, to all the people I never told that I thought this to be true: I told you so :)

