Hacker News | ctl's comments

Software-only machines are qualitatively different from machines with paper trails: they can be tampered with en masse by a small number of adversaries who don't need physical access at voting time, and such tampering may not leave any trace whatsoever.

Do you really not see how that’s a big deal?


It's politics. He's saying it because it's an effective thing to say, not because he actually didn't know health care is complex.


I doubt it. I don't think he is that smart. He can't even talk like a normal person.


So your philosophy is, "I only do healthy things if they're also painful"? That doesn't seem productive.


Sure, you can boil anything down to a contrived sentence that misses the point, but I never once mentioned it being painful. It's much more about building habits, and it's up to you to decide if habits are meant to be painful or not. Choosing healthy ingredients, researching recipes, cooking while I listen to a podcast, and then enjoying a nice meal with good company isn't what I consider painful.


Let's look at the most important section of the paper. He estimates the processing power of the brain:

The human brain contains about 10^11 neurons. Each neuron has about 5 • 10^3 synapses, and signals are transmitted along these synapses at an average frequency of about 10^2 Hz. Each signal contains, say, 5 bits. This equals 10^17 ops. The true value cannot be much higher than this, but it might be much lower.

In other words, there are about 5 * 10^14 synapses in the brain, each synapse transmits up to 100 signals per second, and each signal can be encoded in roughly 5 bits. That's ~10^17 bits per second.
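To make the arithmetic explicit, here's the same back-of-the-envelope calculation as a tiny Python sketch (the inputs are just the rough order-of-magnitude figures quoted above, not measurements):

    # Bostrom's figures, plugged in directly (all rough order-of-magnitude estimates)
    neurons = 1e11              # ~10^11 neurons in the human brain
    synapses_per_neuron = 5e3   # ~5,000 synapses per neuron
    firing_rate_hz = 1e2        # ~100 signals per second per synapse
    bits_per_signal = 5         # assumed information content of each signal

    synapses = neurons * synapses_per_neuron                       # ~5e14 synapses
    bits_per_second = synapses * firing_rate_hz * bits_per_signal
    print(f"{synapses:.0e} synapses, ~{bits_per_second:.1e} bits/s")  # ~2.5e17 bits/s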

So, uh... does anybody else notice that that's not an estimate of processing power?

That's an estimate of the rate of information flow between neurons, across the whole brain.

The level of confused thinking here is off the charts. Does this guy not understand that in order to simulate the brain, you not only have to keep track of information flows between neurons, you also need to simulate the neurons themselves?

That's not merely a flaw in his argument. It indicates that he has no idea what he's talking about, at all.

Needless to say, this paper and its conclusions are complete nonsense.


Neural nets require a couple of FLOPs per synapse. The processing power required is a direct function of the number of synapses. Each neuron is essentially applying a particular logical op, and counting the neurons and their inputs gives you the number of ops. I don't get why this seems so objectionable.

Sure, real neurons in the brain might be doing something a couple of orders of magnitude more complicated than the nodes in an ANN, so you could tack on another 10^2 factor to those estimates if you like. But fundamentally, counting synapses is a reasonable way to get a Fermi estimate of the brain's processing power, and Bostrom's estimates are not significantly different from those others have arrived at by similar methods.
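As a rough sketch of that Fermi estimate (the FLOPs-per-synapse figure and the extra 10^2 complexity factor are assumptions for illustration, not measured values):

    # Fermi estimate of brain processing power by counting synaptic events
    synapses = 5e14             # from the figures above
    firing_rate_hz = 1e2        # ~100 events per synapse per second
    flops_per_event = 2         # "a couple of FLOPs per synapse", as in ANNs (assumption)
    complexity_factor = 1e2     # optional factor for real neurons being more complex

    baseline_ops = synapses * firing_rate_hz * flops_per_event
    print(f"baseline: ~{baseline_ops:.0e} ops/s")                                     # ~1e17
    print(f"with complexity factor: ~{baseline_ops * complexity_factor:.0e} ops/s")   # ~1e19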


You’re right. I didn’t read the paper very carefully, and was myopically focused on the emulating-a-real-brain AI strategy. As in, let’s slice up a real human brain, map the neurons and synapses, and then simulate them as faithfully as possible.

To do that you need a great deal of fidelity in your simulations of neurons, which are enormously complex. But there is an argument to be made that neuronal complexity is incidental to the brain’s overall “computational capacity”; that you could replace the neurons in a human brain with much simpler nodes and still end up with a functional intelligence, after sufficient rewiring.

I don’t think that claim is obvious, but it’s definitely possible. And if it’s true, you can have human-level intelligence for ~10^19 ops per second, given suitable software.

So I apologize for my post. It was over the top and unfair.

All that said, I still disagree with Bostrom’s conclusions. I think he enormously understates the difficulty of creating intelligent software, if we’re not just copying an existing brain.


I formally studied biology, not CS, partly out of an interest in AI.

Everyone who thinks superintelligence or even just human or higher-animal level intelligence is right around the corner needs to study genomics, proteomics, molecular biology, and neuroscience. Study them with an open mind and think about what's really going on.

A neuron is not a switch. A neuron is an organism. It contains a gene regulatory network more complex than the network topology of Amazon's entire web services stack, and that's just looking at the aspects of gene regulation and enzyme (a.k.a. nanomachine) operation that we understand. There are about 100 billion of these in the brain, and every one of them is running in parallel and communicating constantly. There are also about 10 glial cells for every neuron, and glia are involved in neural computation in ways we know are there but don't yet fully understand. (It seems to be related to longer-term regulation of synapse behavior, etc.) Each glial cell also contains a massive gene regulatory network, and so on.

The CS and AI fields suffer from a lot of Dunning-Kruger effect when they talk about biology. The level of processing power and the parallelism that's going on in the brain of a living thing is simply mind-numbing. It's as incredible as the sense you get of the scale of the universe when looking at the Hubble Deep Field.

Our present-day computers are toys. We are not even close. It would at least take advances equivalent to the ones that took us from vacuum tube ENIAC to here.

Edit: I don't write off superintelligence categorically though. I think we could achieve forms of it not through pure AI but by deeply augmenting biological intelligence. Genetic and biochemical performance enhancement could also play a role. Imagine having more working memory, perfect motivational control, the ability to regulate your own desire/motivational structure, and needing only a few hours of sleep. Cyborg superintelligence is a possibility in the foreseeable future and it does raise issues similar to those the superintelligence folks raise. So I don't dismiss an intelligence explosion. I just very strongly doubt it would be purely solid state.


>The CS and AI fields suffer from a lot of Dunning-Kruger effect when they talk about biology

I'm sure this is right, but what about the reverse -- how much do you know about AI?

AI need not be as complex as natural intelligence to be more intelligent. A lot of the complexity in the natural world is due to the blind and haphazard nature of engineering by natural selection. Do we understand, completely, at a molecular level, the physical and control systems of bird and insect flight? Or how fish swim? Probably not. But by understanding the principles and applying a certain amount of engineering brute force, we've produced machines that by many sensible measures out-fly and out-swim nature's machines.


>Do we understand, completely, at a molecular level, the physical and control systems of bird and insect flight? Or how fish swim? Probably not.

That's an excellent point. But at the same time, we do have some level of understanding of the mechanics of swimming and flying. The same really can't be said of intelligence.


That depends what you mean by intelligence.

We understand enough to build computers that win at chess, to build computers that run financial trading algorithms, to build Google.

I agree that intelligence is in some ways harder to fully define than flight, but that doesn't mean that we don't have any understanding of any parts of it.


Google's definition (which is a good starting point) of intelligence: "the ability to acquire and apply knowledge and skills."

As far as I know, we have very little if anything in the way of software that accomplishes general learning (not limited to a specific domain).


Of course we are far from reaching human level yet, but a generalised Moore's Law means the number of years until we reach human level is not that large.

There is of course the issue that, since brains evolved rather than being designed, they can be inefficient in their processing. Look at how poor humans are at arithmetic - we need to divert a huge fraction of our processing power to do what a computer designed for arithmetic can do very efficiently.
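Purely as an illustration of what a generalised Moore's Law extrapolation looks like (the starting compute figure and the doubling time below are assumptions, not claims from this thread):

    import math

    current_ops = 1e16     # assumed: order of a large present-day machine
    target_ops = 1e19      # a brain-scale estimate discussed above
    doubling_years = 1.5   # assumed doubling time for available compute

    doublings = math.log2(target_ops / current_ops)
    print(f"~{doublings:.0f} doublings, ~{doublings * doubling_years:.0f} years")
    # ~10 doublings, ~15 years under these assumptions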


Is Moore's Law still a thing?

I don't doubt we can go far beyond present compute power since I am far beyond present compute power and I am reading this. But is the economic driver there?

At the endpoint most people use PCs, tablets, and phones to browse the web, write e-mails, play games that are already pretty good, etc. In the cloud we can always just make data centers larger.

There's obviously always a push for speed and density, but is that push still powerful enough to pump the billions upon billions that will be required to make leaps into areas like 3d circuits, photonics, quantum computing, etc.? At what point does the economic driver drop below the threshold needed to overcome the next hurdle?

First we flew in balloons. Then we flew in fixed-wing airplanes. Then we motorized them even more and fought wars with them. Then we built jets. Then we broke the sound barrier. Then we went to orbit. Then we built the SR-71 Blackbird and pioneered stealth. Then we landed on the moon.

Then nothing happened in aerospace until Elon Musk, and he's just getting back to where NASA should have been in the 80s. Meanwhile the Concorde is still cancelled and commercial flights are no faster than they were in the 70s.

I'm a bit concerned that computing is about to do what aerospace did. I take some of the breathless hype you hear today as a contrarian indicator, since before aerospace went comatose we saw this:

http://i.kinja-img.com/gawker-media/image/upload/t_original/...

I hope not but history does rhyme and economies are more powerful than wishes (or even governments).


I'm not sure how seriously we should take Moore's Law when it comes to these things. It applies pretty well so far to the development of silicon-based microprocessors, but at some point, we're going to come up against some hard physical limits on those. Once that happens, we may be stuck until we can come up with something fundamentally new.

We already seem to be up against some limits on single-threaded processing power - it doesn't seem to have gone up all that fast over the last few major cycles of processor development.


This is why I said generalised Moore's law, not Moore's law. We are pretty much at the limit of current designs, but there is still plenty of room for parallelising computation.

I do agree we are going to need something new to get to human level.


> It contains a gene regulatory network more complex than the entire network topology of Amazon's entire web services stack...

Maybe it's not a fair comparison but I decided to look it up:

The human genome has about 3.2 billion base pairs; at 2 bits per base pair, that's about 6.4 Gbit = 800 MB. The size of linux-4.4.1.tar.gz is about 83 MB. So, in a sense, the human genome is only about ten times the compressed size of the Linux kernel, never mind everything on top of that.
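The arithmetic behind that, for anyone who wants to check it (the sizes are the rough figures quoted above):

    base_pairs = 3.2e9
    bits = base_pairs * 2            # 4 possible bases -> 2 bits per base pair
    megabytes = bits / 8 / 1e6       # ~800 MB
    linux_tarball_mb = 83            # linux-4.4.1.tar.gz, as above
    print(f"genome ~{megabytes:.0f} MB, ~{megabytes / linux_tarball_mb:.0f}x the kernel tarball")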


> A neuron is an organism. It contains a gene regulatory network more complex than the entire network topology of Amazon's entire web services stack, and that's just looking at the aspects of gene regulation and enzyme (a.k.a. nanomachine) operation that we understand.

Can you give some more details about this? How are you quantifying the complexity of a neuron and of the AWS stack?


Interestingly, recent research suggests synaptic variability does come to about 5 bits.

http://www.salk.edu/news-release/memory-capacity-of-brain-is...

“We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about eight percent different in size. No one thought it would be such a small difference. This was a curveball from nature.” Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.

It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small. But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few. “Our data suggests there are 10 times more discrete sizes of synapses than previously thought.” In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
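The 4.7-bit figure is just the base-2 log of the number of distinguishable synapse-size categories; a one-line check:

    import math
    categories = 26
    print(f"log2({categories}) = {math.log2(categories):.1f} bits")   # ~4.7 bits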


Bostrom's paper is based on Hans Moravec's thinking, and Moravec's paper is pretty well argued: http://www.transhumanist.com/volume1/moravec.htm

Bostrom, as a philosopher, may be fuzzy on processing power, but Moravec, who was actually building robots, has a pretty good grasp.


A recent estimate of the bits per synapse found it to be an order of magnitude higher than previous estimates: http://www.eurekalert.org/pub_releases/2016-01/si-mco012016....


Yup. I want to again quote a paper that was recommended to me on HN a while back [1]:

However, the relevance of Turing model is questioned even in case of present-day computing [33] [34]. Indeed, any computing machine that follows a Turing model would be highly inefficient to simulate the activity of biological neurons and experience an increased slowdown. Since the super-Turing computing power of the brain has its origins in these ‘strong’ interactions that occur inside neurons, current models have missed the most important part. Simply, Nature doesn’t care if the N-body problem has analytical solutions [36] or can be simulated in real time on a Turing machine [37].

...

While previous models have attempted to represent Hamiltonians using Turing machines [35] the paper [1] shows that the Hamiltonian model of interaction can represent itself a far more powerful model of computation. Turing made an important step forward; however, there is no need to limit natural models of computation to Turing models. In this sense, the new framework of computation using interaction is universal in nature and provides a more general description of computation than the formal Turing model. In other words God was unaware of Turing's work and has put forward a better model for physical computation in the brain.

http://arxiv.org/ftp/arxiv/papers/1210/1210.1983.pdf


Well, if it's possible to build a human level intelligence, it's probably possible to build an intelligence that's much like a very smart human except it runs 100x faster. And in that case, somebody with sufficient resources could build an ensemble of 1000 superfast intelligences.

That's a lower bound on the scariness of AI explosion, and it may already be enough to take over the world. Certainly it should be enough to take over the Internet circa 2015...

To my mind it seems pretty clear that if AI exists, then scary AI is not far off.

That said, I don't worry about this stuff too much, because I see AI as being technically much harder, and much less likely to materialize in our lifetimes, than articles like this suppose.


My understanding is there are two reasons you want inflation:

(1) Inflation means real interest rates can go negative, which means it's harder to get stuck at the zero lower bound (as we are today).

(2) Wages are sticky, i.e. workers fight very hard against having their wages reduced. When there's inflation you can effectively cut wages by giving raises below inflation. Without inflation it's very hard to cut wages, and the economy becomes brittle.
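A quick illustration of point (2), with made-up numbers: a nominal raise below the inflation rate is a real pay cut.

    inflation = 0.03        # 3% inflation (illustrative)
    nominal_raise = 0.01    # 1% nominal raise (illustrative)
    real_change = (1 + nominal_raise) / (1 + inflation) - 1
    print(f"real wage change: {real_change:+.1%}")   # about -1.9%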


That's exactly why it's called a deflationary spiral: once you're in it, it's hard to get out even with aggressive monetary policy.


Maybe the problem is that they're trying to hold asset prices up at a completely bananas level. And because it's such a crazy high level, the only way people will hold them is constant money printing.

Perhaps letting the prices return to levels that are based on fundamentals instead of constant printing would solve the problem. There's no guarantee it becomes a spiral, especially if the deflation is mild.

Mild deflation with zero interest rates is roughly the same as zero inflation with slightly higher rates or mild inflation and moderate interest rates. There's very little technical difference between the three so long as the delta is only 2% or so.

In other words, the world wouldn't grind to a halt with 2% deflation. It's 20% deflation that causes the spiral, not 2%. Unless of course you're so heavily financialized that 2% makes all your models blow up and your banking sector is so entrenched in and around the government that they can hold an entire country -- if not the world -- hostage.
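A rough way to see the claimed equivalence is the Fisher relation, real rate ~ nominal rate - inflation; with illustrative numbers, the three regimes above all land on about the same real rate:

    scenarios = {
        "mild deflation, zero rates":         (0.00, -0.02),
        "zero inflation, slightly higher":    (0.02,  0.00),
        "mild inflation, moderate rates":     (0.04,  0.02),
    }
    for name, (nominal, inflation) in scenarios.items():
        print(f"{name}: real rate ~ {nominal - inflation:+.0%}")
    # all three come out to roughly +2% real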


> My general impression is that more and stricter regulations in an industry is usually (though not always) a bad thing for consumers in that industry

In cable internet, the average consumer has only one or two potential providers. On top of that, companies that try to break into the industry face significant political and regulatory challenges when they lay fiber.

I don't understand how you can think this kind of industry should be unregulated. There are few market forces keeping providers in line, and we see the results of that in the industry's attempts to extort internet companies with throttling.


The industry is already highly regulated, and there's a difference between thinking an industry should be unregulated and thinking that not all conceivable regulations are good for an industry.


This is astonishingly high quality for pop science. It deserves more upvotes!


You're complaining about the system... in response to a powerful instance of the system working correctly? What?


Yes, perhaps one instance, but the system hasn't finished its work yet.

