Thalmic (YC W13) introduces gesture control without the cameras (newscientist.com)
194 points by pauldix on Feb 25, 2013 | hide | past | favorite | 47 comments


Just amazing! Hardware startups are beyond sexy. There is something so compelling about tactile devices... no amount of sophisticated software engineering can come close to that feeling. Wipro's chairman was once asked why Wipro still made soap and shampoo even though Wipro was a billion-dollar software services shop. He just picked up a bar of soap and said: it's just very physical. Software, you can't touch, you can't see...

In the early 90s, there was this thing called a Centronics port. You looked behind an old dot matrix printer and found this large, ugly parallel port with 25 pins ( http://bit.ly/YT4xRh ). I asked my professor, "How does the printer print?" He took my question at face value and hooked up the Centronics port to a CRT... soon I realized pins 2-9 would output any goddamn digital signal! Literally anything I wanted. I wrote a QuickBASIC program to output square, sine, and sawtooth waves... and those waves started showing up on the CRT. So it got me thinking: if I fed that 8-bit output to an 8-bit TI DAC, I could make an analog function generator! ( http://bit.ly/WmFy8o ) My final version needed a 741 op-amp and a 555 timer, but it actually worked over a wide range of frequencies, and cost about one hundredth the price of a real Kenwood function generator! Of course, back in those days there was no YC or startup culture, and that was the end of that device... :( But just thinking about that one hardware device gives me more pleasure than all the software I wrote after graduating.
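For the curious, the software side of that hack is almost nothing. Here's roughly the idea as a Python sketch rather than the original QuickBASIC (0x378 is the conventional LPT1 data register and /dev/port is Linux's raw port device; root and a real parallel port are required, so treat this purely as illustrative):

    # Stream 8-bit waveform samples to the parallel port's data pins (2-9),
    # which appear as one byte at the LPT1 data register. An 8-bit DAC wired
    # to those pins turns the byte stream into an analog waveform.
    import math

    LPT_DATA = 0x378  # conventional I/O address of LPT1's data register

    def one_period(wave, n=256):
        """One period of a 0-255 waveform, n samples long."""
        if wave == "sawtooth":
            return [int(255 * i / (n - 1)) for i in range(n)]
        if wave == "square":
            return [0 if i < n // 2 else 255 for i in range(n)]
        return [int(127.5 + 127.5 * math.sin(2 * math.pi * i / n))
                for i in range(n)]  # sine, offset into the 0-255 range

    with open("/dev/port", "wb", buffering=0) as port:
        period = one_period("sine")
        while True:  # output frequency = port write rate / 256
            for s in period:
                port.seek(LPT_DATA)
                port.write(bytes([s]))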


Yes, hardware startups are sexy, but most of them are also very software-intensive. Leap Motion needs a lot of processing power, even for a simple visualizer app, and I bet Myo does too.

Other examples: guidance of things like the Sphero or the Parrot AR Drone is very complex. Usually there's a lot of maths involved, which has to be translated into software.


I would think they just map muscle contractions to commands for the drones. No math required except to parse contractions.


Electrical input from the body is extremely noisy. I wouldn't be surprised if they were using machine learning to train their algorithm to recognize which electrical patterns correspond to which muscles being activated.
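Purely as an illustration of what that could look like - I have no idea what Thalmic actually does - a textbook EMG pipeline computes features like RMS energy over short windows of each electrode channel and feeds them to an off-the-shelf classifier. A toy version with synthetic data and scikit-learn:

    # Toy EMG gesture classifier: windowed RMS features + SVM.
    # The data below is synthetic noise, not real EMG.
    import numpy as np
    from sklearn.svm import SVC

    def rms_features(emg, window=200):
        """emg: (samples, channels) -> (windows, channels) RMS features."""
        n = emg.shape[0] // window
        chunks = emg[:n * window].reshape(n, window, -1)
        return np.sqrt((chunks ** 2).mean(axis=1))

    rng = np.random.default_rng(0)
    # Pretend 8 electrodes; different gestures light up different channels.
    fist  = rng.normal(0, 1, (2000, 8)) * [3, 3, 1, 1, 1, 1, 1, 1]
    point = rng.normal(0, 1, (2000, 8)) * [1, 1, 1, 1, 3, 3, 1, 1]

    X = np.vstack([rms_features(fist), rms_features(point)])
    y = [0] * 10 + [1] * 10
    clf = SVC().fit(X, y)

    test = rng.normal(0, 1, (200, 8)) * [3, 3, 1, 1, 1, 1, 1, 1]
    print(clf.predict(rms_features(test)))  # -> [0], i.e. "fist"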


The math behind this is actually rather complex. Heavy-duty machine learning and statistical analysis.


Well of course. But that's DSP for parsing the muscle contractions, right?

I would imagine you guys didn't do any application-specific math, like guidance of the choppers...? Just curious!


Your last sentence paints a pretty clear picture: get back into hardware building. You said it yourself, we do now have YC and the startup culture.


> The first generation can recognise around 20 gestures, some as subtle as the tap of a finger

I wonder if it's able to distinguish between fingers. If so, it would be possible to create some sort of chorded keyboard, without the keyboard of course. Combine with Project Glass and you have awesome sauce.
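The mapping half of a chorded keyboard would be trivial once the sensing works; something like this sketch, with completely made-up chord assignments:

    # Toy chorded "keyboard": a chord is the set of fingers down at once.
    CHORDS = {
        frozenset({"index"}): "e",
        frozenset({"middle"}): "t",
        frozenset({"index", "middle"}): "a",
        frozenset({"thumb", "index"}): "s",
        frozenset({"thumb", "index", "middle"}): " ",
    }

    def chord_to_char(fingers_down):
        # Unknown chords produce nothing rather than a wrong character.
        return CHORDS.get(frozenset(fingers_down), "")

    print(chord_to_char({"index", "middle"}))  # -> "a"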


> I wonder if it's able to distinguish between fingers

From the FAQ (https://getmyo.com/faq):

"What sort of precision does the myo have?

The MYO senses gestures and movements in two ways: 1) muscle activity, and 2) motion sensing. When sensing the muscle movements of the user, the device can detect changes in hand gesture right down to each individual finger."


I've worked extensively with sensored gloves in my job, both designing the gloves and developing software to do cool things with them. By far the most difficult thing is making a compelling application that effectively utilizes the capabilities of the hardware. This is a very cool piece of hardware, but its success is going to be defined by whether or not they can make it better than existing input devices, which tend to be quite good already.


That was my first thought too. The video game shot made so much sense though.

Good luck to these guys; hope they grow it into a massively successful company.


I'm terribly excited about this interface. I've been hoping that this is the right path towards a Rainbows End (http://en.wikipedia.org/wiki/Rainbows_End) kind of interface.

I was worried that this kind of approach would languish inside MSFT research. http://www.extremetech.com/extreme/133732-microsoft-demos-mu...

Are you worried about the MSFT patent?

Having your arms hanging loosely, or with the elbow supported, while gesturing seems to be the right way to take advantage of human fine motor control when manipulating a digital interface. It is a good workaround for gorilla arm.

I suspect that the 20-year interface that Gabe Newell is expecting will use this kind of input.

http://www.theregister.co.uk/2012/07/26/valve_considered_ton...


This is cool, but I couldn't believe their video included the skiing example. As an avid skier, I can't think of anything more life-threateningly dangerous than messing with menus to share a video while you're sliding down a mountain.

I get that it has nothing to do with the armband itself; it just seems like a very poor choice. I cringed.


It wasn't that cringeworthy!

The skier used it at the top of the run and once at the bottom of the run, to share the video after the jump was complete.

Also, note that if it works as well as claimed, it would be every bit as fast as any other gesture the human body makes. So it's not like you're fiddling with a touchscreen or trackpad while moving down the mountain...


I haven't been this excited about a startup's product in a very long time. I'm constantly tapping out rhythms with my fingers - often quite pleasant and complex ones - and for whatever reason I've never been able to do the same on any kind of MIDI device - a keyboard or a set of drum pads or whatever. I would love to give this a try as a MIDI controller!


Ok, this is what I think we will see a lot of. No talking, no vision, no touching, just some simple gestures.

Dave Rosenthal (the former MIT/Sun/Nvidia guy, not the archivist) and I talked a bit about gesture control when the first MEMS accelerometers came out. They allowed for a wide range of controls, but they drifted horribly, so it wasn't possible to do a "hold for fast forward" kind of motion. You had to have a move to start and a move to end.
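To see why the drift killed "hold" gestures: recovering position means double-integrating acceleration, so even a tiny uncorrected bias grows quadratically with time. A toy illustration with made-up numbers:

    # A constant 0.05 m/s^2 accelerometer bias, double-integrated at 100 Hz.
    dt, bias = 0.01, 0.05
    v = x = 0.0
    for step in range(1, 1001):      # ten seconds of "holding still"
        v += bias * dt               # velocity error grows linearly
        x += v * dt                  # position error grows quadratically
        if step % 250 == 0:
            print(f"t={step * dt:4.1f}s  apparent drift = {x:.2f} m")
    # After 10 s the device thinks it has moved ~2.5 m while sitting still.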

To some extent this is the same problem as the Leap, which can know where your fingers are, but you can't hold a gesture and move your hand (AFAICT; I've not seen the Leap SDK in action yet).


It is nice to see some competition for Leap Motion[1], which made it onto HN's front page a few weeks ago[2].

Getting them integrated with Chrome/Firefox would be amazing.

[1] https://www.leapmotion.com/

[2] http://news.ycombinator.com/item?id=5179335 - and it was not the first time:

http://news.ycombinator.com/item?id=4250536

http://news.ycombinator.com/item?id=4170446



First thing someone should do with this: integrate it with Google Glass. While nodding your head or tapping a touchpad on the side by your temple might be awkward, discreetly swiping up and down with your hand would be more natural and less obvious.


I wonder how well it can perform for amputees. I'm guessing there is HUGE potential here, one that a Leap Motion simply can't compete with.


Just a thought in this field of motion controls - it could be cool to use finger-to-finger contact taps as part of the gesture vocabulary, i.e. tapping my thumb and forefinger together is one gesture, thumb and middle finger another, thumb and ring finger a third, etc. So on top of using motion and speed, you also detect commands when points of contact connect.


Will you be offering the units for free to developers, like the Leap Motion, or should I just pre-order right away?


I wonder how good this can be at spatial accuracy. With optical motion sensing, you can relate gestures to physical space: you can point at something.

I love the concept here, but I wonder how it can go beyond "the user is wagging their finger" to "the user is pointing at an object on the screen".
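One guess - and not anything Thalmic has described - is to use the armband's IMU orientation like a gyro air mouse, with a calibration gesture to pin down an absolute reference. A rough sketch with arbitrary numbers:

    # Gyro "air mouse" sketch: steer a cursor from the armband's IMU
    # orientation, calibrated by a "point at screen centre" gesture.
    import math

    SCREEN_W, SCREEN_H = 1920, 1080
    SWEEP = math.radians(40)  # arm sweep that spans the full screen width

    def cursor(yaw, pitch, ref_yaw, ref_pitch):
        """yaw/pitch in radians from the IMU; ref_* captured at calibration."""
        x = SCREEN_W / 2 + (yaw - ref_yaw) / SWEEP * SCREEN_W
        y = SCREEN_H / 2 - (pitch - ref_pitch) / SWEEP * SCREEN_W
        return (min(max(x, 0), SCREEN_W), min(max(y, 0), SCREEN_H))

    print(cursor(0.10, 0.05, 0.0, 0.0))  # a little right of and above centre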


Depending on the sensitivity of this, the potential is endless. There are many multi-finger gestures that would work very well in three-dimensional space that have not yet been exploited with current technology. I'm excited to see what will come of this.


Just preordered one. If it's as good as it looks in the video, the possibilities for building amazing things are endless. As stated in the New Scientist article, imagine this combined with Glass and some home-control equipment. Oh, the future is now! :)


I guess this could work well with Google Glass.


YouTube video link of the MYO: http://www.youtube.com/watch?v=oWu9TFJjHaM


Wow, not only is the product amazing, but so is the naming.

The product uses EMG, as in electromyography. And it's a personal armband, as in My 'O'.


Good observation! I always like these minimalist names loaded with meaning.

I like the logo too. I see it as some kind of trans-binary, trans-bi-state, non-analog frequency peak-detector (0,1) state, not the ol' tired familiar 0/1 on-off switch. Something new!


Exactly. We were hoping someone would pick up the reference!


This is awesome. The Kinect has a ton of great applications, and this looks even better. I can't wait to play around with one.


This one indeed looks better, but the thing that caught my attention was their vision of enabling humans to do more with computers, to enhance their abilities (finally, no more of the BS that touch is the way forward...). I feel this vision is going to be their killer app (eventually), and one of the best to have come out of YC recently.

Best of luck!


This seems way cooler to me than the ones that use visual input... looking forward to the dev kit!


Amazing that I'm still thinking back to Minority Report over a decade later (released in 2002!).


Given that the technology in that film was designed by actual interface researchers, based on their research, it's no surprise that it continues to resonate so well. Also, sci-fi does have a tendency to inspire, or at least be inspired by, the leading edge of technology.

Said researchers now have a company, Oblong Industries, that's basically making the Minority Report interface a reality: http://oblong.com/


Wow. This is the future of human/machine interface (until we get direct brain/machine interfaces (BMI) working with fidelity and no/little invasiveness). I really hope the actual product works as well and as smoothly as in those demos.


Can you ensure that common activities such as typing, writing, stretching etc. won't be registered as intended triggers for actions? Anyway, amazing product! I can totally see myself using and developing with one!


From their FAQ[0], you have to perform an "unlock"-style gesture to begin input.

0. https://getmyo.com/faq


I wonder if they've looked into adding an NFC chip to the MYO. It may seem crazy for a gesture-control armband, but it could add potential functionality and also improve communication among the masses.


Very cool technology!

Honest question: is this area a patent minefield? Or are the Myo guys busily sewing it all up? (I'm glancing nervously at the may-be-more-tricky-than-anticipated 3D printing field, for instance.)


'k people, I wanna see a coupla-three dev teams getting these patent specs sewn up before Myo flips behind Apple's iWatch wristband wall.

1. Permute & Enumerate Discrete Associative Layer (PEDAL) Group. Enumerate all normative ergo-kinesiologic mappings of discrete hapt events onto an associative array we can MTF onto task expectation frequencies.

2. Dynamic Adaptive Nascent Cognitive Evolution (DANCE) Group. Get past the AA (Acknowledgment Annunciator); transparently and simultaneously entrain both user and device handshake onto evolving MTF, surfacing situ-specific command completion.

3. Comparative Haptics Integral Nuance Advancement (CHINA) Team. Collide Leap, Myo, Kinect, Oculi; onto mobile, glass, watch.

That's it. Git 'er done like yesterday, fellas. Now!

Out!


Very cool.

Just a heads-up: at the bottom of https://getmyo.com/faq, the @ThalmicDeveloper <a> tag is being rendered as text.


Thanks, just took it out!


Does it suffer from "gorilla arm" like the others? Is it necessary to keep one's arm aloft for hours to get in a day's work? It certainly looks exciting for momentary input.


That's one of the advantages of the MYO: your arm can be at your side, on your lap, on an arm rest, etc. It doesn't need to be seen by a camera.


I really hope these guys are in contact with the guys at Oculus. I feel like we're finally at the cusp of some incredible VR technology.


Black magic!



