Hacker News | rnemo's comments

More than one OpenBSD hacker has used Loongson.


If you want to support MIPS for embedded applications, then Loongson is one of the fastest and cheapest ways to run stuff right now.


How's availability outside of China? I'm not finding many of these systems in the US.


I don't know about the US but there is a place in the Netherlands that says they have netbooks and small systems in stock.


Large (tech) company employee here. For the project I work on, most of our production stack is made out of FOSS software that's wrapped up in support contracts.


Intel's Ivy Bridge processor has been out for almost a year now, and is on the 22nm fab process.


Ivy Bridge still hasn't been released in the server space. The server market currently has Sandy Bridge chips. The Ivy Bridge chips are due at the end of the year.


It's 22nm FinFET. The FinFET process is worth roughly a half to a full generation's advantage in power or speed.


How is Terminal.app embarrassing? In most of the ways that a traditional terminal is important, it acts exactly as expected, and still has convenient "new" terminal things like tabs and profiles.


I think the last time a lot of people used it before jumping ship for iTerm was around 10.4, when it was pretty horrible.


Everything in moderation. Video games can eat your life but so can anything else, it's all about how much self control you have. I still play a lot of games because, hey, I have to do something with my spare time (and having spare time is essential), and I find it to be one of the more engaging things I can do. Not to mention multiplayer games are the only way to do anything with a lot of my friends, as they live in different states.


The PC as a platform for gaming was absolutely dying around the time of the current generation console launch, the console was meeting it (price of games, internet multiplayer, AAA title selection) or beating it (ease of use, cost of hardware, marketing) in nearly every way. At that time the best advantage to the PC was the potential for better graphics, but the cost of a graphics card to do this was more than the cost of an entire console. Plenty of industry vets joked that the PC would soon be only for hardcore FPS or MMO players.

Since then no one thing has happened to bring the PC back to life, nor has there been a single moment where the course changed. Changing to digital distribution, spearheaded by Steam, has arguably been the biggest savior to PC gaming, but the declining price of both good hardware and new games, the rise of indie titles, and the spread of gaming to OS X have all been significant contributors.


I hate being that person, but I have to honestly ask: if the scientific community as a whole cannot agree on exactly how much effect humans are having on the environment today, how can anyone claim, with the level of surety these people seem to have, that something major is going to happen within a lifetime, and that humans can mitigate it?


Public uncertainty does not prevent isolated individuals from getting science done. There's no reason to suspect that publicized uncertainty about global warming has anything to do with objective difficulty in obtaining solid evidence rather than its tendency to attract polemicists, cranks, and opinionated amateurs.


What difference does it make? Humans obviously have the power to drastically alter the planet. Doing so in a sustainable way is just good sense.


I agree with most of your post, except:

"2) All new team members were automatically assumed to be terminated within a month or so. This was usually true. Any new employee that didn't pull their weight, sat around waiting for someone to tell them what to do, or lied in any way was terminated."

Quite frankly, this just sounds like whoever is in charge of hiring on these teams does a shit job of it. If, during an extended interview or round of interviews, a person cannot get enough of an idea of how a candidate will mesh with the team that new hires often last less than 30 days, that person should not be hiring. If that person also does the firing, they shouldn't be allowed near personnel management whatsoever, and their overall ability to lead should be questioned.


If the goal were to minimize the number of false positives, I would agree. But the managers of these teams had enough experience to know that they'd rather hire one person a month for a year and only keep one gem than to spend days or weeks trying to find that one gem and then agonize over firing them (knowing that the hiring process is slow and expensive) when the gem loses its shine.

Having done a fair bit of hiring myself, I can't even begin to reliably identify gems. If you know of a way or can explain how multiple rounds of interviews can do a reliable job of identifying them, I'm all ears. I can tell the stinkers right away I think, but gems, no that's really hard. The best candidates I've interviewed (resumes, work experience, knowledge testing, etc.) haven't had any better luck becoming great team members than those with a mediocre interviewing quality.

Because here's the thing about gems: they only work within their setting, and they can be created, with some effort and skill, right out of raw material.


I disagree with what your idea of proper hiring apparently is: that hiring and testing out a lot of people is often better than spending extra time finding the right person. In my experience, a work environment that is a revolving door of frequently failing new staff is a waste of everybody's time, whereas a work environment with more carefully selected new staff who sometimes fail is only a waste of management's time. Such are the burdens of management.


I agree. This sounds like bad/lazy management, and a broken on-boarding process.

As a consultant, I've grown used to working around bad on-boarding processes but most FTEs aren't used to jumping into existing teams without being given a lot of knowledge. I can imagine tons of great people washing out of such a team not for any good reason but just because they aren't used to self-service on-boarding.


This may depend on the specific job. In sales I could see this with some correct period.


Sales has its own high-risk / high-reward culture that goes along with the people who hunger for it as a career. Since most sales positions are paid on commission, failure to meet sales goals tends to result in a firing.


In my experience salespeople do need some ramp-up time (although it depends on inside/outside, size of sale, etc.), so 30 days may not be reasonable.

Commission vs. base actually argues for letting them stay LONGER. They self-select to leave if not making commission -- at a startup, that could be due to the product not being in the right place, though, so in a startup you often pay more base than at an established provider.

The other issue is that the cost of a "seat" for a salesperson, especially outside, can be really high, independent of production. It's pretty reasonable in enterprise for a great salesperson to be burning $500-1000/day in expenses (flying every couple of days, hotels, cars, meals with clients, etc.). Plus, potentially needing a sales engineer or engineering support from the development team, and of course the opportunity cost of giving them certain sales leads ("these leads are shit! give me the Glengarry leads!").

Enterprise sales is one of the reasons it sucks to do an Enterprise startup.


I don't think interviews, even with technical components, can tell you even with 75% certainty how a candidate is going to perform at the actual job. There are considerations like work ethic and how well someone can grok the actual issues the company faces (as opposed to a toy problem) that you just can't really know until they're actually doing the work (or not). Some people are really smart but turn out to be lazy. How do you weed them out in an interview?


Cloud and IT are not mutually exclusive. The roles they serve are so different that I can't see why anyone bothers to make this apples-and-oranges argument.


Spot on, this whole argument seems to be based on a misunderstanding of what IT does.


A large part of what IT departments currently do will, eventually, be replaced by cloud-based services that are easily managed. When you hire Google Apps, a lot of your server management disappears. When you put your servers on EC2, you no longer need to manage the boxes they run on.

Things like custom app development and account management will remain.


I'm having trouble getting a reference to USAA's current eligibility information, because their website is a mess of badly named links and what Google finds redirects me to their homepage. But as I understand it, pretty much everything under their banking umbrella is now available to everyone, including the credit card and loan stuff, and with it, remote deposit access. That said, these breathless USAA cheerleaders should try having an actual dispute with them sometime; it's a pretty poor experience, reminiscent of the news stories you hear about BoA and similar.


I've been a USAA customer since 1999 and have always had great customer service from them. The most recent instance was about a year ago, when several charges I didn't recognize, for a few hundred bucks each, hit my checking account one morning. They were all from perfume and shoe stores in the UK (I was living in FL at the time). It took about 5 minutes on the phone to get the charges removed and a new card on the way.

