Hacker News | catscratch's comments

The majority of US med school students admitted in 2016 were born in 1992. They grew up watching the silly-kids-and-stupid-parents sitcom world of Disney Channel and Nickelodeon while getting soccer trophies for showing up. They wasted a lot of time playing non-educational games rather than reading or working hard to accomplish real things.

While they had to work incredibly hard to get into medical school, and it's harder than it's ever been, I feel like anyone born in the 90s or later is going to be at a disadvantage: the level of stress is not only much higher than before, but many just aren't prepared for it.

<sarcasm>However, I think that because of this, medical schools will have to change. It's just too darn hard!</sarcasm>

Getting real though: med school is f'ing hard. Reading this did not surprise me, and if anything, I was surprised that the writer did not know what he was getting into when he got in. The things he is complaining about are the same things I've heard for decades.

Also- the Helen Keller comment, while disrespectful to those that are hearing and/or sight challenged, made me laugh.


I can't tell if this is sarcastic, but I think it's important to point out that much of the challenge and stress of US medical school is self-inflicted by the current culture, and leads to bad health outcomes for patients.


My point was that medical school culture hasn't changed. It's been brutal for years.

The main difference between those going to med school in the 90s and now, in my opinion, is that students expect the increase in difficulty to be comparable to the jump from high school to college, or from college to graduate school. It's not, and it never was. High expectations have always been the rule.

However, for the past 25 years, there have been many more immigrants or children of immigrants who work their asses off, competing harder and raising the bar.

The medical schools should not lower the bar to make it less stressful. Instead, we need more would-be medical students to matriculate into P.A. programs and nursing programs, where they can do just as much good helping many of the same patients, sometimes making similar salaries. Eventually there will be more medical schools, which will help some. Or maybe some of the doctors who feel they had to go through too much can work their way up and teach, so that they can give their students an easier, individualized, and sensitive education and see where that leads.

However, imo it should always be extremely difficult to get in and to succeed. That's the point. I don't think people should commit suicide. They should just quit.


There is little evidence that hazing leads to better outcomes. Nor does making medical school technically difficult lead to better outcomes. The US takes longer to train its medical professionals than most other places, leading to more expensive care and an overall deficit of doctors.

As somebody who works with pathologists, I don't find the academic part of medicine more difficult than, say, physics, but the pay is substantially different. This artificial labor shortage is protected by the systemic failures discussed in many of the HN comments.

None of these things are defensible.


I only have a very shallow understanding. I think the term 'resident' comes from actually living at the hospital. A resident was expected to care for a single patient from beginning to end. They needed to be available at all times to handle that specific patient's needs.

There are deep challenges with continuity of care. How does one person hand off treatment to another?

Imagine you and I are both working on a project. There's only one computer: you use it from 6AM to 6PM, I use it from 6PM to 6AM. We can only work on one feature at a time. How do we coordinate that handoff every day? That alone sounds like a huge pain. With a doctor, everything can be super time-critical and requires just as much depth of understanding. You may have an understanding of something that I didn't pick up in the handoff. Building a project, we roll back my code, talk some more, and do better the next day. With doctors, maybe somebody dies.

I know it's a convoluted example. I imagine most stuff is routine and can be passed around safely. But it's only routine until it isn't, and there's no way to know up front. As much as developers hate being treated like cogs, it seems like it would make doctors' lives quite a bit easier. It just seems like a super hard problem. And if you don't get it right, people could die. So, like, that sucks.


You meant to reply to a different comment?


Vague, long-winded, roundabout way of saying maybe it's not self-inflicted. Continuity of care is hard. Maybe fewer patients or other responsibilities would help. Ultimately, in my shallow understanding, residents just sort of have to be there to take care of their patients, because passing patients around is dangerous.


Where the change will come is when we are no longer dependent on paying for network access. That will be the financial incentive.

A fully decentralized web would be delivered peer-to-peer via a mesh network or something similar. Anything else is a farce, because it's not just about who holds the data on the network, it's about the network itself. If any link in the chain of communication can be controlled by some centralized power, it's not decentralized.

On a fully decentralized network, the three things restricting freedom and privacy would be:

* the personal device used to (inter)connect, like malicious code hiding in firmware

* those controlling the power needed for the device

* those that can interfere with communication or alter data on the distributed network, either as a peer on the network, through malware/disruptive communication on the network, or those blocking communication

The problem with the decentralized web, though, is that when the web is fully free, if everyone stores part of the content from everyone else, then they could be storing things that are illegal or that they don't agree with. I personally don't want to participate in any network where I can't control what data is stored locally.


For me the following points come to mind reading something like this, which I think you also partly addressed:

1) Cloud hosting --> might as well use the centralized application because this is still centralized.

2) Peering your own hardware for a direct internet connection is not accessible to most people.

3) Even in a p2p context, your PC or mobile device is not under your full control, and especially in the US, most broadband goes through one of a small number of service providers (Comcast, for example).

So I agree with you: network architecture is a major stumbling block.

On second thought, though, that still doesn't mean we shouldn't strive to create "decentralized" applications that are accessible to most average people somehow. I believe this is possible and without a huge upfront cost (besides time). They could still gain power through traction.

And as far as hosting content for others in a p2p context, I think part of the point of a scheme like that is you should never be able to know what you're hosting. It should be opaque encrypted blocks of data, right?


This won't happen anyway, because it's a classic collective choice problem. If all people put some alternative software on their WiFi routers at the same time, this fully decentralized mesh web would come into existence at once in most densely populated areas in Europe and the US (all big cities at least), and almost everybody would have an immediate advantage from it. But if only a few people do it first, they'll have tremendous disadvantages due to free riders, abuses, etc. So it won't ever happen.

I remember seeing a protocol for such ad hoc mesh networks that wasn't even IP-based and could be implemented on most routers. The network can dynamically self-configure, route around failures, and nodes can come and go whenever they want. It looked pretty cool, but unfortunately I can't remember where I saw it. :/



> Where the change will come is when we are no longer dependent on paying for network access.

When is that going to happen? All those routers, etc. don't run on fairy dust and the people that keep them serviced and running kind of enjoy doing things like being able to afford to eat, etc.

Even if you get rid of that part, you have to provide power to the devices and pay for a place to put them. Don't expect that a local telco's going to let you just slap a box on their pole rent-free. Any halfway savvy businessman is going to say "Good of humanity? That'll be $25/mo to put your box in my shop." unless there's some larger, provable benefit to him or herself.


> When is that going to happen?

It already happens/has happened with AMPRnet, HSMM, CUWiN, Freifunk, FunkFeuer, OpenWireless, Firetide, Guifi.net, Netsukuku, Ninux, Senceive, and others.

https://en.wikipedia.org/wiki/Wireless_mesh_network

When higher levels of portable energy become safer and cheaper, then it would be more likely that a small radio could be used to transmit and receive over longer distances.

If a company like Google or Facebook were to provide free, adequately high-speed internet access, there'd be less incentive for a free peer-based network wherever they provided it. Google's free wired plan is lower speed and they didn't plan to expand it; however, I'm not sure about their plans for WiFi. Neither Facebook nor Google has stated they plan to provide access everywhere. So, free access via a peer-based network would remain an incentive over paid plans for the foreseeable future.


Didn't work for me either time I tried.


You are right; with the current algorithm it is quite difficult to fully unscramble the image into one single strip. For now, the best you can do is get a bunch (~20) of strips after annealing, and manually flip and rearrange them (in an image editor) to get back the original image.

The default temperature of 4000 is fine for all the example images at 600×400.

The default 30 million iterations leads to a reasonable run time of about 10 seconds, but leaves many small strips after annealing. Bumping the number of iterations up to 300 million, 3000 million, etc. will take proportionally longer and give marginally better results.
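For intuition, here is a generic sketch of the Metropolis accept/reject core that simulated annealing uses. The real energy function (something like pixel mismatch across strip boundaries) is replaced by a toy quadratic, and the cooling schedule is illustrative only, not the tool's actual one.

```javascript
// Generic simulated annealing loop (sketch). `energy` stands in for the
// real cost, e.g. mismatch between adjacent image strips.
function anneal(energy, initial, startTemp, iterations) {
  let state = initial;
  let e = energy(state);
  for (let i = 0; i < iterations; i++) {
    const temp = startTemp * Math.pow(0.999, i); // exponential cooling
    const candidate = state + (Math.random() - 0.5); // small random move
    const ce = energy(candidate);
    // Always accept improvements; accept regressions with a probability
    // that shrinks as the temperature drops. High early temperatures let
    // the search escape local minima; more iterations means a gentler
    // schedule, hence the marginally better results noted above.
    if (ce < e || Math.random() < Math.exp((e - ce) / temp)) {
      state = candidate;
      e = ce;
    }
  }
  return state;
}

// Toy energy with a single minimum at x = 3.
const result = anneal(x => (x - 3) ** 2, 0, 10, 30000);
console.log(result); // lands near 3
```

The starting temperature plays the same role as the tool's temperature parameter: too low and the search freezes into local minima (lots of small strips), too high and it never settles.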


Based on the description on the webpage, I expected it to work on the default image with default parameters.


As stated in the "notes" section, you need to set the number of iterations to about 1 billion, and also adjust the temperature according to the picture's characteristics.


> stay away from: in sales for 8-10 years.

There are definitely startups that do the "stay away from: in development for 8-10 years" thing; however, it's worded as "must be enthusiastic code ninja."


> soulless cookie cutter process that makes people feel like a commodity

I agree most should steer clear of them, but the interview process is often an accurate representation of the level of the team:

* If the interview involves a set of questions copied off the internet, verbal or written, they don't have the skills on the team to have a dynamic discussion.

* If the interview doesn't have enough standard questions, then they're either inconsistent or rely heavily on certain people's intuition; the latter could go either way, but the former could mean complete disorganization.

* If they expect you to know the circumference of the Earth (equatorial 24874 mi/40030 km, meridional 24860 mi/40008 km) and capital of Sumer in 2281 (Uruk), then they could be interested in someone that has a scientific mind, great memory, and likes trivia or is very into history.

* If they give you problems to solve, they want to see and hear how you think.

* If they give you homework, they probably just want you to provide solutions in code and figure things out on your own to some extent, and the amount of time they give you to do that is indication of how they estimate tasks.


"* If they expect you to know the circumference of the Earth (equatorial 24874 mi/40030 km, meridional 24860 mi/40008 km) and capital of Sumer in 2281 (Uruk), then they could be interested in someone that has a scientific mind, great memory, and likes trivia or is very into history."

===> ... or who has the skill of typing "google.com" into a browser


This doesn't make me want to use JS. The power of JS is in two things: it's in every major browser, and it doesn't completely suck. JS syntax kind of sucks. The power in JS is that it's dynamic and lets you send functions around, but defining functions is much uglier than defining a lambda in Ruby:

-> {anything goes here}

or

->(a,b,c) {anything goes here}

The problem with Ruby is that you then have to .() the lambda vs. (), so that is more verbose than just calling the function.

If browsers were to embrace a language that was more Ruby-like and less clunky than JS, I'm sure I'd use it more.


Ruby suffers from a confusing set of overlapping features in Procs, Blocks and Lambdas. JavaScript's function passing is a lot less opaque.


ES6 lets you define functions like a => a * 2

Or (a, b, c) => a * b ^ c


That's succinct, but less clear than the Ruby version, imo, as you don't have any scope indicators required for the function body.

JS:

  let f = (a,b,c) => a * b ^ c;
  f(2,3,4);
vs. Ruby:

  f = ->(a,b,c) { a * b ^ c }
  f.(2,3,4)
Ruby's shorter and clearer. But when you use the Ruby lambda more than once in the code, you lose the brevity advantage because of the "extra" dot. But in Ruby I use methods more than lambdas, which would be:

  def f(a,b,c); a * b ^ c; end
  f(2,3,4)


> Ruby's shorter and clearer.

I would say the difference is negligible here, and claiming one is clearer than the other is purely preferential. Coming from a JS background, the JS is clearer to me, but not substantially enough to claim its syntax is outright clearer. Someone coming from a Ruby background may argue the other direction.

Some languages are quite different but you are splitting hairs here and trying to be conclusive about it.


Nothing is preventing you from writing let f = (a,b,c) => {a * b ^ c}; in JS


When you include curly braces in arrow functions you lose the implicit return, so this function would return undefined.


You can get the implicit return if you use parentheses instead:

f = (a,b,c) => (a * b ^ c)
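Putting the variants from this thread side by side (plain JS, runnable as-is):

```javascript
// Block body: the expression is evaluated and discarded, no implicit return.
const withBraces = (a, b, c) => { a * b ^ c };
// Expression body (the parentheses just group): implicit return.
const withParens = (a, b, c) => (a * b ^ c);
// Block body with an explicit return.
const withReturn = (a, b, c) => { return a * b ^ c; };

console.log(withBraces(2, 3, 4)); // undefined
console.log(withParens(2, 3, 4)); // 2  (2 * 3 = 6, then 6 ^ 4 = 2)
console.log(withReturn(2, 3, 4)); // 2
```

The braces-without-return pitfall bites most often when returning an object literal, where `x => ({a: 1})` needs the parentheses to disambiguate from a block.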


very true :)


The Ruby version isn't shorter though; you just included things you don't need: semicolons and 'let'.


For those who want to focus without as much jumpiness, Evekeo is a good choice. It's not as much of an upper since it has less dextro, but it still works fine.


I hadn't heard of Evekeo, and was surprised to find out that it is just Amphetamine Sulfate. Is it supposed to be an immediate release preparation?

I've been taking Vyvanse for a while now. Lysine-dextroamphetamine is really working great for me. Lasting almost 10 hours, not over- or under-stimulating.


I liked Vyvanse more than Evekeo. I had much less depression, was much more motivated and able to focus during the day as well as evening, and I still could get to bed easily and sleep well-enough. But, I acted too hyper on it.

Perhaps I could practice more self-control and continue taking it, or could try taking half the dose, but after I had problems, my doc put me on Evekeo. I take one in the morning and one mid-day, and I'm much more even-keeled than on Vyvanse. The downside is that it's not time-released: I have ups and downs, and my brain doesn't work as well in the evening as it did on Vyvanse. I also don't get sleepy at night when I should. But it's much, much better and safer for me than not taking anything. I might start taking a low dose of Zenzedi in the afternoon to see if that helps with getting to sleep.

Evekeo is 50/50 dextroamphetamine/levoamphetamine; in some sources it's just called amphetamine sulfate, but the ratio is important.

Zenzedi is also made by Arbor Pharmaceuticals and is pure dextroamphetamine sulfate (which is the really addictive one).

Adderall/Adderall XR is 75 percent dextroamphetamine (the regular Adderall is a mixture of dextro salts in that 75 percent) and 25 percent levoamphetamine.

In comparison, Vyvanse is the prodrug lisdexamfetamine that is converted to dextroamphetamine and lysine by the hydrolytic activity of red blood cells, so it's time-released.

I'm told the levo form is for increased wakefulness and improved focus and dextro is for energy and speeding up executive function. I don't know if it's really that simple, but that seems to mirror my experience.


http://regionals.burningman.org/

The map in that would probably overlay nicely with a global wealth map:

https://cdn.credit-suisse.com/articles/news-and-expertise/20...


A history of new things geared toward web app developers, starting with relevant popular technologies:

Late 1970s: microcomputers, explosion of BASIC and ASM development

Early 1980s: proliferation of modems; BBSs become big; CompuServe becomes big, with people able to read news online and chat in real time (but not popular like much later). Software stores, software pirating, computer clubs, widespread use of Apple IIs in schools. Microsoft Flight Simulator, released in 1982, is the first super-popular 3D simulation software.

Mid-1980s: GUIs- Macintosh 1984 based on ideas from Xerox PARC.

Late 1980s: Graphics had more colors, more resolution, faster processors. So: cooler games. File servers. 1987 GIF format; 1989 GIF format supporting animation, transparency, metadata (not that popularly used though; it was a CompuServe thing).

Early 1990s: Internet, realistic quality pictures, webpages/browsing, global file servers. Mosaic web browser. Most pages involved horizontal rule dividers that might be rainbow animated GIFs. Bulleted lists. Under construction GIFs were popular. Linux. JPEG format. Netscape. Blink tags.

Mid 1990s: Windows 95 (with Winsock). IE vs Netscape. IE had marquees. VBScript. (Mocha->LiveScript->)JavaScript. Applets. Shockwave. WebCrawler search. Altavista search. OOP pretty solidly how you should program now with C++ having been around for a while and Java slow but write once/run anywhere and OOP. Apache webserver. CGI: can email from webpage.

Late 1990s: ActionScript. Google search. CSS. Extreme programming. Scrum. JSP. Some using ORM via TopLink. Java session vs. entity beans. IIS. Java multithreading. Amazon gets patent for 1-click ordering. AOL Instant Messenger. PHP.

Early 2000s: ASP. .Net/C#. Hibernate ORM (free). Choosing between different Java container servers.

Mid 2000s: Use CSS not tables. Rails.

Late 2000s: SPA and automatic updating of content in background via Ajax. Mobile apps. Mobile web. Scala. Cloud computing start. VMs. Streaming video mature. Configuration management via Chef/Puppet.

Early 2010s: Cloud computing standard. Container virtualization. Video conferencing is normal- not just big company office thing. Orchestration of VMs more normal.

Mid 2010s: Container Quantum computing starts at a basic level (not important yet).

Note how I can't really think of anything recent that has to do with new things in webdev.


Dejavu:

> Early 2010s: Cloud computing

1960s: Client/Server Architecture. Big servers and small clients.

> Mid 2010s: Quantum computing

before 1950s: Analog Computers

https://en.wikipedia.org/wiki/Analog_computer

There is nothing new under the sun. Analog computers passed away because they were not usable. OK, quantum computing may be different, but its practical use is also questionable.


> Dejavu: [...] 1960s: Client/Server Architecture. Big servers and small clients.

This is right and wrong at the same time. Right, because the Cloud reuses some basic concepts from the mainframe era (e.g., virtualization), which had been neglected for some time. Wrong, because writing your application to run efficiently on a mainframe is totally different from writing your application to run efficiently on Cloud infrastructure. Also, there is no thing such as small clients anymore, mobile apps and Web frontends are nowadays as complex as the usual 1980s fat-client software.

IMHO this is a very good example for technology not making circles, but evolving in spirals.


> there is no thing such as small clients anymore

This is also right and wrong :-) Right regarding your perception, wrong regarding relative power. 1960s clients were small compared to today's small clients. However, 60s servers were also small in relation to cloud servers. Today our small clients provide browsers and stuff like that, but they aren't useful without servers. They can't run top-notch 3D games without high-end servers. The third wave of C/S will be in the area of A.I., with (small) clients which will possibly be as powerful as today's cloud servers.


Trends:

Ajax, Long polling, WebSockets

jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React

Javascript debugging tools, profiling, 60fps, responsive pages, AMP

RSS, Web Push, WebRTC

HTTP Auth, Cookies, oAuth, new social protocols

Perl, Java, PHP, Node.js, Go


Thanks! You caught some I missed, so here are some edits and responses to your list:

1. I didn't mean to put "container" in front of quantum computing.

2. I didn't mention the history of certs or encryption, as I think that security is often a feeling rather than a reality. I'm not sure the "HTTPS Everywhere" plugin and the movement that followed in the early 2010s were innovation so much as tightening up security after Firesheep.

3. Yes, I should've included WebSockets over long polling in Early 2010s.

4. Yes, RSS mattered- 1999/Early 2000s.

5. I probably shouldn't have mentioned OOP, etc. as I didn't mean for methodology to matter, since it doesn't matter to users. Similarly debugging tools don't matter for innovations that users see.

6. Yes, fluid layout, grid layout, and responsive design in Late 2000s (though Audi had responsive in 2001).

7. jQuery/MooTools/Prototype, Bootstrap/CanMVC, Angular/React: none of the implementation details of these things matter. The only things that matter are how things appear to the user, like whether a page has a clunky refresh or a smooth transition, and whether things update automatically when they are changed elsewhere. Also, applets, Flash, frames, and the move to JS all screwed the visually impaired.

8. Cookies mattered because they were used to track users in ways they didn't want to be tracked. People disabling JS for a while mattered. US announcing Java was insecure mattered. Flash and Flash being abandoned mattered.

9. Forgot to mention frames in Mid/Late 1990s.

10. As you mentioned oAuth, SSO becoming a big deal in the Late 2000s with Facebook, Google.

And I should have mentioned blogging, microblogging, move of much of the web to Facebook, Tor/private web, peer sharing and impact on music industry as well as impact on the value of well-created data and applications vs. the value of constantly creating data and making data available and clear.

Despite all of the things I missed, the point is that the things that really matter aren't new libraries and frameworks- they are technology and how the world uses it. If a user can't tell a positive difference between something you were doing 5 years ago and today, then you didn't really innovate.


> 1) Drought. Severe drought. Trees need water. There's less of it.

Due to alfalfa, almond/pistachios, pastures, rice, corn, other field crops, vineyards, other truck crops, cotton, grain, and tomato, primarily, according to: http://www.takepart.com/article/2015/05/11/cows-not-almonds-...

I was surprised about alfalfa and pasture.

And "The cattle sector of the Brazilian Amazon, incentivized by the international beef and leather trades, has been responsible for about 80% of all deforestation in the region, or about 14% of the world's total annual deforestation, making it the world's largest single driver of deforestation."

https://en.wikipedia.org/wiki/Deforestation_of_the_Amazon_ra...

Convince your friends not to eat as much meat, especially beef. It's bad for the environment.


Unless you believe rivers flow uphill into the mountains, farming has nothing to do with trees dying in the mountains. Salmon downstream in rivers have a complaint, trees upstream do not.

