
It's actually neither. :)

We have a shared development environment with a persistent (and thus deterministic) dataset. So when one developer runs a query against that dataset, another developer could run the same query and get the same result.

The best thing about this, of course, is that if your Storybook data looks at listing 112358, you can also open that same listing in development and see the same result in the product. Very powerful.
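The core property described above is that a persistent dataset makes queries deterministic: two developers issuing the same query get identical results. A minimal sketch of the idea, where the listing shape and lookup function are hypothetical illustrations rather than the actual API:

```javascript
// Hypothetical in-memory stand-in for the shared, persistent dev dataset.
// Because the data never changes between runs, every query is deterministic.
const sharedDataset = {
  listings: {
    112358: { id: 112358, title: "Cozy loft", capacity: 2, businessTravelReady: true },
  },
};

// A toy "query" against the dataset: the same input always yields the same output.
function queryListing(id) {
  return sharedDataset.listings[id] || null;
}

// Two developers running the identical query see the identical result,
// which is what makes the Storybook data and the dev product line up.
const devA = queryListing(112358);
const devB = queryListing(112358);
console.log(JSON.stringify(devA) === JSON.stringify(devB)); // true
```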


Thank you for the quick reply!

My question was more about how that persistent dataset in your shared development environment is created, though, since the dataset has to exist before people can query against it.

Curious if that creation process is manual or automated somehow through inference on the types in the schema.


For sure. Unrelated to GraphQL in this case. Very manual (in a good way!). Sometimes it's achieved manually via the UI. Sometimes manually via scripts, etc. Imagine you're creating a new field for Business Travel, and you've added some new fields via the API. One way or another you're grabbing 1-2 listings in development, modifying those listings to reflect the desired change, and then in the dev setup, you're saying "grab listings x and y and update their data for use in Storybook and unit tests."
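The "grab listings x and y and update their data" step could be sketched as a small fixture script. Everything below is hypothetical (the listing shape, the field name, the merge helper); it only illustrates snapshotting real dev data and applying a targeted modification for the feature under development:

```javascript
// Hypothetical fixture builder: take a listing snapshot from the shared dev
// environment and apply the modifications needed for the new feature.
function buildFixture(listing, overrides) {
  // Shallow merge: overrides win, everything else comes from the real listing.
  return { ...listing, ...overrides };
}

// Imagine `snapshot` came from the dev API (e.g. fetching listing 112358).
const snapshot = { id: 112358, title: "Cozy loft", businessTravelReady: false };

// Flip the new Business Travel field for the Storybook/unit-test fixture.
const fixture = buildFixture(snapshot, { businessTravelReady: true });

console.log(fixture.businessTravelReady); // true
// A real script would now write `fixture` to a JSON file that Storybook
// and the unit tests both import.
```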

In my experience, if the system is working properly, there's not a lot of room for type-driven inference. We often get a design with very explicit data present, and we want to bring that data in rather than calling on Casual or


Thanks! Yeah that approach makes a lot of sense for the use cases presented in the post, and may be the most pragmatic path for enabling this workflow.

Although I feel the automated data generation approach still has value: it can introduce some variance in the dataset to better represent real-world data, potentially uncovering edge-case issues in the design/implementation of the UI that the conveniently customized dataset that came with the design wouldn't. That said, such an approach will likely also need to let users override/customize the generated data on a case-by-case basis to be useful for real-world applications, so we'll probably end up with a bit of a hybrid approach at the end of the day regardless.
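That hybrid could look something like the sketch below: deterministic generated data (so tests stay reproducible) with explicit per-case overrides. The generator, field names, and edge-case choices are all invented for illustration; the PRNG is the well-known mulberry32.

```javascript
// Tiny seeded PRNG (mulberry32) so "random" fixture data is reproducible.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Hypothetical generator: varied data to surface edge cases, plus overrides.
function generateListing(seed, overrides = {}) {
  const rand = mulberry32(seed);
  const generated = {
    id: Math.floor(rand() * 1e6),
    capacity: 1 + Math.floor(rand() * 8), // varied, edge-case-friendly values
    title: rand() < 0.5 ? "Cozy loft" : "", // e.g. empty titles can expose UI bugs
  };
  return { ...generated, ...overrides }; // explicit overrides always win
}

// Same seed -> same data; an override customizes a single case.
const a = generateListing(42);
const b = generateListing(42);
const custom = generateListing(42, { title: "Business-ready studio" });
console.log(a.id === b.id, custom.title); // true "Business-ready studio"
```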


Very cool! Is the shared dev environment read-only? (Or is it not a concern that one dev with write access might accidentally corrupt the data for the others?)


Hey folks — this post got a great reception on the reactjs subreddit yesterday. Someone suggested I post this as a Show HN, and I think it's a great idea. I'd be happy to answer any questions or talk through our approach if folks are interested.


Awesome!


This is useful, but I am not sure that, at scale (i.e., a constellation of micro-services working together), I would want some of this functionality abstracted behind a library. I prefer gulp, and I wrote a couple of articles talking through a simple process for local development, testing, and deployment, if anyone finds them useful:

* https://medium.com/@AdamRNeary/developing-and-testing-amazon...

* https://medium.com/@AdamRNeary/a-gulp-workflow-for-amazon-la...


The absence of a clear response indicates to me that the brass is currently weighing the pros and cons of admitting there was a problem. Companies that really weren't affected get way out ahead of stories like this, with vivid detail. I deleted my account.


Cause and effect: A statistically insignificant number of fires in Teslas caused a disproportionate amount of news coverage (there was much less news coverage about Tesla's best-ever safety rating). This perception needs to be overcome, even if it means informed consumers having to pay for titanium underneath otherwise safe cars. Tesla is doing their part, but it's a shame to see so many outside factors driving up the cost.


I worry about the effects of overdoing this on the eventual Gen 3 vehicle. Clearly, at least in general, a luxury vehicle that costs $70-100K is going to have more whizbang safety features than a $35K mid-range car. However, if Tesla is advocating this shielding to make electric batteries less likely to catch fire, and the Gen 3 then skimps on electric-specific safety features, they could definitely get PR flak for it. On the other hand, if they want to maintain this super-safe image, it is going to push up the price (and, my guess, the timeline) for the more general-appeal vehicle.

I'm hoping that my current vehicle lasts just long enough (both mechanically and in terms of my patience with its age) for Tesla to come out with something in my price range.


(there was much less news coverage about Tesla's best-ever safety rating)

Probably because NHTSA told them to cut it out.[1]

[1] http://www.theverge.com/2013/11/23/5135258/nhtsa-tesla-star-...


The shame of all of this noise is that resources going into medical research today end up getting spent on data security and on building expensive, custom solutions that avoid using servers of a certain type or location in the name of privacy.

Sure, it would be more secure to conduct medical research without using computers at all, but what about all those people dying of nasty diseases? If I had 6 months to live, I probably wouldn't mind these "criminals" trying to find me a cure.

Instead, we have a deafening din of screaming about data privacy and little or no mention of the benefits of the medical research itself. If people could calm down a little bit about Big Brother, these guys could spend more time doing their jobs, helping sick people.


Medical data is a great tool, but the problem is that these stories are poisoning public goodwill. There is no point telling people to calm down when they have just learned that records of every meeting they ever had with their doctor were available on the public internet and identifiable to anybody who knows their address and DOB. That is something that people rightly get upset about.

Additionally, it's not like these events are all just accidents or incompetence. The UK government made a policy decision to sell medical records to insurance companies[1].

Also, is it really true that release to the insurance industry is unacceptable to the HSCIC? Its own information governance assessment from August says that access to individual patients records can "enable insurance companies to accurately calculate actuarial risk so as to offer fair premiums to its [sic] customers. Such outcomes are an important aim of Open Data, an important government policy initiative."[2]

[1] http://www.telegraph.co.uk/health/nhs/10659147/Patient-recor...

[2] http://www.theguardian.com/commentisfree/2014/feb/28/care-da...


Not underplaying at all - your point is spot on - but this data only relates to hospital attendances and not GP interactions. Currently GP interactions are not available in the database, and that's the point of care.data.


Sorry. Yes you are quite right.

When I said public internet I was actually referring to the things Ben Goldacre has been tweeting ( https://twitter.com/bengoldacre/status/440475049880195073 ) and I'm not sure which data set he is talking about.


"The shame of all of this noise is that resources going into medical research today ends up getting spent on data security and building expensive, custom solutions that avoid using servers of a certain type or location in the name of privacy."

If the various disclosures, legal or not, actual or planned, actually had anything to do with legitimate clinical care or medical research, I think a lot of us would look more kindly upon them. There seems to be little evidence that this is the case, and plenty of evidence that the data was or was going to be disclosed, for profit, to organisations who are not involved in either direct clinical care or legitimate medical research, such as insurers and foreign governments.


It's not reasonable to say that the Model S has a 25x increased chance of catching fire. The sample size is orders of magnitude too small (there were 3 instances so far?).

Put simply, there is no statistically significant difference whatsoever between the Model S and the broader population in terms of fires post-collision, and Musk is understandably frustrated about the bogus press claiming there is.
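The small-sample point can be made concrete with a back-of-the-envelope Poisson calculation. The baseline number below is a hypothetical placeholder, not a real fire statistic; the point is only that when the expected count is tiny, a handful of observed events is entirely consistent with the baseline rate:

```javascript
// Probability of seeing at least k events when the expected count is lambda,
// under a Poisson model: P(X >= k) = 1 - sum_{i<k} e^-lambda * lambda^i / i!
function poissonTailAtLeast(k, lambda) {
  let cumulative = 0;
  let term = Math.exp(-lambda); // P(X = 0)
  for (let i = 0; i < k; i++) {
    cumulative += term;
    term *= lambda / (i + 1); // P(X = i+1) from P(X = i)
  }
  return 1 - cumulative;
}

// Hypothetical: if the fleet-wide baseline rate predicts ~1 fire over the
// vehicle-miles driven so far, how surprising are 3 observed fires?
const pAtLeast3 = poissonTailAtLeast(3, 1.0);
console.log(pAtLeast3.toFixed(3)); // 0.080 -- nowhere near significance
```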


Engineering lets you model things without having to do experiments. It's possible to know that a car design is more likely to catch fire just by analyzing it. You don't have to wait for thousands of cars to catch fire.


And are you saying, then, that Tesla has not done the engineering?


I'm sure they have. But as codex said, Musk is simply not answering the question.


"...the scientific value was questionable..."


It's an interesting product as a tool for documentation, but where it gets really useful is that next step (presumably to come) where the structured content is not only used to mock a server, but to actually implement the server.

I would love to do a one-click deploy to express/everyauth (for node) or a simple sinatra/warden app for the ruby folks.

Once you define the auth requirements and the API itself, there's little left beyond the boilerplate.

I imagine a simple, closed-loop solution with semantic versioning that consumes the blueprint like a config file. As we update the blueprint and bump versions, the appropriate changes would be reflected on the server. Your live environment would be naturally in sync with documentation, and it would be versioned and tested.

Such a solution entirely frees us up to focus on the app itself, which would be very cool.
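The "blueprint as a config file" idea can be sketched as a spec object driving the server's routing. The spec shape below is invented for illustration (it is not API Blueprint's actual format), but it shows how bumping a version and editing the spec could be the whole deployment story:

```javascript
// Hypothetical machine-readable API description consumed like a config file.
const spec = {
  version: "1.2.0",
  resources: [
    { method: "GET", path: "/listings", handler: "listListings" },
    { method: "POST", path: "/listings", handler: "createListing" },
  ],
};

// Turn the spec into a lookup table the server consults on each request.
function buildRoutes(spec) {
  const routes = new Map();
  for (const r of spec.resources) {
    routes.set(`${r.method} ${r.path}`, r.handler);
  }
  return routes;
}

const routes = buildRoutes(spec);
console.log(routes.get("GET /listings")); // "listListings"
// Docs, server, and tests all derive from `spec`, so the live environment
// stays in sync with the documentation by construction.
```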


We do have some prototypes, but we have yet to venture beyond scaffolding. Honestly, we are more focused on providing basic, universal tooling and letting the respective communities handle their-favorite-language/framework bindings.

From what I tried, it is fairly easy to get "almost there", but the remaining 10% is a dealbreaker. If too many editing constraints are placed on the resulting application, it feels odd, and it's easy to break them and thus diverge from the original blueprint.

One possible approach is to maintain the original blueprint, then say "let us implement" and completely embed the blueprint as the "primary data source" in the scaffolded application (in module/method docstrings), and start generating the blueprint back from there.

However, we have yet to find something that feels unobtrusive and natural during the whole development cycle. Suggestions certainly welcome.


Right on. In our particular case, we have an API that's fairly well built-out, so I would probably toy around with this on the next side project.

Rather than having a 100% there solution that's opaque, I think the cool thing would be to have an open source project that handles the boilerplate.

Then, you could fork, point to your blueprint, and deploy in minutes and then do whatever tweaking is required to cover the remaining 10% (there's always something).

Keep up the great work--I am definitely interested (and will check out swagger, as well!).


Generating a server stub was done long ago, in swagger, which is completely OSS:

https://github.com/wordnik/swagger-codegen#to-build-a-server...

And it's based on an intuitive JSON structure.

