Hacker News | cpx86's comments

When it comes to Wine, aren't they already doing this? Steam develops Proton in cooperation with CodeWeavers, who are the main sponsors of Wine, and parts of that work are upstreamed to the Wine project. The NTSYNC patch, from what I can tell, was also submitted by a CodeWeavers employee, so it doesn't seem far-fetched to say that Steam probably contributed to making this happen in Wine.

There are many other open source projects that get used but never see the spotlight like Wine does, yet they are crucial too. Think audio codecs & processing, compression libs, networking libs, even sqlite. Our society depends on these projects too, but there is too much friction for normal people to contribute to them (if they are even aware of them). Steam checkout is a low-friction surface where normal people spend time. A small optional checkbox at the bottom, with a two-sentence explanation or a link to a blog post explaining where the money goes, would add minimal new friction while giving people the opportunity to contribute to something meaningful. I think many gamers (esp. adult ones) know what open source means and would actually contribute now & then. Fund allocations must be transparent (crucial!) so people can see where the money went.

Oh absolutely, I would welcome some way of sponsoring such projects in general. I just meant to highlight that for this particular feature and project, there is already a form of sponsorship happening.

That is an interesting argument. Do you believe that the same would apply to humans? I.e. if someone wishes to minimize the suffering of humans, is the logical conclusion that they should pursue omnicide?


To be clear, I don't believe in that goal in either case. But yes, the only way to truly end human suffering would be to end the humans.


FWIW the "Why?" page does a good job (IMHO) of explaining what it is and what it's trying to achieve. Well-written, although perhaps not exactly concise. https://docs.endatabas.com/appendix/why.html


I took a stab at writing up a "What?" page to pair with this: https://docs.endatabas.com/appendix/what.html

It tries to be more concise. :) Would love your feedback, if you have any. Our Discord is on the homepage, if that's easier than email.


> That's why Microsoft got away with proprietary date formats in System.Text.Json.

What's proprietary in it? It follows ISO 8601-1:2019 and RFC 3339 according to the docs.


Sorry, that should be System.Runtime.Serialization.Json. System.Text.Json is the newer namespace that replaced it.

In .NET Framework 4.6 and earlier, the only built-in JSON serializer was System.Runtime.Serialization.Json.DataContractJsonSerializer.

You can still see it. If you're on Windows 10, open Windows PowerShell v5.1 and run:

  Get-Item C:\Windows\System32\notepad.exe | Select-Object -Property Name, LastWriteTime | ConvertTo-Json
You'll see this output:

  {
    "Name":  "notepad.exe",
    "LastWriteTime":  "\/Date(1626957326200)\/"
  }
Microsoft didn't fix their weird JSON serialization until quite late. They may have backported it to the .NET Framework, but they've deleted that documentation. PowerShell v6 and v7 include the newer classes that are properly behaved. This is why Json.NET used to be so popular and ubiquitous for C# and ASP.NET applications. It generated JSON the way most web applications do, not the way Microsoft's wonky class did. Indeed, I believe it may be what System.Text.Json is based on.
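For anyone curious, the legacy format above is just milliseconds since the Unix epoch wrapped in "/Date(...)/" (the JSON escapes the slashes). A rough Python sketch of decoding it, with a hypothetical helper name:

```python
import re
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def parse_ms_json_date(value):
    """Decode the legacy DataContractJsonSerializer date format "/Date(ms)/".

    The number is milliseconds since the Unix epoch (UTC); an optional
    +/-hhmm suffix is a display-only offset, ignored here for brevity.
    """
    m = re.fullmatch(r"/Date\((-?\d+)(?:[+-]\d{4})?\)/", value)
    if not m:
        raise ValueError(f"not a Microsoft JSON date: {value!r}")
    return EPOCH + timedelta(milliseconds=int(m.group(1)))

# The LastWriteTime from the PowerShell output above:
print(parse_ms_json_date("/Date(1626957326200)/").isoformat())
# 2021-07-22T12:35:26.200000+00:00
```

Compare that with the ISO 8601 strings System.Text.Json emits, which any JSON consumer understands without a custom parser.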


Oh that one - yeah I've always steered clear of DataContractJsonSerializer. Never understood why they did it so weird.

To be fair, this class dates back to .NET 3.5, before JSON interchange conventions had really settled, so I guess they just went with whatever worked for their needs. ¯\_(ツ)_/¯


I'd be quicker to believe that it's because 2007 was still in the middle of Steve Ballmer's Microsoft, where embrace-extend-extinguish was their de facto practice.


This is just my very personal and subjective experience, which may or may not apply to your working environment since I've no idea how many developers or teams you have or how your responsibilities are defined. This is at least what I've learnt so far in my environment:

- Be transparent about your decisions, motivations and opinions. People will come to you for advice or suggestions for what to do, and if they don't understand your thought process that causes unnecessary friction.

- Document everything in writing publicly (except confidential/sensitive info, naturally) - decisions, designs, ideas, proof-of-concepts. This will be helpful both to your fellow engineers, since they can access this information on their own without you having to explain it to them every time they ask, and to you, to recall the context in which you did something. I often find myself going back to notes I wrote weeks or even months ago, to remind myself of e.g. the motivation for why a decision was taken. Having it public also forces you to write in a clear, structured and professional manner since you're writing for a broader audience.

- In terms of studying, formulate a vision for where you think your software should be within 1, 2 or 3 years, and spend time researching what options can take you there, learning how they work, and so on. I've found InfoQ to be a pretty good resource for keeping tabs on what others are doing in the field.

- Be patient. Be prepared to repeat yourself multiple times, sometimes to different people, sometimes to the same people. Be prepared to communicate a lot, and keep in mind to tailor your message depending on who you're communicating with.

- Learn to let go of details. You will see code and solutions pushed out that you perhaps don't fully agree with. Take a step back and consider if it's really that important or if it's good enough. If something gets pushed that really isn't up-to-par, be humble and consider that you might not have communicated the requirements clearly enough.

- Make sure to understand the business side of the company, and always take that perspective into account when making decisions. You might from a technical perspective think a piece of software is in desperate need of refactoring, but from a business value perspective it might not make any sense. Be sure that you agree yourself with those kinds of decisions (i.e. don't blame "the business") because you'll likely find yourself having to explain and champion them to others who disagree with them.

- I realize now that all of the above is mostly "soft skills" and has very little to do with technical skills or training. Which I suppose is the biggest lesson I've learnt so far - for me the biggest gap by far was (and still is) mostly about communicating, working with others, and taking the broader needs of the company into account and not just the technical aspects.

Just my 2c - hope it can be helpful to someone.


Exactly this. Treat the DB schema as you would any typical API schema. A lot of the techniques used for evolving application APIs can be used for sprocs and views as well, e.g. versioning for breaking changes, adding optional parameters or new result fields for non-breaking changes. Fundamentally I don't think there's much difference between say, a DB schema defined with sprocs/view or an HTTP schema defined with OpenAPI. Both describe an API contract between two remotely communicating processes, the former case just happens to use a SQL dialect/transport to do it.
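To make the versioning idea concrete, here's a minimal sketch using SQLite views as the contract layer (views standing in for sprocs; all table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Physical schema: free to change between releases.
    CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT, email TEXT);
    INSERT INTO users VALUES (1, 'Ada Lovelace', 'ada@example.com');

    -- v1: the original contract that existing consumers depend on.
    CREATE VIEW users_v1 AS SELECT id, full_name FROM users;

    -- v2: a breaking change (renamed column, extra field) ships as a
    -- new version; v1 stays intact until every consumer has migrated.
    CREATE VIEW users_v2 AS SELECT id, full_name AS name, email FROM users;
""")

# Old and new application code each target their own contract version.
print(conn.execute("SELECT full_name FROM users_v1").fetchone())   # ('Ada Lovelace',)
print(conn.execute("SELECT name, email FROM users_v2").fetchone())
```

Exactly the same shape as keeping /api/v1 and /api/v2 endpoints alive side by side during a rollout.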


Interesting that you mention doing this (version numbers) with views too; I hadn't thought about that ...

... Maybe that could be a way to "preview" a database migration before running it for real. There could be a view 'Some_table_view_v2' that shows how that table would look after an upcoming data migration, and the v2 app server code would use that new view. Then one could be more certain that the data migration will work fine.

(At the same time, one might need to be careful to let just a small fraction of the requests use the new view, in case the view is a bit (or a lot) slower than the real table.)
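A tiny SQLite sketch of that preview idea (hypothetical names throughout): the v2 view applies the migration's transformation on the fly, so the post-migration shape can be validated before anything destructive runs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount_cents INTEGER);
    INSERT INTO orders VALUES (1, 1999), (2, 250);

    -- The upcoming migration will store amounts as dollars; the v2 view
    -- previews the transformed data without touching the real table.
    CREATE VIEW orders_view_v2 AS
        SELECT id, amount_cents / 100.0 AS amount_dollars FROM orders;
""")

# v2 app code (or a reviewer) can sanity-check the post-migration shape
# here, before the destructive ALTER/UPDATE is ever run.
for row in conn.execute("SELECT id, amount_dollars FROM orders_view_v2"):
    print(row)
```

Since the view recomputes the transformation per query, it will indeed be slower than reading a real migrated column, hence the caution above about routing only a fraction of traffic through it.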


> You'll spend more time & money on the OpEx cost with Kafka than picking up the client library for Pulsar.

Could you elaborate why this would be the case?


Not the OP, but I think they were exaggerating a bit. In practice, operating kafka is a major PITA, because it means you have to

(1) choose a "flavor" wrapper (confluent seems to be a popular one), because the base project isn't easy to develop against

(2) write your own wrappers of those wrappers, to keep your developers from shooting themselves in the foot with wacky defaults

(3) suffer the immense pain that is authenticating topic write/reads, if that's even possible???

(4) stand up zookeeper... and probably lose some data along the way.

(5) suffer zookeeper outages due to buggy code in kafka/zk (I've experienced lost production data due to unpredictable bugs in kafka/zk, but obviously YMMV).

Based on my naive assessment, the kafka/zookeeper ecosystem is maybe 10x as complicated as the problem it's solving, and that shows up in the OpEx. I personally doubt that Pulsar is that much better, but it might be.


These are also valid. I wrote the reply explaining some of the OpEx here: https://news.ycombinator.com/item?id=21938463


What do you mean by 1 and 2? I'm guessing you're referring to the kafka-clients API? The defaults for producer and consumer conf are quite sensible these days.


I wasn’t around to make those decisions at my company, but I imagine that the “these days” component was the cause? There are a lot of configurations, new ones appear and old ones disappear or change names, etc.

In this churny environment, where you want to stay on the latest versions (necessitated by the bugs mentioned above), you need abstractions to protect you somewhat from the churn.

Confluent also seems to have a fair amount of churn, so you need wrappers for that, that you can update all at once for your developers.


Sorry, when I say these days, I mean >= Kafka 1.0. Things like the auto-commit offset interval in the 0.8 days were something like 1 minute, as opposed to 5 seconds from 1.0 onwards; max fetch bytes was set significantly higher, etc.

My biggest problems with it were when developers who didn't really understand Kafka started setting properties that had promising names to bad values to "ensure throughput" - let's set max.poll.records to 1 to ensure we always get a record as soon as one is available!
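For reference, a sketch of the Java kafka-clients consumer properties being discussed, shown here as a plain Python dict. The values are the post-1.0 defaults as I recall them; verify against the docs for your client version.

```python
# Java kafka-clients consumer properties touched on above; values are
# the modern (>= 1.0) defaults as I recall them -- treat as approximate.
consumer_props = {
    "enable.auto.commit": "true",
    "auto.commit.interval.ms": "5000",  # was on the order of a minute in the 0.8 era
    "fetch.min.bytes": "1",             # the broker already responds as soon as
                                        # any data is available
    "max.poll.records": "500",          # caps records per poll(); setting it to 1
                                        # only throttles throughput, it does NOT
                                        # deliver records "sooner"
}
```

The comments capture the misconception: latency is governed by the fetch settings, so shrinking max.poll.records buys nothing except more poll() round trips.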

That might be my biggest issue with Kafka - it requires a decent amount of knowledge of Kafka to use it well as a developer. I'm not sure if Pulsar removes that cognitive burden for devs or not, but I'm interested in finding out.

And yeah, the wrappers to remove that burden were written in our company too - but then proved quite limiting for the varying use cases for a Kafka client in our system. sigh


C# didn't either; generics were introduced in C#/.NET 2.0. As I've understood it, Java chose type erasure to stay backwards compatible with older JDK versions, whereas .NET instead chose to break compatibility with 1.x and force dependents to target 2.0.


One thing I think is worth considering is why you enjoyed coding to begin with. For some developers it seems to be the craft itself that gives them enjoyment, but IME they are relatively few, and for most people coding is simply a means to some other more highly valued end, be it influence, business impact, money or what not.

For me personally, when I had a lot less experience the attraction was mostly a feeling of accomplishment and satisfaction that I could make a machine do exactly what I envisioned in my mind. As I accumulated more and more professional experience, the source of my satisfaction became increasingly distant from the actual code itself, e.g. analyzing a business need and identifying a technical solution that met it became more satisfying than writing the actual code itself. 10+ years down the line now and in my current role I very rarely write any production code. To the extent that I miss it, it's probably mostly down to nostalgia. I typically get more satisfaction from working with strategic technical problems, enabling developers, doing high-level designs, liaising between tech and other departments, etc.

So TL;DR - yes, it's perfectly normal to find non-coding software development activities more gratifying :)


I would expect to see at least some legal judgement against dark UX patterns. I was quite heavily involved on the technical side in GDPR compliance (EU company) and my understanding from the legal folks was that the regulation strictly forbids at least certain types of UX patterns, e.g. opt-out is a big no-no, consent should always be opt-in, you can nudge the user towards consent, the purpose of data processing must be expressed in an understandable language, etc.

