Hmm, it seems pretty clear that the climate is getting hotter, so it seems natural for some people (me, for one) to be worried about what will happen to the planet in a few decades.
And you may be right, it may not be that big a deal and we may be being alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?
... but to be honest, I'm guessing my opinion won't sway you in any way; still thought I'd try. Thanks!
The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.
The presumptions from the alarmists that annoy me are that they completely negate human agency and ingenuity, and that they ignore the economic cost of many of the proposed plans.
Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, the cleanest and safest energy, and yet is hardly ever the alarmists' first choice.
I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.
AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.
Agree that you need to balance costs with benefits, but nowadays solar and wind are often the cheapest options (in southern states, or states with lots of wind). And nuclear is an option that even some staunch environmentalists support these days.
Yeah, I don't think most people who support battling climate change are extremists. We just believe it's a big problem, and, to put it in monetary terms, dealing with major changes in climate could cost the world tens of trillions of dollars by some scientists' predictions. It's like any problem: doing relatively small fixes now could save enormous amounts of time and money later down the line. Seems like it would probably be a good use of our efforts.
I probably just overreact and judge too quickly certain statements from my experiences of people who act like I’m destroying the earth because I have more than 3 kids.
I appreciate reasonable people though, and I should not assume everyone is a crazy alarmist because they have any concern, so I apologize.
... and I'm not just giving you lip service; I do find the far left to have gone too far themselves (am a moderate independent myself). Their assuredness that everything they believe is the only correct way to think is frustrating (they are often the least understanding). Yeah, it seems if you step out of line and say anything against their beliefs, you're a part of the far right.
But it feels like things are shifting back to the middle for various reasons. Think this is a good trend.
Actually, I really like Maven; its focus on building in a standard way is fantastic (but agreed, it looks messy, with all its XML and necessary versioning).
Just wrote a comment about how I've always liked Maven. It's perfect for small and medium-sized projects, and for service-oriented architectures/microservices - it seems like it was designed for this! Its main goal is to help you figure out the libraries that you're using and build them in a standard way.
It isn't great for really strange and odd builds, but in that case, you should probably be breaking your project down into smaller components (each with its own Maven file) anyway.
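To illustrate the "smaller components, each with its own Maven file" idea, here's a minimal sketch of a multi-module layout (the group/module names are made up for illustration):

```xml
<!-- parent pom.xml: aggregates the sub-projects and uses pom packaging -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>core</module>        <!-- each module directory has its own pom.xml -->
    <module>service-api</module>
  </modules>
</project>
```

Running `mvn install` at the parent builds every module in dependency order, which is the "standard way" the comment above is praising.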
Actually, I like Maven. It's perfect for code that is broken into medium-sized projects, which makes it great for service-oriented architectures (would have said microservices here instead, but I think we're learning that breaking our services down too finely is generally not a good idea).
Yeah, it seems like Maven is designed to build just one project with relatively little build code (although figuring out the versioning of the libs used in your build can get tricky - but guessing this is how it is in most languages). It's still one of my favorite build tools for many situations.
Have always really liked Java, but yeah, Spring overall has been terrible for the language. Autowiring is against the principles of a typesafe programming language - don't make me guess what object is going to be attached to a reference. And if you do, at least figure out what the linked object is at compile time, not at run time.
Spring autowiring makes Java as a whole seem unnecessarily complex. Think it should be highly discouraged in the language (unless it is revamped and made a part of the compiler).
... not sure how this applies to the ObjectMapper, as I haven't programmed in Java in a while.
... and my gripe doesn't apply to SpringBoot though:)
> Autowiring is against the principles of a typesafe programming language
Constructor autowiring is an application of the inversion of control and dependency injection patterns. If there were no autowiring, you could wire the components together just the same with normal code calling constructors in the correct order. Spring just finds the components and does the construction for you.
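A minimal sketch of that point: the wiring Spring does for you is just ordered constructor calls. The class names here are hypothetical, and the `@Autowired` reference is only in a comment so the example compiles without Spring on the classpath:

```java
// The components: OrderService depends on an OrderRepository.
interface OrderRepository { String find(int id); }

class InMemoryOrderRepository implements OrderRepository {
    public String find(int id) { return "order-" + id; }
}

class OrderService {
    private final OrderRepository repo;
    // With Spring, annotating this constructor (@Autowired) would make the
    // container locate an OrderRepository bean and pass it in for you.
    OrderService(OrderRepository repo) { this.repo = repo; }
    String describe(int id) { return "Found " + repo.find(id); }
}

public class ManualWiring {
    // Without a container, "autowiring" is just constructing in dependency order.
    public static OrderService wire() {
        OrderRepository repo = new InMemoryOrderRepository();
        return new OrderService(repo);
    }

    public static void main(String[] args) {
        System.out.println(wire().describe(42)); // prints "Found order-42"
    }
}
```

The difference is only in who calls `new`: you in `wire()`, or the container at startup.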
Yeah, for me at least, I personally believe inversion of control should be used more surgically instead of blanketing the system with it. On the one hand, freeing your application layer from direct dependencies on lower-level objects conceptually seems like a good idea, but I think in practice this is hardly ever helpful, especially when used for every dependency.
At least from my experience, it seems like we don't change the objects we use that often - once an object is set on a reference var, a very, very large majority of them won't change.
And because of this, it seems like for most object dependencies we should just new them directly. If later on we do need to change them, then at that time we can refactor the code to use more abstraction (like inversion of control) to break the direct dependency, but only for the code that needs it (or if there is an important situation where having a direct dependency could be highly problematic in the future, like to a DB provider).
It's like the performance optimization problem. One guideline that is often quoted is that it's best not to over-optimize the performance of your code until you can actually test it in real-world cases, because you very often optimize things that aren't the bottleneck. Same with the overuse of inversion of control. Spring makes it so we're using IoC everywhere, but it's just adding unnecessary complexity.
Think that if inversion of control is used, it should be used mainly at a higher level, for components, instead of on every class like often happens. But even for components, think you should be careful when deciding to do so.
... and agreed, you could just use the factory pattern instead of Spring.
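For instance, a hand-rolled factory can centralize the choice of implementation without a DI container. Everything here (the `Notifier` interface and its implementations) is illustrative, not from any real library:

```java
interface Notifier { String send(String msg); }

class EmailNotifier implements Notifier {
    public String send(String msg) { return "email: " + msg; }
}

class SmsNotifier implements Notifier {
    public String send(String msg) { return "sms: " + msg; }
}

public class NotifierFactory {
    // One place decides which implementation callers get; swapping it later
    // means editing this method, not every call site.
    public static Notifier create(String channel) {
        return channel.equals("sms") ? new SmsNotifier() : new EmailNotifier();
    }

    public static void main(String[] args) {
        System.out.println(create("sms").send("hi"));   // prints "sms: hi"
        System.out.println(create("email").send("hi")); // prints "email: hi"
    }
}
```

Callers depend only on `Notifier` and the factory, which is the same decoupling autowiring gives you, just with the construction logic visible in plain code.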
For large applications, having the implementation (or multiple implementations) of certain functionality decoupled from the code using it improves the maintainability and configurability of the application. That is where inversion of control helps. And then manually writing the instantiation, scoping, dependency-ordering and cleanup code to manage all of that is not useful to write yourself. Any dependency injection framework will work, although Spring is well used and has many integrations.
Yeah, I get the idea: abstractions allow decoupling. But I think it should be used in a thoughtful way - there is a quote from the original Design Patterns book that said something like a carefully considered use of design patterns should make the system easier to work with, or something like that (sorry, don't have it on hand).
We can go back and forth on this, so I'll just say, in my opinion, Spring autowiring overall doesn't provide enough benefit versus its downsides, which to me are: increased complexity, and it doesn't work well enough (it should be easier to debug autowiring problems, for one).
You seem very knowledgeable about design, and, of course, you're entitled to your opinion, so seems like we'll probably just have to disagree on this:)
For me at least, being statically typed is overall a strength. Yeah, it's not that much work to include types when declaring vars, but the benefit is you don't have the problems with types in expressions that you do with dynamically typed languages (JavaScript for example, which is one of the reasons why TypeScript was created).
... although, Java does now support some dynamic typing, as you no longer need to state the type when instantiating objects: with the "var" keyword, the compiler will infer the type from the object being assigned to it (although this is just syntactic sugar).
`var` has nothing to do with dynamic typing. It is still statically (compile time) typed, so the type can not change at runtime. Compare that to JavaScript where you could easily switch the type of a variable from Number to String.
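A quick illustration of that distinction (the class and method names are just for the example) - with `var` the type is inferred once, at compile time, and then behaves like any other static type:

```java
public class VarDemo {
    static int demo() {
        var name = "hello"; // inferred as String at compile time
        var count = 3;      // inferred as int
        // name = 42;       // would NOT compile: incompatible types.
        // In JavaScript, by contrast, reassigning a Number to a
        // String-holding variable is perfectly legal at runtime.
        return name.length() + count; // String methods are fully type-checked
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 8
    }
}
```

So `var` saves keystrokes but gives up none of the compile-time checking.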
Agreed, it's not (as mentioned, it's just syntactic sugar). Still, how often is changing the type of a var needed? (Besides minor casting issues.)
And I'm not saying that dynamic typing doesn't have a place - I really like working in Python - it's just that for more complicated code, I prefer statically typed languages, as they lead to fewer problems with your expressions. To each their own.
Am not 100% sure what's going on and why everyone is ragging on it, but to me, DLSS 5 clearly improves the graphics most of the time. Yeah, almost all the faces look more real, with more realistic skin and shadows, instead of looking like those CGI faces with poor detail from 12 years ago.
Personally, think it's just people freaking out that it's being improved on by AI, and therefore it's part of the "AI slop" trend. Think if they had done this all with no AI and just polygons, it'd be hailed as a large step forward in graphics.
... and btw, am just as freaked out about AI taking over the creative fields as a lot of others (am a musician myself), but I have to try to be objective, and in my opinion, DLSS 5 is impressive.
They don't look real; the lighting is terrible. There is lighting that would suggest two light sources on one part and lighting that would suggest one light source on other parts. It's jarring.
Took another look. You're entitled to your opinion, but, yeah, am not seeing the two-light-source problem you mentioned, at least not in the screenshots I looked at. And for me at least, they look more realistic than with DLSS 5 turned off. But maybe I'm not seeing something you're seeing.
The specular highlights on faces definitely look wrong to me, though I struggle to describe why. Shadows and diffuse lighting are a totally different story, though. Look at how it completely deletes the shadow of the steeple on the right-hand side[1], or how it completely eliminates the shadows on this guy's face and jacket. Overcast lighting is an easy cheat for hyper-realism[3], and almost every single scene shown has softened or absent shadows and more diffuse light.
As an aside, I'm starting to wonder if they are modifying engine settings when switching it on and off. There's clearly some amount of accumulation it has to do, and it's impossible to frame-by-frame a video of a monitor, but in [1] the first frame snaps from a dynamic shadow of the steeple to a generic small blob shadow, which then gets entirely eliminated on the next frame.
Hmm, I do see the shadows being removed in the links you have, and have noticed that the backgrounds do look like they're lit differently from the original, but I was wondering if that's just because the AI lights things differently - they did say that these AI effects are done with the actual 3D assets themselves and it's not just some type of filter that's run over the existing images, so I could see how the lighting could change quite a bit.
Yeah, maybe the fact that they are lit differently from the original is turning people off. Understandable. For me, I still find it impressive, and think the level of detail in the faces and clothing is a full step up in capability.
> they did say that these AI effects are done with the actual 3d assets themselves and is not just some type of filter that run over the existing images
That was essentially just Jensen Huang lying during his Q&A. DLSS 5 uses the same input data as DLSS <5, which is just screen-space color data and motion vectors. From NVIDIA's announcement: "DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame."
I agree, every shot has something to like, especially in fine details, but I question the feasibility of fixing the issues while running the model on a consumer GPU in realtime. Getting similar improvements without falling back to diffuse lighting would require the model to infer a huge amount of information about off-screen light sources and objects. I'm much more excited about putting my tensor cores and vram towards neural textures since they can actually add detail at the geometry level.
Hmm, I actually heard on a podcast that this is working with the 3D assets, and even the statement you quoted says "anchored to source 3D content." Although that could mean a lot of things, and it's still early on, so it could still just be a pass at the end by an AI model. Yeah, I'll stay on the fence until more details are released - and should mention, am no graphics expert, and am only giving my opinion as a fan of good graphics on what the results look like:)
You have a point - cheap drones have changed warfare - but you might be simplifying the issue. As some warfare experts online have discussed, it isn't that cheap drones are the only weapon used in Ukraine (or warfare in general); they are one option in a vast array of options based on the situation (although, agreed, they are taking on a much bigger significance). Look at the war in Iran. They ran a pretty standard playbook and used stealth jets and cruise missiles to surgically take out air defenses in order to gain air dominance. This would be very difficult with just cheap drones.
... but do agree that cheap weapons are still becoming extremely important. Iran is terrorizing the Middle East and the Strait of Hormuz with cheap drones, so they are definitely important. Yeah, in a war of attrition, low-cost, high-volume options are clearly very important.
It's fairly important to distinguish what kind of drones we are talking about [1]. Iran is using Group 3 drones.
The GP is confusing Iran's neighbours not being ready to counter group 3 drones with the drones being inevitably effective. These drones are by necessity large and slow, because they need a lot of energy and aerodynamic efficiency to get their range. That means that they are vulnerable to cheap counters, which Ukraine is demonstrating very convincingly: even though Russia is now launching 800+-drone raids, the vast majority is shot down.
Even when those drones do get through, they are extremely inefficient. It's not just that they can't carry a heavy or sophisticated payload (more complex warheads are more effective, but way more expensive), the extremely high attrition ratio forces the enemy to try to target way too many drones per aimpoint. Instead of serving a few hundred aimpoints, the 800-strong raid is forced to concentrate on just a few, otherwise most aimpoints will get no hits whatsoever.
But also the only reason 800-strong raids can even be launched is Ukraine lacking the capability to interdict the launches. 800 group 3 drones have an enormous logistics and manufacturing tail, which a Western force would have no problem destroying way before the raid can be launched. For example, Iran in its current state can't launch such raids. So in practice Iran's neighbours would need to intercept only a handful of drones, which is hardly an insurmountable challenge.
GPS denial is a mixed bag. After about two years of efforts and counter-efforts, the Russians seemingly managed to build GPS receivers that are pretty resistant to jamming.
Hmm, not so sure TDD is a failed paradigm. Maybe it isn't a panacea, but it seems like it's changed how software development is done.
Especially for backend software and also for tools, it seems like automated tests can cover quite a lot of the use cases a system encounters. Their coverage can become so good that they'll allow you to make major changes to the system, and as long as the changes pass the automated tests, you can feel relatively confident the system will work in prod (have seen this many times).
But maybe you're separating automated testing and TDD as two separate concepts?
I write lots of automated tests, but almost always after the development is finished. The only exception is when reproducing a bug, where I first write the test that reproduces it, then I fix the code.
TDD is about developing tests first, then writing the code to make the tests pass. I know several people who gave it an honest try but gave up a few months later. They do advocate that everyone should try the approach, though, simply because it will make you write production code that's easier to test later on.
... hmm, just looked it up. According to some sites on the web, TDD was created by Kent Beck as part of Extreme Programming in the 90's, and automated testing is a big part of TDD. Having lived through that era, thinking back, I would say that TDD did help to popularize automated testing. It made us realize that focusing a ton on writing tests had a lot of benefits (and yeah, most of us didn't do the test-first development part).
But this is kind of splitting hairs on what TDD is, not too important.
I think tests in general are good, just not TDD, as it forces you into what I think is a bad and narrow paradigm of thinking. I think e.g. it is better that I build the thing, then get to 90%+ coverage once I am sure this is what I would also ship.
That's the result I've seen with anyone who tries TDD. Their code ends up being very rigid, making it difficult to add new features and fix bugs. It just ends up making them overconfident in their code's correctness, as if their code is bug-free. It just seems like an excuse to not think and to avoid doing the hard stuff.
> But maybe you're separating automated testing and TDD as two separate concepts?
I hope it's clear that I am, given my comment and how I stress when I write tests. The existence of tests does not make development TDD.
The first D in TDD stands for "driven". While my sibling comment explains the traditional paradigm, it can also be seen in an iterative sense, like when developing a new feature or even fixing a bug. You start by developing a test, treating it like a spec, and then write code to that spec. Look at many of your sibling comments and you'll see that they follow this framing. Think carefully about it, and adversarially. Can you figure out its failure mode? Everything has a failure mode, so it's important to know.
Having tests doesn't mean they drive the development. So there are many ways to develop software that aren't TDD but have tests. The important part is to not treat tests as proofs or as a spec. They are a measurement like any other; a hint. They can't prove correctness (that your code does what you intend it to do). They can't prove that it is bug-free. But they hint at those things. Those things won't happen unless we formalize the code, and not only is that costly in time to formalize, but it often results in unacceptable computational overhead.
I'll give an example of why TDD is so bad. I taught a class a year ago (upper-div Uni students) and gave them some skeleton code, a spec sheet, and some unit tests. I explicitly told them that the tests are similar to my private tests, which will be used to grade them, but that they should not rely on them for correctness, and I encouraged them to write their own.

The next few months my office hours were filled with "but my code passes the tests" and me walking students through the tests and discussing their limitations along with the instructions. You'd be amazed at how often the same conversations happened with the same students over and over. A large portion of the class did this. Some just assumed the tests had complete coverage and never questioned them, while others read the tests and couldn't figure out their limits.

But you know the students who never struggled in this way? The students who first approached the problem through design and understood that even the spec sheet is a guide. That it tells requirements, not completeness. Since the homeworks built on one another, those students had the easiest time. Some struggled at first, but many of them got the right levels of abstraction, so that I know I could throw new features at them and they could integrate them without much hassle. They knew the spec wasn't complete. I mean of course it wasn't; we told them from the get-go that their homeworks were increments toward building a much larger program. And the only difference between that and real-world programming is that this isn't always explicitly told to you and the end goal is less clear. Which only makes this design style more important.
The only thing that should drive software development is an unobtainable ideal (or literal correctness). A utopia. This reduces metric hacking, as there is no metric to hack. It helps keep you flexible, as you are unable to fool yourself into believing the code is bug-free or "correct". Your code is either "good enough" or not. There's no "it's perfect" or "it's correct"; there's only triage. So I'll ask you even here: can you find the failure mode? Why is that question so important to this way of thinking?
Hmm, saying tests are just a hint seems to underappreciate their significance. Yes, they do have bugs of their own, but as you said, they are a measurement. Having them statistically reduces the chances of bugs reaching production. They don't remove them completely, of course, but they do greatly decrease the rate of bugs (and I have read the same thing: formal verification of the code is typically not worth the time and cost).
And just looked up TDD on Wikipedia. Actually, the standard process is not to write all the tests first and then do the implementation. It's to do what a lot of devs already do: write some tests based on your requirements. Then write the implementation for those tests. Then repeat, adding in more tests for other paths through the system.
Didn't know this myself about TDD (I thought it was focused on writing all the tests first, then doing the implementation). Yeah, TDD is actually a very practical approach and something I pretty much do in my own development. Instead of using a driver program to run your working code, just write unit tests to run it. And keep building your unit tests for every new feature or execution path you're working on. You'll miss a lot of them early on, but you fill in the rest at the end.
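That loop can be sketched without any test framework: write a small failing test, write just enough code to make it pass, then repeat with the next case. The FizzBuzz-style example and all the names here are hypothetical, with plain exceptions standing in for JUnit assertions:

```java
public class TddSketch {
    // Step 2 of the cycle: the minimal implementation written to make
    // the tests below pass.
    static String classify(int n) {
        if (n % 15 == 0) return "fizzbuzz";
        if (n % 3 == 0) return "fizz";
        if (n % 5 == 0) return "buzz";
        return Integer.toString(n);
    }

    // Step 1: the "test first" part -- in TDD these existed (and failed)
    // before classify did; each new case starts the next iteration.
    static void testClassify() {
        if (!classify(3).equals("fizz")) throw new AssertionError("3 -> fizz");
        if (!classify(5).equals("buzz")) throw new AssertionError("5 -> buzz");
        if (!classify(15).equals("fizzbuzz")) throw new AssertionError("15 -> fizzbuzz");
        if (!classify(7).equals("7")) throw new AssertionError("7 -> 7");
    }

    public static void main(String[] args) {
        testClassify(); // red -> green; a refactor step would follow
        System.out.println("all tests pass");
    }
}
```

The tests then double as the "driver program" mentioned above: you re-run them instead of poking at the code by hand.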
Now that I know, in my opinion, TDD was pretty amazing and changed our industry.
Agreed, words matter.
There are a lot of smart people out there, and the writer of this site makes me skeptical when he/she exaggerates, omits or spins info. Tell us all the facts at least, so we can trust you.
True, keeping a reader engaged is important, but at least for me, I don't want spin on the actual facts. I want to know what the actual facts are so I can make an informed decision. Otherwise, it's just the writer using salesmanship to sell their own personal beliefs.
And, from the writer's perspective, spin is definitely a powerful technique (it seems to be making America more polarized), but for me personally, I would like to think I try to see through it as much as possible (in any form, coming from the political left or right).
I think it is. It keeps listeners engaged, because what they love most is telling you that you might be wrong and looking for ways to show it. A listener should make up their own mind anyway and double-check - if what you say is 99% right, better they take that away than be 100% right and not be heard at all. I also just respect people more who can be bold with their points rather than hiding behind some chicken-shit nuance that always covers them if what they really meant to postulate was wrong.