
Adding to your comment, I've found that frequent squashing of commits on the feature branch makes rebasing considerably easier - you only have to deal with conflicts on one commit.
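A rough sketch of the flow I mean (assuming a feature branch with a handful of commits on top of main; the branch name is just for illustration):

    # Squash the feature branch down to a single commit:
    git checkout my-feature
    git rebase -i main      # mark everything after the first commit as "squash"/"fixup"

    # Later, when main has moved on, rebasing surfaces conflicts
    # for that one squashed commit only:
    git fetch origin
    git rebase origin/main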

And of course, making it easier to rebase makes it more likely I will do it frequently.


I think Grok's voice chat is almost there - the only things missing for me:

* it's slower to start up by a couple of seconds
* it's harder to switch between voice and text and back again in the same chat (though ChatGPT isn't perfect at this either)

And of course Grok's unhinged persona is... something else.


Pretty good until it goes crazy glazing Elon or declaring itself MechaHitler.


Neither of these has happened in my use. Those were both the product of some pretty aggressive prompting, and were remedied months ago.


Yet, using this model in any way whatsoever after these episodes seems absolutely crazy to me.


All models have had similar instances. I particularly enjoyed Gemini’s black founders era. The “safety” teams have bent the politics of these tools in ways I don’t trust. Grok does too, but in my experience less so. This has real impacts.


Grok is the only frontier model that is at all usable for adult content.


It's so much fun. So is the Conspiracy persona.


> Bryan Johnson is an interesting case here. If you take the longevity project to its logical end, you get someone who's stopped living in order to keep living - for the most part not eating food he enjoys, not drinking, not doing anything spontaneous, all in service of more years.

I never understand this type of critique of Johnson. It's framed like he's suffering daily for his project, but the guy sounds happy as a clam - especially contrasted with his pre-Blueprint podcast with Lex Fridman.

Seems like he's doing something right.


Perhaps he is happy. In my personal experience, people who aim to tackle these kinds of large problems do so out of an inability to let go and accept life as it is. That's not necessarily a bad thing, but founders tended to be some of the most unhappy and unsettled people I have known in my life; they were just really good at channeling that lack of acceptance into their work and lives.

My hope is that anyone who dedicates their life to this kind of work is able to let go if they reach their deathbed without a solution, because if they can't, that would be a deeply painful way to leave this world.


> Seems like he's doing something right

He’s going to spend the remainder of his life obsessing over something he cannot control, and then he’s going to die at a normal age (or probably earlier) anyway.


I suppose I don't see how that's a problem if he's happy in the process - which he certainly appears to be.


People see him living a lifestyle without the "necessary" vices they have gotten accustomed to in their own, and it confuses them. They can't conceive the notion that a man can still be perfectly happy and abundant without those Pavlovian indulgences, because they have never known a world without them.


Agreed on Bryan Johnson. Before I actually watched a bit of his content, I just thought he had nutjob vibes and looked weird. No offense intended, if possible.

But honestly he just seems like a guy enjoying a fun project. He seems calm and happy in his videos.

Barring any hidden issues with chronic depression, it would be unlikely that he's unhappy. He's very well off financially, has a nice beautiful girlfriend who's with him in his journey, he sleeps a ton, works out, eats well and in general experiments with life.


Agreed! In the Lex Fridman podcast from a few years ago that I referenced, he talked quite a bit about his depression - he was near suicidal for 13 years, IIRC.

He sounds like a different person now.


The older I get the more convinced I am that "math is not hard; teaching math is hard".


This is far truer than most people realize.

Because there is so much to teach/learn, "Modern Mathematics" syllabi have devolved into giving students merely an exposure to all possible mathematical tools in an abstract manner, disjointly, with no unifying framework and no motivating examples to explain the need for such mathematics. Most teachers are parrots with no understanding/insight that they can convey to students, and so the system perpetuates itself in a downward spiral.

The way to properly teach/learn mathematics is to follow V.I. Arnold's advice in "On Teaching Mathematics" - https://dsweb.siam.org/The-Magazine/All-Issues/vi-arnold-on-... - ground all teaching in actual physical phenomena (in the sense of existence with a purpose) and then show the invention/derivation of the abstract mathematics needed to explain such phenomena. Everything is "Applied Mathematics"; there is no "Pure Mathematics", which is just another name for "Abstract Mathematics", i.e. generalizing methods of application to different and larger classes of problems.


As a maths teacher who is interested in (and sufficiently skilled at) programming, I find teaching programming to be very hard, even to interested students.

Teaching maths to interested students is not hard (for me).


The sudden desire to add a small LLM and speech synthesizer so the mower can yell for help in a stranger danger scenario.


It's been a few years since I've touched OCaml - the ecosystem just wasn't what I wanted - but the core language is still my favorite.

And the best way I can describe why is that, in most languages, my code generally ends up with a few heavy functions that do too much; I can fix it once I notice it, but that's the direction my code tends to go in.

In my OCaml code, I would look for the big function and... just not find it. No single workhorse that does a lot - for some reason it was just easier for me to write good code.

Now I do Rust for side projects because I like the type system - but I would prefer OCaml.

I keep meaning to check out F# though, for all of these reasons.


VSCode has a pretty good Gemini integration - it can pull up a chat window from the side. I like to discuss design changes and small refactorings ("I added this new rpc call in my protobuf file, can you go ahead and stub out the parts of code I need to get this working in these 5 different places?") and it usually does a pretty darn good job of looking at surrounding idioms in each place and doing what I want. But Gemini can be kind of slow here.

But I would recommend just starting by using Claude in the browser: talk through an idea for a project you have and ask it to build it for you. Go ahead and have a brainstorming session before you actually ask it to code - it'll help make sure the model has all of the context. Don't be afraid to overload it with requirements - it's generally pretty good at putting together a coherent plan. If the project is small/fits in a single file - say a one-page web app or a complicated data schema + SQL queries - then it can usually do a pretty good job in one go. Then just copy+paste the code and run it outside the browser.

This workflow works well for exploring and understanding new topics and technologies.

Cursor is nice because it's an AI-integrated IDE (smoother than the VSCode experience above) where you can select which models to use. IMO it seems better at tracking project context than Gemini+VSCode.

Hope this helps!


Fun (?) fact - with this protocol you could use a trained Hawk as a firewall.


Definitely a fun fact :D


I've gotten some pretty cool things working with LLMs doing most of the heavy lifting using the following approaches:

* spec out project goals and relevant context in a README, and spec out all components; have the AI build out each component and compose them. I understand the high level but don't necessarily know all of the low-level details. This is particularly helpful when I'm not deeply familiar with some of the underlying technologies/libraries.

* have the AI write tests for code that I've verified is working. As we all know, testing is tedious - so of course I want to automate it. And well-written tests (for well-written code) can be pretty easy to review.


I agree AI can change the balance of power but I think it's more nuanced.

When expertise is commoditized, it becomes cheap; that reduces payroll and operational costs - which reduces the value of VC investment and thus the power of pre-existing wealth.

If AI means I can compete with fewer resources, then that's an equalizing dynamic isn't it?


That assumes you (the human element) are still required to a significant degree. Right now those with assets are compelled to transfer them to those without because they have a need for the labor. If that need evaporates then why should they give you anything?


> Right now those with assets are compelled to transfer them to those without because they have a need for the labor.

Yes - and those without are compelled to trade their labor for assets.

My point is that the assets themselves mean less when the average person can use AI to design anything - that makes the costs of production go down.

In a world where production is cheap, the money required to produce has relatively less value.


> My point is that the assets themselves mean less when the average person can use AI to design anything

I’m not sure which assets you think that devalues; it certainly increases the value of the assets needed to run AI, and also of the assets needed to realize the things that people can design with AI.

> In a world where production is cheap, the money required to produce has relatively less value.

In a world where your labor isn't required for production, the assets that are required for production have a much greater value relative to your labor than they do in one in which your labor is required to produce something.

“Cheap” is only a thing relative to some other thing.


To clarify, I'm focusing on the costs of ramping up a startup.

Currently VC has a great deal of power because up-front investment is required to cover payroll and other expenses until the startup can become cash-flow positive. When a lone individual can start a venture using AI, the payroll costs go down. The investment requirements go down.

Yes, automation means that people with assets don't have to pay other people for their labor.

But it also means that people starting new ventures have less need of significant up-front capital.


I agree that lower up-front expenses mean less need for investment. I'll also note that it opens up the playing field to more people, which increases competition. So it might or might not make your life easier.

However, the technology that is expected to reduce labor requirements (and thus expenses) has an uncertain endpoint. It seems plausible that at some point a threshold could be crossed beyond which the human labor you are able to add on top becomes essentially irrelevant. It is this second occurrence, or rather how society might react to it, that should be cause for at least some concern.


Oh, the real-world changes will be nuanced.

But they'll start happening because of the new incentive.

