Hacker News | EricMausler's comments

This entire soul document is part of every prompt created with Claude?


No, it's trained into the model weights themselves.


No, I think it was apparently used somehow in the reinforcement learning step to influence the model's final fine-tuning. At least that's how I understood it.

The actual system prompt from Anthropic is shorter, and it's also public on their website, I believe.


Yeah they publish the system prompts here: https://platform.claude.com/docs/en/release-notes/system-pro...


Alternatively, assuming they are aware of the cost, what does this imply about where they expect the cost of electricity to go?


> warmth and empathy don't immediately strike me as traits that are counter to correctness

This was my reaction as well. Something I don't see mentioned: I think it may have more to do with training data than with the goal function. The vector space of data that aligns with kindness may contain less accuracy than the vector space for neutrality, since people often forgo accuracy when being kind. I don't think it's a matter of conflicting goals, but rather of priming toward answers drawn more heavily from the part of the model trained on less accurate data.

I wonder whether the accuracy would still be worse if the prompt were layered: ask it to coldly/bluntly derive the answer first, then translate that into a kinder tone (maybe as two prompts).
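That layered experiment could be sketched roughly like this. `call_llm` is a hypothetical stand-in for whatever chat-completion API you use; the system-prompt wording is invented for illustration.

```python
# Sketch of the layered-prompt idea: derive the answer bluntly first,
# then rewrite only the tone in a second pass.

def call_llm(system: str, user: str) -> str:
    # Placeholder: in practice this would call a real chat API.
    return f"[response to: {user!r} under system: {system!r}]"

def layered_answer(question: str) -> str:
    # Pass 1: prioritize accuracy, with no tone requirements.
    blunt = call_llm(
        system="Answer as tersely and accurately as possible. No pleasantries.",
        user=question,
    )
    # Pass 2: rewrite the already-derived answer in a warmer register,
    # explicitly forbidding changes to the factual content.
    return call_llm(
        system="Rewrite the following answer in a kind, warm tone "
               "without changing any factual claims.",
        user=blunt,
    )

print(layered_answer("Is my business plan viable?"))
```

Comparing accuracy of this two-pass pipeline against a single "be warm and accurate" prompt would directly test whether the degradation comes from tone priming at derivation time.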


Alternatively, I've gotten exactly what I wanted from an LLM by giving it information that would not be enough for a human to work with, knowing that the LLM is just going to fill in the gaps anyway.

It's easy to forget that the conversation itself is what the LLM is helping to create. Humans will ignore or deprioritize extra information, but they also need it to get a loose sense of what you're looking for. The LLM is much more easily influenced by any extra wording you include, and loose guiding is likely to become strict guiding.


Yeah, it's definitely not a human! But it is often the case in my experience that problems in your context are quite obvious once looked at through a human lens.

Maybe not very often in a chat context; my experience is in trying to build agents.


Hey there! I have a B.S. in Information & Systems Engineering, which includes Operations Research (OR). Happy to chat about this!

When I graduated, the OR term was already fading, and from what I’ve seen, it’s pretty much gone as a standalone field. The tools are still strong, but OR isn’t often listed as a job specialization on its own.

I started as a business analyst, and while OR wasn’t in any job descriptions, it gave me an edge. I used OR methods to go above and beyond, working closely with branch and executive management to analyze cost-effectiveness, optimize decisions, and make strategic recommendations. This helped me stay at the top of my pay band. Of course, I still handled traditional BA tasks like dashboards, reports, automation, and SQL.

My advice? Cross-specialize. OR is incredibly valuable, but it works best when paired with another strong skill set. For me, a CS minor and SQL/database skills helped early in my career.

To put it simply: OR lets you optimize a warehouse layout, but most jobs also require you to move boxes. It aligns more with engineering management roles than entry-level work, and those management positions typically go to people with industry experience.

That said, I genuinely believe OR is one of the best specializations when combined with another field. You just need to polish it with the right complementary skills.

(Full disclosure: I used AI to clean up this message, but it's still very close to my initial draft. Mostly just grammar and phrasing changes, but it does read a bit like AI now, so I wanted to call out that the sentiment is still genuine.)

As far as connecting to other practitioners goes, I mostly stay active in forums and have joined a few LinkedIn groups, but I need to improve in this area too, which is my motivation for posting this.


I think cross-specializing with the physics of energy might be quite cool. Then I could work on problems such as:

- optimizing the placement of wind turbines to maximize energy capture

- determining the optimal size and type of solar panels for a given area.
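The second problem can be reduced to a toy model just to show the OR flavor of it. All panel types, areas, and wattages below are invented numbers purely for illustration:

```python
# Toy sizing problem: pick a panel type and count to maximize rated
# output for a fixed roof area, by brute-force enumeration.

AREA_M2 = 40.0
# (name, area per panel in m^2, rated watts per panel) -- made-up data
PANEL_TYPES = [("compact", 1.6, 300), ("standard", 2.0, 400), ("large", 2.6, 540)]

best = max(
    ((name, n, n * watts)
     for (name, area, watts) in PANEL_TYPES
     for n in range(int(AREA_M2 // area) + 1)),
    key=lambda t: t[2],
)
print(best)  # → ('large', 15, 8100)
```

A real version would be an integer program with shading, tilt, and cost constraints, but the structure (decision variables, constraint, objective) is the same.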


Absolutely, I can see how that could be effective. The jobs may lean toward electrical experience; power engineering is a subfield of electrical engineering and may be relevant. I looked into it once for myself, and it seemed like a good fit.

Another set of tools I'm looking at is the geospatial ones. Being able to work with mapping software/data always felt like a good mix to me.

What tools are they teaching now? I studied with AMPL for linear/nonlinear programming, ARENA for simulations, and MATLAB for general work, but it's been a while.


Yes, geospatial could be interesting. The tools depend on what the university/lecturer prefers; for me it was Julia for programming in math courses, JuMP.jl for optimization modeling, and Python for ML courses.
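For anyone curious what the simulation side (the ARENA-style work mentioned above) looks like stripped to its core, here is a minimal single-server queue in plain Python. The rates and customer count are arbitrary; the seed is fixed only for reproducibility:

```python
# Minimal discrete-event flavor of what simulation tools model: an
# M/M/1 queue, tracking how long each customer waits before service.
import random

random.seed(0)

def mm1_avg_wait(arrival_rate=0.8, service_rate=1.0, n_customers=10_000):
    clock = 0.0           # arrival time of the current customer
    server_free_at = 0.0  # time the server next becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        clock += random.expovariate(arrival_rate)
        start = max(clock, server_free_at)
        total_wait += start - clock
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_customers

# Queueing theory predicts Wq = lambda / (mu * (mu - lambda)) = 4.0 here,
# so the simulated average should land in that neighborhood.
print(mm1_avg_wait())
```

Dedicated tools add graphical modeling, statistics collection, and animation on top, but the event loop underneath is essentially this.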


Thanks for the comment!


Anecdotal, but I told ChatGPT to include its level of confidence in its answers and to let me know if it didn't know something. This priming resulted in it starting almost every answer with some variation of "I'm not sure, but..." when I asked vague/speculative questions, while direct, matter-of-fact questions with easy answers got confident responses.

That's not to say I think it is rationalizing its own level of understanding, but that somewhere in the vector space it seems to have a gradient for speculative language. If primed to include language about it, that could help cut down on some of the hallucination. I have no idea whether this affects the rate of false positives on the statements it still answers confidently, however.
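One cheap way to study that anecdote systematically would be to flag which answers open with speculative language and correlate that against question type. The hedge-phrase list below is ad hoc, just for illustration:

```python
# Rough sketch: detect whether an answer opens with hedging language.
import re

HEDGES = re.compile(
    r"^(i'?m not sure|i don'?t know|it'?s unclear|possibly|perhaps)",
    re.IGNORECASE,
)

def opens_with_hedge(answer: str) -> bool:
    # Only checks the opening of the answer, matching the observation
    # that the model front-loads its uncertainty markers.
    return bool(HEDGES.match(answer.strip()))

print(opens_with_hedge("I'm not sure, but the author may have meant..."))  # True
print(opens_with_hedge("Paris is the capital of France."))                 # False
```

Running this over a batch of speculative vs. factual questions would show whether the hedging actually tracks question difficulty or is applied at random.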


You'd have to find out the veracity of those leading phrases. I'm guessing that it just prefaces the answer with a randomly chosen statement of doubtfulness. The error bar behind every bit of knowledge would have to exist in the dataset.

(And in neural network terms, that error bar could be represented by the number of connections, by congruency of separate paths of arguing, by vividness of memories, etc ... it's not above human reasoning either, no need for new data structures ...)


No comment on if output analysis is all that is needed, though it makes sense to me. Just wanted to note that using file size differences as an argument may simply imply transformers could be a form of (either very lossy or very efficient) compression.


You can argue any form of data is an arbitrarily lossy compression of any other form of data.

I get your point, but nobody is archiving their company's 50 years of R&D data with an LLM so they can get it down to 10 GB.

They may have traits of data compression, but they are not at all in the class of data compression software.
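The distinction matters in practice: a general-purpose compressor is lossless by construction, which is exactly the property archival use depends on and LLMs lack. A two-line stdlib demo of that guarantee:

```python
# Lossless round trip with zlib: the decompressed bytes are guaranteed
# to be byte-identical to the input, unlike any LLM "reconstruction".
import zlib

data = b"50 years of R&D data " * 1000
packed = zlib.compress(data, level=9)

print(len(data), len(packed))           # repetitive data compresses well
assert zlib.decompress(packed) == data  # exact reconstruction, always
```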


I can't help but chime in here, because I used to feel this way and all the typical advice never felt right (i.e., that you shouldn't care how good you are at things).

Very quickly, I will list the three main points that have helped me the most:

1) What you care to try to excel at is a statement about what's worth excelling at, and actual skill is often a minor detail. It's okay to identify with where the effort goes and how much you give, rather than the result. In this way it is like voting, and there is no best person at voting. You identify with the tribe, not your ability.

2) When being competitive does actually matter, the best in the world cannot be everywhere at once, so there is a lot of meaning in being the best locally at something, or even just not the worst locally. Identity is irrelevant on this one, but it does require that you care and are self-aware about how good you actually are at things.

3) How you relate to others is also a big part of identity. Being in the middle of the pack on most things makes you much more relatable than being the best. For someone who is better than you at everything: can you deeply connect with them, or do you get distracted by comparison, insecurity, or ideas about using them for something self-serving? And if not you, how often do you think that happens for them with others?


Yes, actually. I've been binging a lot of general relativity content lately by coincidence, and I can say the Dialect YouTube channel has been the best resource for describing this. I'm not an expert, so I can't speak to its accuracy, but it seems sound.

In particular, the video "Conceptualizing the Christoffel Symbols". Also look at content on the metric tensor.

Additionally, there is content from other sources (albeit less polished) describing projective geometry, which is also related.


Maybe not exactly what you are describing, but I recently did some layman research on "strange attractors" and chaos theory, which covers very similar topics. I can't summarize it here, but it's a neat rabbit hole to go down.

