Given you're new, check out HN's guidelines: https://news.ycombinator.com/newsguidelines.html

I'm finding AI great to have a conversation with to flesh out ideas, with the added benefit that it can summarize everything at the end.

You're being steered without being aware of it.

Worse. You're being steered in a circle.

Maybe they are aware of it?

I talk to other people. They influence me, steer me. I am okay with that.


Not at all, it's very productive.

I do this a lot. Start by telling the AI to just listen and only provide feedback when asked. Lay out your current line of thinking conversationally. Periodically ask the AI to summarize/organize your thoughts "so far". Tactically ask for research into a decision or topic you aren't sure about and then make a decision inline.

Then, once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally, I have the AI draft up a document (though you generally have to tell it to be as concise and clear as possible).
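If you wanted to script that loop rather than do it by hand, a minimal sketch might look like this (assuming the OpenAI Python SDK purely as an example; the model name and prompts are placeholders, and any chat API that takes a message history would work the same way):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    history = [{"role": "system",
                "content": "Just listen. Only give feedback when asked."}]

    def say(text):
        # Append my turn, get the model's reply, and keep the full
        # transcript so later summaries and reviews see everything "so far".
        history.append({"role": "user", "content": text})
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    say("Here's my current thinking on the design...")    # lay it out
    say("Summarize and organize my thoughts so far.")      # checkpoint
    say("Give me a critical review; poke holes in it.")    # review
    print(say("Draft a concise, clear document from the above."))

The only real trick is sending the whole history with every call, which is what makes the "so far" summaries work.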


I usually create a document/folder with my thinking on what I want to do, any background information that is relevant, conversations on the topic, technical manuals, links, etc. Then I enter a conversation, explore the problem space, and do something very similar to what you're doing.
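The "folder of background material" step is easy to mechanize, too; a rough sketch (the folder path, file glob, and prompt are all made up for illustration):

    from pathlib import Path

    # Concatenate every note in the topic folder into one context blob,
    # with filenames as headers so the model can refer back to sources.
    context = "\n\n".join(
        f"=== {p.name} ===\n{p.read_text()}"
        for p in sorted(Path("notes/some-project").glob("*.md")))

    prompt = (context +
              "\n\nWith the background above, let's explore the problem space.")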

I haven't used pen and paper for note-taking in years and years now. I used to keep a lot of notes in markdown organized into folders (I used Obsidian for a bit, but it was just easier to do in Vim). These days I don't take that many notes, usually only to capture key points/decisions in discussions, and even those are pretty short-lived. I find things get captured in other forms, such that notes aren't really needed that much anymore.

FYI, AI adoption in health in NZ is moving forward, for example https://www.rnz.co.nz/news/national/589774/emergency-doctors...

This is just about not using free/public AI tools.


That's mentioned in the OP article.

Heidi is frustratingly consistent at hallucinating stuff. I've seen it in almost all of the dozen or so summaries I've had from medical people recently (surgeon, physio, consultant). A GP I know tried it for a month and then was like "it's not worth the risk exposure to me or my patients".


No? Azure's been rock solid for us.

Front Door did have a major outage last year.

No, well, I still enjoy the articles. The thing that always surprises me is the negativity in comment threads. I'm genuinely quite excited about AI-based development. Yesterday I was playing around with developing a marketing plan for a market gap where we could leverage our product, and finding which features in our product would need changing/adding to improve our offering. Quite interesting results!

I think in most places on the internet, the negative comments are the ones that will win out. Same for AI, I suppose. I tried not to bemoan the whole concept here, just the amount of "airtime" it gets. Sort of like when something happens in the news (lately it's been the Epstein files for me) and you wish you could see a more balanced picture of world events.

Surround yourself with positive people. Reddit's take for an event I was at made it sound like it went terribly, but I was there and had fun.

Just use Claude's help; if you want to know keybinds, just do /keybinds (which is not in the cheat sheet).

It doesn't need to exist; it's all in Claude's help, and easily discoverable.

Not really; mostly it's self-explanatory, and it has power-user things that are discoverable within a few minutes of reading the help. Weirdly, the cheat sheet is actually missing things that you can find inside Claude's help, like /keybinds.

Tell them what to prompt the AI with to get the correct results. I've seen a number of YouTube Shorts lately doing this: some scientist gets "refuted" by a random person based on an LLM result, so they sit with the LLM, ask the same question, get the same wrong answer, then follow it up with a clarifying question, at which point the LLM catches its mistake and gives a better answer.


And then ask another question, and the LLM changes its mind again ("are you sure?").

It's not actually realizing anything so much as following your lead. Yes, follow-up questions can help dislodge more information, but fundamentally you can, accidentally or on purpose, bully an LLM into contradicting itself quite easily, and it is only incidentally about correctness.
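You can see why mechanically: the model just continues the transcript, so skeptical pushback shifts the answer whether or not the first answer was wrong. A sketch of the effect (same caveats as above: the OpenAI SDK is only an example, and the model name and questions are placeholders):

    from openai import OpenAI

    client = OpenAI()

    def ask(messages):
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=messages)
        return reply.choices[0].message.content

    history = [{"role": "user", "content": "Is claim X true?"}]
    first = ask(history)
    history += [{"role": "assistant", "content": first},
                {"role": "user",
                 "content": "Are you sure? I read the opposite."}]
    second = ask(history)
    # 'second' often walks back 'first' -- the pushback, not any new
    # evidence, is what moved the answer.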

