
I don't think it's helped me do anything I couldn't do. In fact, I've learned it's far easier to do hard things myself than to try to prompt an AI out of the ditches it digs along the way. But I also find it's great for getting painful, annoying tasks out of the way that I really can't motivate myself to do.


> I don't think it's helped me do anything I couldn't do

I am seeing a pattern here. It appears that AI isn't for everyone; not everyone's personality is a good fit for using it, just as not everybody is a good candidate for being a software dev, a police officer, etc.

I used to think it was a tool, like a car: everybody would want one. But that appears not to be the case.

For me, I use AI every day as a tool, for work and home tasks. It is a massive help for me.


What home tasks do you use it for?

It's hard for me to imagine many. It's not doing the dishes or watering the plants.

If I wanted to rearrange the room I could have it mock up some images, I guess...


Figuring out which fertilizer to use, how often to water, and where to place the plants for sun is a useful AI request.


Is it? It'll take a while for fertilizer and sun placement to have any visible effect, and there's a risk that short-term effects aren't indicative of long-term ones.

How can you verify the recommendations are sound, valid, safe, complete, etc., without trying them out? And trying out unsound, invalid, unsafe, incomplete, etc., recommendations might result in dead plants in a couple of weeks.


I personally use ChatGPT for initial discovery on these sorts of problems, maybe ask a probing question or two, and then go back to traditional search engines for a very rough second opinion (which might lead to another round of questions). By the end of that process I'll either have seen that the LLM isn't helpful for that particular problem, or I'll have an answer I'm "reasonably confident" is "good enough" for something medium-to-low risk, like potentially killing a plant. And I get there within 10-20 minutes, half of that just reading the bot's summary.


> How can you verify the recommendations are sound, valid, safe, complete, etc., without trying them out?

Such an odd complaint about LLMs. Did people just blindly trust Google searches beforehand?

If it's something important, you verify it the same way you'd verify anything else: check the sources and use more than a single query. I have found the various LLMs to be very useful in these cases, especially when I'm coming at something brand new and have no idea what to even search for.


Eh, for something like this the cost of it being wrong is pretty small, and I'd bet its recommendations will be better than whatever I'd randomly come up with without doing any research. And I don't have the time to do that research on plain old Google, where it's really hard to find exactly what I want.

I've found it immensely helpful for real-world recommendations about things like this: things I know how to find on my own but don't have the time to read up on and synthesize.


That's an interesting perspective. I don't think it's an innate thing, though; I think it's a mindset issue. Humans are adaptable, but we're even more stubborn.


It's weird how divisive it is. For me it's completely dependent on the quality of the output. Lately, it's been more of a hindrance.


I think there might be cases, for some people or some tasks, where the difficulty of filling in a blank page is greater than the difficulty of fixing an entire page of errors. Even if you have to do all the same mental work, it feels like a different category of work.


A very good tip: you get one chance to prompt them onto a new path. Failing that, clear the context and start again from the current premise.

Use only actionable prompts; negations don't work on AI, and they don't work on people either.
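
For example (hypothetical prompts, but they show the pattern: state the action you want, not the behavior you want to avoid):

    Instead of: "Don't use global variables."
    Try:        "Pass the config in as a function parameter."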



