Me1000's comments

The premise of this study is a bit misguided, imho. I have absolutely no idea how many people _post_ harmful content. But we have a lot of data that suggests a _lot_ of people consume harmful content.

Most users don't post much of anything at all on most social media platforms.


Maybe it's a lack of imagination on my part, but how do spammers abuse self-hosted runners?

Form submission spam. Unique/'untraceable' IPs...

How do they abuse self-hosted runners?

Malware in build scripts/dependencies. That's not exclusively credential/crypto-stealers; there's apparently also a healthy demand for various types of spam straight from corpo gateways...

Yes, but they're self-hosted.

Hi! Congratulations on the launch. Is your intention to ship using WebKit on Windows and Linux too?


Claude has a sycophancy problem too. I actually ended up canceling my subscription because I got sick of being "absolutely right" about everything.


I've had fun putting "always say X instead of 'You're absolutely right'" in my LLM instructions file; it seems to listen most of the time. For a while I made it 'You're absolutely goddamn right', which was slightly more palatable for some reason.
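In case it's useful, a minimal sketch of what such an instructions-file entry can look like; the file name and exact wording here are placeholders, not any particular tool's official format:

    # tone
    Never open a reply with "You're absolutely right."
    Acknowledge briefly ("Noted.") and go straight to the substance.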


I've found that it still can't really ground me when I've played with it. Like, if I tell it to be honest (or even brutally honest) it goes wayyyyyyyyy too far in the other direction and isn't even remotely objective.


Yeah I tried that once following some advice I saw on another hn thread and the results were hilarious, but not at all useful. It aggressively nitpicked every detail of everything I told it to do, and never made any progress. And it worded all of these nitpicks like a combination of the guy from the ackchyually meme (https://knowyourmeme.com/memes/ackchyually-actually-guy) and a badly written Sherlock Holmes.


My advice would be: It can't agree with you if you don't tell it what you think. So don't. Be careful about leading questions (Clever Hans effect) though.

So, better than "I'm thinking of solving x by doing y" is "What do you think about solving x by doing y?", but better still is "How can x be solved?", only mentioning "y" if it's spinning its wheels.


Have it say 'you're absolutely fucked'! That would be very effective as a little reminder to be startled, stop, and think about what's being suggested.


Compared to GPT-5 on today's defaults? Claude is good.

No, it isn't "good"; it's grating as fuck. But OpenAI's obnoxious personality tuning is so much worse. Makes Anthropic look good.


Not OP, but yes, believe it or not, it's impossible to find certain movies anywhere other than pirating them. One example is "Pirates of Silicon Valley"; I watched it when I was young and recently wanted to watch it again. I pay for basically all the streaming services, and I would have been happy to rent it from any service at all. I spent several hours trying to find a way to pay to watch it and never could.


I'm pretty sure I just watched that via Apple TV+.

But your point is generally valid regardless.


This is an old, relatively low-budget TV movie; it's not on TV+ (which I subscribe to). Nor can you rent it on iTunes; it doesn't even show up when you search for it. Same for Prime Video, etc.



That link says "Content Unavailable" for me, possibly region locked? I changed the "mx" in your link to "us" but same error.


Ah, that's too bad for you.

It's a great movie!!


Apple TV doesn't allow me to stream in my browser, so I happily pirate their content. I pay for all the other "big" streaming services that I can use like a normal person.


I'm pretty sure it's just being used as a turn of phrase here. "Under the bridge" means commuters are riding BART; "over the bridge" means they're driving.


There's an important distinction between the open-weight model itself and the DeepSeek app. The hosted model has a filter; the open-weight one does not.


I didn't know that! That gives me another reason to play with it at home. Thanks for cluing me in. :)


Yet it's interesting how we put the blame and punishment on the people being taken advantage of, and not the employers who are exploiting them. If both parties are breaking the law shouldn't we at the very least ensure that the business owner who is exploiting any number of workers is held to the same standard as an undocumented person whose only crime was not having the proper paperwork?


I don't blame them. If I were them, I would do the same thing. However, as someone with the ability to vote and influence (to a very small degree) public policy, I would prefer we move toward a system in which strong labor rights exist in this country, and this is simply impossible in an environment in which employers are free to hire labor off the books for "pennies". To be clear, I think both political parties in the US are terrible, and all of this debate serves the interests of the employers that benefit from this situation.


Because it would hurt our little elitist exceptionalist hearts if we gave an H1B to a construction worker. There are low wage industries that could use such a program, but our little hearts can't take it because "it's not the best and brightest".


This is a technology demo, not a model you'd want to use. Because BitNet models average only 1.58 bits per weight, you'd expect to need a much larger parameter count than your fp8/fp16 counterparts to match their quality. Plus, this is only a 2-billion-parameter model in the first place; even fp16 2B-parameter models generally perform pretty poorly.
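For a rough sense of scale, here's a back-of-the-envelope sketch of the weight-storage math (Python; figures taken from the 2B/1.58-bit numbers above, and it ignores activations, KV cache, and packing overhead):

    def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
        # Approximate weight storage in gigabytes.
        return n_params * bits_per_weight / 8 / 1e9

    n_params = 2e9  # the 2B-parameter demo model discussed above
    print(f"fp16:   {weight_memory_gb(n_params, 16):.2f} GB")    # ~4.00 GB
    print(f"BitNet: {weight_memory_gb(n_params, 1.58):.2f} GB")  # ~0.40 GB

So the ternary weights buy roughly a 10x reduction in storage, which is why the usual expectation is that you spend some of that savings back on a larger parameter count to recover quality.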


OK, that's fair. I still think something was up with my build, though; the online demo worked far better than my local build.


I'm also confused by that, but it could just be the model being agreeable. I've seen multiple examples posted online, though, where it's fairly clear that the CoT output is not included in subsequent turns. I don't believe Anthropic is public about it (could be wrong), but I know that the Qwen team specifically recommends against including CoT tokens from previous inferences.
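As an illustration of what "not included in subsequent turns" means in practice, here's a hypothetical sketch of how a client might rebuild the conversation for the next request; the field names are made up rather than taken from any specific vendor's API:

    # Drop prior turns' reasoning/CoT and forward only the visible replies.
    def build_history(turns: list[dict]) -> list[dict]:
        history = []
        for turn in turns:
            # a turn may carry a "reasoning" field; it is intentionally not forwarded
            history.append({"role": turn["role"], "content": turn["content"]})
        return history

Under a scheme like that, the model never sees its earlier chain of thought again, which would line up with the sibling comment's later finding that Claude was only playing along.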


Claude has some awareness of its CoT. As an experiment, it's easy, for example, to ask Claude to "think of a city, but only reply with the word 'ready'," and then to ask "what is the first letter of the city you thought of?"


Oops! I tried a couple experiments after writing this, and I believe I was mistaken, though I don't know how. It appears Claude was simply playing along, and convinced me it could remember the choices it secretly made. I must either have given it a tell, or perhaps it guessed the same answers twice in a row.

