BoorishBears's comments | Hacker News

You're replying to someone (rightfully) pointing out that you can lay off poor performers without proclaiming it with one of the farthest-reaching voices in the industry.

Anyone who's worked in a large org knows there's absolutely zero chance that those layoffs don't touch a single bystander or special case.


Any kind of stack ranking privileges people who are good at self-presentation and high in pathological narcissism.

OpenAI does care about copyright; thankfully, China does not: https://imgur.com/a/RKxYIyi

(To clarify, OpenAI stops refining the image if a classifier detects your image as potentially violating certain copyrights, although the gulf in resolution is not caused by that.)


Seedream 4.5 is almost as good as Seedream 4!

(Realistically, Seedream 4 is the best at aesthetically pleasing generation, Nano Banana Pro is the best at realism and editing, and Seedream 4.5 is a very strong middle ground between the two with great pricing.)

gpt-image-1.5 feels like OpenAI doing the bare minimum to keep people from switching to Gemini every time they want an image.


I haven't seen that; meanwhile, gpt-image-1.5 still has zero-tolerance copyright policing (even via the API), so it's pretty much useless in production once exposed to consumers.

I'm honestly surprised they're still on this post-Sora 2: let the consumer of the API determine their risk appetite. If a copyright holder comes knocking, "the API did it" isn't going to be a defense either way.
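(To sketch what "determine their risk appetite" could look like, assuming the official openai Node SDK: moderation "low" is a real knob on gpt-image-1, but whether it exists for gpt-image-1.5, or would ever relax the copyright classifier, is an assumption for illustration.)

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function generate(prompt: string) {
      try {
        return await client.images.generate({
          model: "gpt-image-1",
          prompt,
          moderation: "low", // the API consumer picks their own risk level
        });
      } catch (err) {
        if (err instanceof OpenAI.BadRequestError) {
          // Roughly where the zero-tolerance block surfaces today.
          console.error("Blocked by moderation:", err.message);
          return null;
        }
        throw err;
      }
    }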


There's still something off in the grading, and I suspect they worked around it.

(although I get what you mean, not easily since you already trained)

I'm guessing when they get a clean slate we'll have Image 2 instead of 1.5. On LMArena it was immediately apparent it was an OpenAI model based on the visuals.


A few years ago we didn't have an imprecise nondeterministic programming language that would allow your mom to achieve SOTA results on a wide range of NLP tasks by asking nicely, or I'm sure people would have taken it.

I think a lot of prompt engineering is voodoo, but it's not all baseless: a more formal way to look at it is aligning your task with the pre-training and post-training of the model.

The whole "it's a bad language" refrain feels half-baked when most of us use relatively high level languages on non-realtime OSes that obfuscate so much that they might as well be well worded prompts compared to how deterministic the underlying primitives they were built on are... at least until you zoom in too far.


I don't buy your last paragraph at all, I'm afraid. Coding languages, even high-level ones, are built upon foundations of determinism, and they are concise and precise: a short way to describe, very precisely, a bunch of rules and state.

Prompting is none of those things. It is a ball of math we can throw words into, and it approximates meaning and returns an output with randomness built in. That is incredible, truly, but it is not a programming language.


Eh, how modern technology works is not really the part I'm selling: that's just how it works.

Coding languages haven't been describing even a fraction of the rules and state they encapsulate since what? Punch cards?

It wasn't long until we started to rely on an exponential number of layered abstractions to do anything useful with computers, and very quickly we traded precision and determinism for benefits like being concise and easier to reason about.

-

But also, the context here was someone calling prompting an "imprecise nondeterministic programming language": obviously their bone of contention is the "imprecise nondeterministic" part, not distilling what defines a programming language.

I get it doesn't feel warm and fuzzy to the average engineer, but realistically we were hand-engineering solutions with "precise deterministic programming languages", they were similarly probabilistic, and they performed worse.


Name a single programming language that is probabilistic in any way?

- A text prompt isn't probabilistic, the output is.

- https://labs.oracle.com/pls/apex/f?p=LABS:0:5033606075766:AP...

- https://en.wikipedia.org/wiki/Stan_(software)

- https://en.wikipedia.org/wiki/Probabilistic_programming

I explained in the clearest language possible why a fixation on the "programming language" part of the original comment is a borderline non sequitur. But if you're insistent on railroading the conversation regardless... at least try to be good at it, no?
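(For the unfamiliar, a toy illustration of the idea behind those links; this is a hand-rolled rejection sampler, nothing like Stan's actual API.)

    // Infer a coin's bias after seeing 7 heads in 10 flips.
    // Randomness is the semantics here, not a bug.
    function posteriorSamples(heads: number, flips: number, n: number): number[] {
      const kept: number[] = [];
      while (kept.length < n) {
        const p = Math.random(); // uniform prior over the bias
        let simulated = 0;
        for (let i = 0; i < flips; i++) if (Math.random() < p) simulated++;
        if (simulated === heads) kept.push(p); // keep priors that reproduce the data
      }
      return kept;
    }

    const samples = posteriorSamples(7, 10, 5000);
    const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
    console.log(mean.toFixed(2)); // ~0.67, the Beta(8, 4) posterior mean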


I skimmed your comment since you were making the strange comparison that modern coding is basically probabilistic to the degree that prompting is; I see now you weren't the one to say it's "probabilistic programming". But you are still trying to say that normal programming is basically probabilistic in some relevant way, which I think is quite ridiculous. I don't see how anything about normal engineering is probabilistic other than the mistakes people make.

"I didn't do the absolute bare minimum and read the comment I replied to, so here's 100 words excusing that."

Do you mean, like, scripting languages? Are the underlying primitives C and machine language? "Might as well be well worded prompts" is the overstatement of the century; any given scripting language is far closer to those underlying layers than it is to using natural language with LLMs.

Sure doesn't seem like it. https://x.com/jarredsumner/status/1999317065237512224

And forget scripting languages: take a C program that writes a string to disk and reads it back.

How many times longer does it get the moment we have to ensure the string was actually committed to non-volatile NAND and actually read back? 5x? 10x?

Is it even doable if we have to support arbitrary consumer hardware?
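(A minimal sketch of that durability dance, in Node standing in for the C version for brevity; even this skips write retries, O_DIRECT, and drives that lie about flushing their caches.)

    import { openSync, writeSync, fsyncSync, closeSync, readFileSync } from "node:fs";
    import { dirname } from "node:path";

    function durableWrite(path: string, data: string): void {
      const fd = openSync(path, "w");
      try {
        writeSync(fd, data);
        fsyncSync(fd); // flush the file contents to the device
      } finally {
        closeSync(fd);
      }
      // The new directory entry needs its own fsync (POSIX-y systems only).
      const dirFd = openSync(dirname(path), "r");
      try {
        fsyncSync(dirFd);
      } finally {
        closeSync(dirFd);
      }
    }

    durableWrite("/tmp/demo.txt", "hello");
    console.log(readFileSync("/tmp/demo.txt", "utf8")); // vs. one line to "just write it"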


You're stretching really hard here to try and rationalize your position.

First of all, I pick the hardware I support and the operating systems. I can make those things requirements when they are required.

But when you boil your argument down, it's that because one thing may introduce non-determinism, any degree of non-determinism is acceptable.

At that point we don't even need LLMs. We can just have the computer do random things.

It's just a rehash of infinite monkeys with infinite typewriters, which is ridiculous.


No, the point was quite clear:

> A few years ago we didn't have an imprecise nondeterministic programming language that would allow your mom to achieve SOTA results on a wide range of NLP tasks by asking nicely, or I'm sure people would have taken it.

But that (accurate) point makes your point invalid, so you'd rather focus on the dressing.


We still don't have that programming language (although "SOTA" and "wide range of NLP tasks" are vague enough that you can probably move the goalposts into field goal range).

This comment is written way too adversarially for someone who doesn't know what NLP is.

> nondeterministic programming language that would allow your mom to achieve SOTA results

I actually think it's great for giving non-programmers the ability to write programs that solve basic problems. That's really cool and it's pretty darn good at it.

I would dispute that you get SOTA results.

That has never been my personal experience. Given that we don't see a large increase in innovative companies spinning up now that this technology is a few years old, I doubt it's the experience of most users.

> The whole "it's a bad language" refrain feels half-baked when most of us use relatively high level languages on non-realtime OSes that obfuscate so much that they might as well be well worded prompts compared to how deterministic the underlying primitives they were built on are... at least until you zoom in too far.

Obfuscation and abstraction are not the same thing. The other core difference is the precision and the determinism, both of which are lacking with LLMs.


Pretty simple sentence: a hoodie is a top (usually a sweatshirt) with a hood, the hood being a round, cap-like piece of fabric that covers your head.

They went to sleep and that very same piece of fabric got jostled underneath their back and got stuck! The fabric, now constrained by a good portion of their body weight, either applied a great amount of pressure to a very small area of their body or caused them to get stuck in an unnatural sleeping position.

Either could conceivably lead to considerable localized pain.

(And I assume they don't know for sure, since they were asleep as this occurred.)


Most people don't realize their applications are running like dogwater on Node because serverless is letting them smooth it over by paying 4x what they would be paying if they moved 10 or so lines of code and a few regexes to a web worker.

(And I say that as someone who caught themselves doing the same: serverless is really good at hiding this.)
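(A sketch of the fix I mean, using Node's worker_threads as the "web worker"; the file names and the regex are made up for illustration.)

    // main.ts: push the catastrophic-backtracking-prone work off the main thread.
    import { Worker } from "node:worker_threads";

    function scanOffThread(input: string): Promise<boolean> {
      return new Promise((resolve, reject) => {
        const worker = new Worker(new URL("./scan-worker.js", import.meta.url), {
          workerData: input,
        });
        worker.once("message", resolve); // the event loop stays free meanwhile
        worker.once("error", reject);
      });
    }

    // scan-worker.ts: the 10 or so lines that were stalling every request.
    import { parentPort, workerData } from "node:worker_threads";

    const suspicious = /^(a+)+$/.test(workerData as string);
    parentPort!.postMessage(suspicious);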


Well, this is the platform that got kicked off Discord for refusing to delete user accounts in a timely manner and then training an AI model on user inputs...

Then tried to weaponize their userbase to mass email Discord over being kicked off.


And then struggled to manage Reddit... then scaled to as many platforms as possible (text, WhatsApp, etc.), then shut it all down, then built out an entire API just to shut that down too (apparently; can't confirm). They can't even get dark mode to work by default on their own website. Suspended from Twitter/X. Their founder, Anush, made comments about "replacing engineers with AI" on their absolute dogshit "talk" platform (source: https://x.com/anushkmittal/status/1979372588850884724), although that could be a joke.

And speaking of their chat platform... it's literally horrible. Slow to load, terrible UX, disjointed UI, no accessibility options, AI chatbots everywhere, and you can't even tell they're AI without clicking their profiles. It's like if Slack was made by a 12-year-old.

Seriously, to put it in perspective: I'm on an M3 MacBook Pro (so not that old) and a gigabit fiber network. I click one chat and it can take up to 5 seconds to load the channel. JUST TO LOAD THE CHANNEL. It legit fires off like 30 fetch requests each time you load a channel. It's insane. I can't even blame Next.js for that; it's straight up them probably "vibe coding" everything.


As the other comments pointed out, that's not covering billing...

But also the (theoretical) production platform for Gemini is Vertex AI, not AI Studio.

And until pretty recently, using it meant figuring out service accounts, and none of Google's docs would demonstrate production usage.

Instead they'd use the gcloud CLI to authenticate, and you'd have to figure out how each SDK consumed a credentials file.

-

Now there's "express mode" for Vertex which uses an API Key, so things are better, but the complaints were well earned.

At one point there were even features (like using a model you fine-tuned) that didn't work without gcloud, depending on whether you used Vertex or AI Studio: https://discuss.ai.google.dev/t/how-can-i-use-fine-tuned-mod...


AI Studio is meant to be the fast path from prompt to production, and bringing billing fully into AI Studio in January will make this even faster! We have hundreds of thousands of paying customers in production using AI Studio right now.

I could've made my comment clearer. It's definitely missing a statement along the lines of "and then after creating, you click 'set up billing' and link the accounts in 15 seconds".

I did edit my message to mention I had GCP billing set up already. I'm guessing that's one of the differences between those having trouble and those not.

