fauigerzigerk's comments

It's not marked in the Chrome extension UI.

But if you have to model those custom pricing structures anyway, the question is what you gain by not reflecting them in the database schema.

There's no reason to put all those extra fields in the same table that contains the universal pricing information.


A lot of unnecessary complexity/overhead for a minor, seldom-touched part of a much larger, already complex system?

I'll give a comparison.

JSON

- We have some frontend logic/views (which can be feature-flagged per customer) to manage updating the data, which otherwise mostly tags along as a dumb "blob" (auto-expanded to a regular part of the JSON object's maps/arrays at the API boundary, making frontend work easier: objects on the frontend, "blobs" on the backend/db).

- Inspecting specific cases (most of the time it's just null data) is just a matter of copying out and formatting the special data.

- If push comes to shove, all modern databases support JSON queries, so you can pick out specifics IF needed (this has happened once or twice with larger customers over the years).

- We read and apply the rules when calculating prices with a "plugin system" (roughly sketched below).
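
To make that last point concrete, here is a minimal sketch of the general idea, with made-up rule names and data shapes (not taken from any real schema): customer-specific rules ride along in a JSON blob on the price row, and the price calculation only invokes whatever handlers are registered for the rule types that actually appear in it.

    import json

    # Registry of handlers for customer-specific rule types.
    RULE_HANDLERS = {}

    def rule(rule_type):
        """Register a handler for one kind of customer-specific rule."""
        def register(fn):
            RULE_HANDLERS[rule_type] = fn
            return fn
        return register

    @rule("volume_discount")
    def volume_discount(price, params, quantity):
        if quantity >= params["min_quantity"]:
            return price * (1 - params["percent"] / 100)
        return price

    def calculate_price(base_price, custom_rules_json, quantity):
        """Universal pricing first, then the (usually absent) customer specials."""
        price = base_price
        for r in json.loads(custom_rules_json or "[]"):
            handler = RULE_HANDLERS.get(r["type"])
            if handler:
                price = handler(price, r["params"], quantity)
        return price

    # 90-95% of rows: no custom rules, nothing happens.
    print(calculate_price(100.0, None, 10))  # 100.0
    print(calculate_price(
        100.0,
        '[{"type": "volume_discount", "params": {"min_quantity": 10, "percent": 5}}]',
        10))  # 95.0

And for the rare "pick out specifics" case, something like Postgres's jsonb containment operator (WHERE custom_rules @> '[{"type": "volume_discount"}]') covers it, assuming the column is stored as jsonb.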

DB Schema (extra tables)

- Now you have to wade through lots of customer-specific tables just to find the tables that take most of the work-time (customer specifics are seldom what needs work once set up). We already have some older customer-specific stuff from the early days (I'm happy that it hasn't happened much lately).

- Those _very_ few times you actually need to inspect the specific data by query, you might win on this (but as mentioned above, JSON queries have always solved it).

- Loading the universal info now needs to query X extra tables (even when 90%-95% of the data has no special cases).

- Adding new operations on prices, like copying, etc., now needs logic for each customer-specific table to make the data tag along properly.

- "properly" modelled this reaches the API layer as well

- Frontend specialization is still needed

- Calculating prices still needs its customization.

I don't really see how my life would have been better having to manage all the extra side-effects of bending the code to suit these weird customer requests (some of whom aren't customers anymore), when 90-95% of the time it isn't used and is seldom touched with mature customers.

I do believe in the rule of three: if the same thing pops up three times, I consider whether it needs to be graduated to more "systematic" code. So often, when you abstract after seeing something only twice, it never appears again, leaving you with an abstraction to maintain.

JSON columns, like entity-attribute-value tables or goto statements, all have real downsides and shouldn't be plonked in without a reason, but hell if I'd want to work with overly complex schemas/models because people started putting special cases into core pieces of code just because they heard that a technique was bad.


I agree, but the question is how better grounding can be achieved without a major research breakthrough.

I believe the real issue is that LLMs are still so bad at reasoning. In my experience, the worst hallucinations occur where only a handful of sources exist for some set of facts (e.g. laws of small countries or descriptions of niche products).

LLMs know these sources and refer to them, but they interpret them incorrectly. They are incapable of focusing on the semantics of one specific page because they get "distracted" by their pattern-matching nature.

Now people will say that this is unavoidable given the way in which transformers work. And this is true.

But shouldn't it be possible to include some measure of data sparsity in the training so that models know when they don't know enough? That would enable them to boost the weight of the context (including sources they find through inference-time search/RAG) relative to their pretraining.


Anything that is very specific has the same problem, because LLMs can't have an equally good representation of every topic in the training data. It doesn't have to be particularly niche, just specific enough for the model to start fabricating.

The other day I had a question about how pointers work in Swift and tried discussing it with ChatGPT (I don't remember exactly what, but it was purely intellectual curiosity). It gave me a lot of explanations that seemed correct, but being skeptical, I started pushing it for ways to confirm what it was saying and eventually realized it was all bullshit.

This kind of thing makes me basically wary of using LLMs for anything that isn't brainstorming, because anything that requires knowing information that isn't easily/plentifully found online will likely be incorrect or have sprinkles of incorrectness all over the explanations.


What would happen if you gave the same task to 200 human contractors?

I suspect SLOC growth wouldn't be quite as dramatic but things like converting everything to Rust's error handling approach could easily happen.


I would say that's what Nvidia is doing.

I'm not sure how Apple is enabling anything interesting around AI right now.

That's what this bland article is not even touching on. Yes, having missed the boat is great if the boat ends up sinking. That doesn't make missing boats a great strategy.

Building huge models and huge data centers is not the only thing they could have done.

They had some interesting early ideas on letting AI tap app functionality client-side. But that has gone nowhere, and now everything of relevance is happening on servers.

Apple's devices are not even remotely the best dumb terminals to tap into that. Even that crown goes to Android.



Cool! I can't find it in the README, but can it run Qwen locally?

The best way to do that at the moment is using the llm-ollama plugin.
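
For instance, a minimal sketch via llm's Python API, assuming Ollama is already running locally, a Qwen build has been pulled (e.g. with `ollama pull qwen2.5`), and the plugin is installed via `llm install llm-ollama`; the model id below has to match whatever `ollama list` reports on your machine:

    import llm

    # The llm-ollama plugin exposes local Ollama models under their Ollama
    # names, so this id must match what `ollama list` shows.
    model = llm.get_model("qwen2.5:latest")

    response = model.prompt("Explain what a context window is in one sentence.")
    print(response.text())

The CLI equivalent is just `llm -m qwen2.5:latest '...'` once the plugin is installed.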

Yes, that's a great one. And the domain is actually killedbygoogle.com

Almost all were amazing. I loved: Show HN: A text editor that doesn't use AI (github.com)

On a serious note: I for one welcome our AI overlords.


I think you have to make a distinction between individual experience and claims about general truths.

If I know someone as an honest and serious professional, and they tell me that some tool has made them 5x or 10x more productive, then I'm willing to believe that the tool really did make a big difference for them and their specific work. I would be far more sceptical if they told me that a tool has made them 10% more productive.

I might have some questions about how much technical debt was accumulated in the process and how much learning did not happen that might be needed down the road. How much of that productivity gain was borrowed from the future?

But I wouldn't dismiss the immediate claims out of hand. I think this experience is relevant as a starting point for the science that's needed to make more general claims.

Also, let's not forget that almost none of the choices we make as software engineers are based on solid empirical science. I have looked at quite a few studies about productivity and defect rates in software engineering projects. The methodology is almost always dodgy and the conclusions seem anything but robust to me.


> Why require that companies use a specific programming language instead of requiring that the end product is good?

I can think of two reasons. First, achieving the same level of correctness could be cheaper using a better language. And second, you have to assume that your testing is not 100% correct and complete either. I think starting from a better baseline can only be helpful.

That said, I have never used formal verification tools for C or C++. Maybe they make up for the deficiencies of the language.


How do you define a better programming language, how do you judge whether one programming language is better than another, and how do you prevent corruption and cartels from taking over?

If Ada was "better" than C++, why did it not perform much better than C++, both with regard to safety and correctness (Ariane 5) and commercially, within its niche and more generally? Lots of companies out there could have gotten a great competitive edge with a "better" programming language. Why did the free market not pick Ada?

You could then argue that C++ had free compilers, but that should have been counterbalanced somewhat by the Ada mandate. Why did businesses not pick up Ada?

Rust is much more popular than Ada, at least outside Ada's niche. Some of that is organic, for instance arguably due to Rust's nice pattern matching and modules and crates. And some of that is inorganic, like how Rust evangelists promote Rust through force, threats[0], harassment[1] and organized and paid media spam.

I also tried Ada some time ago, trying to write a tiny example, and it seemed worse than C++ in some regards. Though I only spent a few hours or so on it.

[0]: https://github.com/microsoft/typescript-go/discussions/411#d...

[1]: https://lkml.org/lkml/2025/2/6/1292

> Technical patches and discussions matter. Social media brigading - no thank you.

> Linus

https://archive.md/uLiWX

https://archive.md/rESxe


>How do you define a better programming language

A language that makes it easier and more productive to avoid certain important classes of defects.

>how do you judge whether one programming language is better than another

Analytically, i.e. by explaining and proving how these classes of bugs can be avoided.

I don't find empirical studies on this subject particularly useful. There are too many moving parts in software projects. The quality of the team and its working environment probably dominates everything else. And these studies rarely take productivity and cost into consideration.


>"by using a single solution from a huge provider"

The parent didn't say that though and clearly didn't mean it.

Smaller SaaS providers have a problem right now. They can't keep up with the big players in terms of features, integrations and aggressive sales tactics. That's why concentration and centralisation are growing.

If a lot of specialised features can be replaced by general-purpose AI tools, that could weaken the stranglehold that the biggest SaaS players have, especially if those open-weights models can be deployed by a large number of smaller service providers, or even self-hosted or operated locally.

That's the hypothesis I think. I'm not sure it will turn out that way though.

I'm not sure whether the current hyper-competitive situation where we have a lot of good enough open weights models from different sources will continue.

I'm not sure that AI models alone will ever be reliable enough to replace deterministic features.

I'm not sure whether AI doesn't create so many tricky security issues that once again only the biggest players can be trusted to manage them or provide sufficient legal liability protection.

