Hacker News | sevensor's comments

I’m interested by this recent development where people are using LLMs with Lean to prove things that haven’t been formalized yet. This seems like a really good fit for LLMs. They’re good at translating approximate language to formal language, they’re good at information retrieval, and any failures are mitigated by Lean itself. I haven’t tried it, but I imagine failure modes might be proving useless lemmas along the way, taking a needlessly roundabout approach, or inadvertently proving something different than what you set out to do.

No mention of PNGs? I don’t usually go to JPEGs first for screenshots of text. Did PNG have worse compression? Burn more CPU? I’m sure there are good reasons, but it seems like they’ve glossed over the obvious choice here.

edit: Thanks for the answers! The consensus is that PNG encoding/decoding is too expensive compared to JPEG.


PNGs of screenshots would probably compress well, and the quality to size ratio would definitely be better than JPG, but the size would likely still be larger than a heavily compressed JPG. And PNG encoding/decoding is relatively slow compared to JPG.

PNG is VERY slow compared to other formats. Not suitable for this sort of thing.

PNGs are lossless so you can’t really dial up the compression. You can save space by reducing to 8-bit color (or grayscale!) but it’s basically the equivalent of raw pixels plus zlib.
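To put a rough number on the “raw pixels plus zlib” point: screenshot-like data, with long flat runs of a background color, compresses extremely well under DEFLATE. A minimal Python sketch with synthetic data (not a real PNG encoder, and real PNG also applies per-row filtering first):

```python
import zlib

# Simulated screenshot rows: a long run of background pixels with a
# small patch of "text" pixels -- the kind of input DEFLATE loves.
row = bytes([255] * 950 + [0] * 50)
image = row * 1080  # ~1 MB of raw 8-bit grayscale pixels

compressed = zlib.compress(image, level=9)
print(f"{len(image)} -> {len(compressed)} bytes")
```

On photographic content the same pipeline does far worse, which is why JPEG wins there.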

PNG can be lossy, in effect: first discard some image detail so that adjacent almost-matching pixel values actually match, making the data more amenable to PNG’s compression method, then encode losslessly as usual. pngquant.org has a tool that does this.
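A crude illustration of the idea in Python (pngquant actually does smarter palette quantization with dithering; this just truncates low bits so near-matching values become exact matches, which makes the data compress much better):

```python
import random
import zlib

def posterize(pixels: bytes, bits: int = 4) -> bytes:
    """Keep only the top `bits` bits of each 8-bit value, so
    almost-matching pixels become exactly matching ones."""
    mask = (0xFF << (8 - bits)) & 0xFF
    return bytes(p & mask for p in pixels)

# Noisy near-flat data: values jittering around 200, like sensor
# noise on a flat color region.
random.seed(0)
noisy = bytes(200 + random.randint(-3, 3) for _ in range(100_000))

before = len(zlib.compress(noisy, 9))
after = len(zlib.compress(posterize(noisy), 9))
print(f"noisy: {before} bytes, posterized: {after} bytes")
```

The lossy step happens entirely before compression, so the output is still a perfectly valid lossless stream of the degraded pixels.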

There are use cases where you might want lossy PNG over other formats; one is still captures of 2D animated cartoon content, where H.264 tends to blur the sharp edges and flat color areas, and this approach can compensate for that.


PNGs likely perform great; existing enterprise network filters, browser controls, etc., might not, even given how old PNG is now.

Surprisingly not yet available in Project Gutenberg, which is my go-to source for 19th century books.

Good point; it was apparently influential, having undergone some 26 printings.

My high school electronics teacher had a screwdriver that was permanently welded to the terminals of a capacitor he’d pulled from a TV set. “This is why you use a resistor to discharge one of these things,” he said.

TIL. I always used a screwdriver to discharge those capacitors lol.

Last time I measured it, handling KeyError was also significantly faster than checking with “key in collection.” Also, as I was surprised to discover, Python threads are preemptively scheduled, GIL notwithstanding, so it’s possible for the key to be gone from the dictionary by the time you use it, even if it was there when you checked it. Although if you’re creating a situation where this is a problem, you probably have bigger issues.
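To make the race concrete, a small sketch (function and variable names are made up). The EAFP version does a single lookup, so there’s no window between checking and using; the LBYL version can see the key vanish between its two steps:

```python
# EAFP: one dict access instead of a separate membership check
# that can go stale if another thread deletes the key in between.
def pop_task(queue: dict, key):
    try:
        return queue[key]  # single lookup; raises KeyError if absent
    except KeyError:
        return None

# LBYL version with a race window between the check and the use:
def pop_task_racy(queue: dict, key):
    if key in queue:       # another thread may delete the key here...
        return queue[key]  # ...making this raise KeyError anyway
    return None
```

The speed claim also makes sense: `try` costs almost nothing when no exception is raised, while the `in` check is an extra hash lookup on every call.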

I was just looking at an OOM situation this week. I disagree that turning overcommit off helps specifically with locality. Finding the straw that breaks the camel’s back doesn’t necessarily help you fix the problem. If you don’t understand the whole system, it’s going to be hard to debug regardless.

I’ve done this: I had a hot loop and I discovered that I could reduce instruction counts by adding a branch inside the loop. Definitely slower, which I expected, but it’s worth measuring.

What’s interesting to me about this, reckless as it is, is that the conversation has begun to shift toward balancing LLMs with rigorous methods. These people seem to be selling some kind of AI hype product backed by shoddy engineering, and even they are picking up on the vibe. I think this is a really promising sign for the future.

I’ve been posting here for a decade, and lurking for a few years before that, and your observation tracks with my experience. There’s a handful of ideas we’re collectively mulling over at any given time. They slowly rotate in and out, and modulate into different keys as they go. For instance, the leading edge of LLM related discourse seems to be improving now that we’re getting bored with both hype and outrage, although there’s still plenty of hype and outrage. Old topics still occasionally blip into focus, but they don’t have the same staying power. Like, you’ll see a Rails post every now and then, but it’s not the darling it was. I doubt we’ll ever see the likes of “Rails is a Ghetto” make the rounds again, because nobody’s worked up about it any more. Or take Skrillex, who must have been on the front page every week for a month. I was annoyed at the time, but I think we’re due for a retrospective on programmers and dubstep. I’m not the one to write it though; not my scene in the first place.

Anyway, thank you dang for your moderation!


I discovered named ranges only a couple of months ago, and they’re pretty amazing, as is the spilling behavior that fills an entire column from one cell. When dealing with complex formulas, I usually copy-paste them into a text editor, because the formula bar doesn’t have a great editing UI.

How far did you get with let and lambda? It seems like you could build whole programs on them, but I haven’t pushed very hard on them yet. Do you want to say more about the “heavy technical limitations”?


Yes, I don't think Excel product owners are focusing on professional developers, just on power users. So things like the formula bar are lackluster. There is also a formula stepper that's mostly useless for debugging.

I played with them and even tried to build a miniKanren, but in my day-to-day job I just use them tactically, so I avoid their limitations and I'm quite happy with them for that.

From memory:

  - Recursion is severely limited in LAMBDA.
  - There are extra pains when putting formulas in the Name Manager or in conditional formatting (these are more about references than about lambdas in particular, but LAMBDA would be more powerful without them).
  - You can't do arrays of arrays, so your data structures need to be flat.
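For anyone who hasn't met LET and LAMBDA yet, a tiny illustration (cell ranges and names are arbitrary). LET binds names for reuse within one formula; LAMBDA can be invoked inline or bound to a name in the Name Manager, which is also the usual workaround for the recursion limits mentioned above:

```
' Average via LET, naming intermediate results once:
=LET(total, SUM(A1:A10), n, COUNT(A1:A10), total / n)

' LAMBDA defined inline and called immediately:
=LAMBDA(price, pct, price * (1 + pct))(100, 0.2)
```

Bind that second formula to a name like ADDPCT in the Name Manager and it becomes callable like a built-in function.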

Thanks for the report! I was hoping for better news, but I can’t say I’m surprised.
