> In the style of a 1970s book sci-fi novel cover: A spacer walks towards the frame. In the background his spaceship crashed on an icy remote planet. The sky behind is dark and full of stars.
Nano Banana Pro via Gemini did really well, although still way too detailed, and then it made a mess of the different decades when I asked it to follow up: https://gemini.google.com/share/1902c11fd755
It's therefore really disappointing that GPT-image 1.5 did this: https://chatgpt.com/share/6941ed28-ed80-8000-b817-b174daa922...
Completely generic, not at all like a book cover; it completely ignored that part of the prompt while it focused on the other elements.
Did it get the other details right? Sure, maybe even better, but the important part it just ignored completely.
And it's doing even worse when I try to get it to correct the mistake. It's just repeating the same thing with more "weathering".
You're just not describing what you want properly. It looks fine to me. Clearly you have something else in mind, so I think you're just not describing it well. My tip would be to use actual illustration language. Do you want a wide-angle shot? What should the depth of field be? Oil painting print? Ink illustration? What kind of printing style? Do you want a photo of the book or a pre-print proof? What kind of color scheme?
A professional artist wouldn't know what you want.
You didn't even specify an art style. 1970s sci-fi novel cover isn't a style. You'll find vastly different art styles from the 70s. If you're disappointed, it's because you're doing a shitty job describing what's in your head. If your prompt isn't at least a paragraph, you're going to just get random generic results.
The killer feature of LLMs is being able to extrapolate what's really wanted from short descriptions.
Look again at Gemini's output: it looks like an actual book cover, like an illustration you could actually find on a book.
It takes corrections on board (albeit hilariously literally).
Look at GPT Image's output: it doesn't look anything like a book cover, and when told it got it wrong, it just doubles down on what it was doing.
It's a prompt I've been using for years. Gemini has been the best of the bunch, but Nano Banana, Midjourney, etc. all did okay to various degrees.
GPT Image bombed notably worse than the others. Not the original picture itself, but the complete failure to recognise my feedback that it hadn't got it right; it just doubled down on the image it had generated.
This is a bit off-topic, this repo isn't even .NET.
I work with a very large (280+ projects) .NET semi-monolithic, semi-services code base with internal NuGet packages.
I've only hit the limits on a Team plan a handful of times, and even then only minutes before the window refreshes.
I'll chime in with some of my workflow and tips when I have a more appropriate place to do so, as it feels disrespectfully off-topic to elaborate further here on too many .NET specifics.
As a general tip for working with large code-bases: if you have a repo split into multiple sub-projects (say ./src/projectA, ./src/projectB, /docs), then don't just run claude at the root directory (/).
Run it in ./src/projectA and then use /add-dir to bring in only the dependencies you care about for the problem you're working on.
Or even run it in /docs and then bring in just the places it needs.
It will ask to read from / fairly often, but you can just deny it, either explicitly through claude.settings.local or ad hoc when it prompts for that action.
By carefully controlling the scope, you limit what it tries to read. If you catch it trying to read from /sub-project-B and you think it's irrelevant, you can not only deny it but ask why it wanted to read from there, and then update your documentation (or your priors) accordingly.
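For what it's worth, a minimal sketch of what I mean (directory names made up for illustration; /add-dir takes a path to whatever dependency the task actually needs):

    cd src/projectA      # work from the sub-project, not the repo root
    claude
    > /add-dir ../shared-lib    # bring in only the one dependency this task touches

Everything else stays out of scope unless it explicitly asks, which is exactly the point.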
I've found the worst burn of credits / usage comes when I hit a problem that's just not solvable, but more on that another time.
Thank you for the tips. I only mentioned the .NET portion because it seems to struggle with wanting to find definitions for calls into code provided by NuGet packages. We are currently running a large Blazor project that has a Service class for each route in the project. When I specify a particular service as an example, it seems to get really hung up on searching for unnecessary details rather than using the complete example it has. Specifying in the prompt to only use the given example doesn't seem to matter; it keeps attempting to fire off bash commands.
Anyways, I do appreciate the tips. I am going to try not using Sonnet 4.5 for planning and see if Opus does a better job of limiting scope.
Neat, I'm in a similar place: I believe the tech is currently at a point where it's actually useful, while also understanding why the skeptics find it infuriating instead.
(P.S. Tell claude that when quickly pressing keys with the mouse there is audible clipping. This doesn't seem to happen when using the keyboard.)
I definitely don't think I'd be willing to hand it my day job's codebase and walk away. But I feel a lot more comfortable throwing it very specific tasks and questions, then manually vetting the results. Over time I may be a little more willing to give it bigger chunks or give a more cursory code review on what it generates.
If I come back to it looking to add polish (and fix mobile), that'll be a prompt I'll throw at it as well.
> It seems there's a date conflict. The prompt claims it's 2025, but my internal clock says otherwise.
> I'm now zeroing in on the temporal aspect. Examining the search snippets reveals dates like "2025-10-27," suggesting a future context relative to 2024. My initial suspicion was that the system time was simply misaligned, but the consistent appearance of future dates strengthens the argument that the prompt's implied "present" is indeed 2025. I am now treating the provided timestamps as accurate for a simulated 2025. It is probable, however, that the user meant 2024.
Um, huh? It's found search results for October 2025, but this has led it to believe it's in a simulated future, not a real one?
Sorry, I also am one of those old-timers who doesn't understand this, because the shown code is all I've ever used for creating tables. So what is this "standard DOM API", if I may ask? Could you post a code example?
You can use document.createElement and appendChild to create every last <th>, <tr>, and <td> if you want. Those functions are not specific to tables, unlike the one mentioned in the blog post. They can be used to create any elements and place them in the DOM. But if you know what you are doing, you can get a perfectly fine table out of that. (Not that you should.)
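A quick sketch of what that looks like, nothing table-specific in it (the column names and values are just made up for illustration):

    // Build a small table using only the generic DOM APIs.
    const table = document.createElement('table');

    const headerRow = document.createElement('tr');
    for (const label of ['Name', 'Age']) {
      const th = document.createElement('th');
      th.textContent = label;
      headerRow.appendChild(th);
    }
    table.appendChild(headerRow);

    const dataRow = document.createElement('tr');
    for (const value of ['Ada', '36']) {
      const td = document.createElement('td');
      td.textContent = value;
      dataRow.appendChild(td);
    }
    table.appendChild(dataRow);

    document.body.appendChild(table);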
Yeah, that was what I was thinking of. I knew those as the essential APIs to modify the DOM without a full re-parse. And you can use them on table/th/tr/td nodes just like on any other node.
For the longest time, no one in Linux land cared about API stability or backward compatibility. Then app/game developers realised that if a portion of Win32 was available on Linux via WINE, they could just target the Win32 API, or at least a portion of it, and as long as WINE was installed their app/game would always work. I find it a bit ironic: desktop Linux is being enabled by re-implementing APIs from another OS.
I've been experimenting with a little vibe coding.
I've generally found the quality of the .NET code it writes to be quite good. It trips up sometimes when linters ping it for rules not normally enforced, but it does the job reasonably well.
The front-end JavaScript though? It's both an absolute genius and a complete menace at the same time. It'll write reams of code to get things just right, but with no regard for human maintainability.
I lost an entire session to the fact that it cheerfully did:
npm install fabric
npm install -D @types/fabric
Now that might look fine, but a human would have realised that the typings package covers a completely different, outdated API; it was last updated 6 years ago.
Claude, however, didn't realise this and wrote a ton of code that would pass unit tests but fail the type check. It'd run the type checker, rewrite it all to pass the type checker, only for it now to fail the unit tests.
Eventually it semi-gave up on typing and scattered (fabric as any) all over the place, so now it just got runtime exceptions instead.
I intervened when I realised what it was doing, and found the root cause of its problems.
It was a complete blind spot because it just trusted both the library and the type checker.
So yeah, if you want to snipe a vibe coder, suggest installing fabricjs with typings!
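For anyone hitting the same trap: my understanding is that fabric v6 is written in TypeScript and ships its own type definitions, while @types/fabric tracks the much older namespace-style API, so the fix is simply to drop the DefinitelyTyped package (treat the exact import names below as an assumption about the v6 API):

    npm uninstall @types/fabric
    # fabric >= 6 bundles its own types, so you import straight from the package:
    #   import { Canvas, Rect } from 'fabric';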
Instead of just committing more often, make the agent write commits following the Conventional Commits spec (feat:, fix:, refactor:) and reference a specific item from your plan.md in the commit body. That way you'll get a self-documenting history, not just of the code but of the agent's thought process, which is priceless for debugging and refactoring later on.
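Something like this, with the scope and plan item obviously invented for the example:

    feat(canvas): add undo/redo history

    Implements plan.md item 4, "Editor history".
    Snapshots are kept in memory; persistence is deferred to item 5.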