I remember there was a guy who regularly posted tech predictions and then, every year, adjusted and reflected on them. Can anyone help me find him?
Open-sourcing a system where you keep notes in Markdown to build a knowledge base and review them on a schedule, with Anki-like flashcards attached to each note.
All notes are simple Markdown files stored locally.
I've been using it for several years to benefit my research and make the knowledge stick better in my head. My base now holds more than 400 Markdown notes, and I sync them to a private GitHub repository.
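For a sense of what "review on a schedule" can look like, here is a toy SM-2-style interval update in TypeScript. This is purely my own illustration; the project's actual scheduler isn't described here, and the types and field names are hypothetical.

```typescript
// Toy SM-2-style scheduler (illustrative only; the project's actual
// review schedule is not specified in the thread).
interface ReviewState {
  interval: number; // days until next review
  ease: number;     // ease factor, conventionally starting at 2.5
}

// grade: 0 (forgot) .. 5 (perfect recall)
function nextReview(state: ReviewState, grade: number): ReviewState {
  if (grade < 3) {
    // Failed recall: reset the interval, penalize the ease factor.
    return { interval: 1, ease: Math.max(1.3, state.ease - 0.2) };
  }
  // Successful recall: nudge the ease factor and grow the interval.
  const ease = Math.max(1.3, state.ease + 0.1 - (5 - grade) * 0.08);
  return { interval: Math.round(state.interval * ease), ease };
}

console.log(nextReview({ interval: 6, ease: 2.5 }, 4)); // { interval: 15, ease: 2.52 }
```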
I've been working on a knowledge base + spaced repetition project, and I know how convenient Markdown files are.
1. You can view them anywhere (GitHub renders them nicely)
2. You can edit them in your favorite editor
3. Formatting doesn't hurt readability
4. Extensible (syntax highlighting, Mermaid, MathJax, etc.)
5. Cross-linking, which is core to any knowledge system, comes for free
6. You can use Git for versioning, backup, etc.
This looks really interesting! I am studying "knowledge-heavy" subjects with lots of facts to learn, and have been looking for software that lets me write flashcards directly within my notes and review them both while reading a note and globally across notes. I like to keep my notes local, so I hadn't found any good solutions. There are, however, some Anki parsers that can process Markdown documents and extract items from them.
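As a rough sketch of what such a parser does, here is a minimal TypeScript example. The `Q:`/`A:` line convention and the file path are hypothetical; real Anki importers each define their own Markdown conventions.

```typescript
// Minimal sketch: extract flashcards from a Markdown note.
// Assumes a hypothetical convention where a card is a "Q:" line
// immediately followed by an "A:" line.
import { readFileSync } from "fs";

interface Flashcard {
  question: string;
  answer: string;
}

function extractCards(markdown: string): Flashcard[] {
  const cards: Flashcard[] = [];
  const lines = markdown.split("\n");
  for (let i = 0; i < lines.length - 1; i++) {
    const q = lines[i].match(/^Q:\s*(.+)/);
    const a = lines[i + 1].match(/^A:\s*(.+)/);
    if (q && a) {
      cards.push({ question: q[1].trim(), answer: a[1].trim() });
    }
  }
  return cards;
}

const note = readFileSync("notes/spaced-repetition.md", "utf8"); // hypothetical path
console.log(extractCards(note));
```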
It's funny: when you start thinking about how to succeed with LLMs, you end up thinking about modular code, good test coverage, thought-through interfaces, code styles... basically all the standards of a good code base that we already had in the industry.
But they are a company that burns billions in losses every year, and this seems like a pretty random acquisition.
Bun is a product that depends on providing a good, stable, cross-platform JS runtime, and they were already doing a good job. Why would Anthropic's acquisition make them better at what they were already doing?
It's Anthropic, not Microsoft. They already had a runway of 4 years, and honestly, that is preferable to hitching their wagon to a volatile startup like Anthropic.
> As discussed previously, OpenAI lost $5 billion and Anthropic $5.3 billion in 2024, with OpenAI expecting to lose upwards of $8 billion and Anthropic — somehow — only losing $3 billion in 2025. I have severe doubts that these numbers are realistic, with OpenAI burning at least $3 billion in cash on salaries this year alone, and Anthropic somehow burning two billion dollars less on revenue that has, if you believe its leaks, increased 500% since the beginning of the year.
You may have posted the wrong link, because what you posted was not a source but rather an amateur blogger's opinion about what Anthropic's and OpenAI's revenue and losses are. Do you have the correct link to actual evidence that Anthropic has losses in the billions?
> Privately held companies often disclose revenue figures if they are growing quickly, but keep the rest of their finances a secret because they often tell a far less impressive story. The approach is especially true for AI developers that don’t want to disclose the extraordinary rate at which they are burning cash. The Journal is reporting Anthropic’s base case projections, not its more optimistic forecasts.
> The Information earlier reported on some of the financial figures for both companies.
> The documents show that OpenAI expects to burn $9 billion after generating $13 billion in sales this year, while Anthropic expects to burn almost $3 billion on $4.2 billion in sales—roughly 70% of revenue for both.
Thanks for the link; however, it is not saying what you think it is saying. It is talking about expenses, not losses. Saying that Anthropic has expenses in the billions is as meaningless as saying that Google has expenses in the hundreds of billions. This exemplifies why I hate it when people use amateur blogs to try to show that AI companies are failing; they rely on amateurish interpretations that are usually wrong, and a lot of people latch on to them because it confirms their own ideals.
Please read the article. When it says a company will burn $X on $Y in revenue, the burn is not expenses but the net loss. Here is another article that says the same thing:
Do you really think Anthropic's annual expenses are in the single-digit billions? Or that OpenAI's annual expenses are less than $9 billion?
> people latch on to them because it confirms their own ideals
I think this applies universally, even to yourself, no? You're so dead set on believing Anthropic is not losing billions that you're debating semantics and borderline insulting my reading skills.
I'm wondering if Bun would be a good embedded runtime for Claude to think in. If it does sandboxing, or if they can add sandboxing, then they can standardize on a language and runtime for Claude Code and Claude Desktop and bake it into training, the way they do with other agentic features such as tool calls. It'd be too risky to do unless they owned the runtime.
This is pretty much obvious to people who migrated from JavaScript to TypeScript and suddenly realised that most of their unit tests could now go in the trash bin.
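A toy illustration of the point (my own example, not from the thread): tests that only guard against bad argument types become redundant once the compiler enforces the signature.

```typescript
// In plain JavaScript you might unit-test that callers pass sane inputs:
//   assert.throws(() => formatPrice("abc"));
// In TypeScript the signature rules that whole class of bugs out at compile time.
function formatPrice(cents: number, currency: "USD" | "EUR"): string {
  const amount = (cents / 100).toFixed(2);
  return currency === "USD" ? `$${amount}` : `€${amount}`;
}

formatPrice(1999, "USD");      // OK
// formatPrice("abc", "USD");  // compile error: string is not assignable to number
// formatPrice(1999, "GBP");   // compile error: "GBP" is not a valid currency
```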
I’m speaking from a Python background. I love types and use them religiously, but I had no idea how much better (modulo runtime checks; that one’s obvious) others were at it.
It's been 8 years and I can't imagine ever writing untyped JavaScript again. Garbage garbage garbage.
If you're on TS and want end-to-end type safety, I recommend you validate everything with something like Superschema and Zapatos/PgTyped. Node with 100% type safety is wonderful to work with.
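To make the pattern concrete, here is a small sketch using Zod, a schema validator comparable to the tools named above (the schema and endpoint are hypothetical): untyped data gets validated once at the boundary, and everything past that point carries a static type.

```typescript
import { z } from "zod";

// Validate untyped data at the boundary; everything past this point is typed.
const UserSchema = z.object({
  id: z.number().int(),
  email: z.string().email(),
  name: z.string().min(1),
});

type User = z.infer<typeof UserSchema>; // static type derived from the schema

async function fetchUser(id: number): Promise<User> {
  const res = await fetch(`https://api.example.com/users/${id}`); // hypothetical endpoint
  // .parse throws if the payload doesn't match, so bad data never
  // leaks into the typed parts of the codebase.
  return UserSchema.parse(await res.json());
}
```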
Robert Martin teaches us that a codebase is behaviour plus structure. Behaviour is what we want the software to do; structure can be even more important, because it determines how easily, if at all, the behaviour can evolve.
I'm not entirely sure why I had an urge to write this.
Any example of that? One would think that predicting what comes next from an image is basically video generation, which doesn't work perfectly but does work to some degree (Veo/Sora/Grok).
You'll see it struggles (https://streamable.com/5doxh2), which is often the case with video gen. You have to carefully describe and orchestrate natural-feeling motion and interactions.
You're welcome to try with any other models but I suspect very similar results.
Physics textbooks are in the training data, though, so it should know how they'd work, or at least know that balls don't spontaneously appear and disappear and that gears don't work when they aren't connected.
It is video generation, but succeeding at this task involves detailed reasoning about cause and effect to construct chains of events, and may not be something that can be readily completed by applying "intuitions" gained from "watching" lots of typical movies, where most of the events are stereotypical.