I used Nemotron 3 Nano on LM Studio yesterday on my 32GB M2 Pro Mac mini. It is fast and passed all of my personal tool-use tests, and did a good job analyzing code. Love it.
Today I ran a few simple cases on Ollama, but not much real testing.
Kind of depends on your mac, but if it's a relatively recent apple silicon model… maybe, probably?
> Nemotron 3 Nano is a 3.2B active (3.6B with embeddings) 31.6B total parameter model.
So I don't know the exact math once you have a MoE, but 3.2B will run on most anything; at 31.6B you're looking at needing a pretty large amount of RAM.
Given Mac bandwidth, you'll generally want to load the whole thing in RAM. You get speed benefits based on smaller-size active experts, since the Mac compute is slow compared to Nvidia hardware. This should be relatively snappy on a Mac, if you can load the entire thing.
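A rough back-of-envelope sketch of why this works out on a 32GB Mac: the whole 31.6B parameters have to fit in RAM, and the quantization level sets the bytes per parameter. The bits-per-parameter figures below are typical ballpark values for common GGUF quantizations, not official numbers for this model.

```python
# Rough RAM estimate for the weights of a 31.6B-parameter MoE model.
# Bytes-per-parameter values are approximate, typical of common quant formats.
TOTAL_PARAMS = 31.6e9

QUANT_BYTES = {
    "fp16": 2.0,      # full half-precision
    "q8_0": 1.0,      # ~8 bits per parameter
    "q4_K_M": 0.57,   # ~4.5 bits per parameter (approximate)
}

for name, bpp in QUANT_BYTES.items():
    gb = TOTAL_PARAMS * bpp / 1e9
    print(f"{name}: ~{gb:.0f} GB of weights")
```

At ~4-bit quantization the weights come to roughly 18 GB, which leaves headroom on a 32GB machine; fp16 (~63 GB) would not fit. Inference speed then tracks the 3.2B active parameters, which is why it feels snappy.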
But none of this (signal/noise ratio, etc) is related to the topic of the article, which claims that even with good signal, blood flow is not useful to determine brain activity.
They are indeed coupled, but the coupling is complicated and may be situationally dependent.
Honestly, it's hard to imagine many aggregate measurements that aren't. For example, suppose you learn that the average worker's pay increased. Is it because a) the economy is booming or b) the economy crashed and lower-paid workers have all been laid off (and are no longer counted).
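A tiny worked version of that example, with made-up wage numbers, shows how the composition effect moves the average with no individual raises:

```python
# Toy example: the "average worker's pay" rises purely because
# low earners drop out of the sample. Wages are in $k and hypothetical.
before = [30, 30, 30, 100, 100]   # everyone employed
after = [100, 100]                # crash: low-paid workers laid off, uncounted

avg_before = sum(before) / len(before)   # 58.0
avg_after = sum(after) / len(after)      # 100.0
print(avg_before, avg_after)  # average jumped; nobody got a raise
```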
1. His speakers are powered already. He doesn't need an amp.
2. Even if they weren't, how is he supposed to connect to the Fosi without a headphone jack coming out from the TV? The Fosi only has RCA input.
They are a minority of voters, though. And they get to unilaterally declare things that the vast majority don't get to vote on. I don't think you're making a good case.
I don't think that particular example focuses on the right thing to make the comparison similar.
The point isn't that education (in general) can't happen during a strike or that you can't get groceries (in general) during the strike you mention. The point is that education union is a small minority controlling what education is available, regardless of what the public wants.
To make your analogy similar, I think you could compare to grocery workers refusing to allow meat to be sold in grocery stores because a large portion of them are vegans, regardless of what the general public wants.
Support for unions in the US is at record highs (~70%), well above a majority of voters. If you slice by age cohort, support is highest among Gen Z and lowest among the oldest cohorts (Boomers, Silent), which are aging out at ~2M/year (55+ age cohort) [1].
Interestingly and very recently (December 11th, 2025), the US House voted on a bill to restore collective bargaining rights for a majority of federal employees [2]. House lawmakers voted 231-195 to pass the Protect America’s Workforce Act [3]. The entire Democratic Caucus, along with 20 Republicans, voted in favor of the legislation.
Technically, yes, but it's a similar relationship of humans being animals. If you say animals, the audience will assume you're not talking about humans.
If there are security updates, then actually staying on the old OS is probably better for 99% of users. Constant change is almost impossible for most people to deal with.
Not the OP, but I think "slight" here is in relation to Anthropic and Google. Claude Opus 4.5 comes at $25/MT (million tokens), Sonnet 4.5 at $22.50/MT, and Gemini 3 at $18/MT. GPT 5.2 at $14/MT is still the cheapest.
I used the pricing for long context (>200k) in all cases. I personally use AI as coding assistants, like lots of other people, and as such, hitting and exceeding 200k is quite the norm. The numbers you are showing are for <200k context length.
I also use them as coding assistants among other things, like lots of other people, and hitting and exceeding 200k is absolutely not the norm unless you're using a large number of huge MCP servers. At those context sizes output quality declines significantly, even with the claims of "we support long context". This is why all those coding assistants use auto-compaction: not just to save money, but largely to maintain quality. In any case, >200k input calls are a small fraction of all calls.
Ironically at that input size, input costs dominate rather than output, so if that's the use case you're going for you want to be including those in your named prices anyway.
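To make that concrete with a quick sketch: the output price below is the $14/MTok GPT 5.2 figure quoted above, while the input price and token counts are hypothetical placeholders, purely to show why input dominates at long context.

```python
# Sketch: at >200k context, input tokens dominate the bill.
input_price_per_mtok = 3.0    # $/M input tokens -- assumed, not a quoted price
output_price_per_mtok = 14.0  # $/M output tokens (GPT 5.2 figure from above)

input_tokens, output_tokens = 250_000, 4_000  # a typical long-context coding call

cost_in = input_tokens / 1e6 * input_price_per_mtok    # 0.75
cost_out = output_tokens / 1e6 * output_price_per_mtok  # 0.056
print(cost_in, cost_out)  # input cost is an order of magnitude larger
```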
I am. Super simple. Super cheap. Great dev experience. Want to know whether the migration is going to work? Just download the prod db locally and test it. I'm happy.