> Analog signal processing is clearly less memory than a register, no?
You are going to have a hard time doing analog signal processing with memoryless elements. In the linear domain all you can do is apply gain and mix signals together. If you work with memoryless nonlinearities you can do waveshaping, which is generally only useful when applied to special signals (e.g. sine waves).
Any time you want to do frequency-dependent behavior (filtering, oscillation) you need energy-storing elements, usually capacitors, sometimes inductors. A capacitor is just like a register: it stores charge; similarly, an inductor stores energy in its magnetic field. Needless to say, these devices are not memoryless. In fact, since the quantity they remember is a continuous variable, they store a lot of information.
I was curious about the long-term stability of the cited HAKMEM sin/cos generator. I found an overview here: https://news.ycombinator.com/item?id=3111501 (EDIT: I'm still not sure about stability, apparently it is stable in exact arithmetic under certain conditions.) Coincidentally it is related to the Verlet integration video I posted last week: https://news.ycombinator.com/item?id=46253592
Yeah, it is exact in this specific circumstance. But yes, it's exactly the same trick; I also enjoyed that video in my YouTube recommender feed last week!
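For reference, a minimal sketch of the recurrence in question (the HAKMEM "Minsky circle" sin/cos generator), with variable names of my choosing. Because y is updated with the already-updated x, the map is area-preserving (the same property behind semi-implicit/Verlet integration), so in exact arithmetic the point traces a fixed ellipse rather than spiralling in or out, at least for small eps:

    #include <stdio.h>

    int main(void)
    {
        double x = 1.0, y = 0.0;      /* initial phase */
        const double eps = 0.01;      /* step size, sets the oscillation frequency */

        for (int i = 0; i < 10; ++i) {
            x -= eps * y;             /* x[n+1] = x[n] - eps * y[n]               */
            y += eps * x;             /* y[n+1] = y[n] + eps * x[n+1] (the new x) */
            printf("% .6f % .6f\n", x, y);
        }
        return 0;
    }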
It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).
The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.
I first learnt of this technique from Phil Burk; we've been using it in PortAudio forever. The technique is also widely known in FPGA/hardware circles; see:
"Simulation and Synthesis Techniques for Asynchronous
FIFO Design", Clifford E. Cummings, Sunburst Design, Inc.
I think we sometimes, unfortunately, ascribe supernatural powers to powers of two that are really about caches being built in powers of two.
Intel still uses 64-byte cache lines, as they have for quite a long time, but they also do some shenanigans on the bus where they try to fetch two lines when you ask for one. So there's ostensibly some benefit, particularly on linear scans with a cold cache, to aligning data to 128 bytes.
But there's a reason that caches are always sized in powers of two as well, and that same reason is applicable to high-performance ring buffers: division by powers of two is easy, and easy is fast. It's reliably a single cycle, compared to division by arbitrary 32-bit integers, which can take 8-30 cycles depending on the CPU (see the sketch after this comment).
Also, there's another benefit downstream of that one: powers of two work as a Schelling point for allocations. Picking powers of two for resizable vectors maximizes "good luck" when you malloc/realloc in most allocators, in part because e.g. a buddy allocator is probably also implemented using power-of-two allocations for the above reason, but also for the plain reason that other users of the same allocator are more likely to have requested power-of-two allocations. Spontaneous coordination is a benefit all its own. Almost supernatural! :)
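A minimal illustration of the division point above (function names are mine): with a power-of-two capacity the compiler reduces the unsigned modulo to a single AND, while an arbitrary capacity forces a real division, or a multiply-by-reciprocal sequence when the divisor is a compile-time constant:

    #define POW2_CAPACITY 1024u
    #define ODD_CAPACITY  1000u

    /* Typically compiles to a single "and i, 1023". */
    unsigned wrap_pow2(unsigned i) { return i % POW2_CAPACITY; }

    /* Typically compiles to a multiply-by-reciprocal sequence, or an actual
     * div instruction when the divisor is only known at run time. */
    unsigned wrap_odd(unsigned i)  { return i % ODD_CAPACITY; }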
CPU caches are sized in powers of two because retrieval involves a logarithmic number of gates that have to fire within a clock cycle. There is a point where adding more cache makes instructions per second start to go back down again, and that size will be a power of two.
That has next to nothing to do with how much of your 128 GB of RAM should be dedicated to any one data structure, because working memory for a task is the sum of a bunch of different data structures that have to fit into both the caches and main memory, which used to be a power of two but is now often 2^n x 3.
And as someone else pointed out, the optimal growth factor for resizable data structures is not 2 but the golden ratio, about 1.618. But most implementations use 1.5, a.k.a. 3/2.
Fwiw in this application you would never need to divide by an arbitrary integer each time; you'd pick it once and then plumb it into libdivide and get something significantly cheaper than 8-30 cycles.
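A hedged sketch of that suggestion, assuming libdivide's C interface (libdivide_u32_gen / libdivide_u32_do from https://github.com/ridiculousfish/libdivide); the surrounding struct and function names are mine:

    #include <stdint.h>
    #include "libdivide.h"

    struct wrap_ctx {
        uint32_t capacity;                 /* chosen once, not a power of two */
        struct libdivide_u32_t div;        /* precomputed "magic number" divider */
    };

    static void wrap_init(struct wrap_ctx *c, uint32_t capacity)
    {
        c->capacity = capacity;
        c->div = libdivide_u32_gen(capacity);       /* pay the setup cost once */
    }

    static uint32_t wrap_index(const struct wrap_ctx *c, uint32_t i)
    {
        uint32_t q = libdivide_u32_do(i, &c->div);  /* multiply+shift, no div instruction */
        return i - q * c->capacity;                 /* i mod capacity */
    }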
Powers of two are problematic with growable arrays on small heaps. You risk ending up with fragmented space you can't reuse unless you keep growth below about 1.618x, which would necessitate data structures that can deal with arbitrary sizes.
Non-power-of-two is only really feasible if the total number of inserts will fit in your post/ack counters. Otherwise you have to implement overflow manually, which may or may not be possible to do with the available atomic primitives on your architecture.
I first encountered this structure at a summer internship at a company making data switches.
Regardless of correctness, as a DSP dork I really identified with the question: "What kind of a monster would make a non-power of two ring anyway?" I remember thinking similarly when requesting a power-of-two buffer from a third-party audio hardware device and having it corrected to a nearby non-power-of-two size. Latency-adding ring buffer to the rescue.
A couple of the comments to the article suggest using 64-bit numbers, which is exactly the right solution. 2^64 nanoseconds = 584.55 years - overflow is implausible for any realistic use case. Even pathological cases will struggle to induce wraparound at a human timescale.
(People will probably moan at the idea of restarting the process periodically rather than fixing the issue properly, but when the period would be something like 50 years I don't think it's actually a problem.)
> using 64-bit numbers, which is exactly the right solution
On a 64-bit platform, sure. When you're working on ring buffers with an 8-bit microcontroller, using 64-bit numbers would be such an overhead that nobody would even think of it.
My parting shot was slightly tongue in cheek, apologies. Fifty years is a long time. The process, whatever it is, will have been replaced or otherwise become irrelevant long before the period is up. 64 bits will be sufficient.
I agree with that sentiment in general, but even though I've seen systems in continuous operation for 15 years, I've never seen anything make it to 20. I wouldn't write something with the expectation that it never makes it that far, but in practical terms, that's probably about as safe as it gets. Even embedded medical devices expect to get restarted every now and again.
Just as an example, the Voyager computers have been restarted, and that mission has been running for nearly 50 years.
> It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).
That may or may not be part of the actual definition of a ring buffer, but every ring buffer I have written had those goals in mind.
And the first method mentioned in the article fully satisfies this, except for the one missing element mentioned by the author, which in practice is often not only not a problem but simplifies the logic so much that you make up for it in code space.
Or, for example, say you have a 256-character buffer. You really, really want to make sure you don't waste that one character. So you increase the size of your indices. Now they are 16 bits each instead of 8 bits, so you've gained the ability to store 256 bytes by having 260 bytes of data, rather than 255 bytes by having 258 bytes of data (the trade-off is sketched after this comment).
Obviously, if you have a 64 byte buffer, there is no such tradeoff, and the third example wins (but, whether you are doing the first or third example, you still have to mask the index data off at some point, whether it's on an increment or a read).
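To make the 256-byte example concrete, here is a sketch of the "waste one slot" variant with 8-bit indices (names are mine): 256 bytes of storage plus two one-byte indices, holding at most 255 bytes, i.e. the 258-byte/255-byte case described above:

    #include <stdbool.h>
    #include <stdint.h>

    #define BUF_SIZE 256u                /* a uint8_t index wraps at exactly this size */

    typedef struct {
        unsigned char data[BUF_SIZE];
        uint8_t write_idx;               /* 258 bytes total, holds at most 255 */
        uint8_t read_idx;
    } byte_ring;

    static bool ring_empty(const byte_ring *r)
    {
        return r->write_idx == r->read_idx;
    }

    /* Full when advancing the write index would land on the read index;
     * that is the "missing element": one slot is never used. */
    static bool ring_full(const byte_ring *r)
    {
        return (uint8_t)(r->write_idx + 1u) == r->read_idx;
    }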
> The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.
There's "not possible" and then "not practical."
Sure, you could have a 50 byte buffer, but now, if your indices are ever >= 50, you're subtracting 50 before accessing the array, so this will increase the code space (and execution time).
> The [index size > array size] technique is also widely known in FPGA/hardware circles
Right, but in those hardware circles, power-of-two _definitely_ matters. You allocate exactly one extra bit for your pointers, and you never bother manually masking them or taking a modulo or anything like that -- they simply roll over.
If you really, really need to construct something like a 6-entry FIFO in hardware, then you have techniques available to you that mere mortal programmers could not use efficiently at all. For example, you could construct a drop-through FIFO, where every element traverses every storage slot (with a concomitant increase in minimum latency to 6 clock cycles), or you could construct 4-bit indices that count 0-1-2-3-4-5-8-9-10-11-12-13-0-1-2 etc. (a software rendering of that counter is sketched after this comment).
Most ring buffers, hardware or software, are constructed as powers of two, and most ring buffers either (a) have so much storage that one more element wouldn't make any difference, or (b) have the ability to apply back pressure, so one more element wouldn't make any difference.
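A hypothetical software rendering of the 0-5, 8-13 counter described above, just to make the trick explicit: the low three bits address one of the six slots and the top bit is the wrap ("lap") flag, so full and empty remain distinguishable without dividing by an arbitrary modulus:

    /* 4-bit index for a 6-entry FIFO: counts 0..5, 8..13, 0..5, ... */
    static unsigned fifo6_next(unsigned idx)
    {
        if ((idx & 0x7u) == 5u)          /* end of a lap: back to slot 0... */
            return (idx & 0x8u) ^ 0x8u;  /* ...and toggle the wrap bit */
        return idx + 1u;
    }

    /* slot  = idx & 0x7
     * empty = (write == read)
     * full  = (write == (read ^ 0x8))     same slot, opposite lap */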
> The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.
I don't see why it wouldn't be; it's just more computationally expensive to take the pointer modulo the buffer size rather than just masking off the appropriate number of bits.
Yes, that's what I'm saying. You can't just use a quick and easy mask; you have to use a modulo operator, which is computationally expensive enough that it probably kills the time savings you made elsewhere.
There's probably no good reason to make your buffer sizes NOT a power of two, though. If memory's that tight, maybe look elsewhere first.
What I mean is: this ring buffer implementation (and its simplicity) relies on the index range being a multiple of the buffer size (which is only true for powers of two when the index is, e.g., a 32-bit unsigned integer).
If you swap bitmasking for modulo operations then that does work at first glance, but breaks down when the index wraps around. This forces you to abandon the simple "increment" operation for something more complex, too.
The requirement for a power-of-two size is more intrinsic to the approach than just the bitmasking operation itself.
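One way around that wraparound problem (an assumed approach, not something from the article): keep the indices themselves in [0, 2*CAPACITY) with a conditional wrap, so the index range is once again an exact multiple of the buffer size even though the capacity is not a power of two. Names are illustrative:

    #define CAPACITY 50u                  /* deliberately not a power of two */

    /* Indices live in [0, 2*CAPACITY) and wrap conditionally rather than by
     * unsigned overflow, restoring the "index range is a multiple of the
     * buffer size" property. */
    static unsigned rb_next(unsigned idx)
    {
        return (idx + 1u == 2u * CAPACITY) ? 0u : idx + 1u;
    }

    static unsigned rb_slot(unsigned idx) /* storage slot in [0, CAPACITY) */
    {
        return (idx >= CAPACITY) ? idx - CAPACITY : idx;
    }

    /* empty: read == write
     * full:  rb_slot(read) == rb_slot(write) && read != write */

The price is the extra comparison on every increment and access, which is exactly the code-space and execution-time overhead mentioned elsewhere in the thread.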
It is the editorial board, i.e. academic peers, not the publisher, that are (?were) the arbiters. As far as I can see, the primary non-degenerate function of journals is to provide a quality control mechanism that is not provided by "publishing" on your own webpage or arxiv.org. If journals really are going to abandon this quality control role (personally I doubt it) then I fail to see their relevance to science and academic discourse at large.
Indeed, they are irrelevant. Right now they maintain an administrative monopoly over the peer review process, which makes them de facto arbiters even if it's peers doing the work.
Journals should either become tech companies offering (and charging for) new and exciting ways to present scientific research, or simply stop existing.
Completely off topic, but thanks for creating AudioMulch, I don't use it actively anymore but it totally revolutionized how I approach working with sound!
At the end of the day, I expect a journal that I pay for to be better than arXiv, and that means quality control. Few people have the time to self-vet everything they read to the extent that would be needed in the absence of other eyes.
That's supposedly The Verge paraphrasing the CEO. (Unfortunately I can't verify because the full article requires a subscription.) I would like to know what the CEO actually said, because "it feels off-mission" is a strange thing for the leader of the mission to say. I would hope that they know the mission inside out. No need to go by feels.
> In our conversation, Enzor-DeMeo returns often to two things: that Mozilla cares about and wants to preserve the open web, and that the open web needs new business models. Mozilla’s ad business is important and growing, he says, and he worries “about things going behind paywalls, becoming more closed off.” He says the internet’s content business isn’t exactly his fight, but that Mozilla believes in the value of an open and free (and thus ad-supported) web.
> At some point, though, Enzor-DeMeo will have to tend to Mozilla’s own business. “I do think we need revenue diversification away from Google,” he says, “but I don’t necessarily believe we need revenue diversification away from the browser.” It seems he thinks a combination of subscription revenue, advertising, and maybe a few search and AI placement deals can get that done. He’s also bullish that things like built-in VPN and a privacy service called Monitor can get more people to pay for their browser. He says he could begin to block ad blockers in Firefox and estimates that’d bring in another $150 million, but he doesn’t want to do that. It feels off-mission.
> One way to solve many of these problems is to get a lot more people using Firefox. And Enzor-DeMeo is convinced Mozilla can get there, that people want what the company is selling. “There is something to be said about, when I have a Mozilla product, I always know my data is in my control. I can turn the thing off, and they’re not going to do anything sketchy. I think that is needed in the market, and that’s what I hope to do.”
I don't like how he assumes that a free internet must be ad-supported. The ad-supported web is hideous, even with the ads removed: a long, convoluted, inane mess of content.
On the other hand, the clean web feels more direct, to the point, and passionate. I prefer to read content written out of passion, not for money.
That's not correct. Linux is free, as is almost all open source; many projects and websites are done out of passion.
I contribute to open source projects and nobody "gave me something"; I did it because I wanted to make them better. Like me, there are many others. Nobody is "the product" there.
What the saying you are misrepresenting means is "carefully check free things as you may be the product". Not "free things cannot exist, you either are the product or you pay".
Linux development is paid for either directly or in-kind by companies including Red Hat, IBM, Canonical, Oracle, and others. It's free to use and mostly open source but if it existed only on passion it would be something far less than it actually is.
People need to eat and have a roof over their heads.
Those companies pay for the improvements they want for their use case, which is usually far removed from what normal users want. I don't really need support for thousands of CPUs and terabytes of RAM.
Do you remember what Linux was like before the big corporations started contributing/supporting it? Just getting X11 working with your video card and monitor could take hours or days. Setting up a single server could easily be a "project" taking weeks. And god forbid you ever had to update it.
That in particular was thanks to the X.Org Foundation. And while it made things easier, it didn't take "days" to set up graphics; it took hours at most. And setting up a server didn't take weeks; it was a 1-2 day task at worst.
> If something is free (en masse), you are probably a product.
If something being free ever mattered to your privacy, it hasn't for a long time. Today no matter how expensive something is you are probably a product anyway. Unethical and greedy companies don't care how much money you paid them, they'll want the additional cash they'll get from selling you out at every opportunity. Much of my favorite software is free and doesn't compromise my privacy.
Fine, but don't make my machine do work as part of the agreement between host and advertiser (the only reason I can utilize an ad blocker in the first place). And definitely don't try to make it so my machine can't object to you trying. On top of all that, most places want to take my money, AND force ads, AND make my machine part of the process.
I thought the "free" in "free web" was supposed to mean "free as in freedom," not "free as in beer." Have we really reached the point where the CEO of Mozilla no longer understands or cares about that distinction?
Reminds me of this quote from Walter Murch, from In the Blink of an Eye I think:
"Most of us are searching-consciously or unconsciously- for a degree of internal balance and harmony between ourselves and the outside world, and if we happen to become aware-like Stravinsky- of a volcano within us, we will compensate by urging restraint. By that same token, someone who bore a glacier within them might urge passionate abandon. The danger is, as Bergman points out, that a glacial personality in need of passionate abandon may read Stravinsky and apply restraint instead."
This quote gives me such pause. I came back to read it again several times today. Conversations about echo chambers and filter bubbles are everywhere, and it's hard to find real data-driven arguments that there is an upward trend in the tendency to consume information that reaffirms our beliefs, but it does seem like our mechanisms for doing this have gotten a lot better, and that one could stay in a bubble indefinitely and never run out of content. I wonder if Murch is even still right to assume that we search for balance and harmony with an outside world we more often interact with through our abstractions, many chosen by us, most at least chosen by something. I wonder how many glaciers read of restraint, how many volcanoes read of passionate abandon today, whether the feedback loop of escalating flattery drives people to disappear into cages of their own making, or to burn themselves out, to use only this one dichotomy. I wonder how many of these feedback loops anyone is in, about how many things. I wonder if I can even know which ones I'm in. Even if anyone succeeds at questioning everything they believe all the time, are they actually better off being a leaf on the wind, unable to form opinions?
I guess in short, this quote brings into sharp focus how brainrotting all this information and curation is, automatic and pervasive as it's become
I don't think they communicate the same broad idea at all. Making "unpredictable, seemingly irrational" choices is far from equivalent to being a dumb asshole. Your second version assumes the equivalence, which, hypothetically speaking, could provide a nice cover for purposeful malfeasance, could it not?
I use the Boox 10.3 for reading emails, text-based sites like this, and manga. It's bliss and has replaced 80% of my iPad. The experience of using it outside completely trounces normal screens.
As soon as they make larger, better 60 Hz panels, I will 100% switch all my monitors over. I think making videos look worse is a positive. We don't need doomscrolling. We don't need 60 fps react buttons with smooth gradients. We don't need to HDR the entire web. I primarily use text-based sites anyway, so e-ink is perfect for me.