Yeah, staff engineer is a pinnacle "still doing engineering and maybe leadership but not management" position in engineering firms. The academic "staff" is just a "not really one of us" gatekeeping-the-servants title.
Probably either (1) they don't request another jpeg until they have the previous one on-screen (so everything is completely serialized and there are no frames "in-flight" ever) or (2) they're doing a fresh GET for each and getting a new connection anyway (unless that kind of thing is pipelined these days? in which case it still falls back to (1) above.)
You can still get this backpressure properly even if you're doing it push-style. The TCP socket will eventually fill up its buffer and start blocking your writes. When that happens, you stop encoding new frames until the socket is able to send again.
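To make that concrete, here's a minimal sketch of the push-style pipeline (the encoder command and receiver address are made up; nc is real): if the receiver stops reading, the kernel's TCP send buffer fills, nc blocks on write(), the pipe between the two processes fills, and the encoder's own writes block, so encoding pauses until the socket drains.

  # hypothetical encoder that writes MJPEG frames to stdout;
  # nc pushes the bytes to the receiver over one TCP connection
  my_encoder --format mjpeg | nc receiver.example.com 9000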
You probably won't get acceptable latency this way since you have no control over buffer sizes on all the boxes between you and the receiver. Buffer bloat is a real problem. That said, yeah if you're getting 30-45 seconds behind at 40 Mbps you've probably got a fair bit of sender-side buffering happening.
> you have no control over buffer sizes on all the boxes between you and the receiver
You certainly do; the amount of data buffered can never be larger than the actual number of bytes you've sent out. Bufferbloat happens when you send too much stuff at once and nothing (typically the candidate to do so would be either the congestion window or some intermediate buffer) stops it from piling up in an intermediate buffer. If you just send less from userspace in the first place (which isn't a good thing to do for e.g. a typical web server, but _can_ be for this kind of video conference-like application), it can't pile up anywhere.
(You could argue that strictly speaking, you have no control over the _buffer_ sizes, but that doesn't matter in practice if you're bounding the _buffered data_ sizes.)
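As a rough illustration of "just send less from userspace" (pv and nc are real tools; the encoder command and receiver address are made up): cap the application's own send rate below the path's bottleneck rate and the queues along the path stay short, because you never offer them more than they can drain.

  # limit the sender to ~4 MB/s in userspace; if that's below the
  # bottleneck rate, data can't pile up in buffers along the path
  my_encoder | pv -q -L 4m | nc receiver.example.com 9000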
Usually I watch your stuff very closely (and positively) because you're pushing the edges of how LLMs can be useful for code (and are a lot more honest/forthright than most enthusiasts about it Going Horribly Wrong and how much work you need to do to keep on top of it.) This one... looks like a crossbar of random things that don't seem like things anyone would actually want to do? Mentioning the sandboxing bit in the first post would have helped a lot, or anything that said why those particular modes are interesting.
I had been in a similar boat, and here is some software that I'd recommend or discuss with you
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing) and they told me that this has the least latency of any sandbox they know of, at only a minor cost in performance
Btw, for sandboxing, KVM itself feels good too. I had discussed it with them in their Discord server when they mentioned they were working on a minimal KVM server, which has since been open sourced (https://github.com/varnish/tinykvm)
Honestly Simon, Deno hosting / the way Deno works is another interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python; Python fans would appreciate it.
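For anyone who hasn't used it: Deno is deny-by-default, and you grant capabilities with explicit flags. A quick sketch (the script name and hosts here are made up):

  # with no flags the script gets no network, filesystem, or env access
  deno run fetch_report.ts

  # grant only what it actually needs
  deno run --allow-net=api.example.com --allow-read=./data fetch_report.ts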
I will try to look more into your GitHub repository too once I have more free time.
> Unfortunately it means those languages will be the permanent coding platforms.
not really,
I suspect training volume has a role in debugging a certain class of errors, so there is an advantage to python/ts/sql in those circumstances: if, as an old boss once told me, you code by the bug method :)
The real problems I've had that hint at training data vs logic have been with poorly documented old versions of current languages.
To me, the most amazing capability is not the code they generate but the facility for natural language analysis.
my experience is that agent tools enable polyglot systems because we can now use the right tool for the job, not just the most familiar.
Anyone know how this compares to Espruino? The target memory footprint is in the same range, at least. (I know very little about the embedded js space, I just use shellyplugs and have them programmed to talk to BLE lightswitches using some really basic Espruino Javascript.)
When they built the new High School in my old home town in western CT one of the local archaeologists (day job: science teacher) did some exploration on site and discovered all sorts of stuff - no funding for a proper dig so they capped it and put up a plaque about it (ISTR they put the tennis courts on top since that disturbed it least?)
So, yeah, there's lots of archaeology in New England, it's just that a lot of it is literally buried or otherwise not called out. (And "in the US, 100 years is a long time; in the UK, 100 miles is a long distance" is also Just How It Is...)
https://wiki.debian.org/UsingQuilt but the short form is that you keep the original sources untouched, then as part of building the package, you apply everything in a `debian/patches` directory, do the build, and then revert them. Sort of an extreme version of "clearly labelled changes" - but tedious to work with since you need to apply, change and test, then stuff the changes back into diff form (the quilt tool uses a push/pop mechanism, so this isn't entirely mad.)
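Roughly, the day-to-day loop looks like this (the patch and file names are made up; pointing QUILT_PATCHES at debian/patches is the usual Debian convention):

  export QUILT_PATCHES=debian/patches
  quilt push -a               # apply every patch in series order
  quilt new fix-ftbfs.patch   # start a new patch on top of the stack
  quilt add src/foo.c         # record which files this patch will touch
  # ...edit src/foo.c, build, test...
  quilt refresh               # fold the edits back into the patch file
  quilt pop -a                # unapply everything, back to pristine sources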
Yea, so? Debian goes back 32 or more years, and quilt dates to approximately the same time. It’s probably just a year or two younger than Debian.
At Mozilla some developers used quilt for local development back when the Mozilla Suite source code was kept in a CVS repository. CVS had terrible support for branches. Creating a branch required writing to each individual ,v file on the server (and there was one for every file that had existed in the repository, plus more for the ones that had been deleted). It was so slow that it basically prevented anyone from committing anything for hours while it happened (because otherwise the branch wouldn’t necessarily get a consistent set of versions across the commit), so feature branches were effectively impossible. Instead, some developers used quilt to make stacks of patches that they shared amongst their group when they were working on larger features.
Personally I didn’t really see the benefit back then. I was only just starting my career, fresh out of university, and hadn’t actually worked on any features large enough to require months of work, multiple rounds of review, or even multiple smaller commits that you would rebase and apply fixups to. All I could see back then were the hoops that those guys were jumping through. The hoops were real, but so were the benefits.
Quilt is difficult to maintain, but a quilt-like workflow? Easy: it's just a branch with all patches as commits. You can re-apply those to new releases of the upstream by using `git rebase --onto $new_upstream_commit_tag_or_branch`.
By having a naming convention for your tags and branches, then you can always identify the upstream "base" upon which the Debian "patches" are based, and then you can trivially use `git log` to list them.
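Concretely, with made-up tag and branch names (upstream/1.2.3 and upstream/1.2.4 are imported upstream releases, debian/sid carries the packaging commits on top):

  # replay the packaging commits from the old upstream base onto the new one
  git rebase --onto upstream/1.2.4 upstream/1.2.3 debian/sid

  # list the current "patches": everything on the branch beyond upstream
  git log --oneline upstream/1.2.4..debian/sid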
Really, Git has a solution to this. If you insist that it doesn't without looking, you'll just keep re-inventing the wheel badly.
Do you ever really want this? I don't recall wanting this. But you can still get this: just list the ${base_ref}..${deb_ref} commit ranges, select the commit you want, and diff the `git show` of the selected commits. It helps here to keep the commit synopsis the same.
E.g.,
  # find the commit for "The subject in question" in each packaging branch
  c0=$(git log --oneline ${base_ref0}..${deb_ref0} |
       grep "^[^ ]* The subject in question" |
       cut -d' ' -f1)
  c1=$(git log --oneline ${base_ref1}..${deb_ref1} |
       grep "^[^ ]* The subject in question" |
       cut -d' ' -f1)
  if [[ -z $c0 || -z $c1 ]]; then
      echo "Error: commits not found"
  else
      diff -ubw <(git show $c0) <(git show $c1)
  fi
See also the above commentary about Gerrit and commit IDs.
(Honestly I don't need commit IDs. What happens if I eventually split a commit in a patch series into two? Which one, if either, gets the old commit ID? So I just don't bother.)
People keep saying “just use Git commits” without understanding the advantages of the Quilt approach. There are tools to keep patches as Git commits that solve this, but “just Git commits” do not.
Having maintained private versions of Debian packages, I have zero need for "commit messages on changes to patches". I can diff them as needed as I showed, but I rarely ever need to -- I mostly only rebase onto new upstreams. Seeing differences in patches isn't helpful because there is not enough context there as to what changed in the upstreams.
I rather suspect that "commit messages on changes to patches" is what Debian ended up with and back-justifies it.
Of course, I am not a Debian maintainer, so it's entirely possible I'm just missing the experience of it that would make me want "commit messages on changes to patches".
Quilt was AFAIK used before Git, so you’re not wrong. But now that it’s there, it has some advantages.
I’m not arguing against replacing Quilt, but it should be more than just Git. I haven’t done Debian packaging in a long time but apparently there are some Git-based tools now?
I don't know that I've ever wanted to diff a diff, but you could do that still. And bisecting would still be possible, especially if you use merges instead of rebases.
Bisect rebases... you mean that you have two release branches based on divergent upstream branches and you want to quickly test where a bug was introduced on the way from the one to the other? What I would do in a rebase workflow is find the merge base (`git merge-base`) of the two release branches, and bisect from that to the release branch I'm interested in.
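Something like this, with made-up branch names and a made-up reproduce script:

  base=$(git merge-base release/a release/b)  # last common commit of the two branches
  git bisect start release/b "$base"          # bad = the branch I'm interested in, good = the merge base
  git bisect run ./reproduce-bug.sh           # script exits non-zero while the bug is present
  git bisect reset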
The satellite clocks are designed to run autonomously for a few days without noticeable degradation, and up to a few weeks with defined levels of inaccuracy, but they are normally adjusted once a day by the ground stations based on the timescale maintained by the USNO. That, in turn, uses an ensemble of H-masers.
I knew of some experiments in this space back in the late 1980s or early 1990s - but it was specifically with DECstation hardware that had terrible clocks (not used for alerting, just "this graphs nicely against temperature".) https://groups.csail.mit.edu/ana/Publications/PubPDFs/Greg.T... (PDF) 4.2.1 does talk about explaining local clock frequency changes with office temperature changes (because they overwhelm a clock-aging model) but it doesn't have graphs so perhaps they weren't clear enough to include (or just not relevant enough to Time Surveying.)
it's slop. (If you look at the ORCID link posted elsethread there's literally nothing biology related in his 70 publications in the last two decades - and it seems unlikely one would become director of the PSFC with that sort of distraction...)