An IDE, by name, integrates the development environment. Depending on the target, the development lifecycle usually includes writing code, generating binaries, pushing those binaries to the target, executing on the target, and debugging on the target; so in my mind an IDE includes editing, compiling, linking, device/JTAG drivers, and debugging. I suppose for platforms without cross-compilation the device drivers vanish, and similarly for languages without compilers and linkers; but at a minimum, development means both authoring and debugging code, and so an IDE must provide an integrated view of the debugging process.
Then that distinction doesn't support the original claim (that readability of unlabeled code doesn't matter now that we have IDEs), since by your definition that's an orthogonal matter.
Except that if you're in country B, which has a law that says "you may not make information available to children that discloses that Santa Claus is made up," and the age of majority in your country is 18 -- knowing that a person accessing your site from country A is an adult in country A (which means, say, ≥ 16) is not sufficient to comply with the law.
I’m not sure why the age of majority in the region of the server would be relevant. The user is not traveling to that region, the laws protecting them should be the laws in their own region.
I don't know if "should" is intended as a moral statement or a regulatory statement, but it's not at all unusual for server operators to need to comply with laws in the country in which they are operating…
Or like it's 2025. Plenty of current-production parts using an 8051 core as either their main sequencer, or as a low power core with bigger options on the main power rail.
Can you give an example? I see RISC-V being used to replace custom 16 and 32 bit cores, and M0-class ARM cores, in the 10k+ gate range, but haven't really seen a migration in the 8 bit space.
I have personally ported a USB-C USB PD stack from 8051 to Cortex-M0 to RV32EC. I can't speak to gate count, but for code size the biggest factor was the compiler, with ARMCC giving the smallest code. Since the ROM was larger than the core, code size was the bigger factor in gate count.
The article, and many of the responses, are hinting at the fact that bubblesort is an example of an anytime algorithm. This is a wide class of algorithms that produce a correct answer if run to completion, but also produce an increasingly good approximate answer at any point short of the completion time. This is a super valuable property for real-time systems (and many of the comments about games and animations discuss that). The paper that introduced me to the category is "Anytime Dynamic A*" [0], and I think it's both a good paper and a good algorithm to know.
Anytime algorithms are great for robotics planning, for example. A plan does not have to be perfect to be useful, especially when it can be refined further in the next timestep. And the robot cannot act out the plan instantaneously, so by the time it is close to the point where a non-ideal segment would be, it has had many timesteps to refine/optimize it. But the robot can start moving right away.
An anytime algorithm monotonically improves some evaluation metric. For a sort, the evaluation metric is usually the number of inversions in the list. At completion, there will be zero inversions. If at time 0 there are N inversions, then at time 0 < t < completion time there will be ≤ N inversions; that is, the list is "more sorted" than it was before. As the various examples about games and animation elsewhere in the comments show, this can be interpreted as "somewhat smoothly moving towards sorted over time," which is an (occasionally) ((rarely)) useful property.
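To make that concrete, here's a minimal C sketch (array contents arbitrary): each bubble-sort pass swaps adjacent out-of-order pairs, and each swap removes exactly one inversion, so the count printed after each pass never goes up and you can stop at any pass boundary with a no-worse list.

    #include <stdio.h>

    /* Count inversions: pairs (i, j) with i < j but a[i] > a[j].
       O(n^2), which is fine for a demo. */
    static size_t count_inversions(const int *a, size_t n) {
        size_t inv = 0;
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++)
                if (a[i] > a[j])
                    inv++;
        return inv;
    }

    /* One bubble-sort pass; returns nonzero if anything moved. */
    static int bubble_pass(int *a, size_t n) {
        int swapped = 0;
        for (size_t i = 0; i + 1 < n; i++)
            if (a[i] > a[i + 1]) {
                int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                swapped = 1;
            }
        return swapped;
    }

    int main(void) {
        int a[] = {5, 1, 4, 2, 8, 7, 3, 6};
        size_t n = sizeof a / sizeof a[0];
        printf("inversions at start: %zu\n", count_inversions(a, n));
        /* The "anytime" property: interrupt after any pass and the
           count below is <= every count printed before it. */
        while (bubble_pass(a, n))
            printf("inversions after pass: %zu\n", count_inversions(a, n));
        return 0;
    }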
Okay, but it gives you a mostly good answer! Unlike many other sorts where if you interrupt it before the last step, you get total nonsense.
It's basically asymptotically approaching the correct (sorted) list instead of shuffling the list in weird ways until it's all magically correct in the end.
> Unlike many other sorts where if you interrupt it before the last step, you get total nonsense.
which ones do you have in mind? and doesn't "nonsense" depend on the scoring criteria?
selection sort would give you a sorted beginning, and cocktail shaker would have both ends sorted
quicksort would give coarse range separation ("small values on one side, big on the other"), and block-merge algorithms create sorted subarrays
in my view those qualities are much more useful for a partial state than the "number of pairs of elements out of order" metric, which smells of CS-complexity talk
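e.g. a quick C sketch (values arbitrary) of the selection sort case: interrupt it after k passes and a[0..k-1] is already sorted and in its final position, a partial state you can act on directly

    #include <stdio.h>

    /* Run only the first k passes of selection sort. After k passes,
       a[0..k-1] holds the k smallest elements, sorted and final. */
    static void selection_sort_k_passes(int *a, size_t n, size_t k) {
        for (size_t i = 0; i < k && i < n; i++) {
            size_t min = i;
            for (size_t j = i + 1; j < n; j++)
                if (a[j] < a[min])
                    min = j;
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
    }

    int main(void) {
        int a[] = {5, 1, 4, 2, 8, 7, 3, 6};
        size_t n = sizeof a / sizeof a[0];
        selection_sort_k_passes(a, n, 3);  /* "interrupted" after 3 passes */
        for (size_t i = 0; i < n; i++)
            printf("%d ", a[i]);           /* prints: 1 2 3 5 8 7 4 6 */
        printf("\n");
        return 0;
    }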
For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory.
I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there are no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system into multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processor is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level.
>And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”)
The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation.
All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
50+ years of off-by-ones and use-after-frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't.
In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts.
But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code.
> All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation.
No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up. The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs "human-written code, human-written bugs, but an overall much simpler system." And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge.
It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems are overwhelmingly defects in the requirement specifications, not in the implementation, hand-written or not.
Codegen from Matlab/Simulink/whatever is good for proof of concept design. It largely helps engineers who are not very good with coding to hypothesize about different algorithmic approaches. Engineers who actually implement that algorithm in a system that will be deployed are coming from a different group with different domain expertise.
Sometimes. But sometimes you have a set of functions that are called through function pointers and so need the same signature, and one or more of them ignore some of the arguments. These days I’d spell that __attribute__((unused)), but it’s a perfectly reasonable case.
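For what it's worth, a minimal sketch of that case (handler names are made up): two callbacks must share a signature to sit in the same dispatch table, and the one that ignores its context argument gets the annotation so -Wunused-parameter stays quiet.

    #include <stdio.h>

    /* Hypothetical dispatch table: every handler must match this
       signature, even if it doesn't use every argument. */
    typedef void (*handler_fn)(int code, void *ctx);

    static void log_event(int code, void *ctx) {
        fprintf((FILE *)ctx, "event %d\n", code);
    }

    /* Doesn't need ctx, but must keep the parameter to fit the
       table; the attribute silences -Wunused-parameter. */
    static void beep(int code, __attribute__((unused)) void *ctx) {
        printf("beep for event %d\n", code);
    }

    static handler_fn handlers[] = { log_event, beep };

    int main(void) {
        for (size_t i = 0; i < sizeof handlers / sizeof handlers[0]; i++)
            handlers[i](42, stdout);
        return 0;
    }

(C23 also allows [[maybe_unused]], or simply leaving the parameter unnamed.)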