JoachimS's comments | Hacker News

"RC4, short for Rivist Cipher 4". No, "Ron's Code 4".

And the default will now be AES-SHA1, where SHA-1 is to be deprecated by NIST in 2030. (https://www.nist.gov/news-events/news/2022/12/nist-retires-s...)


All information is translated to Finnish at ingress, so yes.


In the book Project Hail Mary by Andy Weir this is exactly what they do: have a layer of the bacteria (which is integral to the plot) between the hull and where the humans live.

https://en.wikipedia.org/wiki/Project_Hail_Mary

Good, fun book. The main protagonists are a bit too inventive and quick to solve and fix things, but in all fairness it's SF after all.


That was a great book and there is a movie in the works!

It looks good: big budget, with Ryan Gosling as the MC.


Does this mean in a general sense there are numbers that are harder to factor, or is it due to constraints? That some keys will be much harder to crack? If so, how can we know beforehand?


It's more that there are numbers that are easier to factor (e.g. 2^n - 1). Once you get past the tiny numbers, almost all numbers are about the same.
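A quick way to see why numbers like 2^n - 1 are easy targets: whenever n is composite, 2^d - 1 divides 2^n - 1 for every divisor d of n, so the number comes with algebraic factors for free. A minimal sketch in plain Python (nothing from the thread, just illustration):

    # When n is composite, 2^d - 1 divides 2^n - 1 for each divisor d of n,
    # so numbers of this special form ship with built-in factors.
    def algebraic_factors(n: int) -> list[int]:
        return [2**d - 1 for d in range(2, n) if n % d == 0]

    n = 12
    m = 2**n - 1                    # 4095
    factors = algebraic_factors(n)  # [3, 7, 15, 63]
    assert all(m % f == 0 for f in factors)
    print(m, factors)

Special-form numbers like these are also what the special number field sieve targets; a properly generated RSA modulus has no such structure.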


Yes, profit depends on scale. But far from everything sells in millions of units, and scale is not everything. Mobile base stations sell in the thousands and sometimes benefit from ASICs. But the ability to adapt the base station to regional requirements and to support several generations of systems with one design makes FPGAs very attractive. So in this case, the scale makes FPGAs a better fit.


With a 90% to 95% reduction in performance [0], I'd be interested to know when these "generational" upgrades are worth the hit, since it seems like you're already going back a few generations.

I'll admit I'm not familiar with the processing requirements of base stations, but the prospect of mass-produced FPGA baseband hardware still seems dubious to me, and I can't find conclusive evidence of it being used, only suggestions that it might be useful (going back at least 20 years). Feel free to share more info.

[0] ASIC vs FPGA comparison of a RISC-V processor, showing an 18x slowdown (or 94.[4]% reduction), apparently consistent with the "general design performance gap": https://iugrc.journals.ekb.eg/article_302717_7bac60ca6ef9fb9...
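As a sanity check, the percentage follows directly from the slowdown factor:

    # An 18x slowdown means 1/18 of the ASIC's performance,
    # i.e. a reduction of 1 - 1/18 = 94.44...% (the 4 repeats).
    print(f"{(1 - 1/18) * 100:.2f}%")   # 94.44%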


The ability to optimize the memory access and memory configuration is sometimes a game changer. And modern FPGA tools have functionality to make memory access quite easy. Not as easy as on an MCU/CPU, but basically the same as for an ASIC.

I would also question the premise that memory access is less tedious and easy on MCUs/CPUs, especially if you need deterministic performance and response times. Most CPUs have memory hierarchies.
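A quick way to see the non-determinism a memory hierarchy introduces: walking the same data in cache-friendly versus cache-hostile order gives different timings for identical work. A rough sketch in plain Python (the ratio you see depends entirely on your machine's caches):

    import random, time

    N = 1 << 22
    data = list(range(N))
    seq = list(range(N))       # sequential, cache-friendly order
    rnd = seq[:]
    random.shuffle(rnd)        # random, cache-hostile order

    def walk(order):
        t0 = time.perf_counter()
        s = 0
        for i in order:        # same loop, same amount of work
            s += data[i]
        return time.perf_counter() - t0

    print("sequential:", walk(seq))
    print("random:    ", walk(rnd))   # typically noticeably slower

On an FPGA with dedicated block RAM, by contrast, the access latency is fixed by construction.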

The more practical attempts at dynamic, partial reconfiguration involve swapping out accelerators for specific functions: encoders and decoders for different wireless standards, or different curves in crypto, for example. And yes, somebody has to implement those.


> modern FPGA tools have functionality

HLS is not good, so I don't know what you are referring to as "modern." I am primarily experienced with large UltraScale+ and Versal chips - nothing has changed in 15 years here.

> basically the same as for an ASIC

What does this even mean, specifically? Use RTL examples. ASIC memory access isn't "easy," either (though it is basically the "same.")

> partial reconfiguration involves swapping out accelerators for specific functions

Tell me you've never used PR without telling me. Current vendor implementations of this are terrible (with Xilinx leading the pack.)


Yes, you can control the seeds and get deterministic bitstreams. Depending on the device and tools, you can also assist the tools by providing floorplanning constraints. And one can of course try out seeds to get designs that meet the results you need. Tillitis uses this to find seeds that generate implementations that meet the timing requirements. It's in their custom tool flow.


You also need to bring time to market, product lifetime, the need for upgrades, fixes and flexibility, risks, and R&D cost (including skill set and NRE) into the picture when comparing FPGAs and ASICs. Most, basically all, ASICs start out as FPGAs, either in labs or in real products.

Another aspect where FPGAs are an interesting alternative is security. Open up a fairly competent HSM and you will find FPGAs. FPGAs, especially ones that can be locked to a bitstream (for example anti-fuse or Flash-based FPGAs from Microchip), are used in high-security systems. The machines can be built in a less secure setting, and the injection and provisioning of a machine can be done in a high-security setting.

Dynamically reconfigurable systems were a very interesting idea. Support for partial reconfiguration, which allowed you to change accelerator cores connected to a CPU platform, seemed to bring a lot of promise. Xilinx was an early provider with the XC6200 family, IIRC through a company they bought. AMD also provided devices with support for partial reconfiguration. There were also some research devices and startups in this space in the early 2000s. I planned to do a PhD around this topic. But tooling, language support, and the added cost in the devices seem to have killed this. At least for now.

Today, in for example mobile phone systems, FPGAs provide the compute power CPUs can't deliver, with the added ability to add new features as the standards evolve and regional market requirements affect the HW. But this is more like FW upgrades.


The competitiveness between Lattice and Xilinx is also not a universal truth. It totally depends on the application. For small to medium designs, Lattice has very competitive offerings. Hard ARM cores, not so much. Very large designs, not at all. But if you need internal config memory (on some devices), a small footprint, etc., Lattice is really a good choice. And there is support in open source tools to boot.


The non-deterministic part of the toolchain is not a universal truth. Most, if not all, tools allow you to set and control the seeds, and you can get deterministic results. Tillitis uses this fact to allow you to verify that the FPGA bitstream used is the exact one you get from the source. Just clone the design repo, install the tkey-builder docker image for the release, and run 'make run-make'. And of course all tools in tkey-builder are open source with known versions, so that you can verify the integrity of the tools.
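The verification step at the end boils down to a digest comparison. A minimal sketch of the idea in Python; the file name and reference digest below are made up for illustration, not Tillitis's actual artifacts:

    import hashlib

    def sha256_of(path):
        """Hex SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical names: compare the locally built bitstream against
    # the digest published with the release.
    local = sha256_of("application_fpga.bin")
    published = "<digest from the signed release notes>"
    print("verified" if local == published else "MISMATCH, do not flash")

Because the tool flow is seeded and the tool versions are pinned, the local build is bit-exact, so a single digest comparison is enough.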

And all this is due to the actually very good open source toolchain, including synthesis (Yosys), P&R (NextPNR, Trellis, etc.), Verilator, Icarus, Surfer, and many more. Lattice, being more friendly than other vendors, has seen an uptake in sales because of this. They make money on the devices, not their tools.

And even if you move to ASICs, open source tools are being used more and more, especially for simulation and front-end design. As an ASIC and FPGA designer for 25-odd years, I spend most of my time in open source tools.

https://github.com/tillitis/tillitis-key1 https://github.com/tillitis/tillitis-key1/pkgs/container/tke...


I never understood why FPGA vendors think the tools should do this and not the designer. Most do a terrible job at it too. E.g., Quartus doing place and route in a single thread and then bailing out after X hours/days with a cryptic error message... As a designer I would be much happier to tell it exactly where to put my adder, where to place the SRAM, and where to run the wires connecting the ports. You'd build your design by making larger and larger components.


As I understand it, the physical FPGA layout and timing information used for placement and routing is proprietary, and the vendors don’t want to share it. They’ll let you specify constraints for connections, but it has to go through their opaque solver. And to be fair, they do have to try to solve an NP-complete problem, so the slowness isn’t unjustified compared to all the other slow buggy software people have to deal with nowadays.
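To connect this with the seed discussion upthread: placers typically attack that NP-complete problem with randomized heuristics such as simulated annealing, so the result is a function of the RNG seed, and pinning the seed is exactly what makes the outcome reproducible. A toy sketch in Python (a made-up 1-D placement problem, nothing like a real P&R engine):

    import math, random

    def place(nets, n_cells, seed, iters=20000):
        """Toy simulated annealing: minimize total wirelength of
        two-pin nets over a single row of slots."""
        rng = random.Random(seed)   # fixed seed => reproducible result
        pos = list(range(n_cells))  # cell i currently sits at slot pos[i]
        rng.shuffle(pos)
        cost = lambda: sum(abs(pos[a] - pos[b]) for a, b in nets)
        temp = float(n_cells)
        for _ in range(iters):
            a, b = rng.randrange(n_cells), rng.randrange(n_cells)
            before = cost()
            pos[a], pos[b] = pos[b], pos[a]        # propose a swap
            delta = cost() - before
            if delta > 0 and rng.random() > math.exp(-delta / temp):
                pos[a], pos[b] = pos[b], pos[a]    # reject, swap back
            temp = max(temp * 0.9995, 0.01)        # cool down
        return cost(), pos

    nets = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]
    print(place(nets, 5, seed=42))  # same seed, same answer every run
    print(place(nets, 5, seed=7))   # another seed may find a different optimum

Sweeping seeds, as in the Tillitis flow mentioned above, is just running this with different seeds and keeping the result that meets timing.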

