No, I don't have any. That's because it was not so much a bug as a decision about a tradeoff. They compressed by unifying similar-looking glyphs (toy sketch below). Sure, these glyphs weren't representing the same character, but they did look similar. It is the kind of error a human could also have made, except humans also know that sums are supposed to add up, so they take that into account when reading. They also have a sense of how confident they are, and when they are unsure they read again or ask. These are all things these printers can't do without doing supervised OCR.
The tested scans did look kind of crappy, so if you care about unaltered glyphs, maybe don't apply lossy compression to a low-resolution scan. This issue can totally happen with any printer if your resolution is too low, the glyphs are ambiguous, and your lossy compression is too aggressive. It also happens with other approaches like vectorization or OCR.
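For illustration, here is a toy sketch of that pattern-matching-and-substitution idea: glyph bitmaps that look "similar enough" get replaced by a single shared symbol. The bitmaps and the threshold are made up; real encoders work on scanned patches, but the failure mode is the same.

    import numpy as np

    # Tiny hypothetical glyph bitmaps; real encoders work on scanned patches.
    glyph_6 = np.array([[0, 1, 1, 0],
                        [1, 0, 0, 0],
                        [1, 1, 1, 0],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0]], dtype=np.uint8)
    glyph_8 = np.array([[0, 1, 1, 0],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0]], dtype=np.uint8)

    def encode(candidate, dictionary, max_diff=3):
        """Return an already-stored glyph if it differs by at most max_diff
        pixels, otherwise store the candidate as a new symbol."""
        for symbol in dictionary:
            if np.count_nonzero(symbol != candidate) <= max_diff:
                return symbol                  # lossy: the original shape is dropped
        dictionary.append(candidate)
        return candidate

    symbols = [glyph_6]                        # the "6" was seen first
    decoded = encode(glyph_8, symbols)         # now an "8" comes in
    print(np.array_equal(decoded, glyph_6))    # True: the "8" is rendered as a "6"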
Yes, and by default the tradeoff should favor correct information. That's actually what Xerox claimed: they said the report was false and the behavior was correctly documented, and that it would only happen if you explicitly selected that mode. Watch the CCC talk by the person who figured this out. Turns out they were wrong.
> Watch the CCC talk by the person who figured this out. Turns out they were wrong.
I already did, although some time ago. Kriesel also has some other interesting talks, e.g. about the German Railway company.
Like, I totally think Xerox is at fault, but what they actually did wrong was using bad defaults (and lying when they were told about it). This can totally occur with any software. From looking at the pictures, I think part of the issue was that the input resolution was lower than what the compression was tested with, not the compression per se.
Also, I think the customers are at least partially at fault for digitizing but not checking. Don't be stingy with important data. And who in their sane mind throws away the originals????? You can throw away copies all you like, but NEVER the original. (Except when you really want to "destroy" information.) To me that was the most ridiculous part: assuming software (famous for bugs like no other tool) can never be wrong, and throwing away physical documents while relying only on some random files continuing to exist.
I haven't rewatched the talk since I first saw it, but my memory is that they reproduced it in a mode where Xerox claimed it wasn't supposed to happen. So it would happen even if the customer selected the right setting. Maybe my memory is off on that.
I mean, I never watched the video (or maybe I did and don't remember); I only 'saw' the presentation live. And I remember that room being a riot. So I might get some details wrong. (Not sure "being a riot" is correct English.)
The FFT is still easy to use, and if you want higher frequency resolution (not a higher maximum frequency), you can zero-pad your signal and get it.
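A minimal NumPy sketch of that (the numbers are made up): zero-padding the input to the FFT samples the same spectrum on a finer grid, which lets you read off a peak's frequency more precisely.

    import numpy as np

    fs = 1000.0                                  # made-up sample rate (Hz)
    t = np.arange(256) / fs
    x = np.sin(2 * np.pi * 123.4 * t)            # tone that falls between DFT bins

    # Plain FFT: bin spacing fs / 256 ≈ 3.9 Hz
    f = np.fft.rfftfreq(256, 1 / fs)
    peak = f[np.argmax(np.abs(np.fft.rfft(x)))]

    # Zero-padded FFT: bin spacing fs / 4096 ≈ 0.24 Hz, same spectrum, finer grid
    f_pad = np.fft.rfftfreq(4096, 1 / fs)
    peak_pad = f_pad[np.argmax(np.abs(np.fft.rfft(x, n=4096)))]

    print(peak, peak_pad)                        # ≈ 125 Hz vs ≈ 123.4 Hz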
Zero-padding gives you a smoother curve, i.e., more points to look at, but it does not add new peaks. So if you have two very close frequencies that produce a single peak in the DFT (without zero-padding), you will not get two peaks after zero-padding (see the sketch at the end of this comment). In the field where I work, resolution is understood as the minimum distance between two frequencies such that you are able to detect them individually (and not as a single frequency).
Zero-padding helps you find the true position (frequency) of a peak in the DFT spectrum, so your frequency estimates can get better.
However, the peaks of a DFT are the summits of hills that are usually much wider than those of other techniques (like Capon or MUSIC), whose spectra tend to have much narrower hills. Zero-padding does not increase the sharpness of these hills (it does not make them narrower).
Likewise, the DFT tends to be noisier in the frequency domain than other techniques, which could lead to false detections (e.g. with a CFAR variant).
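A quick sketch of that distinction, with made-up numbers: two tones closer together than one DFT bin still show up as a single hill, no matter how much you zero-pad; the padded spectrum just samples that hill more densely.

    import numpy as np

    fs = 1000.0
    N = 128                                   # short record: bin spacing fs/N ≈ 7.8 Hz
    t = np.arange(N) / fs
    # two equal-amplitude tones only 3 Hz apart, i.e. closer than one bin
    x = np.sin(2 * np.pi * 200.0 * t) + np.sin(2 * np.pi * 203.0 * t)

    for n_fft in (N, 16 * N):                 # without and with heavy zero-padding
        f = np.fft.rfftfreq(n_fft, 1 / fs)
        mag = np.abs(np.fft.rfft(x, n=n_fft))
        print(n_fft, f[np.argmax(mag)])

    # Both spectra have one dominant hill: the coarse grid peaks at ≈ 203.1 Hz,
    # the padded one near ≈ 201.5 Hz (between the two tones). Zero-padding never
    # splits the hill into two peaks at 200 Hz and 203 Hz.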
> Q: What if I need matrix dimensions (M, N, K) not found in your configurations?
>A: 1. You can find the nearest neighbor configuration (larger than yours) and pad with zeros. 2. Feel free to post your dimensions on GitHub issues. We are happy to release kernels for your configuration.
Lol, this can potentially be much slower than using a general matmul kernel (see the sketch below for what that padding amounts to).
However, I like this kind of research because it really exploits specific hardware configurations and makes matmul measurably faster (unlike some purely theoretical matmul improvements).
Code specialization is cheap, and if it saves on the order of a few percent, it quickly pays for itself, especially for something as important as matmul.
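To make the objection concrete, here is roughly what that Q&A answer amounts to. The shape list is hypothetical and NumPy's matmul stands in for the specialized kernel: you pad both operands up to the nearest supported (M, N, K) and crop the result, so a slightly-too-large input can trigger a much bigger multiplication.

    import numpy as np

    # Hypothetical set of (M, N, K) shapes the specialized kernels were tuned for.
    SUPPORTED = [(1024, 1024, 1024), (2048, 2048, 2048), (4096, 4096, 4096)]

    def matmul_via_padding(A, B):
        """Zero-pad A (M x K) and B (K x N) up to the nearest supported config."""
        M, K = A.shape
        _, N = B.shape
        # smallest supported configuration that fits in every dimension
        m, n, k = min((c for c in SUPPORTED if c[0] >= M and c[1] >= N and c[2] >= K),
                      key=lambda c: c[0] * c[1] * c[2])
        A_pad = np.zeros((m, k), dtype=A.dtype)
        B_pad = np.zeros((k, n), dtype=B.dtype)
        A_pad[:M, :K] = A
        B_pad[:K, :N] = B
        # np.matmul stands in for the specialized kernel; crop back to the real shape
        return (A_pad @ B_pad)[:M, :N]

    A = np.random.rand(1030, 1030).astype(np.float32)
    B = np.random.rand(1030, 1030).astype(np.float32)
    C = matmul_via_padding(A, B)     # runs a 2048^3 multiply for a 1030^3 problem
    assert np.allclose(C, A @ B, atol=1e-2)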
At EPFL we are observing a worrying trend of all services being moved to Microsoft (e-mail, cloud).
What happened to universities hosting basic services themselves?
EPFL also recently partnered with Omnissa Workspace ONE to strengthen IT security on campus: mandatory (American) software that the EPFL IT office wants to install on machines...
https://dkriesel.com/en/blog/2013/0802_xerox-workcentres_are...