dleeftink's comments

Why not SVG filters to create alpha channels? Seems to be supported by the library too (very useful btw!).
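Something like this, roughly (a sketch only, assuming a browser DOM and a hypothetical <svg id="canvas"> element to attach to; not the library's own API):

    const SVG_NS = "http://www.w3.org/2000/svg";

    // Build a filter whose feColorMatrix maps pixel brightness to alpha
    const filter = document.createElementNS(SVG_NS, "filter");
    filter.setAttribute("id", "to-alpha");

    const matrix = document.createElementNS(SVG_NS, "feColorMatrix");
    matrix.setAttribute("type", "luminanceToAlpha");
    filter.appendChild(matrix);

    // Hypothetical host <svg id="canvas">; elements can then reference it
    // with filter="url(#to-alpha)"
    document.getElementById("canvas")?.appendChild(filter);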

I might misremember, but iZotope RX and Melodyne were pretty useful in this regard.

Eno applies:

> It's the sound of failure: so much modern art is the sound of things going out of control, of a medium pushing to its limits and breaking apart. The distorted guitar sound is the sound of something too loud for the medium supposed to carry it. The blues singer with the cracked voice is the sound of an emotional cry too powerful for the throat that releases it. The excitement of grainy film, of bleached-out black and white, is the excitement of witnessing events too momentous for the medium assigned to record them.


And

> "By the time a whole technology exists for something it probably isn't the most interesting thing to be doing."


Where did you get this from? Searching for it, in a weird irony I guess, just leads me back to this post.

I recognize it as a quote from A Year With Swollen Appendices, which is a great read even if you aren't an Eno fan (although I am, which admittedly makes me biased :P)

Thank you! I’ll check that out

> novel sentence

The question then becomes one of actual novelty versus the learned joint probabilities of internalised sentences/phrases/etc.

Generation or regurgitation? Is there a difference to begin with..?


I'm not sure what you mean? As the length of a sequence increases (from word to n-gram to sentence to paragraph to ...), the probability that it actually ever appeared (in any corpus, whether that's a training set on disk, or every word ever spoken by any human even if not recorded, or anything else) quickly goes to exactly zero. That makes it computationally useless.

If we define perplexity in the usual way in NLP, then that probability approaches zero as the length of the sequence increases, but it does so smoothly and never reaches exactly zero. This makes it useful for sequences of arbitrary length. This latter metric seems so obviously better that it seems ridiculous to me to reject all statistical approaches based on the former. That's with the benefit of hindsight for me; but enough of Chomsky's less famous contemporaries did judge correctly that I get that benefit, that LLMs exist, etc.
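To make that concrete, a small sketch with made-up per-token log-probabilities (no particular model assumed): the joint probability heads towards zero as the sequence grows, while the per-token average that perplexity is built on stays finite.

    // Hypothetical per-token log-probabilities log p(w_i | w_<i)
    const logProbs = [-2.1, -0.4, -3.7, -1.2, -0.9];

    // Joint probability of the whole sequence: shrinks towards zero with length
    const jointProb = Math.exp(logProbs.reduce((a, b) => a + b, 0));

    // Perplexity: exp of the average negative log-probability, stays well-behaved
    const avgNegLogProb = -logProbs.reduce((a, b) => a + b, 0) / logProbs.length;
    const perplexity = Math.exp(avgNegLogProb);

    console.log(jointProb, perplexity);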


My point is that even in the new paradigm, where probabilistic sequences do offer a sensible approximation of language, the question remains: does novelty become an emergent feature of such a system, or does it stay bound to the learned joint probabilities, generating sequences that appear novel but are in fact (complex) recombinations of existing system states?

And again the question is whether there is a difference at all between the two. Novelty in the human sense is also often a process of chaining and combining existing tools and thought.


Motion Canvas may also be of interest then! [0]

[0]: https://motioncanvas.io/blog


All is copied in one way or another; progress in a vacuum is truly artificial, and those who've been singularly credited with certain inventions likely owe that credit to the luck of the draw.

Hopefully, this century, we can shed some of the 'dominating' mindset that has led to technological exclusionism in the first place. Not that catching up to the state-of-the-art isn't warranted, but that progress will become pocketed once more if we keep falling for the same economic traps.

It's interesting how this comes up when the west is the one that is trying to catch up :)

Fair enough, but the point still stands: innovation yields equal benefit compared to isolationism, once more with a hefty share of underhanded copying, and will ultimately result in similar technical capabilities anyway.

While historically this has been difficult to achieve, when innovation cycles shift there is an opportunity to shift ingrained practices too.


While not specifically Victorian, couldn't we learn a lot about what daily conversations were like by looking at surviving oral cultures, or other relatively secluded communal pockets? I'd also say time and progress are not always equally distributed, and even within geographical regions (such as the U.K.) there are likely large differences in the rate of language shift since then, some forms possibly surviving well into the 20th century.

Although many tools exist, there still seems to be a large context gap here: we need better tools to orient ourselves and to navigate large (legacy) codebases. While not strictly a source graph or the like, I do think an Enso-like interface may prove successful here[0].


Also, not all information spreads through public channels, and some may never be/become publicly known. But that doesn't mean news refraction based on textual similarity isn't worthwhile to pursue, as it can reveal a lot about the self-organising principles by which the media operate.
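For what it's worth, a minimal sketch of the kind of textual-similarity signal such an analysis could start from (bag-of-words cosine over two headlines; the tokenisation and example strings are made up):

    // Count term occurrences in a short text
    function termCounts(text: string): Map<string, number> {
      const counts = new Map<string, number>();
      for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
        counts.set(token, (counts.get(token) ?? 0) + 1);
      }
      return counts;
    }

    // Cosine similarity between two term-count vectors
    function cosine(a: Map<string, number>, b: Map<string, number>): number {
      let dot = 0;
      for (const [term, count] of a) dot += count * (b.get(term) ?? 0);
      const norm = (m: Map<string, number>) =>
        Math.sqrt([...m.values()].reduce((s, c) => s + c * c, 0));
      return dot / (norm(a) * norm(b) || 1);
    }

    console.log(cosine(termCounts("minister resigns over leak"),
                       termCounts("leak forces minister to resign")));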

