Funny, because these exact measures [0] were brought up in response to a similar claim I made over a year ago [1] about the resolution of our instrumentation.
There would appear to be a worrying trend of faith in scientism, or the belief that we already have all the answers squirreled away in a journal somewhere.
It's a bit funny, the qualia thing and sampling rates.
Obviously we hope what we learn from e.g. psychology and fMRI will help us explain more things about the mind, and surely most researchers in psychology hope their research will help us get some answers on things related to qualia as well. And almost certainly most good / consistent reductionist researchers must believe that qualia arise from the brain, at least in significant part.
However, precisely by this reductionist logic, it follows without any need for an experiment that fMRI cannot have the temporal resolution needed for a rich understanding of the mind. It is immediately and phenomenally clear that qualia change incredibly fast (consider how many distinct images or sounds one can process or generate in the mind in under a second), while fMRI's TR (repetition time, its temporal sampling resolution) is on the order of seconds. And yet I find a lot of people in scientific brain research go oddly silent, or seem to refuse to accept this argument unless some strange sort of published, quantified operationalization can be pointed to (hence my pre-emptively mentioning information transmission in neurons in under 100 ms).
I'm not sure I'd call this scientism, exactly, I tend to see it as "selective quantificationism", i.e. that certain truths can only be proven as true if you introduce some kind of numerical measurement procedure and metrical abstraction. Like, no one demands a study with Scoville units to prove that e.g. a ghost pepper is at least an order of magnitude hotter than candied ginger, even though this is as blazingly obvious as the fact that the mind moves too fast for something that can barely capture images of the brain at a rate of two per second.
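To put the "two images per second" point in signal-processing terms, it's just the Nyquist limit; a back-of-envelope sketch (the TR and event-duration numbers here are illustrative round figures, not from any particular study):

```python
# Back-of-envelope Nyquist check: to resolve a signal you must sample
# at more than twice its highest frequency component.

def nyquist_limit_hz(tr_seconds: float) -> float:
    """Highest frequency (Hz) resolvable at a given sampling interval."""
    return 1.0 / (2.0 * tr_seconds)

fmri_tr = 0.5        # seconds per volume: ~2 images/sec, a fast TR
neural_event = 0.1   # seconds: sub-100 ms neural transmission

print(nyquist_limit_hz(fmri_tr))   # 1.0 Hz is the ceiling at this TR
print(1.0 / neural_event)          # 10.0 Hz: rough rate of 100 ms events
```

So even a generously fast TR leaves an order of magnitude between what the scanner can resolve and the timescale of the events in question.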
I'm not a scientist, and I don't even have a very good statistical background, so correct me if I'm wrong; would it be fair to say that the lack of skepticism about fMRI studies in the broader public is due to scientism? Because of naive reductionism and a gut understanding of what is "scientific", people are far more skeptical of a study that says "we surveyed 100,000 people" than of one that says "we scanned the brains of 10 people." I've noticed a similar phenomenon with psych vs. evolutionary psych. People have an image in their head of what is scientific that has nothing to do with statistical significance and everything to do with vibes.
It is tempting to speculate on what might cause the credulousness of the broader public re: fMRI, but I think there is enough / too much going on here for me to really be able to say anything with much confidence. Scientism especially is hard to define.
I think I broadly agree with you though that credulousness toward (statistically and methodologically weak) scientific / technological claims mostly comes down to vibes and desires / needs, and not statistical significance, logical rigor, evidence, etc.
Where needs / desires are high, vibes will (often) win over rationality, and vice-versa. It is easier for people to be objective about science that doesn't clearly matter in any obvious direction, or at all. fMRI is "the mind", and thus consciousness, and so unfortunately it short-circuits rational evaluation in much the same way speculation about AI and "consciousness" etc. does. *Shrug*
The signing keys used by the Certificate Authority to assert that the client (leaf) certificate is authentic through cryptographic signing differ from the private keys used to secure communication with the host(s) referenced in the x509 CN/SAN fields.
I know that. At issue is the fact that the signing keys can be used to sign a MITM key. If there were multiple signatures on the original key, it would (or could) be a lot harder to MITM (presumably). Do you trust any CA enough to never be involved in this kind of scandal? Certainly government and corporate CAs MITM people all the time.
Edit: I'm gonna be rate limited, but let me just say now that Certificate Transparency sounds interesting. I need to look into that more, but it amounts to a 3rd party certificate verification service. Now, we have to figure out how to connect to that service securely lol... Thanks, you've given me something to go read about.
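From my first skim: CT is built on public, append-only logs structured as Merkle trees. A toy sketch of the core idea (simplified; the real RFC 6962 construction splits unbalanced levels differently, and the prefixes/encodings below are the only parts taken from the RFC):

```python
import hashlib

# Toy Merkle tree in the style of Certificate Transparency (RFC 6962):
# leaves are hashed with a 0x00 prefix, interior nodes with 0x01, so the
# log head commits to every certificate ever submitted.

def leaf_hash(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_head(leaves: list[bytes]) -> bytes:
    """Root hash over the current list of log entries."""
    level = [leaf_hash(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # simplification: duplicate the last
            level.append(level[-1])   # node instead of RFC 6962's uneven split
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

certs = [b"cert-for-example.com", b"cert-for-example.org"]
head = tree_head(certs)
# Appending a new cert changes the head, so a monitor comparing heads over
# time can detect any cert (e.g. a MITM cert) silently added to the log.
assert tree_head(certs + [b"mitm-cert"]) != head
```

So the trust model isn't "connect securely to a third party" so much as "everyone can audit the same append-only structure and cross-check heads out of band."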
They... sort of are though? A year or two ago I just waited until the very last problem, which was min-cut. Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time. There are algorithms that don't even require all the highfalutin graph theory.
I don't mean to say my solution was good, nor that it was performant in any way (it was not; I arrived at adjacency (linked) lists), but the problem is tractable to the well-equipped with sufficient headdesking.
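For the curious, that flavor of problem yields to a plain BFS-based max-flow (Edmonds-Karp) over adjacency lists; a minimal sketch on a made-up toy graph (not the actual AoC input):

```python
from collections import defaultdict, deque

# Minimal Edmonds-Karp max-flow; by the max-flow min-cut theorem the
# returned flow value equals the capacity of the minimum s-t cut.

def max_flow(edges, s, t):
    cap = defaultdict(int)
    adj = defaultdict(list)          # adjacency lists, as mentioned above
    for u, v, c in edges:
        if v not in adj[u]: adj[u].append(v)
        if u not in adj[v]: adj[v].append(u)
        cap[(u, v)] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Walk back from t, find the bottleneck, push flow along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck   # residual capacity for undoing flow
        flow += bottleneck

edges = [("s", "a", 3), ("s", "b", 2), ("a", "t", 2),
         ("b", "t", 3), ("a", "b", 1)]
print(max_flow(edges, "s", "t"))  # 5, the min-cut capacity of this graph
```

Nothing here past a second-year algorithms course, which I suppose is exactly the point under dispute.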
Operative phrase being "a computer science education," as per GGP's point. Easy is relative. Let's not leave the bar on the floor, please, while LLMs are threatening to hoover up all the low hanging fruit.
You say in your comment: "Anybody with a computer science education ... should be able to tackle this one" which is directly opposed to what they advertise: "You don't need a computer science background to participate"
"Anybody with a computer science education who has seen the prompt Proof. before should be able to tackle this one with some effort, guidance, and/or sufficient time."
I have a computer science education and I have no idea what you're talking about. The prompt "Proof." ?
Most people who study Comp Sci never use any of what they learned ever again, and most will have forgotten most of it within one or two years. Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstra's algorithm, DFS, BFS, etc.
Holy fuck. I should just grow coconuts or something in the remote Philippines.
> Most software engineers never use any comp sci theory at all, but especially not graph theory or shit like Dijkstra's algorithm, DFS, BFS, etc.
But we are talking about Advent of Code here, which is a set of fairly contrived, theoretical, in vitro learning problems that you don't really see in the real software engineering world either.
Obligatory uwaterloo plug. I didn't even end up graduating after 3 years of compsci, but I still came away with almost two years of work experience. Colleagues in my early career were still paying down student debt while I had already paid for tuition out of pocket, not with tax dollars.
Funny too, because I had a philosophy professor there who talked about how the university is not a vocational school, but a place one goes to enrich the mind and become a more worldly citizen.
> It's the best platform to stalk people and collect any info using OSINT.
It's the main platform of interest if you ever talk to data brokers just because of the richness of personal information, employment history, and social network (connections) information present there. Microsoft is sitting on a goldmine of personally-identifiable information, and the platform is aggressively scraped every millisecond for new data.
[0] https://news.ycombinator.com/item?id=41834346
[1] https://news.ycombinator.com/item?id=41807867