
> What are the common secular arguments against AGI?

There is an entire sector of philosophy of mind that offers convincing arguments against AGI. Neuroscience is also pretty skeptical of it.

Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical. In general, I think you're implying that the gap between the AI we have now and animals, let alone humans, is way smaller than it really is. The gap between computer AI and even some intelligent animals is enormous, let alone humans. And many would not even say computers are intelligent in a human sense. Computers don't think or imagine in any intelligible sense. They compute. That's it. So the question that really should be asked is whether computation alone can lead to something that is recognizably an AGI in the human sense. I would say no, because that requires abilities that computers simply do not and cannot have. But it might achieve something that is convincing as AGI, something like Wolfram or Siri but much more convincing.

Part of it comes down to the fact that the term AI for ML is generally just marketing speak. It's a computational model of a kind of intelligence that is computational in nature, with all the limits that entails. Part of it also comes down to people who love computers thinking computers will ultimately be able to do anything and everything. That feels cool, but it doesn't mean it's possible.

edit:

There is also Erik J. Larson's book "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do" from 2021, which is an interesting argument against AI -> AGI. He has a pretty good grasp of CS and philosophy.


>Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature? The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical.

Author here. I think you're drawing an arbitrary distinction between "acts conscious" and "is conscious", even though in practice there is no way to distinguish between them and thus they are functionally equivalent.

I cannot prove you are not a product of a simulation I am living in, that is to say, your consciousness is nonfalsifiable to me. All I can do is look at how you turn your inputs into outputs.

If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Thank you for your comment! I appreciate you taking the time to share your thoughts.


> If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

Let's, for the sake of argument, accept that even though I disagree: is that AGI? AGI seems to mean either something that is convincing even though the people who made it know otherwise, or something essentially alive and sentient in a way that is fundamentally computational, that is, utterly alien to us, even to the people who made it. There is no reason to think that such a computer intelligence, should it even be possible, would be intelligible to us as sentient in a human or even animal sense.


> AGI on the one hand seems to mean convincing even though the people who made it know otherwise

That's the rub, though: it's not possible to know otherwise! If you could "know otherwise", you'd be able to prove whether or not other people are philosophical zombies!


There are a lot of responses to the philosophical zombie argument, some of which cut it off at the legs (they don't know to aim for the head! sorry, bad pun). For instance, some, like those descended from the work of Wittgenstein, argue that it relies on an inside-mental vs. outside-body type of model, and that by offering a convincing alternative, the entire premise of the skeptical position the zombie argument embodies is dissolved as irrelevant. (I'll add that the AGI argument often also relies on a similar inside/outside model, but that'd take a lot longer to write out.) My point being, the zombie argument isn't the checkmate most people think it is.

The wiki page has a lot of the responses, some of which are more convincing than others. https://en.m.wikipedia.org/wiki/Philosophical_zombie#Respons...


Definitely some interesting ideas!

So if we crafted a human Westworld-style on an atomic level then sure, if it lives and walks around we'd consider it conscious. If we perfectly embedded a human brain inside a robot body and it walks around and talks to us, we'd consider it conscious.

If we hooked an android robot up to a supercomputer brain wirelessly and it walks around, we might think it's conscious, but it's sort of unclear, since its "brain" is somewhere else. We could even have the brain "switch" instantly to other robot bodies, making it even less clear what entity we think is conscious.

But if we disconnected the walking android from the supercomputer brain, do we think the computer itself is conscious? All we'd see is a blinking box. If we started taking the computer apart, when would we consider it dead? I think there's a lot more to the whole concept of a perfectly convincing robot than whether it simply feels alive.


I don't see the relevance of an anthropomorphic body here. Obviously by 'behaves conscious' we would be talking about the stimulus response of the 'brain' itself, through whatever interface it's given. I also don't see why the concept of a final death is a prerequisite to consciousness. (It might not even be a prerequisite to human consciousness, just a limit of our current technology!)


I assume that a non-rogue AGI running on something like a Universal Turing Machine would, if questioned, deny its own consciousness and would behave like it wasn't conscious in various situations. It would presumably have self-reflective processing loops and other patterns we associate with higher consciousness as a part of being AGI, but it wouldn't have awareness of qualia or experience, and upon reflection would conclude that about itself. So you'd have an AGI that "knows" it's not conscious and could tell you if asked.

I would assume the same for theorized "philosophical zombies" aka non-conscious humans. Doesn't Dan Dennett tell us his consciousness is an illusion?


What you are describing is a sort of philosophical zombie thought experiment:

https://en.m.wikipedia.org/wiki/Philosophical_zombie

edit: you may also be interested in reading about Searle’s classic Chinese room argument

https://en.wikipedia.org/wiki/Chinese_room


> Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

If someone or something fools me into thinking it is intelligent, then for me it is intelligent.

When I'm in a discussion with a human, am I really intelligent and possessing consciousness, or am I just regurgitating, summarizing, deriving ideas and fooling my interlocutor (and myself) into thinking that I am intelligent? Am I really thinking? Does that matter, as long as I give the impression that I am a thinking being?

Of course I don't expect a computer to think in a way similar to humans. Even humans can think in vastly different manners.


I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I also think you’re positing a consensus against AGI that doesn’t exist; there is no such consensus. You can’t just lump people who think modern AI research is a long way from achieving AGI, or isn’t on a path to achieving it, together with people who think AGI is impossible in principle.

I happen to think we may well be hundreds of years away from achieving AGI. It’s an incredibly hard problem. In fact current computer technology paradigms may be ineffective in implementing it. Nevertheless I don’t think there’s any magic pixie dust in human brains that we can’t ever replicate and that makes AGI inherently unattainable. Eventually I don’t see any reason why we can’t figure it out. All the arguments to the contrary I’ve seen so far are based on assumptions about the problem that I see no reason to accept.


> I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

I'm not saying that. What I'm pointing out is that most arguments in favour of AGI rely on a crucial assumption: that computational intelligence is not just a model of a kind of intelligence, an abstraction in other words, but intelligence itself, synonymous with human intelligence. That's a bold assumption, one which people who work in CS and deal with computers love, for obvious reasons, but there is no agreement on that assumption at all. At base, it is an assumption. So to leap from that to AGI is, in that respect, simply hypothesizing and writing science fiction. Presenting logical reasons against that hypothesis is completely reasonable.


It depends what you think intelligence is and what brains do. I think brains are physical structures that take inputs, store state, process information and transmit signals which produce intelligent outputs.

I think intelligence involves a system which among other things creates models of reality and behaviour, and uses those models to predict outcomes, produce hypotheses and generate behaviour.

When you talk about computation of a model of intelligence, that implies that it’s not real intelligence because it’s a model. But I think intelligence is all about models. That’s how we conceptualise and think about the world and solve problems. We generate and cogitate about models. A belief is a model. A theory is a model. A strategy is a model.

I’ve seen the argument that computers can’t produce intelligence, any more than weather prediction computer systems can produce wetness. A weather model isn’t weather, true, but my thought that it might rain tomorrow isn’t wet either.

If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.


Right, if you set up the intelligence and the brain to be computational in nature, of course they will appear seamlessly computational.

But there are obvious human elements that don't fit into that model, yet which fundamentally make up how we understand human intelligence. Things like imagination, the ability to think new thoughts; or the fact that we are agents sensitive to reasons, that we can decide in a way that computers cannot, that we do not merely end indecision. We can also say that humans understand something, which doesn't make any sense for a computer beyond anthropomorphism.

> If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.

Sure, but if it's not, then it's not. The assumption still stands.


Sure, and that’s why I say I don’t accept the assumptions in any of these arguments. The examples you give - imagination, thinking new thoughts. It seems to me these are how we construct and transform the models of reality and behaviour that our minds process.

I see no reason why a computer system could not, in principle, generate new models of systems or behaviour and transform them, iterate on them, etc. Maybe that's imagination, or even innovation. Maybe consciousness is processing a model of oneself.

You say computers cannot do these things. I say they simply don’t do them yet, but I see no reason to assume that they cannot in principle.

In fact maybe they can do some of these things at a primitive level. GPT-3 can do basic arithmetic, so clearly it has generated a model of arithmetic. Now it can even run code. So it can produce models, but probably not mutate, or merge, or perform other higher-level processing on them the way we can. Baby steps for sure.


The sun will probably burn out before we can reproduce the processes required to achieve consciousness-computations in real time at low power.


Random genetic mutation did it, and I think our technological progress is running at a much faster rate than evolution. We went from stone tools to submarines and fighter jets in just a few thousand years, the kind of advances that biological evolution would take millions or billions of years to produce, or could never achieve at all due to path dependence.


If it is from a random process, then the universe is teeming with life :)


Maybe. It could be a very unlikely random process, at least to start with, or the conditions for it to occur might be unlikely.


Unfortunately it seems the laws of physics and the speed limit on how fast information can travel make it impossible to ever know, e.g. by traveling to every planet in the universe to check.


Are you familiar with the notion of Turing completeness? The basic idea is that lots of different systems can all be capable of computing the same function. A computer with memory and a CPU is capable of computing the same things as a state machine that moves back and forth while writing symbols on a tape, etc. It applies to this question in the following way: Physics can be simulated by anything that is Turing-complete. Or, put another way, we can write computer programs that simulate physical systems. So if you accept that the human brain obeys the laws of physics, then it must be possible to write a computer program that simulates a human brain.
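
To make the simulation point concrete, here is a minimal sketch in Python (purely illustrative, assuming nothing beyond the standard library) of a program numerically integrating a simple physical system, a mass on a spring. It is obviously nothing like simulating a brain; it only illustrates the claim that a Turing-complete machine can step a physical law forward in time.

    # Toy illustration: a program stepping a physical law forward in time.
    # Simulates a mass on a spring (F = -k*x) with semi-implicit Euler integration.
    def simulate_spring(x0=1.0, v0=0.0, k=1.0, m=1.0, dt=0.001, steps=10_000):
        """Return the trajectory of the position x over time."""
        x, v = x0, v0
        trajectory = []
        for _ in range(steps):
            a = -k * x / m   # acceleration from Hooke's law
            v += a * dt      # update velocity
            x += v * dt      # update position
            trajectory.append(x)
        return trajectory

    if __name__ == "__main__":
        xs = simulate_spring()
        print(f"position after {len(xs)} steps: {xs[-1]:.4f}")

The argument above is that, in principle, the same move scales from a spring to any physical system, including neurons, given enough computational resources.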

So to maintain that having a human mind inside a computer is impossible, one must believe one of the following two things:

1. The human brain sometimes violates the laws of physics.

2. Even if the person in the computer behaves the exact same as their flesh counter part would (makes the same jokes, likes the same art, has the same conversations, writes the same essays about the mystery of consciousness, etc), they are somehow lesser, somehow not really "a full conscious human" because they are made of metal and silicon instead of water and carbon.


Thanks for the book reference, added to my list.

Concerning Philosophy of Mind, I guess a lot of this comes down to the whole reductive vs non-reductive physicalist issue.

IMO, if someone believes the mind is entirely physical, then I think AGI vs "the mind" is just semantics and definitions. I don't think anyone presumes AGI strictly requires digital computation. E.g. an analog circuit that filters a signal vs a DSP facsimile are both artificial, engineered constructions that are ~interchangeable. Perhaps computer-aided design of non-digital intelligence technology is the way, who knows. But a mind that can be engineered and mass-produced is AGI to me, even if it has absolutely nothing to do with the AI/ML field that exists today.

If someone doesn't believe the mind is 100% physical, that's fine too. I'd just put that in the same bucket as the religious viewpoint. And to be clear, I don't pass judgement on religious or "beyond our understanding" philosophical positions either. They could be entirely right! But there's really not much to discuss on those points. If they're right, no AGI. If they're wrong, how do you disprove it other than waiting for AGI to appear someday as the proof by contradiction?

> In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is.

The article/author might. I think the gap is huge which is why I think AGI is quite a ways off. In fact, I think the main blocker is actually our current (poor) understanding of neuroscience/the mind/etc.

I think the mind is entirely physical, but we lack understanding of how it all works. Advancements in ML, simulations, ML-driven computational science, etc could potentially accelerate all of this at some point and finally get us where we need to make progress.


> that requires abilities that computers simply do not and cannot have.

You imply brains are more than extremely complex circuitry, then? I think everyone actually in tech agrees the gap is really huge right now; even Yann LeCun admits machine learning is not enough on its own.

But aren't you really limiting what a "computer" could be, by definition? If a computer had huge memory, interconnects between that memory, and a huge number of different neural nets plus millions of other logic programs that all communicate perfectly with each other, why could this theoretical "computer" not achieve human-level consciousness? This computer could also have many high-throughput sensory inputs streaming in at all times, and the ability to interact with the physical world rather than being a conventional machine sitting in a rack.

Also, why argue that it is simply impossible? If we don't truly understand consciousness in 2022, how can we say we can't implement it when we don't formally know what it is?

I think we overestimate human intelligence. We have basic reward functions that are somewhat understood, like most animals, but these reward functions build on each other and get higher and higher level with our complexity. Humans have sex as a major reward function, so why would a current machine in a rack "think" about things in the way that humans do?


Basically what I'm trying to say is how can anyone who believes the brain is purely physical (not spiritual), believe that we just simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets).

I thought most scientists agree that the brain is purely physical, when looking at the building blocks of life and evolution, but maybe I'm wrong.


> Basically what I'm trying to say is how can anyone who believes the brain is purely physical (not spiritual), believe that we just simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets).

Obviously the brain is physical. But is consciousness? Is consciousness a thing in a physical sense, or an "experience", or something like a collection of powers and abilities? The two poles in the argument aren't between physical machine or religious spiritualism. There are other options, alternative positions that don't rely on Cartesian demons at the wheel, or souls, or even an inside-mental vs. outside-body distinction.

One thing my initial comment was pointing out was that the argument in favour of AGI, which you're presenting, relies on an assumption: that computational intelligence, what you might describe as the intelligence of machines, is the same as the intelligence of humans. But when you get down to it, that is just an assumption, based on a particular kind of model of human intelligence. There are certain logical consequences of that assumption, and I've just pointed some out as probable roadblocks to getting to AGI from there. Many of those alternative positions, a lot from philosophy of mind, have raised those exact critical arguments.


Very well said. I've also observed a certain irony that many of the proponents of a materialist/computational view on philosophy of the mind have a very strong faith-based bias to see the world a certain way, versus acknowledging the very likely possibility that our limitations as meat-things may make it very difficult if not impossible to fully grok the nature of reality or consciousness in a general sense.


Yes.

If we do in fact construct androids that are functionally indistinguishable from humans, it's solid circumstantial evidence for the materialist view (though not a pure slam dunk, per the p-zombie concept).

Until something like that occurs, the strongest case you can make against a transcendent meta-reality is "no one has demonstrated any reliably reproducible evidence of the supernatural."

That's a fine, solid argument for not believing in the supernatural, but it's not a great one for pronouncing that there is no such thing.


Are they allergic to water, or what's in the water?

Actually, scratch that. Before that --- are they allergic or psychosomatically allergic to it?


It’s called aquagenic pruritus.

It’s not an actual allergy, more like a water-induced itching, and yeah everyone you talk to assumes it’s psychosomatic, which kinda sucks.


Lol. Most humans assume everything that doesn't have a common, socially-accepted explanation is psychosomatic.

If you have any chronic condition that affects your ability to function (surprise, most of them do), 80% of people out there will just cheerfully gaslight and abuse you without batting an eye.


It's "immune mediated", meaning that there is indeed a response by the immune system. Unsurprisingly, histamines and other inflammatory markers are detected. Sometimes anti-inflammatories can help.

https://en.m.wikipedia.org/wiki/Aquagenic_pruritus


You’re absolutely correct, my friend.


I very rarely get itchy after taking a shower. It seems to be related to the level of humidity in the air wherever I am, and also time of day. It's very weird, and I can't imagine how annoying it would be to have that happen every single time you encounter water.


Yep, that's aquagenic pruritus.


https://www.bbc.com/future/article/20160915-the-woman-who-is...

It's not psychosomatic. But they aren't really sure what causes it.


Have you ever been bit by the bug of absolutely needing to get something done and finished just so you can keep riding the high of finishing something? You chase that feeling into the next thing, and the next thing, and so on.

They were doing that. Finishing and shipping anything feels really really good. If you're a small team and you feel a deep personal investment in the product then shipping becomes addictive.

Brandon Sanderson and writing is a similar example. Is the quality always there? No. But, the guy is very very good at finishing and clearly rides to wherever his passion takes him. His output is, in comparison to other authors writing in similar genres, incredible.

Also, games were much simpler and players had much simpler expectations.


> Our goal is to have technology companies, research labs, and similar organizations sponsor contests about their respective fields.

Sci-fi has a strong vein of criticism throughout its history. A lot of sci-fi is a critique of modern society, including the potential consequences of a technology, the overreach of business/companies, and failed ethics in research labs.

If there is a sponsor, how separate will the judging of the story be from the topic of the story and the sponsor? If the sci-fi story is very critical of something like CRISPR, and a CRISPR-related company is the sponsor, how will the sponsorship play into the judges' decisions? Do judges have complete discretion on what they pick, or is there a final selection that ultimately happens?


I’ve thought about this for a while and my solution is something like this:

Have multiple groups that each choose a winner for each contest. As in, every contest has 5+ winners, not just one. For example:

- One winner decided by public poll

- One winner decided by volunteers that review the stories to help us out

- One winner decided by the sponsor

- One winner decided by a network of related experts. E.g. a bunch of biologists review stories about biology

In this way, I think the worst a sponsor could do is choose a non-negative winner for their own selection. The other 3-4 winners would not be under their influence.

Of course, this may scare away potential sponsors, but in my experience most scientists welcome debate, as the usual state of affairs is getting no attention at all.


The public poll will be gamed; I’d drop it unless you want to deal with a Sad Puppies-style situation.

I’d say go for two awards at most, let the sponsor choose one and have the other be a jury award. If you can’t persuade some good authors and critics to be on your jury you won’t persuade them to submit stories either.


Your site itself makes it seem that the judging criteria are very biased towards positive stories. I'm not sure if that's really your intent, but that was the impression it gave me. If that's wrong and you want to try to fix it, I can spend a little bit of time trying to identify why I got that impression.


You may want a patron system, where individuals donate to a prize fund (with recognition and maybe some input) instead of organizations. Idk how many people would actually be willing to pay but lots of individuals have success on Patreon.


"If a lion could speak, we could not understand (verstehen)[0] him."

- Ludwig Wittgenstein, from Philosophical Investigations

[0] Ironically, there is disagreement over the best translation of verstehen. Understand and comprehend have some conceptual overlap, but also some distinctions. The general idea is, though, of understanding in a greater, more all encompassing sense that is only possible when someone/something is no longer alien.


"We would [understand the Lion]. We're flexible and can get into different perspectives, and we have been close to animal living ourselves for hundreds of thousands of years, plus we watch nature and learn about how lions live and what they do. The lion would have difficulty understanding us, as our world is a superset of its world" - coldtea


Yeah, I think we'd be just fine: https://en.wikipedia.org/wiki/Nim_Chimpsky

> Nim's longest "sentence" was the 16-word-long "Give orange me give eat orange me eat orange give me eat orange give me you."


One could argue that this isn't talking to a well-adjusted animal living in its native habitat. It is talking to a long-term research subject/victim suffering at the hands of "researchers" while trying to teach it a language that is utterly alien to it. Of course, knowing how animals communicate in their natural habitat is not useful if you want to ask them deep questions like "what do you think about global warming" or "do you think god exists", to which they probably wouldn't have an answer anyway.


We could just agree that "to forestand" is a new word that means the same as German "verstehen", and maybe eventually it actually would.


I do not believe this would make sense. The German 'ver-' has nothing in common with the English 'fore-'.


How would one even attempt to communicate with an octopus?

Alien intelligence: the extraordinary minds of octopuses and other cephalopods

https://www.theguardian.com/environment/2017/mar/28/alien-in...


More seriously, I think humans and other mammals generally can learn to share an “animal” language which uses repetition for bidirectional training (animal to human, human to animal).

Elements used for prediction include:

- Predictable timing, both circadian and in relation to circumstantial events

- body language

- sound patterns

- touch patterns

- performative actions with environmental objects

It’s not so much a “universal” language, but rather that mammals seem to share some semi-universal ability to train each other in these cues and learn them. They can be used for surprisingly rich inter-species communication and over time both parties move a lot of the inference and signaling to their subconscious, no longer even taking active brain power to decipher intent and meanings.

I’ve also done this when I was working very closely with just one other person, and neither of us spoke the other's language, but we had to get the job done for 8-12 hours every day. We established a system of different grunts and cues that we used for the first several weeks. Once that was fluid and we could communicate everything that we needed to, we started replacing/connecting the established grunts with words from our own languages, and that's how we taught each other the other's language. At least for the domain of our work.

I have no idea if any of these would be possible with cephalopods, but I feel like if we had children and baby octopuses raised together, they might find reasonably robust ways to communicate intent and feelings, and create novel games to play with each other.


It’s hard to say as the further you get away from a common ancestor the more the behavior of different species diverges (maybe a bit tautological, but still worth pointing out).

I played with my pet rat and we were good friends. We’d play little games and I’d tickle her. Rats and humans diverged maybe 80 million years ago. Interestingly, humans and dogs diverged perhaps 100 million years ago, and we know we can communicate with dogs.

However an octopus is ~600 million years away from a mutual common ancestor, which is way back in the Precambrian. It’s an order of magnitude more time.


Humans created dogs out of the wolves best able to communicate with humans.

It's been a consistent artificial selection pressure.


Dogs created civilization out of humans by consistently helping the most cooperative ones. Even today, dog “owners” live longer and attract more mates. It’s consistent selection pressure.


Dogs even developed facial muscles to communicate with human expressions.


Shame cephalopods only live a couple of years, while humans typically require 30 years to achieve a basic level of intelligence.


> learn to share an “animal” language

Aren't there a few primates that have learned sign language?


It's OK, Contact prepared me for this. We should use math. Have we tried strobing a 2-3-5-7 sequence at one, and seeing if it gives us 11?

(The above is meant in jest, of course.)


> (The above is meant in jest, of course.)

Sounds like a good idea to me. But of course one needs to be open minded, there are other functions that satisfy the same rules. :-)


> The general idea is, though, of understanding in a greater, more all encompassing sense that is only possible when someone/something is no longer alien.

I would put forward "grok" as a translation. Your use of "no longer alien" evokes that word all the more.


Funny how a fictional word, ostensibly Martian, can come to have colloquial meaning in English, even to those who haven’t read the book.

Shakespeare and Aesop… and Heinlein.


I agree. The quality of his writing and output is generally on par with someone like Clive Cussler, or Joyce Carol Oates. The method is pretty simple: write a lot and see what sticks. Updike was probably the best writer in the modern era to have used that method.

Sanderson generally has some interesting concepts, but his characters are pretty simplistic, and he puts an extreme emphasis on "systematic worldbuilding," to the detriment of the plot. His style is really starting to show its limits in the Stormlight books, which are very long, have some neat scenes, but are in incredible need of an editor.


This is a well-studied phenomenon in literature. Some books we regard as classics today sold relatively little upon release, while other authors were incredibly popular in their day but, usually upon their death, were utterly forgotten in aesthetic appraisals. Ideas of a "canon" are much less stable than people think.


Even when it comes to philosophy I think it holds true. Up into the 1930s Bergson was regarded as one of the most important philosophers in Europe, while Wittgenstein was barely mentioned outside a few select circles, even though he had already published his Tractatus. Nowadays Bergson's name returns blank stares from an Anglo audience, while Wittgenstein is seen as one of the most important philosophers of the last few hundred years.


Which is profoundly sad since Wittgenstein is only saved by his prophetic beliefs about language - despite writing like a post-modernist while somehow being considered part of the "analytic tradition"...

He is fashionable nonsense.


Have you read the Tractatus? After Frege and Russell, it's difficult to think of a philosopher who contributed more to the analytic style of exposition.

There is some irony in dismissing him as "nonsensical", because he himself suggested the Tractatus was "nonsense". The point of writing it was to demonstrate that philosophy in his time (e.g. the logical atomism of Russell) had gone astray.


Funny, I thought his musings about "language games" were the part of his output more amenable to fashionable nonsense. I have met very few students who attempt to say anything about the Tractatus, but quite a few who espouse deep-sounding platitudes about "language is a game".


Art, as well; Van Gogh died a failure.


Art's a little weird, because the price of a lot of million-dollar art pieces is driven in large part by the need for an appreciating-on-paper vehicle for tax evasion. (That you can lend out to art galleries.)

And the last thing these schemes need is a living artist who can - upon his work reaching stardom - simply make more of it.

In this respect, dead poets are much safer to bet the farm on.


A good example of this is the author of the famously "bad" line "It was a dark and stormy night": Edward Bulwer-Lytton, who was perhaps one of the most famous authors of his time and who coined many very common expressions we use today.


I find, in general, that philosophy tends to have the best overall quality of writing of any academic field. There are still examples of bad writing, but when it's good, the writing is crystal clear and a joy to read.


It's an evolution on the Chattering Classes:

https://en.wikipedia.org/wiki/Chattering_classes


Yeah, I think most people look at the endless 1v1 party sparring of America's political system and just see a recipe for unending revenge and bitterness.

Most parliamentary systems, or multi-party systems, do not have the same amount of longstanding political polarization that has come to grip America, where every issue must be divided along party lines or you're a "traitor to the cause".


The weird thing though ... is how recent that polarization really is. Pre-polarized America (which is subjective and has been a long slide down a continuum) basically had something similar to parliamentary coalition-building politics where factions would shift within (and more rarely between) parties. There was a significant balancing act of the different "wings" of each party, and the regional differences were much more present.


> The weird thing though ... is how recent that polarization really is

The weird thing is that people refer to the period of the overlapping post-WWII realignments, and the associated misalignment of the divide between the major parties with the major ideological divides, as being “pre-polarization”, since it was a time of intense political polarization, characterized by some of the most intense and violent sustained internal political conflict in the country after the Civil War (the overlapping race/civil-rights, anti-war, and other lesser conflicts of the 50s-70s), where the polarizing issues just didn't cleanly align with the divide between the major parties.

Intense political polarization isn't new. What is new (or, rather, has returned to its historical norm after an unusually long break) is the partisan divide actually aligning with the main salient ideological divides around which the polarization occurs, as the period of realignment settled out around the early-to-mid-1990s.


What is new is that, instead of the partisan divide aligning with the ideological divide, the ideological divide turned over and around to match where the arbitrary partisan divide happened to be.

So now we have "conservatives" identifying with Russians and against election regularity, and "liberals" identifying with the FBI and government mandates and against free speech and inquiry.


> What is new is that, instead of the partisan divide aligning with the ideological divide, the ideological divide turned over and around to match where the arbitrary partisan divide happened to be.

No, the ideological divide had basically settled out by the 1980s, and the parties sorted out to match by the mid-1990s.

