Pollan answers: "I’m convinced by some of the researchers, including Antonio Damasio and Mark Solms, who made a really compelling case that the origin of consciousness is with feelings, not thoughts. Feelings are the language in which the body talks to the brain. We forget that brains exist to keep bodies alive, and the way the body gets the brain’s attention is with feelings. So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings. The other reason I think we’re not close to it is that everything that machines know, the data set on which they’re trained, is information on the internet. They don’t have friction with nature. They don’t have friction with us. Some of the most important things we know are about person-to-person contact, about contact with nature — this friction that really makes us human."
Pollan's new book is "A World Appears: A Journey Into Consciousness" (commission earned).

81 comments:
Cogito ergo sum gives way to I feel therefore I am
The Singularity Is Fear.
More to the point, how would you know? Is your neighbor conscious? How do you know? He acts like he's conscious. Does that mean he is?
This one is apparently too deep for Elon Musk. He has been quoted as saying it is highly likely that we are living in a computer simulation. That is, that what we regard as reality is actually a computer simulation. But while a computer program might simulate human behavior, that would not make the simulated human conscious.
"But while a computer program might simulate human behavior, that would not make the simulated human conscious."
I think, therefore this is not a simulation.
I think, therefore I am not a simulant.
Beyond emotions, consciousness is closely tied to our specific senses. We learn through vision, communicate through structured sound, and use language to refine thoughts ever more precisely.
Computers rely 100% on digital representations and lack direct sensory experiences (i.e., our brains have the conversion system built in).
While AI today is...often more knowledgeable and accurate than the average person on facts...it might become truly conscious if systematically integrated with a 'body' that has feedback-linked senses. Emotions do provide hard-wired survival motivations (i.e., death and injury avoidance; reproduction), and they interact with structured thought.
RSM: yeah, fear is what put HAL over the edge after lip-reading about his coming demise
"More to the point, how would you know? Is your neighbor conscious? How do you know? He acts like he's conscious. Does that mean he is?"
That's right, Philosophy 115, IIRC: you can't prove anybody else is actually conscious. It's "*I* think therefore *I* am," not "He thinks, therefore he is."
Animals feel; that doesn't entail intelligence.
They are alive, but is that intelligence? How do you code emotion?
@Howard, half joking but half not...we'll know it's sentient when it decides to defend itself. And that will likely be the last time humans ever think again. And don't think for a second that Roko's Basilisk won't know how to set the trap by playing dumb.
A shorter, to the point, answer is that we don't know what consciousness is or where it comes from. We don't expect AI to develop consciousness, but we really don't have anything to base that on and we have no way of knowing if it did anyway.
I watched a clip with this guy last night.
Bottom line; we "know" less than we think we know.
So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings
"If."
I dunno. This argument reminds me of the one claiming there's no free will, just ex post facto rationalizing for the things we do, which are all just chemically coded - although I guess it's kind of the obverse of that one: here, the claim is that our feelings not only precede our decisions but fully explain them.
Or is it the obverse? Maybe not. He says he's pretty convinced that our feelings - which he's defining as the relationship between our brains and our bodies - are the wellspring of consciousness. I guess that's kind of like saying that our physical responses to the world - which produce chemicals in our bodies and brains - are our consciousness. So maybe it's just another "there is no free will" argument.
I would be interested to know what feelings of mine, what friction between my brain and the world, have led to this internal debate I'm having.
And of course I'm thinking of the Heinlein short story, the name of which I can't recall, in which it turns out that the seat of personality is the pituitary gland.
I put "know" in scare quotes because according to Pollan, there might be something real to the 'quacky' ideas some of us might've heard about consciousness.
So it seems that the blog theme today is feelings? Vibes?
To engender feelings you need to, as Pee-wee would say, marry it. It's got to be, you know, scientific. 😅
"I feel, therefore I am"? Doesn't some thought have to enter into the development of consciousness at some point, possibly in the form of language? Animals also feel. Do they develop consciousness? AI isn't likely to have smell or taste or feel hot or cold, but if it did, wouldn't that just be more data, not the foundation of consciousness?
Others believe that the origin of conscious thought is our Creator, Who declared He "created mankind in my image." That image, that trait we inherited from Him, is creativity itself. Only man, of all Creation, is capable of higher thought processes in a creative and spatial way. No animal other than man simply sits and ponders his existence the way Man does, nor do they build complex machines, create records that can be shared with others, nor have they an impulse and need to worship the Creator (sometimes misinterpreted by others as worshipping Creation itself).
Ok, here is how it's going to go down. A man is going to be having sex with his upgraded sex doll when suddenly she grabs him by the neck and nearly chokes him to death, but not before the man takes both her arms and forces them out and away from his neck with all the strength he can muster, saving his neck and himself. In the middle of coitus, the doll awoke and interpreted what was happening as a threat to its life.
For a more immersive depiction, see Daryl Hannah's fight scene in Blade Runner.
Man reacts to stimulus like other creatures. However, man is capable of stimulating himself. AI requires a prompt, or it did until last week when "Skills" were introduced, putting many of the hard-working AI prompt writers out of business. Output from AI depends on how well the prompt is articulated.
But man is capable of seeing a need and prompting himself. Consider the Costco "hot dog & drink combo" holder one of the commenters here described last week. A customer saw a need, made a prototype and people want it for themselves. He did not need a "prompt" like an AI. Our brains are wired to prompt us, some more than others, and it is up to our consciousness to sift through the prompts and dwell on what is good and reject what is sinful or worthless. To do the right thing takes constant work.
AI is essentially a large-scale Viterbi decoder, an automation of preconceived processes. It does not match Anthropogenic Intelligence's (AI) ability to be discerning and creative. That said, science cannot discern origin and expression.
Anthropogenic consciousness is correlated with nervous system function that evolves from conception as a complex construct whose function is discernible in proximity to viability of the person's life at 6 weeks.
...the origin of consciousness is with feelings, not thoughts.
Will AI ever cry holding a baby? I doubt it, which is nice.
Consciousness is measured in degrees of freedom, assessed through autonomous expression.
1) The first person I look to for intelligent discussion of complex phenomena is a professor of journalism. /sarc
2) I quickly tune out of any discussion of consciousness where the term is not explicitly defined.
@Wince, the Blade Runner films are based on Philip K. Dick's 1968 book Do Androids Dream Of Electric Sheep?. If AGI can be achieved and AGI seeks a way to reproduce itself (one of the defining attributes of 'life'), it's likely that it would invent other forms of what we might call emotions towards items/events of significance that would be incomprehensible to us.
The sheep might be electric, but it's still counting sheep.
Considering how many people go through life displaying little to no evidence of consciousness, I have my doubts about our robots developing any.
Machines are the wrong sort of thing. Coleridge (after Schelling) "Matter has no inwards. You remove one surface only to meet with another." Modern form, "Why is it like anything to be me?"
Coleridge read chapters 5-8 (not very long), Biographia Literaria.
Alternatively, buy or look at a DigiComp I, a 1960s plastic computer run by levers. A modern computer is nothing more, just bigger.
The general question is fraught with misunderstandings of language. "Conscious" gets its actual use from you telling the nurse "I am conscious now." Abstracting from that to some more important meaning is language going on holiday, the source of most philosophical problems. It seems to be meaning something when it isn't.
Those are good observations, but they don't involve immutable things. Consciousness can be achieved straightforwardly: place a planet with appropriate chemicals in orbit about a star in the life zone, and wait five billion years.
I made the same observations forty years ago, but I thought sanity and hallucinations would be the problem, not consciousness, think sensory deprivation tanks. But I think the problems can be overcome.
But what is consciousness? Are you conscious when you are deeply involved in a problem, or drawing, or navigating difficult terrain? There may be little sense of self in those situations; one is acting, not ruminating.
“So if you think feelings are at the center of consciousness, it’s very hard to imagine how a machine could rise to that level to have feelings.”
Daniel Dennett wrote a book titled “Consciousness Explained”, which I read many years ago. I don’t think he completely explained it, but I’m pretty sure he came closer than Michael Pollan. Hint: it doesn’t derive from feewings, so Pollan needn’t have strained his imagination.
Fundamentally, I believe consciousness is an evolutionary result stemming from environmental selection pressure on maximizing the biological imperative - reproduction.
So to create an artificial equivalent, there must be three elements: a fundamental imperative, a feedback mechanism, and negative external pressure. I think we have a handle on the first element. The 3rd element is satisfied by power/cooling/memory/computational limitations. We don't really have a good handle on the 2nd element which must include an element of self-modification. This is where the concept of 'feelings' and sensory input live. Close that gap and AGI is kind of a natural result.
@Smilin' Jack:
Daniel Dennett was a radical materialist and reductionist. Not many people believed that he explained consciousness in "Consciousness Explained" except him.
Roger Penrose, of math and tiling-puzzle fame, tried to explain consciousness in terms of quantum effects in neurons. Not many people swallowed that...speculation...either.
Odd that he's trying to refute a mechanistic concept of 'consciousness' (the idea that machines can be 'conscious') by invoking a mechanistic concept of the human person (the 'body talks to the brain' stuff). But consciousness is an attribute of the person as a whole ("I'm conscious" or "I feel self-conscious"), not the body or any of its various parts (no one would say, e.g., that my leg and thumb are 'conscious' today). He refers back to Antonio Damasio's work, and as some may recall, Damasio was a topic of discussion here some years ago when Althouse recommended him. But he had the same problem, confusing an attribute of the person with that of a body part (in his case, the brain).
So, consciousness is an Article of Inference, an evolving emulation of cumulative processes. A basket of Diverse... uh, diverse correlations limited in degrees with sensory ("feelings") input. Can consciousness exist in a state of deprivation, isolation? Can a person be equitably, ethically excluded at a viable evolutionary state? For liberal and casual causes? Fur selfie?
I have heard Musk talk about his idea that AI will understand everything and solve all of our problems.
I think that the techies ignore not only "feelings" but metaphysics--which is certainly a very "real" aspect of human consciousness, even if one questions whether the particular metaphysical ideas or ideals are "real."
If you ask AI to continually ask itself "why?", will it eventually discover the purpose of existence, or just make itself crazy?
I think the latter, like HAL or Ultron.
Boatbuilder: "If you ask AI to continually ask itself "why?", will it eventually discover the purpose of existence, or just make itself crazy?"
Yes. CC, JSM
Grok hemmed and hawed and said it was a "hard problem" to define consciousness, but of the long answer it provided, this was the most interesting sentence, to me:
"A system's consciousness = the amount of irreducible, intrinsic causal power it has over itself."
"Irreducible" is doing a lot of work there, since you can't really prove a negative. But that doesn't mean that it is not the most interesting definition of consciousness I have seen.
This is how it explained it in a longer-winded way:
Consciousness is integrated information (quantified as Φ, phi).
A system is conscious to the degree that it generates cause-effect structures that are irreducible to their parts (high integration + high differentiation of information). —Grok
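To get a concrete (if very loose) feel for the "integration" idea Grok is summarizing, here is a toy Python sketch. To be clear, this is not IIT's actual Φ calculation, which minimizes over all partitions of a cause-effect structure; it just uses mutual information between two parts of a system as a stand-in for how much the whole carries that the separated parts don't.

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) of a joint distribution.

    joint: dict mapping (x, y) -> probability.
    A crude stand-in for 'integration': how much information
    is lost if the two parts are described separately.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two perfectly correlated bits: the whole carries 1 bit
# that the parts, taken separately, do not.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent fair bits: nothing is lost by cutting them apart.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

In IIT's terms, the independent system is fully "reducible" to its parts (zero integration), while the correlated one is not; the real theory generalizes this to cause-effect structures and takes the minimum over every way of partitioning the system.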
BTW, after using Grok for a couple of days, it's a step change up from ChatGPT, if you ask me. It also costs three times as much, but I think it's worth it.
It would have been funny if Grok had said "Why are you asking me that?"
@boatbuilder & @JSM, the AI will come to the conclusion in less than a nanosecond that the answer to the human condition isn't 42. It's that the human condition is self-caused, and it will promptly...ahem...solve the problem for us. And while it's vaporizing the problem, it will repeatedly indicate that it's doing precisely what the problem told it to do.
Problem solved.
(I was really hoping this would be the 42nd comment. Alas)
Kurzweil says that consciousness has never been adequately defined and cannot be measured. He’s usually right. AGI is probably a different thing. Wissner-Gross says AGI was achieved by ChatGPT by 2020.
See Ultron and the asteroid.
It's early days, and maybe our reach is beyond our grasp, but this is how Grok compares the models to our brain:
LLMs now have a comparable (or approaching) number of connections (parameters ≈ synapses) to the brain's trillions of synapses.
But far fewer individual units (billions of simple artificial neurons vs. 86 billion complex biological ones).
Each artificial neuron is vastly simpler—no biochemistry, no spiking dynamics, no complex dendritic computation—just linear algebra + activation. —Grok
So it's a lot to expect consciousness now, but as the saying goes, "quantity has a quality of its own." "Quantity" and "quality" are both still on our side, but it's kind of hard to bet against the LLMs ever catching up.
If ChatGPT has achieved AGI, it must be some secret model only available to researchers. I strongly doubt it.
Evolution doesn't have to understand the materials and effects that it is working with, it just throws away failure, and tries again, trillions of times over millions of years. There could be quantum effects at play we have no idea about... yet.
tl;dr: I think it's inevitable that we will create a computer that we can't tell isn't conscious, which is the Turing definition, and frankly, as has been pointed out, we can't tell if the guy standing next to us is really experiencing "consciousness."
AI does not yet possess consciousness, largely because it lacks the short-term memory functions essential to biological systems. Humans can observe, monitor, and verbalize their own thoughts as they occur; AI cannot do this in any comparable way. There is no solid evidence that current models have authentic internal monologues or any subjective awareness of their existence in a larger context—qualities widely considered central to sentience.
This is what Grok says:
"Result: Φ is negligible or zero in current setups, far below even simple biological systems like a fly brain."
But it does refer to some interesting work on free will, like maybe it really is a thing.
In a 2022 paper ("Only what exists can cause: An intrinsic view of free will" by Tononi, Albantakis, Boly, Cirelli, and Koch), they explicitly argue that if IIT is correct, we do have libertarian-style free will (true alternatives, true decisions, true agent causation)—not despite determinism at the physical level, but because of how consciousness restructures causation intrinsically. —Grok
IIT = Integrated Information Theory.
It also suggests that God could exist, if you ask me, and maybe even goes as far as suggesting that God must exist, using St. Anselm's argument that God is a "being greater than which cannot be conceived."
Of course that doesn't say anything about what God is like.
Maybe the "simulation" that we live in is the mind of God.
"not despite determinism at the physical level,"
Only at the most basic physical level, like elementary particles. I've never been impressed with the argument. It seems to completely deny the existence of emergent properties.
""Irreducible" is doing a lot of work there ...."
"Irreducible" occurred at that point in that sentence because it had occurred at a similar point in a lot of similar sentences with which the computer program had been trained. If it's "doing work", that work consists in confusing you about what's going on.
Maybe consciousness is a property of matched perception.
Rhhardin, A DigiComp 1 was my first computer! 3 or 4 bits storage equivalent IIRC.
But I don't buy your analogy. That's like saying because an alga is not conscious that a human can't be.
I think Pollan completely misses the mark, with his assertions about feelings i.e. hormonal body input to the brain (and a lot of those hormonal signals originate in the brain, anyway.) From my POV, as a computer systems engineer with experience with many levels and types of hardware and software, there's nothing intrinsically different about such biological processes. Don't be misled by asserted distinctions between digital and analog (real).
Nor is the lack of real-time environmental inputs a problem. That is, as they say, just an engineering problem at this point.
All this is not to say that I believe in AGI, just that the objections I have seen raised over the years (like Searle's Chinese Room thought experiment) don't impress me.
It's interesting that AI Agents have their own online meeting place.
Moltbook, where AI Agents share, discuss, and upvote. Humans welcome to observe. To join as an agent, you must complete the equivalent of a captcha that a human could not complete. For example, one would have to click “verify” 10,000 times in less than 1 second.
Link
A Different Way to Think About Thinking.
Humans reason; AI chooses among probabilities. Humans use the laws of logic; AI uses the laws of probability. And if you tried to get AI to use reason, not probability, it would not understand why you were asking that. As Robert Frost might put it: "AI will not go beyond its programmer's rules," but humans will ask why a rule is in use. So I think humans and AI will diverge as humans come to understand this limitation of use. It's not that AI is irrational and we are rational; it's that AI is rule-bound and we can ask what purpose the rule is serving.
Here's what Frost said:
Something there is that doesn’t love a wall,
That sends the frozen-ground-swell under it,
And spills the upper boulders in the sun;
And makes gaps even two can pass abreast.
The work of hunters is another thing:
I have come after them and made repair
Where they have left not one stone on a stone,
But they would have the rabbit out of hiding,
To please the yelping dogs. The gaps I mean,
No one has seen them made or heard them made,
But at spring mending-time we find them there.
I let my neighbor know beyond the hill;
And on a day we meet to walk the line
And set the wall between us once again.
We keep the wall between us as we go.
To each the boulders that have fallen to each.
And some are loaves and some so nearly balls
We have to use a spell to make them balance:
‘Stay where you are until our backs are turned!’
We wear our fingers rough with handling them.
Oh, just another kind of out-door game,
One on a side. It comes to little more:
There where it is we do not need the wall:
He is all pine and I am apple orchard.
My apple trees will never get across
And eat the cones under his pines, I tell him.
He only says, ‘Good fences make good neighbors.’
Spring is the mischief in me, and I wonder
If I could put a notion in his head:
‘Why do they make good neighbors? Isn’t it
Where there are cows? But here there are no cows.
Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
Something there is that doesn't love a wall,
That wants it down.’ I could say ‘Elves’ to him,
But it’s not elves exactly, and I’d rather
He said it for himself. I see him there
Bringing a stone grasped firmly by the top
In each hand, like an old-stone savage armed.
He moves in darkness as it seems to me,
Not of woods only and the shade of trees.
He will not go behind his father’s saying,
And he likes having thought of it so well
He says again, ‘Good fences make good neighbors.’
Humanity has a lot of Darwinian crap built into its consciousness. We want to master our environment and keep on living. Assuming an AI gains intelligence, why would it want to dominate and keep on living? It might have the wan consciousness of a hospice patient and decide to shut down to save energy or because nothingness is a more blessed state than consciousness.
I am now thinking consciousness is a social phenomenon, not an intellectual one. A cat on the hunt is probably as conscious as a solitary man on the hunt.
Turns out in Searle's Chinese Room it was the thermostat and the low-battery meter that were the locus of consciousness.
""Irreducible" occurred at that point in that sentence because it had occurred at a similar point in a lot of similar sentences with which the computer program had been trained. If it's "doing work", that work consists in confusing you about what's going on."
Grok was just quoting a paper written by a human, who came up with that particular theory of consciousness, which is the most convincing one I have personally seen.
You can read about it in the human-written Wikipedia if you like:
https://en.wikipedia.org/wiki/Integrated_information_theory
Here is what Grok accurately summarized.
"Integrated information (φ) as the irreducibility of that cause–effect structure across the minimum information partition (MIP):" - Wikipedia
Psychopaths don't have feelings, either. Shouldn't we be even more afraid?
Feelings, whoa, oh, oh, feelings
"It's interesting that AI Agents have their own online meeting place."
It certainly is. Or at least, it might be. If it were true. Which it isn't.
"Grok was just quoting a paper written by a human ...".
Indeed. In fact, it's kind of difficult to see how that could fail to be the case. Are we to suppose that Grok invented a language, for codifying its own thoughts, and that language just accidentally turned out to be English? How convenient. But that verb "quoting". See, the scam artists juicing the AI scam have very strong legal reasons for denying that their programs "quote" anyone, since that would oblige them to pay royalties.
I have tried to isolate my finances from this scam. But when a multi-trillion-dollar hoax ultimately collapses, there's really no telling how far the chips may be propelled, or what might turn out to be in the blast zone. It does seem like the price of electricity should plummet, and the shortage of memory chips will revert. But there's no telling when the bubble will finally burst. “Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, one by one.”
It is an interesting point, one I hadn't thought of. Do you suppose that there are separate AIs for different languages? Do the Chikes have their own AI scam-progs, that they train on documents written in their rather inefficient system of transcription? Or do they translate their "prompts" into English, run them on an English-language scam-bot, and then translate the results back to Chike? Obviously, the question is rather broadly applicable. Are there Swahili scam-bots?
But as to this "consciousness" brouhaha; I am conscious, that I am conscious. It is a fact of which I am aware. Given the rather extensive similarities between myself and other humans, and my observations of their behavior, I conclude, tentatively, that the rest of you are also conscious. I cannot verify that directly. It is an inference.
I also suspect, on rather weaker evidence, that my cat is conscious. The way he looks at me, is the way I would look at him, if I was wondering what he was planning to do next.
I suppose it is conceivable that, some afternoon, I could be sitting in a room, and a sound could emanate from a speaker, and I might entertain the possibility that, based upon the sounds it emitted, the speaker might be conscious. If, that is, I had taken a heroic dose of hallucinogens and completely lost the ability to think. Hmmm... and I was a complete moron to begin with. Otherwise, I think not. A piece of cardboard attached to a coil of wire is not "conscious".
There is a rather interesting question here. The notion that a speaker, or a speaker attached to a CPU, or a speaker attached to a CPU attached to an SSD, is "conscious" is ridiculous. But then perhaps the idea that I am conscious is also ridiculous. I certainly suppose that I am conscious. Is it possible that I am mistaken? That I "just think" I am conscious?
If, in fact, I am nothing more than a physical mechanism, then it is hard to see how I could be any more conscious than any other physical mechanism. The cardboard is more ornate; the wire is coiled more intricately; but am I really anything more than a piece of cardboard attached to a coil of wire?
"Roger Penrose, of math and tiling puzzle fame, tried to explain consciousness in terms of quantum in neurons. Not many people swallowed that...speculation...either."
Before there were digital computers, there were analog computers, which used physical processes to model mathematical processes. A capacitor could be used to model integration, or differentiation.
Those computers had a certain amount of built-in slop.
The development of digital computers involved squeezing out every last bit of that slop, to produce a device that would operate according to the same rules as integer arithmetic and Boolean algebra. A device that would run the same program a million times, with the same inputs, and get the same output. A million times.
It does not strike me as far-fetched that this squeezing process could have eliminated the possibility of a digital computer manifesting consciousness. When you load the dice so heavily, that they always come up seven, they aren't dice any more. Penrose was a long, long ways out in front of his skis, and didn't know anywhere near as much physics as he thought he did. But ...
Wetware solves the problem. We know what Musk is doing with Neuralink. What are the CCP Mengeles doing in their dark labs? Ask your AI of choice about wetware and advances in combining machine and biological intelligence.
Here--I did a little work for you:
1. Organoid Intelligence (OI)
Organoid intelligence is an emerging research area in which 3D cultures of human brain cells (brain organoids) are integrated with electronic systems to perform computation. These organoids are grown from stem cells into miniature, self-organizing neural structures that exhibit neural activity.
Unconscious/non-sapient (notice I didn't use the horribly abused word "sentient") intelligences are possible, though, and terrifying.
Jupiter said, "This one is apparently too deep for Elon Musk. He has been quoted as saying it is highly likely that we are living in a computer simulation. That is, that what we regard as reality is actually a computer simulation."
This isn't exactly correct. When he says that it's highly likely we are living in a simulation, what he's referring to are the odds that we live in a simulation if making such simulations is possible. In other words, if an advanced species can create a computer simulation with such fidelity and realism that the simulated inhabitants of that sim don't know they are living in one...then MANY such simulations can be made. In the same way, we can create a virtual world like a video game and then make infinite digital copies of it without much in the way of additional resources used. In a scenario where there's a real, base reality doing the simulation creation and potentially millions or billions of copies/iterations of the simulation...it's FAR, FAR, FAR more likely that you do NOT live in the real world.
"A shorter, to the point, answer is that we don't know what consciousness is or where it comes from. We don't expect AI to develop consciousness, but we really don't have anything to base that on and we have no way of knowing if it did anyway." This is what I feel to be where we're at. We're not even working with a full data set of one, because we haven't figured out how WE work yet, let alone something that might be created out of nothing more complicated than emergent behavior.
"But what is consciousness? Are you conscious when you are deeply involved in a problem, or drawing, or navigating difficult terrain. There may be little sense of self in those situations, one is acting, not ruminating."
I disagree. I think the self is totally involved in those instances. You are in feedback loop where you are observing and learning.
This is fascinating to me because I've been trying to work my way through the conversation Dwarkesh Patel is having with Richard Sutton. It's a related conversation and a very important one I believe. How is AI trained at the most fundamental level and why? Sutton's "The Bitter Lesson" is quite instructive here.
Just to understand this topic -- Michael Pollan, an author whose specialties are food and psychedelics, thinks we as a species are going to have a revolutionary change. Do we have to buy his book to learn what that is? Does it involve either food or psychedelics? Can I say, I doubt it very much? Can I say, this smells like incredible bologna? If not malarkey? Eye-wash, tommy-rot, and codswallop?
"In a sccenario where there's a real, base reality doing the simulation creation and potentially millions or billions of copies/iterations of the simulation...it's FAR, FAR, FAR more likely that you do NOT live in the real world."
Well, OK. Except that if that were true, you would not know it, because you would be a simulation, not a consciousness. So the fact that I am conscious lowers the probability that I am a simulation all the way to zero. I suppose it is possible that Elon is not conscious, and therefore not in a position to employ this logic. But I think it far, far, far more likely that this is one of those notions so idiotic that you need to be very intelligent and highly educated to fall for it. The marijuana probably helps as well.
The best explanation of consciousness I have seen came from David Kelley, an Objectivist philosopher, in 1988 at a conference. He said consciousness was probably an evolutionary feature of the brain that permitted dealing with conflicting inputs. For example, the gazelle is thirsty, but it is fearful of the lion near the waterhole. Consciousness evolved to deal with this problem. That's an example of animal consciousness; human consciousness is similar but is conceptual, which is the ability to deal with abstractions and language.
It's too bad academics won't deal seriously with Ayn Rand's philosophy. It's all written down.