Behavior OnLine Forums  
The gathering place for Mental Health and
Applied Behavior Science Professionals.
 
Become a charter member of Behavior OnLine.

Go Back   Behavior OnLine Forums > >

Notices

Reply
 
Thread Tools Display Modes
  #41  
Old February 23rd, 2006, 03:52 PM
Fred H. Fred H. is offline
Member
 
Join Date: Jul 2004
Posts: 483
Default Re: A Free Will Challenge

Quote:
TomJ: We have concluded no such thing. There is no free will, especially if you imply animals do not have it.
To be more consistent with your own POV, you should say that it’s just the determinism of the algorithms that happen to be imbedded in the tissue known as TomJ that have “concluded no such thing.”

Think about it. Again.
Reply With Quote
  #42  
Old February 23rd, 2006, 04:34 PM
TomJrzk TomJrzk is offline
Member
 
Join Date: Aug 2004
Location: Dallas
Posts: 257
Default Re: A Free Will Challenge

Quote:
Originally Posted by Fred H.
To be more consistent with your own POV, you should say that it’s just the determinism of the algorithms that happen to be imbedded in the tissue known as TomJ that have “concluded no such thing.”
That's beside the point but I'll answer it: Yes, my brain, weighing its memories and instincts, concluded that there is no evidence yet of the existence of free will.

Your statement that 'it has been concluded' just left out the explicit 'we'. It has not been concluded since there is still disagreement. Now, if you want to say 'Fred has concluded', I would have no problem with you stating your own POV, no matter how incorrect it may be. And I'd really appreciate it if you would go back to your post and edit out that error.
Reply With Quote
  #43  
Old February 24th, 2006, 10:34 AM
Fred H. Fred H. is offline
Member
 
Join Date: Jul 2004
Posts: 483
Default Re: Free Will Challenge & Conclusion

Quote:
TomJ: Now, if you want to say 'Fred has concluded', I would have no problem with you stating your own POV, no matter how incorrect it may be. And I'd really appreciate it if you would go back to your post and edit out that error.
But if things are as you say, then it’d not be “you” or “Tom” that’d appreciate the requested edit, because “you” or “Tom” are merely illusions generated by the deterministic algorithms, embedded in the tissue of a brain, that would “appreciate” the edit. And it’d not be “Fred” that has concluded anything, but merely the deterministic algorithms embedded in the tissue of a brain that has “concluded” whatever was concluded; and it’d not be “Fred” that goes back and edits the post, since “Fred” would merely be an illusion generated by those algorithms.

Accordingly, if things are as you say, then “we” don’t exist, feelings and conclusions are nothing more than subjective constructs, and only blindly deterministic algorithms are “choosing” and “concluding” whatever blindly deterministic algorithms “choose” and “conclude.” So relax, neither “you” nor "Fred" really exist, and this is all an illusion.
Reply With Quote
  #44  
Old February 24th, 2006, 11:01 AM
TomJrzk TomJrzk is offline
Member
 
Join Date: Aug 2004
Location: Dallas
Posts: 257
Default Re: Free Will Challenge & Conclusion

Quote:
Originally Posted by Fred H.
blindly deterministic algorithms
You're misrepresenting my views yet again. Even without free will, you are still Fred, you use your brain and make choices. For the sake of others, I will explain myself:

There is only evolution and psychology; hence the name of this forum. Our brains evolved to the point where we can consider our origins and imagine our futures. But they are still brains; they make choices by firing neurons and neurons are affected by neurotransmitters. I see no evidence for a 'soul' or 'spirit' or anything beyond the neurotransmitters.

You are still Fred. You choose to type what you type because your brain has decided what's best for you and does it. Your brain is affected in varying amounts by the environment; what you read in my typing affects what your thoughts are and plays a part in determining your typing. But my typing is deterministic; directly affected only by the current condition of my brain, which is composed of functional modules and memories. So, I was destined to type exactly what you see now. If you read this, you're destined to interpret what I typed as you will. But your decisions are affected by your memories; I'm adding some.

My hope is that something I type turns on a light bulb in Fred's brain that gives him some insight into my POV and hopefully that's for the better. My need is to make sure those that may read this in the future with open minds have something besides your ??? to affect their memories. I wanted to write something honest about what I think of your writings but that will not help this 'conversation'.

So, I decide, Fred decides, and this conversation continues... There is nothing 'blind' about it, we have the power. Obviously.
Reply With Quote
  #45  
Old February 25th, 2006, 10:53 AM
Fred H. Fred H. is offline
Member
 
Join Date: Jul 2004
Posts: 483
Default Re: Free Will Challenge & Conclusion

Quote:
TomJ: For the sake of others, I will explain myself….

You choose to type what you type because your brain has decided what's best for you and does it.
Yep, that’s essentially what I said—it’s your view that when “you choose,” it’s actually “[the algorithm(s) in] your brain [that] has decided what's best for you and does it.” And obviously any “you” is merely an illusion created by those algorithms—or as Crick might have said, you’re nothing but a pack of algorithms.

Interestingly, using your POV as the model, your brain continues to erroneously “decide” that my brain is somehow “misrepresenting” the “view” of your brain—your algorithms seem to be stuck in a loop; perhaps an SSRI would help remedy that annoying problem.
Reply With Quote
  #46  
Old February 25th, 2006, 11:53 AM
ToddStark ToddStark is offline
Member
 
Join Date: Jul 2004
Posts: 174
Arrow Re: the explanatory tactic of functionalism

Well Fred, I'm offering a different way of looking at some basic concepts, and you want to ridicule even my most basic definitions from the start. I'm obviously not making an argument, I'm trying to paint a different picture to help you see why I think differently. On metaphysical issues that's often the best we can do, I find.

That's why we find each other such unsatisfying intellectual conversation partners. We don't share the same basic values regarding what makes a good intellectual discussion. However, I am reluctant to give up mine as you are yours, so I will continue for the moment. My viewpoint is based on the ways in which I think cognitive science has continued to evolve over the decades to help us better understand the way human minds work in empirical terms.

I will try to make the previous point more explicitly.

1. If you don't bias the concept (unfairly I think) by imaginaing that it has to flow from human conscious ruminations, I find it is pretty clear that responding differently to different environmental conditions is a simple form of choice. It is at least continuous with how we think of choice. I don't say it is human or even animal choice. It is perhaps molecular choice, electronic choice, maybe choice among bugs. Think of when an insect follows a pheromone trail wherever it leads. There is intention and purpose of a sort either in its body or in an evolutionary adaptive sense, yet it is not really human style thinking. Clearly you don't equate insect choice with human choice, but can you imagine that insects have in some sense more of a choice than bacteria? That concept of choice being a continuum is central to my perspective. If you can't imagine this, I guess you are blocked from seeing my viewpoint.

2. When something can change the way it responds to different environmental conditions, we are a tiny step closer to human style choice. Machines that don't just respond to different environmental conditions differently but also learn to respond in entirely new ways are making a new kind of choice. The most sophisticated machines now in use sometimes manage this.

3. When something we designed comes up with entirely new ways of behaving that we can't link back to the design we originally gave it, then it makes sense to me to say that it can have new kinds of choices. IN this sense, human beings have very different ways of making choices than machines because their evolutionary history has given then many levels of machinery for making choices in different ways. Your concept of choice as a result of rumination is just one of these, the last in a long line of innovations, but one that you want to make all-important, and I want to make part of a larger continuum.

Essentially, I am adopting the stance favored in science which is known as functionalism, defining things in terms of what things do. That's why I feel justified in thinking of choice as things responding differently to different environmental conditions. You are defining choice partly in terms of how it feels and how human beings seem to experience it when we do it. Yes, I consider the human experience of choice to be important, just as you do, but for me it is secondary to the explanatory tactic of functionalism. That is the very fundamental starting tactic that distinguishes our viewpoints. It is the explanatory tactic (perhaps the only one) that even makes it conceivable to have a science of human mind.

I don't see how one could ever appreciate the value of the physicalist or naturalist viewpoint without at least tentatively adopting the explanatory tactic of functionalism. We don't need to assume that humans are mechanical or adopt radical behaviorism to understand current physicalism, but we do need to assume that things are best defined in functional terms rather than personal phenomenological ones. I know it's a tough pill to swallow, and that's why most people don't swallow it. And why most people can't comprehend philosophers like Dennett. And I believe also why that sort of philosopher appear to some folks to be bizarrely denying consciousness.

I can't possibly defend the warrant for functionalism, except perhaps on a historical basis, which I won't attempt here. I will just state that I think it is immensely powerful for explaining otherwise seemingly inexplicable things through specifically empirical kinds of inquiry. Whereas our subjective experience of how we make choices does not really lend itself very well to empirical inquiry. That's the reason I adopt this way of thinking about free will that goes somewhat against our intution that human beings have meaningful choice and nothing else does.

Todd
Reply With Quote
  #47  
Old February 25th, 2006, 01:12 PM
TomJrzk TomJrzk is offline
Member
 
Join Date: Aug 2004
Location: Dallas
Posts: 257
Default Re: Free Will Challenge & Conclusion

Quote:
Originally Posted by Fred H.
Yep, that’s essentially what I said—it’s your view that when “you choose,” it’s actually “[the algorithm(s) in] your brain [that] has decided what's best for you and does it.” And obviously any “you” is merely an illusion created by those algorithms—or as Crick might have said, you’re nothing but a pack of algorithms.

Interestingly, using your POV as the model, your brain continues to erroneously “decide” that my brain is somehow “misrepresenting” the “view” of your brain—your algorithms seem to be stuck in a loop; perhaps an SSRI would help remedy that annoying problem.
Wow, you're on a roll!!! With this and your post on the other thread, we may have made some headway!

My only disagreement with your representation of my views was the word 'blindly'; I'm sorry I didn't make that clearer and I'm so glad that you left it off this time, I'm out of our loop now. There's nothing blind about it. The majority of our eyes are open and our brains allow in much of what is needed to make 'good' decisions (see the thread "The Repressor Module: EP's Holy Grail" for evidence that the brain does not allow everything in). We do think and decide; I just don't see any evidence for a 'soul' or 'spirit' that adds anything else.
Reply With Quote
  #48  
Old February 25th, 2006, 01:38 PM
Fred H. Fred H. is offline
Member
 
Join Date: Jul 2004
Posts: 483
Default Re: A Free Will Challenge

Quote:
Todd: Essentially, I am adopting the stance favored in science which is known as functionalism, defining things in terms of what things do.
Yeah, I know, you’re essentially a “strong AI” guy. Here’s something from Wikipedia:
Quote:
The strong AI vs. weak AI debate is still a hot topic amongst AI philosophers. This involves philosophy of mind and the mind-body problem. Most notably Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness can not be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of Functionalism. In many strong AI supporters’ opinion, artificial consciousness is considered as the holy grail of artificial intelligence.
Functionalism holds that the exact biological structures of the mind need not be the same, just so long as the same “process” is achieved, i.e. algorithms. IOW, just copy and past Todd’s algorithms from Todd’s brain to say a Cray, and voilà !—the Cray is Todd too . . . or should that be Todd II? And additionally, you’ll essentially have achieved immortality! Oh happy day.

As I’ve previously noted, it’s essentially your view that human consciousness is the result of algorithms, algorithms that somehow manage to avoid that pesky halting problem, and/or Godel’s incompleteness theorem. Fine.

I’ve looked at and considered this stuff at some length, and I’ve concluded that AI is a pipe dream. Nevertheless, I’ll admit that AI makes great science fiction—From “2001: A Space Odyssey”—
Quote:
HAL 9000 computer:
Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.

I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.

I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid….
Reply With Quote
  #49  
Old February 28th, 2006, 11:15 AM
TomJrzk TomJrzk is offline
Member
 
Join Date: Aug 2004
Location: Dallas
Posts: 257
Default Re: A Free Will Challenge

Quote:
Originally Posted by Fred H.
I’ve looked at and considered this stuff at some length, and I’ve concluded that AI is a pipe dream.
I've concluded that a machine that can mimic your neurons and neurotransmitters and store all of your memories and instincts would be just as conscious as you. If there's anything else, I have yet to see evidence of it and the null case is still valid, Occam's Razor again.
Reply With Quote
  #50  
Old March 2nd, 2006, 12:32 PM
ToddStark ToddStark is offline
Member
 
Join Date: Jul 2004
Posts: 174
Default Functionalism does not mean "strong AI"

Hi Fred,

I am not "strong AI" at all. I don't say that brains are computers (and certainly not that they are fundamentally rule-based symbol processors!). I say rather that functionalism is the only way to understand a mind scientifically. It may or may not succeed, I suppose, but to me it seems probably the only way to get around the problems with non-embodied models of minds.

In cognitive science, there are basically two schools of thought: the tradition of rule-based systems (which is where "strong AI" arises), and the tradition of connectionist networks.

My view, just so you understand, is that neither of them captures the human mind, but that together they get as close as we can currently get. The most plausible approach I have seen so far is that of Gary Marcus, who endorses the idea that sufficiently sophisticated connectionist networks can be used to emulate rule-based symbol processors. I suggest that this is not too terribly far from Dennett's suggestion that a brain can emulate a "semantic engine" but not quite become one. I don't say that such a thing is a mind, but it is probably as close as we can imagine a design for one with current theory.

If I understand your argument, you are arguing that because I endorse functionalism, I am also a strong AI proponent, and that strong AI is a weak position. As I understand it, strong AI is essentially a rule-based approach to the mind. I used to program in LISP in college, so I have a rudimentary understanding of what they were trying to do in early cog sci by using rules to emulate human thinking. The dream was once that "expert systems" could perform as well as human minds, so long as we get the rules right.

Ok, my response is that I agree with you that strong AI is not a very supportable position, for a number of good reasons. However, strong AI is no longer really current cognitive science either.

We can build some killer chess computers, but even those are usually not entirely rule-based systems. The technology has changed to be more like a human brain, in the sense of using connectionist networks and selection algorithms rather than just rules. So although in trying to argue against functionalist you might want to argue that computers are all rule-based, and that all of cog sci is "strong AI," it truly isn't accurate. The technology has become more similar to a biological brain in some ways, so that distinction is becoming harder to maintain.

I don't endorse strong AI, nor do I agree with you that functionalist approaches to the mind are all equivalent to strong AI and its rule processing.

We agree that there is no computer model of a human mind that is adeuqate in all ways, however, I differ in that I think it is possible in principle to get closer to equivalence (and probably not by relying solely on rule or symbol processing alone).

If you should want to pursue this, Gary Marcus has an excellent book on the differences between rule-based and connectionist models of the mind which would help you see why "strong AI" is a strawman argument.

I suspect that your argument is not really about strong AI, however, but that a human mind is not a machine, and so presumably cannot be represented in functional terms. I won't argue that, because as I said, it is an explanatory tactic not a metaphysical commitment. I don't claim that thinks *are* nothing but what they do, I claim that often the only way to make a scientific theory of them is to see them that way.

kind regards,

Todd
Reply With Quote
Reply

Thread Tools
Display Modes

Posting Rules
You may not post new threads
You may not post replies
You may not post attachments
You may not edit your posts

BB code is On
Smilies are On
[IMG] code is On
HTML code is Off

Forum Jump


All times are GMT -4. The time now is 04:38 PM.


Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2024, vBulletin Solutions Inc.
Copyright © 1995-2023 Liviant Internet LLC. All rights reserved.