Monday, November 2, 2009

Conceptually Falli...I Mean Slipping

On page 201 Hofstadter discussed how certain associations in his brain made him slip up in what he said in certain situations. This made me think not so much of accidental slip-ups in the use of language, but purposeful ones. For instance, the use of texting lingo (lol, b/c, etc.). These "shortcuts" (which aren't really shortcuts in my eyes) can make the use of a language different and sometimes even more confusing. A person can also slip up with the texting jargon; for instance, a person once wrote "LOL" in a funeral guest book, thinking it meant "lots of love" when in fact it means "laugh out loud." So, people can mess up the meanings of made-up words too.

But for conceptual slippages with respect to a computer program, a person wouldn't want their program that reads through a paragraph to mistake paper for wood, or candy for apple. Rather, they would want it to recognize that paper comes from wood, or that there is such a thing as a candy apple. We wouldn't want a person thinking that "a Snickers a day keeps the doctor away," now would we? Even though most sixth graders would love this...

-Bryan

Con-nect-ion(-ism)

Page 175 caught my eye in chapter 4. The section being discussed dealt with "Objectivism and Traditional AI." The bottom of 175 talked about connectionist approaches to AI, and dropped some big names who work a lot with this. Pretty much, connectionist approaches are represented with neural networks, which let us model and see how the brain works. They went away for a while, but as of late they have been coming back into fashion with respect to AI.

The part talked about how these types of models are represented as vectors in a multidimensional space, where their positions are not anchored to anything, but can adjust freely to the environment they are currently in. These models have been used to show how certain genetic algorithms perform, such as the one proposed by John Henry Holland. He took notions from evolution and applied them to computers, and was able to create a population and reproduce certain traits (especially dominant ones) for many generations. He took evolution and made it so we can see it occur in a matter of seconds, versus many lifetimes.
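A toy sketch of the kind of genetic algorithm Holland pioneered (this is my own minimal illustration, not Holland's actual system; the fitness function and all parameters are made up): a population of bit strings evolves toward a simple goal through selection, crossover, and mutation, so you can watch "evolution" happen in seconds.

```python
import random

def fitness(genome):
    # Toy objective: count of 1-bits (the classic "OneMax" problem)
    return sum(genome)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, so "dominant" traits persist
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # single-point crossover
            child = a[:cut] + b[cut:]
            # Occasional mutation flips a bit
            child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

After fifty generations the best genome is far fitter than a random one would be, which is the whole point: many "lifetimes" compressed into a blink.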

A person can utilize connectionist models to show associations with other like things. Hebbian Learning can also be represented using neural networks. Since Hebbian Learning is all about associations, a person could model specific objects along with other objects. Hence, machine learning can take place along with semantic associations and the like.
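The "fire together, wire together" idea behind Hebbian Learning is simple enough to sketch; the vectors and learning rate below are made-up toy values, just to show the weight between co-active units growing:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # Hebb's rule: strengthen the connection between units that are active together
    return w + lr * np.outer(y, x)

x = np.array([1.0, 0.0])   # input unit 0 is active, unit 1 is silent
y = np.array([1.0])        # the output unit fires at the same time
w = np.zeros((1, 2))       # start with no association at all
for _ in range(5):
    w = hebbian_update(w, x, y)
# Only the weight from the co-active input has been strengthened
```

After five co-activations the weight from input 0 has grown to 0.5 while the weight from the silent input stays at zero, which is exactly the associative behavior described above.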

-Bryan

Thursday, October 22, 2009

"WELCOME"

A long long time ago, when Clinton was president:
I hear a Macintosh computer start up (that glorious stretched-out C major chord) and I get a chill down my spine. I log into my (now out-of-date) AOL e-mail and get a "Welcome Bryan" (I thought it was awesome that I could program my AOL e-mail to say my name as it started up...oh 7th grade) at the start up. I then get bored, start to mess with text-to-speech, and have my computer reassuring me that everything will be all right, and that my sister didn't really mean to say the things she said to me...I then proceed to log off and go to bed to start the whole process of being a teen all over again.

I'll stop with the stories, and get to Hofstadter and the Eliza Effect. I brought up this tale of me way back when to connect with what Hofstadter brings up on page 157, when he talks about an ATM being grateful that it received a deposit slip and very thoughtfully printing out "THANK YOU" on its screen. Now, I agree with Hofstadter that the ATM has no lifelike structure on the inside, and that people just mistakenly think that something like this is all right to say. A mistake yes, reality no. A person could defend the issue, saying that since it reacts to what a human does it has intelligence, but no sane human would ever think an ATM can think on its own. It needs some type of user input before it can get to the end result of "THANK YOU".

This is what Hofstadter was getting at when he was talking about the ELIZA project. It is not necessarily the case that the computer is "helping" a person through a hard time; it is merely reacting to what the user is inputting. This is no different than an AIM bot such as SmarterChild.

A side note: as I am typing, a red line will appear below a misspelled word. Does the computer "know" that it is misspelled and want you to correct it? No, it is just linked up to a dictionary, and when an unknown word appears (or a word that is merely close to a word in the dictionary), it will throw the red line on the page, which tells the user to correct it.
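A minimal sketch of what such a "red line" check might boil down to (the dictionary here is a made-up toy, and real spellcheckers are far fancier, but the point stands: it's lookup and similarity, not understanding):

```python
import difflib

DICTIONARY = {"the", "quick", "brown", "fox", "jumps"}  # toy word list

def check(word):
    # No "knowing" involved: just set membership plus a similarity search
    if word in DICTIONARY:
        return None                     # spelled fine, no red line
    # Unknown word: suggest the closest dictionary entry, if any
    return difflib.get_close_matches(word, DICTIONARY, n=1)

check("the")   # None -- no red line
check("quik")  # flagged, with a nearby dictionary word suggested
```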

-Bryan

Thursday, October 8, 2009

A Little Numbo Never Hurt Anyone...

For this blog we had to read pages 138 to 154. This section in the book discussed how Defays set up the architecture of Numbo, a sample run of Numbo, and how Numbo compares to other computer models. He also discusses how strict comparisons between humans and Numbo are not possible, and this is what caught my eye (p. 151).

There are three reasons why Defays does not think that comparing Numbo and humans is a great possibility. The first one states that Numbo's knowledge base is impoverished, and that major aspects of an adult's mathematical background are lacking in the Pnet. The second is that the way humans tend to approach a problem has been ignored. He gave the example of the way the bricks are laid out in a linear fashion, which causes humans to read them (left to right in this case) in one way, causing some subconscious desire to favor earlier bricks rather than later ones. Finally, the third reason was the ad hoc solutions added to the architecture of Numbo.

This interested me because in figure III-6 Defays shows two different protocols happening side by side. Now, if a person were to go up to the protocols and look at them, I don't think that they could tell the difference between the human and machine protocols on more complex problems. But I do believe that problems that only require a person to recall the answer, rather than figure it out, would easily be spotted by another person. He gives us the target 6, and bricks 3 3 17 11 22 at one point. Any human (that reads left to right) would get 6 using 3 + 3, whereas Numbo might use 17 - 11 or something else. It is as soon as we (as humans) have to start doing major computations that it becomes harder to tell the difference between human and Numbo.
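To see why a machine might land on 17 - 11 just as happily as 3 + 3, here is a brute-force sketch (nothing like Numbo's actual codelet architecture, and limited to +, -, and × for simplicity) that tries every ordering of every subset of bricks until something hits the target:

```python
from itertools import permutations
from operator import add, sub, mul

def build(nums):
    # Yield (expression, value) pairs, combining the bricks left to right
    if len(nums) == 1:
        yield str(nums[0]), nums[0]
        return
    for expr, value in build(nums[:-1]):
        for sym, op in (("+", add), ("-", sub), ("*", mul)):
            yield f"({expr} {sym} {nums[-1]})", op(value, nums[-1])

def solve(target, bricks):
    # Exhaustive search over subsets of any size -- no human-like bias
    # toward the earlier bricks, no insight, just enumeration
    for r in range(1, len(bricks) + 1):
        for perm in permutations(bricks, r):
            for expr, value in build(perm):
                if value == target:
                    return expr
    return None

solve(6, [3, 3, 17, 11, 22])
```

Which expression comes out first depends entirely on the enumeration order, not on any preference a human reader would have, which is exactly the kind of difference Defays is pointing at.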

(Would this be the case with something like language? What about Jumbo? Would we be able to tell the difference with longer versus shorter words? What about word phrases?)

-Bryan

Monday, October 5, 2009

Non-Deterministic Determinism

As soon as I saw that the section we had to read for this post dealt with non-determinism, the Koch Snowflake immediately jumped into my consciousness. If you don't know what the Koch Snowflake is, I highly suggest taking a look at it: Koch Snowflake. I won't go into a lot of detail about it, but it deals with taking something that seems very complex and non-deterministic (random), and making it more deterministic. Now, with real snowflakes this isn't the case, since a snowflake is like a human: every one is different, but has the same "structure."
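The deterministic rule behind the Koch Snowflake really is tiny, even though the result looks complex. A sketch of just the bookkeeping (segment count and perimeter, not the drawing; the function name is my own):

```python
def koch_stats(iterations, side=1.0):
    # One deterministic rule drives everything: each iteration replaces
    # every segment with four segments, each a third as long.
    segments = 3                 # start from a plain triangle
    length = 3 * side
    for _ in range(iterations):
        segments *= 4
        length *= 4 / 3          # 4 pieces at 1/3 length = 4/3 growth
    return segments, length

koch_stats(0)  # (3, 3.0) -- just the triangle
koch_stats(3)  # segment count grows as 3 * 4**n; perimeter grows without bound
```

The "random-looking" crinkliness is entirely determined, which is the contrast with real snowflakes I was getting at.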

Now that I've got that out of the way, we can go to Defays' project called Numble. On page 132, he gives a math problem consisting of a 'target' and five 'bricks.' The idea is a lot like our initial Crypto assignment, when we were given five numbers (like bricks) and were expected to find the solution (like a target). The biggest difference between the two is that in Numble a person can use one or all of the bricks to get the target, whereas in Crypto we had to utilize all of our 'bricks' to get our 'target.' Other than that they hold the same concept: trying to model how we (as intelligent beings) can take symbols and create a solution from the givens. (I use the word 'symbols' because we don't always have to use numbers, or even letters, to get a solution. It could be music, where a person starts writing a song and gets a solution by finishing it, as an example.)

Defays goes on to almost structure a (neural) network on page 136 with his Pnet. (If you don't like "neural network," you can just say a network, but since we're dealing with AI, I thought it appropriate to connect another aspect of cogsci.) Structure is important to how one models, and if something has a weak structure (like a horribly designed bridge) it will fall apart. But, if one is able to build something complex out of a simple structure, then you can go from determinism to apparent non-determinism, as with the Koch Snowflake.

(This all deals with a person using "top-down" or "bottom-up" processing. A person will seem more deterministic if they use "bottom-up" rather than "top-down," where they seem more stochastic or non-deterministic. But that's my opinion only; some may not see it this way.)

-Bryan

Thursday, October 1, 2009

More Than Meets The Eye...

When I was reading this section of the book (111-126), it dealt a lot with how Jumbo takes words and transforms them into other formations of the word. It gets back to my post on anagrams, and how "hot shots" can be perceived as "hots hots." Hofstadter talks about two different transformations in Jumbo. The first one is "entropy-preserving" and the other is "entropy-increasing." He goes on to define "entropy" as "perceived disorder." So, working by analogy, a person is able to take the latter part of each phrase and make sense of it. For e-preserving, we can assume, without reading ahead, that we are trying to keep the perceived disorder in check, and for e-increasing we can say that we want to increase or raise the perceived disorder.

"Cognition equals recognition" (119). This is the thesis that Hofstadter said we are trying to prove, and he goes on to say that there are infinite amounts of ways to write the letter 'A' but we can all recognize it as the letter 'A'. He also goes on to say that there has to be flexibility with Platonic abstraction and a mental representation. In this sense we have the ability to recognize things that we have seen before, and at some level we learned what it was. I always think of the story of the sailor on a foreign island trying to tell the residents of the island that a ship is coming towards the shore. But, since they have never seen a ship, they do not recognize it and think the sailor is crazy.

A side note, this section reminded me of the T.V. shows from my childhood, and how almost everything that I watched had something transform. Such as Transformers (shouldn't have to explain here) to Superman when Clark Kent would run into a phone booth and "transform" into Superman. I just thought that virtually everything that we are exposed to can have an aspect of transformation.

-Bryan

Wednesday, September 23, 2009

To Glom Or Not To Glom

...that is the question. What Hofstadter means by the word glom is to put two things together into some type of chunk. On page 110, Hofstadter talks about higher-level structures having their own properties, and he uses this example. He first takes the letter units "t", "h", and "e" and says that a person can make the word "the" out of these three letter units, but the higher-level structuring he is talking about takes "th" (as one glom) and attaches it to "e", so that two units make up the word instead of three. This brought up a very intriguing aspect to me, and it dealt with what Hofstadter was talking about earlier in the chapter.
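A toy sketch of glomming (the chunk inventory here is entirely hypothetical, just for illustration): adjacent letter units get greedily fused into bigger units, so "t", "h", "e" becomes two units instead of three.

```python
COMMON_GLOMS = {"th", "sh", "ch", "qu"}  # hypothetical chunk inventory

def glom(letters):
    # Greedily fuse adjacent letter units into known chunks
    units, i = [], 0
    while i < len(letters):
        pair = "".join(letters[i:i + 2])
        if pair in COMMON_GLOMS:
            units.append(pair)       # two letters become one higher-level unit
            i += 2
        else:
            units.append(letters[i])
            i += 1
    return units

glom(["t", "h", "e"])  # ["th", "e"] -- two units, not three
```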

From page 99:
"...the way in which we mentally juggle many little pieces and tentatively combine them into various bigger pieces in an attempt to come up with something novel, meaningful and strong."
This quote made me think of the idea of taking a single idea and making it into something larger (such as a thesis or a dissertation). We, as humans, do this every day when it comes to "putting the pieces together," with something as simple as being able to read a newspaper article and understand what is being said within. So, going from small scale to large scale is an important aspect; how about reversing it?

There is a quote from someone I can't recall that went along these lines: "I started out wanting to cure the world; I then focused on human anatomy only, which brought me to the brain, which has me working on memory..." So, this idea of starting big and ending small isn't a bad one, but it can be very overwhelming. This was Hofstadter's idea with the terraced scan: having almost everything laid out in front of you, but then having to sift your way through the "top" portions of these ideas to figure out what it is you are looking for. He uses the analogy of "don't judge a book by its cover." We, as humans, also do this on a day-to-day basis. Getting back to the newspaper example: why did someone choose that particular article to read? As humans, we utilize these methods every day of our lives.

-Bryan

Monday, September 21, 2009

maragan

Can you figure out the anagram from my title??? Anagrams are just one of the things that Hofstadter talked about in the preface to chapter 2. The "To Read or Toreador" section brought up some interesting points on how an everyday cognitive activity can be used in a program. Jumbo, Hofstadter's anagram-solving machine, was put up to the task of trying to figure out how to make all of the letters fit together. Hofstadter brings up the word hotshots, and some others, and how some can perceive the word as "hots hots" (say that ten times fast). Now, my point on this is: when is a machine able to decipher "hotshots" from "hots hots"?
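The letter-bookkeeping half of this is the easy part to sketch: deciding whether two strings use the same letters is a few lines, while deciding which parse a human would actually perceive is the hard part Jumbo goes after.

```python
from collections import Counter

def is_anagram(a, b):
    # Two strings are anagrams when they use the same letters the same
    # number of times, ignoring spaces and case
    def letters(s):
        return Counter(s.replace(" ", "").lower())
    return letters(a) == letters(b)

is_anagram("maragan", "anagram")     # True -- there's my title
is_anagram("hotshots", "hots hots")  # True -- same letters, different parse
```

Note that the code happily says the two parses of "hotshots" are "the same," which is exactly why this check alone can't decipher one reading from the other.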

This is what Hofstadter was getting at when he was talking about nonstandard parsing. He goes on to say that we, as humans, are rather remarkable insofar as we are not tripped up by this. I should say, most humans are good at this. To be able to take a frozen set of letters and convert it into a unified word or phrase is rather impressive. But it is one of those "things" in life that is often overlooked (such as the ability of a five-year-old to boot up a computer and actually use it). So, I'll end with this link to a pretty cool anagram unscrambler.

http://www.crossword-dictionary.com/anagram.asp

-Bryan



Wednesday, September 16, 2009

Me Two (or three)

Hofstadter brought up the "Me-Too" Phenomenon on page 75. I instantly thought of another example that (at least I think) always comes into play with the phenomenon. So, when I was a very small child, my Uncle went to me "I'm going to have a baby!" This blew me away! I was no more than eight (maybe twenty...) and he just stated (with exasperation) that he, (my Uncle) was going to have a baby. Whoa...I won't go into any biology but needless to say I was shocked. But then I figured out, (and I think you all know where I'm going with this) that he wasn't going to have a baby, my Aunt was. Thus, at some level I had to generalize the situation, to figure out what was going on. This is a perfect example on what I think Hofstadter was trying to convey. That, what we say as humans, must be interpreted in a fluid manner, or else it won't make sense. This shared essence, as he calls it, must be totally implicit (pg. 75).

How does all of this tie into our course? Well, I pondered this question for a little bit and this is what came about. When we make these types of "Me-Too" gestures, we as humans seem able to process them with little trouble, unless someone is out in left field when they are supposed to be on the pitcher's mound (I'll stop with my horrible "sports" puns). To fix these issues, we could use logic (a very general logic), and it would lay out a nice and neat form of what is being said when a person encounters the "Me-Too" Phenomenon. If we can define thought (maybe language?) in this way, we could try to clear up the generalizations that happen when the "Me-Too" Phenomenon occurs.

I'll leave with a question: what would happen with a person who had dual-personality syndrome and this "Me-Too" Phenomenon? Just something to think about... Or how about this: what if we had a program that runs parallel with itself, and it encounters the phenomenon? A lot of ambiguous problems could occur.
-Bryan

Monday, September 14, 2009

And so on...

"..."

What does this really mean? To us, it means that whatever came before in a sequence should just repeat itself throughout. For example: 1, 2, 3, 4... We are supposed to interpret this (as humans should) so that the next logical number is n + 1, where n represents the last number in the sequence. Hofstadter brings this notion up in the section entitled "On Deciphering Shorter versus Longer Messages," as well as in an earlier part of the chapter. My question is: do we as humans really know what comes next in any type of sequence, whether it be numbers or language? Could this just be another case of rule following, or another case of heuristics?
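A bare-bones sketch of the "rule following" reading of "...": a program (my own toy, not anything from the book) that assumes a constant difference and extrapolates, with no understanding at all.

```python
def continue_sequence(seq, count=3):
    # Pure rule following: assume a constant difference d and apply n + d.
    # The program "knows" nothing about the sequence's meaning.
    d = seq[1] - seq[0]
    if any(b - a != d for a, b in zip(seq, seq[1:])):
        raise ValueError("no constant-difference rule fits this sequence")
    out = list(seq)
    for _ in range(count):
        out.append(out[-1] + d)
    return out

continue_sequence([1, 2, 3, 4])  # [1, 2, 3, 4, 5, 6, 7]
```

The moment the sequence needs a subtler rule than n + d, this heuristic breaks, which is my question in a nutshell.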

Hofstadter brings up another key point on page 68, when he is talking about a "message" being too long or too short. If it is too long, we can get easily confused by trying to deal with too much information, and if it is too short, we may not have enough evidence to make any sort of conclusion on how the sequence should continue. So, Hofstadter talks about analogy-making, and how AI models of this nature are created in such a way that the "blurriness" of a sequence is taken away and certain set constraints are put in place.

I'll leave with a quote:

"The slow one now
Will later be fast
As the present now
Will later be past
The order is
Rapidly fadin'.
And the first one now
Will later be last
For the times they are a-changin'."

Bob Dylan


-Bryan

Wednesday, September 9, 2009

Musical Math

There was an interesting section in the reading we had to do for this entry. It was on pages 49 - 51 and the section was titled "Good-bye Math... Hello Music!" Okay, in the context Hofstadter was trying to convey, saying that he didn't want to get bogged down in "real musical understanding" but rather be influenced by melodic patterns seems fair for the Seek-Whence Project (Hofstadter, 1995, 50). But I would like to take a little twist on this take on music (since I have been a "musician" for over a decade now), and how it deals with math.

I'll start with the melodic sequence that he used in the book (EAEAEBEBECECEDEDEEEEEFEF). How he broke it down was using uppercase for all of the alternating E's and lowercase for the scale moving up in pitch. Here is where I would say that math comes into play with regard to music. I can take this notation of EAEAEBEB... and use numbers to represent where the notes fall in a scale. Now, some (non-musicians) might say that "numbers are never used with respect to notes, only rhythms." Well, I beg to differ. For instance, in jazz many musicians use numbers to call out notes, instead of fumbling with writing out every single note on the page. So, if we had a C chord, and someone yelled out "play the 1st, 3rd and 5th," a person would play C, then E, then G. This works universally with any chord structure (if someone yelled out "play the 1st, 3rd and 5th" of a D chord, the person would play the respective notes of that chord).

This brings me to adding math into Hofstadter's sequencing from the book. If we were to take each note and associate a number to it starting with A equaling 1, B equaling 2 and so on, it would look like this:
515152525353545455555656 =
EAEAEBEBECECEDEDEEEEEFEF
Then, you could break it down to:
(5 1-5 1) (5 2-5 2) (5 3-5 3) (5 4-5 4) (5 5-5 5) (5 6-5 6), like he does in the book. In essence, one could take an entire musical phrase and break it down into numbers, thus taking something like Hofstadter's idea of [2 n 2] and making a melodic line out of it. So, to some degree, there can be music made from numbers. This also gives rise to even more pattern recognition taking place.
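The note-to-number mapping above is mechanical enough to sketch in a few lines (A = 1 through G = 7, ignoring octaves and accidentals, which a real notation system would have to handle):

```python
SCALE = ["A", "B", "C", "D", "E", "F", "G"]  # simple letter-to-degree mapping

def notes_to_numbers(melody):
    # A = 1, B = 2, ... G = 7, so EAEAEBEB... becomes 51515252...
    return "".join(str(SCALE.index(note) + 1) for note in melody)

notes_to_numbers("EAEAEBEBECECEDEDEEEEEFEF")
# "515152525353545455555656"
```

Run the mapping the other way and you have the recipe for turning a number pattern back into a melodic line.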

-Bryan

Monday, September 7, 2009

(Not) Recognizing Our Patterns

When we wake up in the morning, we start a pattern (we sometimes call it a routine). We do our usual things, brush our teeth, shower, eat breakfast and so on. I pose this question to you, do you think we realize that we are going through patterns every second of our lives?

This then brought me to the idea of how our book, Fluid Concepts and Creative Analogies by Douglas Hofstadter, was set up. I saw patterns happening all over the place. There was a pattern when the author finishes a section and moves to the next (ends the paragraph and section, new bold heading, starts a new section, etc.). A pattern was formed. I think a main idea the author was trying to convey is that everywhere we go, and everything we do, there is some type of pattern associated with it.

An idea that the author brought up was on page 24, when he was talking about the 2121121212112... pattern, and how it creates a "child" pattern, which in turn creates another "child" pattern (all "child" means is that there is another pattern after the initial one has started). This brings us to recursion, and my favorite part of the chapter: the Fibonacci numbers (1, 1, 2, 3, 5, 8, 13...). These numbers are recursion at its finest. To find the next number, just take the previous two numbers and add them together (to find the next number after 13, add 8 + 13 = 21, etc.). So, recursion in itself is the process of backtracking to smaller patterns to figure out the entire pattern.
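The Fibonacci rule is a one-liner to state in code (written iteratively here to keep it simple, though a directly recursive version would mirror the "backtrack to smaller patterns" idea even more literally):

```python
def fib(n):
    # The nth Fibonacci number: each number is the sum of the previous two
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

[fib(i) for i in range(1, 9)]  # [1, 1, 2, 3, 5, 8, 13, 21]
```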

This idea of recursion comes into play when we (as humans) try to figure out complex ideas. We have to work at a smaller, less complex level to be able to build up to the end result. We do this with logic: start with the smallest sentence (or idea, hopefully atomic sentences) and build on the system from there. I guess the motto "aim big, start small" comes into play here, which is a very key concept if one is to figure out how very complex patterns work.

-Bryan