On page 201 Hofstadter discussed how certain associations in his brain caused him to slip up in what he said in certain situations. This made me think not so much of accidental slip-ups in the use of language, but of purposeful ones: for instance, the use of texting lingo (lol, b/c, etc.). These "shortcuts" (which aren't really shortcuts in my eyes) can make the use of a language different and sometimes even more confusing. A person can also slip up with texting jargon. For instance, a person once wrote "LOL" in a funeral guest book, thinking it meant "lots of love," when in fact it means "laugh out loud." So people can mess up the meanings of made-up words too.
But for conceptual slippages with respect to a computer program, a person wouldn't want their program that reads through a paragraph to mistake paper for wood, or candy for apple. Rather, they would want it to recognize that paper comes from wood, or that there is such a thing as a candy apple. We wouldn't want a program thinking that "a Snickers a day keeps the doctor away," now would we? Even though most sixth graders would love this...
-Bryan
Monday, November 2, 2009
Con-nect-ion(-ism)
Page 175 caught my eye in chapter 4. The section being discussed dealt with "Objectivism and Traditional AI." The bottom of 175 talked about connectionist approaches to AI, and dropped some big names who work a lot in this area. Connectionism is pretty much represented by neural networks, which let us model and see how the brain works. They went away for a while, but lately they have been coming back into fashion with respect to AI.
The part talked about how these types of models are represented as vectors in a multidimensional space, where their positions are not anchored to anything, but can adjust freely according to the environment they are currently in. These models have been used to show how certain genetic algorithms perform, such as the one proposed by John Henry Holland. He took notions from evolution and applied them to computers, and was able to create a population and reproduce certain traits (especially dominant ones) over many generations. He took evolution and made it so we can see it occur in a matter of seconds, versus many lifetimes.
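To make Holland's idea a little more concrete, here is a minimal genetic-algorithm sketch in Python. Everything in it (the bit-string genomes, the "count the 1s" fitness function, the population size and rates) is my own illustrative assumption, not something from the book; it just shows the select-crossover-mutate loop he pioneered.

```python
import random

# Minimal genetic-algorithm sketch in the spirit of Holland's idea:
# evolve a population of bit strings toward all 1s ("OneMax").
# Population size, rates, and the fitness function are illustrative choices.

GENOME_LEN = 20
POP_SIZE = 30
MUTATION_RATE = 0.01
GENERATIONS = 50

def fitness(genome):
    # Fitness is simply the number of 1 bits in the genome.
    return sum(genome)

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover combines traits from both parents.
    point = random.randint(1, GENOME_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Each bit flips with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(fitness(g) for g in population)
    if best == GENOME_LEN:
        print(f"perfect genome found at generation {gen}")
        break
```

Run it a few times and the "dominant" trait (the 1 bits) spreads through the population in a handful of generations, which is exactly the sped-up evolution being described above.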
A person can utilize connectionist models to show associations between like things. Hebbian Learning can also be represented using neural networks. Since Hebbian Learning is all about associations, a person could model specific objects in relation to other objects. Hence, machine learning can take place along with semantic association.
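As a rough sketch (again my own, not from the book), the Hebb rule, "neurons that fire together wire together," takes only a few lines: the weight between two units grows in proportion to the product of their activations, so units that are repeatedly co-active end up strongly associated. The patterns and learning rate below are made-up stand-ins.

```python
import numpy as np

# Minimal Hebbian-learning sketch: the weight between two units grows
# in proportion to the product of their activations ("fire together,
# wire together"). Patterns and learning rate are illustrative.

rng = np.random.default_rng(0)
n_units = 8
learning_rate = 0.1
weights = np.zeros((n_units, n_units))

# A few binary activation patterns standing in for "objects" that
# tend to occur together.
patterns = rng.integers(0, 2, size=(5, n_units))

for x in patterns:
    # Hebb rule: delta_w[i, j] = eta * x[i] * x[j]
    weights += learning_rate * np.outer(x, x)

np.fill_diagonal(weights, 0)  # no self-connections

# Units that were co-active across patterns now have strong weights,
# i.e. the network has learned their associations.
print(weights)
```

After training, reading off the strongest weights for a unit gives you its learned associations, which is the kind of object-to-object linking described above.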
-Bryan