Teaching the Machine: Some Half-Baked Thoughts

My Thanks To …

My thanks go to Bill Genereux (Twitter: @billgx, web site: http://billgx.edublogs.org/) for prompting me to write this post.

The Prompt

Bill tweeted “Just watched it again. @mwesch’s Machine is Us/ing Us. I see something new each time. http://www.youtube.com/watch?v=6gmP4nk0EOE” which led to a wide-ranging discussion that we both agree has lots of avenues left to explore. This post is what I see as I look down those avenues.

The Half-Baked Ideas

Bio-Interfacing

What emerged towards the end of the time that we had available together was the idea of bio-interfacing the web (in whatever future form it might take) and connecting directly into the human nervous system. From an ethical viewpoint, I am neutral on that issue. But there is then the question of "Would I want to?" The answer is "Probably not." While I would be delighted to have a cochlear or retinal implant to solve some issue with my ever-aging carcase, I would be unlikely to want to have my sensory inputs directly connected in such a fashion to any external equipment on a permanent basis.

I may be out of step with other people on this feeling. As Bill says: "a while back, I tweeted about having an implant to be directly connected to the 'net. I was amazed at how many said yes."

Teaching the Machine

If we admit of a world where most people are bio-connected as above, then the phrase "teaching the machine" makes sense: we will have at least partly transformed ourselves into the Borg, and since we are all teachable as human beings, and we are all part of the same machine, we can indeed "teach the machine".

Bill also tweeted “When we program a computer, don’t we do a rudimentary form of teaching it what we want it to do?” which I found to be a great disrupter of my thinking processes, always a welcome moment as it gives me a chance to learn. My answer to that question was at the time and still is “no”, and now I have to justify my response.

While I have yet to fully think through this concept (hence calling this post “half-baked”) the question resolves to “Can you use a reward-based scenario to reinforce a particular behavior?”. If the answer is “yes”, then you have something that is teachable, otherwise you don’t. This has a number of consequences.
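To make the "reward-based scenario" concrete, here is a minimal sketch (my own illustration, not anything from the original discussion) of the kind of reward-reinforcement loop the question describes, written as an epsilon-greedy bandit. The function name, reward probabilities, and parameters are all hypothetical. Whether a loop like this counts as being "taught", or merely simulates it, is exactly the question at issue.

```python
import random

def run_bandit(reward_probs, steps=10000, epsilon=0.1, seed=42):
    """Repeatedly choose an action; reinforce those that pay off.

    reward_probs: chance each action yields a reward of 1.0.
    epsilon: fraction of the time we explore at random.
    """
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)   # times each action was chosen
    values = [0.0] * len(reward_probs)  # running estimate of each action's reward
    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: try an action at random.
            action = rng.randrange(len(reward_probs))
        else:
            # Exploit: repeat the action with the best reward so far.
            action = max(range(len(reward_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # Incremental running mean of observed rewards for this action.
        values[action] += (reward - values[action]) / counts[action]
    return counts, values

counts, values = run_bandit([0.2, 0.8])
# The more frequently rewarded action ends up chosen far more often.
```

The behaviour shifts towards whatever is rewarded, with no consciousness or purpose involved; on my reading, that is precisely why reward-shaped behaviour alone is not enough to call something "teachable".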

If something is "teachable" (as per the above) then that implies consciousness. Moreover, it implies purpose, the purpose in this case being the seeking of reward. I have yet to see a machine that has, rather than merely simulates, purpose. This points to two great thinkers in this area: Alan Turing, and Isaac Asimov.

THE TURING TEST
In essence, the Turing Test is a measure of how well a machine can make a human believe that it is also human. I have yet to see anything that even gets to first base, let alone anywhere near that benchmark.

ASIMOV’S ROBOT SERIES
Isaac Asimov poses a far more interesting question in his robot series. While the possibility of robots with consciousness must be admitted, when can we say that a robot has consciousness? I do not know the answer to that question. What is worse, I do not even know what questions I would need to ask if I were presented with such a creature and asked to assess whether it was truly conscious rather than just simulating consciousness: I lack the experiences which might allow me to formulate the appropriate questions.
