Source Code for Linux Supercomputer Artificial Intelligence
in English and in German
and (JavaScript) in Russian


1. System Diagram of the Robot AI Mind with User Manual and AI Mind FAQ

  /^^^^^^^^^^^\  How A Mind Generates A Thought   /^^^^^^^^^^^\
 /    EYE      \ CONCEPTS                        /    EAR      \
|   _______     |   | | |    __________         |               |
|  / cat   \!!!!|!!!|!| |   /          \        |               |
| / image   \---|---|-+ |  (  Sentence  )-------|-------------\ |
| \ recog   /   |   |c| |   \__________/        |             | |
|  \_______/    |   |a| |      |   \  ______    |   auditory  | |
| recognition   |   |t| |      |    \/ Verb \   |             | |
| of a cat      |   |s|e|      |    ( Phrase )  |   memory    | |
| initiates     |   | |a|   ___V__  /\______/   |             | |
| spreading     |  f| |t|  / Noun \/    |       |   channel   | |
| activation    |  i| | | ( Phrase )    |       |   ________  | |
|   _______     |  s| | |  \______/    _V_____  |  /        \ | |
|  / new   \    |  h|_|_|      |      /English\ | /  "cats"  \| |
| / percept \   |  /     \   __V____  \ Verbs /-|-\  "eat"   /  |
| \ engram  /---|--\ Psi /--/ Nouns \  \_____/  |  \ "fish" /   |
|  \_______/    |   \___/   \_______/-----------|---\______/    |
The Mind diagrammed above has been successfully implemented in software
since the 7 June 2006 AI breakthrough as announced at the following site:
  • http://www.mail-archive.com/agi@v2.listbox.com/msg03034.html
  • and as taught in the Wikipedia-based free AI textbook.

    Mind.Forth may move up from 32-bit Win32Forth to 32/64 iForth v4.0
    to run on 64-bit Linux machines, perhaps using the
    http://en.wikipedia.org/wiki/VIA_Nano 64-bit CPU chip,
    and perhaps also using the "fJACK" sound software based on
    http://jackaudio.org technology. For the biggest-ever list of
    Mind.Forth mind-modules and their on-line documentation, see
    http://cyborg.blogspot.com in the right sidebar Links column.

    View the complete set of diagrams of the Mentifex AI theory of mind.


    2. AITree Cognitive Architecture


    AdminisTrivia BeVerb HelpWanted InFerence JavaScript KbSearch KbTraversal MentifexBashing MileStones MindGrid

    Seed AI is an actual type of primitive artificial intelligence
    capable of germinating into a vast fan-out of evolving AI Minds.
    At first human beings need to improve the AI Mind software spawned
    by such Seed AI specimens as MindForth and the JavaScript AiMind.html
    tutorial AI, but recursive self-improvement will eventually lead to an
    exponential spiral of intellectual fitness culminating in SuperIntelligence.
    See the Mentifex AGI RoadMap.


    3. Mentifex State of the Art


    MindForth Programming Journal (MFPJ)


    Saturday, December 12, 2009

    dec12mfpj

    The MindForth Programming Journal (MFPJ) is both a tool in
    developing MindForth open-source artificial intelligence (AI)
    and an archival record of the history of how the AI Forthmind
    evolved over time.


    Sat.12.DEC.2009 -- Basket of Problem Behaviors

    With each new MindForth AI coding session, we may reevaluate our
    list of salient bugs and issues to work on, importing the old list
    and passing it on for the next coding session.
  • SpreadAct needs a more general search-find-exit coding than "zone".
  • Obsolete EgoAct module needs removing with rollback of associata.
  • Mechanism for detection of duplicate thought needs removing.
  • BeVerb requires too strict a word order to function.
  • EnArticle kicks in inappropriately with proper name ANDRU.
  • num(ber) of "IS" gets falsely changed from "1" to "2".
  • Entry of "WE" does not convey idea of "YOU AND I".
  • Create EnPronoun to say "I" instead of "ANDRU"?
  • BeVerb supplies wrong form regardless of subject noun number.
  • I YOU THEY are functioning but not HE SHE IT WE.
  • EnArticle needs way to insert "AN" before a vowel.
  • KbTraversal should activate I; YOU; ROBOTS; [new/old concept].
  • AI often says "ME" when it should say "I".
  • Need way to trigger statement "I DO NOT KNOW".

    Sat.12.DEC.2009 -- Towards Creation of Classic AI Software

    With MindForth we are trying to create a classic specimen of AI software
    that will be studied and taken apart for years to come, both for insight
    and for intellectual mastery.
    The program "Eliza" was such a piece of classic AI software, but it was
    nowhere near to being as complex and intricate as MindForth. The classic
    program "Shrdlu" was complex and sophisticated, but it did not "catch on"
    and serve as a fan-out point for AI evolution, as we expect MindForth to
    serve. We want MindForth to be the first True AI and to be acknowledged
    as such. However, we realize that, if MindForth "catches on" enough to
    be ported into more popular and more prevalent languages than Forth,
    it will soon be eclipsed by AI Minds coded in the other languages.
    We want the version of MindForth just before it is eclipsed to be
    classically excellent software in such ways as being thoroughly
    documented; being optimized for functionality and for clarity;
    being lean and trim without left-over "Junk DNA" code that serves
    no useful purpose; having meaningful and deglobalized variables;
    and being as robust, bug-free and bulletproof as possible.


    Sat.12.DEC.2009 -- Improving upon the SpreadAct "zone" mechanism

    For a long time we were happy to use the "zone" variable in
    SpreadAct because it worked well enough with short English words
    that we could observe the overall functionality of SpreadAct.
    We were sacrificing, however, the ability of SpreadAct to
    find and operate on words longer than the value of "zone".
    Therefore we would like now to replace the "zone" mechanism
    with a search for a delimiting blank space instead.
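    The idea can be sketched outside of Forth. In the minimal Python
    illustration below, the function name and the flat-array layout are
    invented for the sketch and are not the actual MindForth code:

```python
# Hypothetical sketch, not the actual Forth: find a word's extent in a
# flat auditory array by scanning for the delimiting blank space,
# instead of assuming a fixed-width "zone".
def word_span(aud, start):
    """Return (start, end) indices of the word beginning at `start`,
    ending at the first blank-space delimiter (or end of array)."""
    end = start
    while end < len(aud) and aud[end] != ' ':
        end += 1
    return start, end

aud = list("cats eat fish ")
start, end = word_span(aud, 5)        # word engram beginning at t=5
print(''.join(aud[start:end]))        # -> eat
```

    Because the scan stops only at a blank, arbitrarily long words are
    found correctly, which the fixed "zone" width could not guarantee.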

    Uh-oh. It looks as though SpreadAct may be performing its searches
    within a requirement for being in Tutorial or Diagnostic mode.
    No, on closer inspection, that problem does not seem to be the case.
    We did have to put more order into the indenting of the code.

    The "zone" is just an index time-point, designating the approximate
    area in the Psi array where an amplificand concept has been found,
    and where a "pre" or "seq" concept may be nearby. As the AI Mind
    grows more and more complex, we may have to expand the "peri-zonal"
    search area to include Psi concept points rather widely separated
    in terms of time "t" because of extra English words in the auditory
    array. We could allow quite a bit of extra "t" spacing, while also
    taking out some error insurance by either bypassing non-SVO concepts,
    or perhaps by increasing the search-span to accommodate non-SVO
    items, if it is possible to change a loop already in progress.

    Oh, no. We typed in, "i need transparency" and the AI assigned
    psi concept #3 (usually meant for "ANY") to "transparency".
    Somehow the letters "a - n - y" must be accumulating the
    serial activation that attempts to recognize the word "ANY".

    Anyway, if we let SpreadAct operate over a longish stretch from
    each "zone" point in time "t", say, 32 to 64 spaces long, then
    we may catch (include) any "seq" word even separated from the
    zone point by the intervention of an article and an adverb and
    an adjective or two. So perhaps we should not dispense with "zone",
    but rather we should expand our search area away from "zone".
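    As a rough Python sketch of that widened search (the part-of-speech
    tags and the 64-slot span are illustrative assumptions, not MindForth
    internals), the scan simply skips non-SVO items until it finds a noun
    or verb:

```python
# Illustrative sketch: from a "zone" time-point, scan forward up to
# `span` slots and return the first SVO concept, skipping intervening
# articles, adverbs and adjectives.
SVO_POS = {'noun', 'verb'}

def find_seq(psi, zone, span=64):
    """Return (t, word) of the first SVO concept after the zone point,
    within `span` slots, or None if nothing qualifies."""
    for t in range(zone + 1, min(zone + 1 + span, len(psi))):
        word, pos = psi[t]
        if pos in SVO_POS:
            return t, word
    return None

psi = [('cats', 'noun'), ('quickly', 'adverb'), ('the', 'article'),
       ('big', 'adjective'), ('eat', 'verb'), ('fish', 'noun')]
print(find_seq(psi, 0))               # -> (4, 'eat')
```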

    Now that we have made minor adjustments to SpreadAct, we may wait
    for even greater new functionality before we upload new MindForth code.
    We may want the ability to make a "dunno" answer to questions on the
    order of, "What do cats eat?" We would need to go beyond the "isflag"
    and the "areflag" and have a "doflag" and maybe a "doesflag". Then it
    will be a natural progression to deal not only with "what is?" questions
    but also with "what do?" questions.


    Sun.13.DEC.2009 -- "What-do" Questions and "Dunno" Response

    Whereas with the "what-is" and "what-are" questions we could look
    for too low an activation in the VerbPhrase module, as a trigger
    for having the AI say "I DO NOT KNOW", for "what-do" and
    "what-does" questions we will not only have to look in the
    NounPhrase module, but also in the circumstance of a search for
    a direct object. Then, if the NounPhrase module does not find
    a sufficiently active candidate for a direct object, we can
    have the SelfRef module issue an "I DO NOT KNOW" statement.
    For instance, for the question "What do monsters eat?", the AI
    might know the word "monsters" and the verb "eat" but not any
    correct answer to the question.
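    In a Python-flavored sketch (the activation threshold and the names
    here are assumptions for illustration, not the Forth code), the
    direct-object decision amounts to:

```python
# If no candidate direct object is sufficiently active, route to a
# SelfRef-style "I DO NOT KNOW" answer; otherwise let the most active
# candidate be spoken.
def answer_what_do(noun_acts, threshold=9):
    """noun_acts: dict mapping candidate direct objects to activations."""
    candidates = {w: a for w, a in noun_acts.items() if a > threshold}
    if not candidates:
        return 'I DO NOT KNOW'                  # SelfRef path
    return max(candidates, key=candidates.get)  # NounPhrase path

print(answer_what_do({'ROBOTS': 10, 'FISH': 1}))  # -> ROBOTS
print(answer_what_do({'FISH': 1}))                # -> I DO NOT KNOW
```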

    When a "What do...?" question comes in, there can apparently be
    interference among activation-levels from concepts currently
    under discussion, so we may try issuing some calls to PsiDecay
    as soon as an incoming question sets a "whatflag". This suggestion
    is something that most likely only an experienced AI-mind-coder
    would come up with, having learned various tricks of the trade.

    Now the question comes up of where to install the handling
    of the "whatflag" and the "doflag", and the calls to PsiDecay.
    We would like to forestall the generation of a
    normal thought and instead let activation spread from the
    main subject-noun of the enquiry to the verb of the enquiry,
    with a chance for slosh-over to a known direct-object, or
    with a re-routing to SelfRef if no sufficiently active
    direct object presents itself.

    One way to do the trick would be
    to issue a call to PsiDamp after a what-do sequence, so that
    the incoming subject-noun and the incoming verb could cause
    a slosh-over of activation to any correct direct-object.
    If there is indeed a highly-activated direct-object, the software
    will simply generate a normal sentence. But if there is not
    a valid direct-object, EnCog or some other module can re-route
    to the SelfRef module for an "I DO NOT KNOW" response.
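    The interplay of the two suppressors might be sketched as follows;
    the residuum of sixteen and the one-unit decay step are illustrative
    values, not necessarily those in the Forth code:

```python
# PsiDamp knocks one cresting concept down to a low residuum, while
# PsiDecay trims every conceptual activation a little, clearing the
# background noise before slosh-over singles out a direct object.
RESIDUUM = 16

def psi_damp(acts, concept):
    acts[concept] = min(acts[concept], RESIDUUM)

def psi_decay(acts, amount=1):
    for c in acts:
        acts[c] = max(0, acts[c] - amount)

acts = {'KIDS': 40, 'MAKE': 52, 'ROBOTS': 10}
psi_damp(acts, 'KIDS')                # KIDS drops from 40 to 16
psi_decay(acts)
print(acts)                           # -> {'KIDS': 15, 'MAKE': 51, 'ROBOTS': 9}
```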


    Fri.18.DEC.2009 -- Removing Superfluous Variables

    The porting of mind-modules from the 13dec09A.F Win32 Mind into the
    iForth 17dec09A.frt gave us the opportunity to include only variables
    demanded by iForth when we tried to run the Supercomputer AI. Today
    we have gone back into the Win32 Mind and commented out some variables
    which were not ported into mind.frt and which do not seem to serve
    any useful purpose.

    \ variable back ( replaces "bulge" for "pre" in SpreadAct )
    \ variable decpsi1 ( decremented concept 1 for de-activation )
    \ variable decpsi2 ( decremented concept 2 avoids repetition )
    \ variable decpsi3 ( decremented concept 3 tracks recent psi )
    \ variable firstword  ( So "DO" query triggers kbSearch )
    \ variable jdex   ( Testing a Reify subordinate loop index )
    \ variable psi6 ( temporary tutorial enx for VerbPhrase use )
    \ variable thot1    ( 22jan2008 for detecting repetitions )
    \ variable thot2    ( 22jan2008 for detecting repetitions )
    \ variable thot3    ( 22jan2008 for detecting repetitions )
    \ variable thotcyc  ( for seeking repetition in a cycle )
    \ variable thotnum  ( a numeric concatenation of psi numbers)
    \ variable txen ( Reify: time of transfer to English lexicon)
    \ variable ultpho  ( 17may2009   )
    \ variable version 20090525 version !  ( for troubleshooting)
    \ variable xthe 0 xthe !  ( Xfer NPhr motjuste to EnArticle )
    
    We may publish the AI code at least one time with the obsolete
    variables commented out, before we actually delete them. Thus
    we will provide a gradual, non-abrupt record of our actions.


    Fri.18.DEC.2009 -- General Situation

    Although in 13dec09A.F our treatment of answering "what-do"
    questions turned out to be unsatisfactory, we are rather confident
    that we can get "I DO NOT KNOW" responses to a question like
    "What do monsters eat?" based on having the subject and verb
    create the same slosh-over effect that they would create during
    the internal generation of a sentence. Failure to create a
    slosh-over effect would trigger the NOT-KNOW response, because
    the AI does not know the information that would have been found
    by the slosh-over mechanism. We can also count on using suppressors
    like PsiDamp or PsiDecay to zero out the background noise when a
    "what-do" question comes in. We may need, however, to modify the
    ReActivate mechanism to have it incorporate the slosh-over effect,
    so that the presence or absence of slosh-over may trigger or
    forgo the NOT-KNOW response.

    In the iForth AI on the Netbook computer, we need to devise code
    that will indicate that the AI is busy thinking after the entry of
    human user input. Whereas the Win32 Mind shows a jittery prompt
    where the user is supposed to enter a sentence, the iForth AI does
    not jitter and therefore gives no indication of activity. We may have
    to create code that displays a lengthening series of dots or some such
    when the AI has generated a thought and is waiting for human input.
    When we were first developing the Win32 Mind, the jitter of the
    prompt was not at all appealing to us. Now in iForth we have a
    chance to create an august interactive scenario much more worthy
    of a supercomputer intelligence. We do not want a massively
    inert screen, but we also do not want a jittery phenomenon.
    We could even alternate, and show, say, a question mark for
    one count to a thousand, and then either something else or
    nothing for an intervening count to a thousand.
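    One way to sketch that calmer indicator (the counts and characters
    are arbitrary choices, not a settled design):

```python
import itertools

# Alternate a question-mark prompt with blank space on a slow, steady
# cycle -- an indication of activity without a jittery phenomenon.
def prompt_frames(cycle=1000):
    """Yield '?' during one count-to-`cycle`, a blank during the next."""
    for n in itertools.count():
        yield '?' if (n // cycle) % 2 == 0 else ' '

frames = prompt_frames(cycle=2)
print([next(frames) for _ in range(6)])  # -> ['?', '?', ' ', ' ', '?', '?']
```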

    Supercomputer:
    Homo sapiens:


    Sun.20.DEC.2009 -- Improving the Comprehension Algorithm

    Back in the Win32 Mind, we may have devised a conceptual solution
    to the "what-do" problem. In the current state of our code, we get
    different answers when we type in:

    Human: what do kids make
    Robot: THE KIDS MAKE
    NPhr: candidate d.o. & act = 39 34 THE ROBOTS

    Robot: THE KIDS MAKE THE ROBOTS
    and
    Human: what do kids think
    Robot: I DO NOT KNOW
    Currently the EnCog software checks noun-activations for an activation
    that is positive (above unitary one) but below (arbitrarily) thirteen,
    and uses a "dunno" flag to call SelfRef if any noun-activation is
    outside of the positive 1-to-12 range. As we discuss in fp091212.html,
    if it encounters a somewhat active direct-object noun, the software
    will simply generate a sentence of declarative knowledge instead
    of making a NOT-KNOW statement. However, there were immediately
    problems with this clumsy algorithm. Although it worked with an
    initial what-do question, it tended not to work in the case of
    subsequent what-do questions. It also does not seem right to have
    the lofty, grandiose EnCog module stooping down low to look into
    noun activations. Such low-level work should be the function of a
    lower-level mind-module. Accordingly, we have been thinking of
    shifting the what-do response mechanism into the areas of code
    associated with the ReActivate module. We would like to alter the
    current treatment-of-input arrangements in such a way that
    subject-verb-object "slosh-over" of activation shall occur
    already during input-sentence comprehension and not have to wait
    for the actual generation of an output thought. In other words,
    comprehension should become more dynamically active, so as
    immediately to provide materials for rapidfire human-computer
    dialog. If the user asks, "What do monsters eat?", the AI should
    be capable of blurting out "DRAGONS" or whatever as an answer,
    or "I DO NOT KNOW" if no direct object for "monsters eat" presents
    itself. The very word-order of the query "What do monsters eat?",
    if we disregard any activational aspects of the "what do" lead-in,
    is perfectly primed to set activations going that will flush out
    either a sloshed-over direct object or a trigger enabling the
    NOT-KNOW response. This looming change in the comprehension
    algorithm promises to be a major advance in mentifex-class AI Minds.
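    A hedged Python sketch of comprehension-time slosh-over follows; the
    S-V-to-O knowledge-base format, spike size and threshold are all
    invented for illustration:

```python
# By the time the question has been read in, either a direct object is
# already hot (blurt it out) or the NOT-KNOW answer is warranted --
# no separate generation pass is needed to decide.
KB = {('MONSTERS', 'EAT'): 'DRAGONS'}          # learned S-V -> O links

def comprehend(words, spike=10, threshold=9):
    acts = {}
    subj = verb = None
    for w in words:
        if subj is None:
            subj = w
        elif verb is None:
            verb = w
            obj = KB.get((subj, verb))
            if obj:                            # slosh-over during input
                acts[obj] = acts.get(obj, 0) + spike
    hot = [w for w, a in acts.items() if a > threshold]
    return hot[0] if hot else 'I DO NOT KNOW'

print(comprehend(['MONSTERS', 'EAT']))         # -> DRAGONS
print(comprehend(['MONSTERS', 'THINK']))       # -> I DO NOT KNOW
```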


    Sun.20.DEC.2009 -- How Comprehension Works in MindForth

    When a sentence like "What do dogs eat?" comes in, we are concerned
    mainly with how the words "dogs" and "eat" are handled. So let us
    run MindForth to see. Actually, we ask "What do kids make?" because
    we want to use words that are already known to the AI in its
    EnBoot sequence. If we avoid pressing [Enter] and we simply hit
    the space-bar one time before stopping the AI with the [Escape] key,
    we run a .psi report and we observe that KIDS is left with an
    activation of 16 and MAKE with an activation of 52. Further back,
    in the EnBoot, ROBOTS has an activation of 10, so there has been
    some slosh-over. OldConcept calls ReActivate which calls SpreadAct.

    One technique of response to "what-do" questions immediately presents
    itself. Oh gee, this idea is so simple, why did we not think of it
    earlier? We already have the beginning of a "what-do" flag apparatus.
    Conditional logic lets us ascertain that first "what" has come in,
    and then either "do" or "does". But we should not be working within
    the NounPhrase module. We are not generating a sentence; we are
    comprehending a sentence. We should be working down in the
    modules below the level of the SensoryInput module.

    At least the whatflag is already down there in the InStantiate
    module. So are the doflag and the doesflag. And it is certainly
    okay to have the SelfRef module deal with such flags, because
    SelfRef is going to say "I DO NOT KNOW", if warranted. But SelfRef
    should not be looking into generative NounPhrase for its cue to
    decide whether it knows or does not know the answer to the query.

    It should be a very simple process whereby either SelfRef or EnCog
    answers a what-do query. Once the what-and-do flags have been set
    to positive, a noun coming in should be allowed to pass activation
    to a verb, but afterwards the activation of the noun should be
    set to zero or thereabouts. Then a verb coming in should be allowed
    to pass slosh-over activation to a direct object, after which the
    verb should have its activation set to zero or thereabouts. Then
    EnCog should either have SelfRef say "I DO NOT KNOW", or EnCog
    should grab the active direct object and blurt it out as an
    answer to the query. If necessary, it is okay for EnCog to call
    NounPhrase to blurt out the word as an answer. It may even work
    to have EnCog call VerbPhrase if the question is something like,
    "What do robots do?". In that way, the same algorithm may work
    to answer questions about both nouns and verb-actions.
    In other words, the algorithm may be more universal.
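    The flag-gated flow just described can be condensed into a Python
    sketch; the knowledge-base format, the activation values and the
    function name are all assumptions for illustration:

```python
# Once the what-and-do flags are set: the incoming noun passes its
# activation to the verb and is zeroed; the verb sloshes activation
# over to any known direct object and is zeroed in turn; EnCog then
# either answers with the hot object or routes to SelfRef.
def process_query(subject, verb, kb, acts):
    acts[subject] = 30
    acts[verb] = acts.get(verb, 0) + acts[subject]  # noun -> verb
    acts[subject] = 0                               # zero the noun
    obj = kb.get((subject, verb))
    if obj:
        acts[obj] = acts.get(obj, 0) + 10           # slosh-over
    acts[verb] = 0                                  # zero the verb
    return obj if obj else 'I DO NOT KNOW'          # EnCog or SelfRef

kb = {('ROBOTS', 'NEED'): 'ME'}
acts = {}
print(process_query('ROBOTS', 'NEED', kb, acts))    # -> ME
print(process_query('ROBOTS', 'THINK', kb, acts))   # -> I DO NOT KNOW
```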

    Inside InStantiate it is easy to trap for a "doflag" and for
    a noun part-of-speech (pos), but we want to let an incoming
    query-noun first get to SpreadAct before we de-activate nouns.
    SpreadAct is not called from InStantiate. It is called from
    higher up, from OldConcept calling ReActivate. So in OldConcept
    we set up a conditional call to a new NounClear module.

    Before we code further, we had better go into EnCog and
    comment out our previous arrangements involving what-do queries.

    Inside InStantiate, it does not do much good to call NounClear,
    because program control shifts back up to OldConcept, which calls
    ReActivate for the recent concept.


    Mon.21.DEC.2009 -- Debugging

    Somehow, when we run the current MindForth a second time,
    the EnBoot "pos" values get changed. Such a thing is not
    supposed to happen. And it happens immediately when we enter
    "MainLoop" to start the AI a second time. All the "pos" values
    get changed. We know that MainLoop is running EnBoot on the
    second time around, because it is unavoidable. Let us therefore
    go into EnBoot and put in some diagnostic declarations. Uh-oh,
    the HCI screen erases the messages. And the EnBoot "pos" values
    change on the second time around even when we use the old
    24may09A.F version of MindForth instead of the 20dec09A.F version.
    Let us see if the "En" array is being affected. No, only the
    Psi array is being affected, not the "En" array. Hmm, the problem
    went away when we commented out the following code near the end
    of InStantiate.

    \ ordo @ 1 > IF
    \   psi @ seq !
    \   vault @  t @ 2 -  DO
    \     I 1 psi{ @  0 > IF
    \       seq @ I 6 psi{ !
    \       LEAVE
    \     THEN
    \   -1 +LOOP
    \ THEN
    \ 0 seq !
    
    Then we put the above code back in without commenting it out,
    but we tested "ordo" values after each run. The "ordo" variable
    was not being reset, so we made it be reset to zero when the
    Escape key was pressed to quit the AI. Again the pos-change
    problem went away. Of course, there may be some other mischief
    that the wrong "ordo" value is causing.

    Anyway, now we can go back to testing our proposed new
    comprehension algorithm. The changing-pos bug was preventing
    noun activations from being reduced to zero by the "doflag".

    We are trying to use NounClear to wipe out noun activations
    just before we enter the verb in "What do kids make?"
    After the entry of "kids", noun activations are indeed
    going to zero, but as soon as we type in the initial "m"
    of the verb "make", somehow the activation on "kids" is
    going back up to at least fifteen (15), as shown by a
    .psi report. We suspect the NounAct module. Let us investigate.

    Now in a later coding session we try running our input-test in
    diagnostic mode, and immediately we get a glimmer of insight.

    SprdAct adds 9 to 73 9 (lim = 63) for t=274 MAKE engram; in sprA spike = 9
    ReActivate adding 16 to 72 at 267
    SpreadAct has been called.
     sprdAct: caller & seq = 148 0
    Near end of OldCept: doflag & pos = 1 5
    OldCept: doflag & pos = 1 5
    OldCept calling NounClear
    NounClear now resets nouns to zero act. m
        PsiDamp called for urpsi = 72  by module ID #104 AudInput
          PsiDecay called to reduce all conceptual activations.
    ake
    OldConcept has been called.
    InStantiate adding 36 to 73
      from OldConcept
    Reactivate has been called.
        Calling ReActivate. psi = 73
    
    Even when we type in just an extra blank space-bar instead of
    "make", diagnostic mode reveals to us a call to PsiDamp.
    Near end of OldCept: doflag & pos = 1 5
    OldCept: doflag & pos = 1 5
    OldCept calling NounClear
    NounClear now resets nouns to zero act.
        PsiDamp called for urpsi = 72  by module ID #104 AudInput
          PsiDecay called to reduce all conceptual activations.
    
    And PsiDamp sets a "residuum" of sixteen (16) on the word being
    psi-damped. Now, perhaps we will eventually still want the call
    to PsiDamp to be there, but right now we are trying to wipe out
    all noun-activations after the subject-noun goes in, so that
    the combined activation from entered noun and entered verb will
    slosh over onto any available direct-object noun in the KB.

    Ah, yes. Our fully commented 24may09A.F code reveals to us that
    AudInput calls PsiDamp in order to "Knock down cresting concept."
    Once we get beyond the entry of "kids", it is time to reduce
    the "kids" concept in activation. And the call from AudInput to
    PsiDamp goes out in both internal and external POV settings.
    So perhaps at the end of AudInput we should temporarily issue
    another call to NounClear just to continue experimenting with
    our new comprehension ideas.

    We surrounded the during-input AudDamp call with temporary code
    to prevent it if the "doflag" is non-zero, and we finally achieved
    a reduction of noun-activations to zero. Now we type in "What do
    kids make...?" and the concept-word "ROBOTS" is left with ten (10)
    units of sloshed-over activation, while all other nouns are at
    one or zero. Somewhere we need to set a flag for EnCog to either
    call SelfRef or to say the "ROBOTS" answer.

    Oh, this comprehension coding is going to take a lot of work.
    We now have the algorithmic tools to do what we want, but we
    are worried about getting it all as right as possible.

    Our software is able now to isolate the sloshed-over direct-object
    answer to a what-do query, but we are no longer using the
    NounPhrase module to operate the mechanism of response -- or are we?
    We could conditionalize the response with a flag like unto the
    "doflag".

    We modified the EnCog code to look for a noun-concept with a greater
    activation than nine, because slosh-over seems to be at ten units.
    We made up a dummy SelfRef module with a NOT-KNOW statement and a
    LEAVE command, to see if we could get to SelfRef and stop the AI.
    Apparently, when program-flow comes back to EnCog from SelfRef,
    there is often some sort of bug involving an infinite loop that
    gives way to a crash of Win32Forth. We are glad to find out that
    the crashing bug is apparently not in SelfRef. We should be able
    to troubleshoot the EnCog Forth-crashing bug by building EnCog
    up from a simpler beginning.

    Tues.22.DEC.2009 -- Abandoning 20dec09A.F Spaghetti Code

    As described in the previous MFPJ document fp091218.html,
    we solved the problem of answering "what-do" queries with
    either a single word or a NOT-KNOW response, but we were
    left with a 20dec09A.F AI Mind full of spaghetti code. Now we
    start over again from the previous 18dec09A.F MindForth version
    which we rename as 22dec09A.F and which we use to develop the
    advances made in 20dec09A.F while abandoning the spaghetti code.

    We have recently had the insight that for self-referential statements
    like "I DO NOT KNOW" in response to "what do" queries, we should be
    working and coding more in the comprehension area of the AI source code
    rather than in the NLP generation area. We also realize that we may
    add self-referential response mechanisms to an AI Mind essentially
    without disturbing the pre-existing functionality. In other words,
    by using conditional flags to invoke the self-referential responses,
    we leave the as-is AI functionality basically intact and undisturbed.
    We are not really merging a new algorithm with the mass of older
    algorithms, but rather we are superimposing a new algorithm without
    violating any sort of Heisenberg Uncertainty Principle in terms of
    AI Mind functionality. We add new functionality, but we do not
    change the old functionality.

    Wed.23.DEC.2009 -- Reconstituting the EnCog Module

    Yesterday in the 22dec09B.F MindForth we kept getting buggy
    behavior similar to an infinite loop veering off into a Forth
    crash, until we had SelfRef call NounPhrase to utter tersely
    a one- or two-word answer. Meanwhile, however, we had made so many
    changes to the EnCog module in an effort to eliminate the bug,
    that now we want to reconstitute EnCog with a wholesale importing
    of code from a slightly previous version. Without the typical
    EnCog code, the AI tends to state a subject and then not make
    a statement about that subject.


    Thurs.24.DEC.2009 -- Implementing the whdsvflag

    Now that the "whdsvoflag" algorithm works reasonably well for handling
    questions like "What do robots make?", we want to code in the
    similar handling of the "whdsvflag" algorithm for answering questions
    like, "What do robots do?"

    It may well be that the "what-do-X-do?" coding has to move out of
    comprehension and back into the NLP generation domain. This idea
    comes from the likelihood that we will want the AI to restate the
    subject mentioned in the query. For instance, if a user asks,
    "What do robots do?", we would like the AI to say something like,
    "Robots help people," or even, "Robots need me". So EnCog should
    retain a grip on the query-subject and use the subject to launch
    a statement. However, if no verb for the given subject is
    sufficiently activated, we want EnCog to call SelfRef for an
    "I DO NOT KNOW" statement.

    Although we might try to keep hold of a subject-noun during a
    what-do-X-do incoming query and reactivate it in order for it
    to spread activation to any logically correct verb, it might be
    better (and follow Occam's Razor) to use the activations of
    the input itself for guidance and not re-activate a subject-noun
    that has been zeroed out. So perhaps we should re-examine our code
    and use NounClear in a situation where using PsiClear was perhaps
    overkill.

    As soon as we stopped using PsiClear inside InStantiate and we used
    NounClear instead, we started getting a logically correct verb
    for our queries, but not a logically correct direct object.


    Fri.25.DEC.2009 -- Troubleshooting the whdsvflag

    In the 24dec09A.F release of MindForth we got the basic functionality
    of the whdsvflag mechanism to distinguish between factual and "dunno"
    answers to questions in the "what-do-X-do" format, but EnCog was not
    properly blanking out the rest of any incipient thought. The AI was
    stating the query-subject and the logically correct verb, but the wrong
    direct object. This problem apparently resulted because the activation
    on the logically correct verb was high enough to find the verb but
    too low to generate a proper thought with proper slosh-over to a
    logically correct direct object. We need now to experiment with ways
    to improve the responses.

    When we ask the AI, "What do robots do?", the software is correctly
    finding "ROBOTS" and "NEED" but not "ME" as the logically correct
    direct object. But when we simply type in "robots" and press Enter,
    the software correctly says, "ROBOTS NEED ME".

    Perhaps the problem is that the flags are still on, still positive.

    When the word "robots" is simply entered, ReActivate gets called
    to activate not just the current node of ROBOTS, but all previous
    nodes in recent time.

    We could set up a special concomitance flag to assist query-results
    in going through the generation path with sufficiently high activations.

    In both situations, after the entry of the words "what do",
    all noun activations are essentially zeroed out, because the
    pertinent "wh" flag is positive. When a noun like "robots" comes
    in, currently it, too, gets zeroed out, so that the main
    activation can go onto the verb, and slosh over onto a noun.
    We should perhaps change the current arrangement, and let
    the subject-noun obtain a high activation.


    Sat.26.DEC.2009 -- Backing Up and Taking Stock

    In the 25dec09A.F experimental version of MindForth, we were not able
    to resolve the problems with "what-do-X-do?" functionality. Our impasse
    results in a need to back up a little and take stock of the situation.

    When the user types in a question like, "What do robots need?", special
    flags should make the input event the equivalent of leaving out the
    "What do" and simply typing in "robots need". Then the AI should give
    the correct answer or a NOT-KNOW response handled by the SelfRef module.
    The system gets complicated because we have various flags associated with
    the Moving Wave Algorithm (MWA) which postulates that only one cresting
    concept should be super-activated at one time.
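    The MWA constraint can be stated compactly in a Python sketch; the
    crest and residuum values are illustrative, not defined constants of
    the Forth code:

```python
# Only one cresting concept may be super-activated at a time, so
# promoting a new crest first damps any old crest to a residual level.
CREST, RESIDUUM = 62, 16

def promote(acts, concept):
    for c in acts:
        if acts[c] > RESIDUUM:
            acts[c] = RESIDUUM        # knock down the old crest
    acts[concept] = CREST             # the wave moves on

acts = {'ROBOTS': 62, 'NEED': 5}
promote(acts, 'NEED')
print(acts)                           # -> {'ROBOTS': 16, 'NEED': 62}
```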

    We would like to back away from issuing terse, one-word answers and have
    the AI go back to generating a sentence of response. We would also like
    to introduce an ability of the AI to call some sort of ProNoun module
    and substitute "THEY" in place of a plural noun contained in the input
    query. In fact, substituting "THEY" may obviate the need to have
    NounPhrase search for a mental repetition of the most active concept-noun.
    As mind-designers, we may appreciate a kind of short-cut here, because
    using "THEY" eliminates a computational step. But we still have to
    follow the path of "slosh-over" to determine whether the answer will be
    a statement of fact, or a SelfRef NOT-KNOW admission.

    We could probably use EnCog to wait for the DO-KNOW/NOT-KNOW decision
    and then, for the DO-KNOW case, to call a ProNoun module to say "THEY",
    to repeat the verb of the query, and to say the answer-word. In fact,
    the special section of EnCog could take hold of both the ProNoun call
    and the "givenverb" -- if we want to make a variable out of it.
    Thus VerbPhrase would not have to be called to generate the response,
    although verbal slosh-over would have found the active answer.

    In the "what-do-X-do?" situation, there would be no known, given verb.
    In this case, EnCog could still substitute "THEY", but would have to
    search for both a verb and any direct object. The verb and object
    would have to be known already, even before the saying of "THEY",
    because "THEY" is too general a word to be associated with a
    particular verb and a particular direct object. So the pronoun
    "THEY" may serve a double purpose for both humans and AI Minds.
    It eliminates the unnecessary speech of repeating the subject, and
    it lets a natural or artificial intelligence glide over the unneeded
    work of thinking a repetitious thought just to answer a KB query.
    By using special tags to hold onto latent concept-words, the AI
    can make a quasi-noncomputational response. Even for what-do-X-do queries,
    the tags will get assigned as the question comes into comprehension,
    and then the EnCog module will simply utter the tagged words. We just
    need some good variable names for the trio of tags.

    We may eventually move into the area of answering questions on the
    order of, "What eats fish?" and "Who buys books?". Thus we gradually
    build up a thinking and reasoning Mind.


    Sun.27.DEC.2009 -- Taking a New Direction

    Yesterday in the MFPJ we developed the idea of answering "what-do"
    queries by locking on to concepts involved so as not to have to
    call NounPhrase and VerbPhrase to search for things already known.
    In 27dec09A.F MindForth we now declare the following variables.

    variable quo ( 27dec2009 query-object for EnCog response )
    variable qus ( 27dec2009 query-subject for EnCog response )
    variable quv ( 27dec2009 query-verb for EnCog response )
    
    These subject-verb-object (svo) variables will be used to keep track
    of concept-words that will surface in the response that EnCog will make
    if it knows the logically correct answer to an input query.
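    In Python rather than Forth, the intended use of the three query
    variables can be sketched as follows. The lexicon dictionary is a
    stand-in for the MindForth auditory memory, and the concept numbers
    follow the transcripts below (72 = KIDS, 73 = MAKE, 39 = ROBOTS),
    but the function itself is an assumption about the design, not the
    actual EnCog code:

    ```python
    # Sketch: with qus, quv and quo locked onto during comprehension
    # of the query, EnCog can utter a full-sentence response without
    # re-searching via NounPhrase and VerbPhrase.

    lexicon = {72: "KIDS", 73: "MAKE", 39: "ROBOTS"}  # hypothetical entries

    def encog_response(qus, quv, quo):
        """Utter the tagged subject, verb and object directly."""
        if quo == 0:
            return "I DO NOT KNOW"  # SelfRef-style NOT-KNOW admission
        return " ".join(lexicon[c] for c in (qus, quv, quo))

    answer = encog_response(qus=72, quv=73, quo=39)
    ```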

    The 27dec09A.F AI bypasses the unimproved 25dec09A.F code and is derived
    from the 24dec09A.F version, which answered "what-do-X-do?" queries
    but did not properly eliminate a simultaneous incipient thought in EnCog.
    Now we want EnCog to handle "what-do" queries in a drastically different
    way.

    When a "what-do" query comes in, we want the AI to identify the
    Psi concept number of the subject of the query and assign the number
    to the "qus" (query-subject) variable, so that EnCog will be able to
    include the query subject in a response, without having to conduct
    a superfluous NounPhrase search for a most-active subject. Now,
    where -- in what mind-module -- is the point where the query-subject
    can be locked onto with the "qus" variable? It must be either the
    InStantiate module or the OldConcept module, in both of which
    the whdsvflag and whdsvoflag variables are dealt with. Let us assume
    that the proper venue is the OldConcept module, as part of the
    comprehension pathway for external input.

    We were easily able to trap the query-subject with the qus variable,
    but our attempt to trap the query-verb with the quv variable ran into
    a problem when "quv" locked onto "do" as the verb, instead of the
    real verb of the query. But perhaps we are learning a lesson here.
    Perhaps "do" is a legitimate query-verb, if it is the second instance
    of "do" coming in.

    By inserting code to reject 59=DO as the query verb, we were able to
    lock onto both the query-subject and the query-verb. What we are
    calling the query-object (quo) must actually be found by means of
    detecting the slosh-over of activation from subject and verb to a
    logically correct direct object.
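    The verb-trapping logic just described can be sketched like so. The
    concept number 59=DO is from the source above; the function name and
    the incoming-verb sequence are illustrative assumptions:

    ```python
    # Sketch: when trapping the query-verb, the auxiliary "do"
    # (concept 59 in MindForth) is rejected, so that quv locks onto
    # the real verb of the query instead.

    DO = 59  # MindForth concept number for the auxiliary verb DO

    def trap_query_verb(psi, quv):
        """Lock quv onto the first incoming verb that is not 59=DO."""
        if quv == 0 and psi != DO:
            return psi
        return quv

    quv = 0                          # query-verb, initially unset
    for incoming in (59, 73):        # "do" arrives first, then "make"
        quv = trap_query_verb(incoming, quv)
    # quv is now 73, the real verb, not 59=DO.
    ```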


    Sun.27.DEC.2009 -- Formulating Query-Responses

    In our second coding session today, we have created 27dec09B.F in order
    to preserve the introduction of the s-v-o query variables while we
    experiment with some attempts to formulate proper responses to queries.
    We declared a "tdy" (temporary duty) variable as a conditional test in
    the NounPhrase module, so that we could call NounPhrase but not say
    the word selected.


    Mon.28.DEC.2009 -- Diagnostic what-do-X-do Messages

    Yesterday in our 27dec09B.F Web upload we got the AI to hold off
    on stating a single-word answer to a what-do query, and instead
    to state the answer-word at the end of a full sentence of reply,
    as in the following exchange.

    Human: what do kids make 
    Robot: KIDS MAKE ROBOTS
    
    The AI was able to answer such a question with either an "I DO NOT KNOW"
    statement or with the known answer as shown above, first by locking onto
    the query-subject and the query-verb, and then by discovering and locking
    onto the query-object as an associated noun with cumulative slosh-over
    activation built up from the combined activations of the query-subject
    and the query-verb. Now we want to get the AI Mind to answer questions
    in the "What-do-X-do?" format, such as, "What do kids do?" In this more
    general case, the AI will have to lock onto the query-subject "kids"
    and identify either the most recently associated verb or the most
    highly associated verb, and then follow slosh-over to lock onto any
    available query-object.
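    The slosh-over search for the query-object can be modeled as picking
    the noun with the highest accumulated activation. The noun-space
    values and the threshold here are invented for illustration; the
    real search runs over the Psi concept array in Forth:

    ```python
    # Sketch: given the activations contributed by the query-subject
    # and query-verb, the query-object is whichever noun has built up
    # the highest combined (sloshed-over) activation.

    def find_query_object(noun_space, threshold=10):
        """Pick the noun whose slosh-over activation crests above threshold."""
        best = max(noun_space, key=noun_space.get)
        return best if noun_space[best] > threshold else None

    # hypothetical noun-space after subject and verb activation
    quo = find_query_object({"ROBOTS": 0, "FISH": 5, "ME": 23})
    ```

    Returning None when nothing crests above the threshold corresponds
    to the NOT-KNOW branch that hands off to SelfRef.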

    Since the what-do-X-do query is more open-ended than a question
    like "What do kids make?", it will be necessary to reserve for
    later the treatment of such issues as how to use the ConJoin module
    to concatenate multiple answers into one statement of response,
    and how to formulate answers that consist of an intransitive verb
    with no query-object to lock onto. For instance, the AI might need
    to answer, "Kids study," or "Kids play." The issue of switching over
    to an intransitive verb has already been dealt with recently during
    the development of the BeVerb module in MindForth AI.

    In a what-do-X-do query, after the query-subject comes in, OldConcept
    calls ReActivate, which in turn calls SpreadAct. But only the
    whdsvoflag has been set, and not the whdsvflag, because the second
    instance of "do" has not yet come in. So the software has to find
    the newly activated verb before the second "do" comes in.

    In a simple what-do query, EnCog searches the noun-space for a
    query-object that has (slosh-over) activation on it. Likewise,
    in a what-do-X-do query, EnCog will have to search for an activated
    verb after the query-subject comes in, and before it is even clear
    just what kind of query is occurring. In other words, EnCog will
    have to identify a potential query-verb and then discard it if
    the incoming query uses a different verb.

    Because the query treatment uses suppressors like NounClear or
    VerbClear or PsiClear, the various candidates for the response are
    all available by the end of the query input. The query-subject is
    a given, and the query-verb may or may not be a given. Therefore
    in EnCog we might try identifying a "qup" (query-predicate),
    just in case no actual query-verb is given during a query in the
    what-do-X-do format.
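    The relationship between the tentative "qup" and an explicitly given
    query-verb might then reduce to a simple preference rule, sketched
    here with hypothetical concept numbers:

    ```python
    # Sketch: EnCog tentatively records a "qup" (query-predicate)
    # after the subject comes in, and keeps it only if the rest of
    # the query does not supply a different, explicit query-verb.

    def resolve_query_verb(qup, quv):
        """Prefer an explicitly given query-verb (quv); otherwise fall
        back on the tentatively identified query-predicate (qup)."""
        return quv if quv != 0 else qup

    v1 = resolve_query_verb(qup=74, quv=0)   # what-do-X-do case
    v2 = resolve_query_verb(qup=74, quv=73)  # explicit verb given
    ```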

    Transcript of AI Mind interview at 7 10 44 o'clock on 28 December 2009.
    what do
    OldC: qu svo = 0 0 0 kids
    OldC: qu svo = 72 0 0 do
    
    
    
    OldC: qu svo = 72 0 0
    VerbAct has been called; psi = 73
    EnCog: s p v o = 72 73 73 39
    EnCog: active whdsv verb is 73
    Encog: whdsv & whdsvo = 1 1
    EnCog: whdsvdunno & dunno = 0 0
    EnCog: query object = 39 KIDS
    OldC: qu svo = 72 73 39 MAKE
    OldC: qu svo = 72 73 39 ROBOTS
    OldC: qu svo = 72 73 39
    
    Robot:  KIDS MAKE ROBOTS
    Human:
    

    Transcript of AI Mind interview at 7 12 24 o'clock on 28 December 2009.
    what do
    OldC: qu svo = 0 0 0 robots
    OldC: qu svo = 39 0 0 need
    
    
    
    OldC: qu svo = 39 74 0
    Encog: whdsv & whdsvo = 0 1
    EnCog: whdsvdunno & dunno = 0 0
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: query object = 0 ROBOTS
    OldC: qu svo = 39 74 0 NEED
    OldC: qu svo = 39 74 0 ERROR
    OldC: qu svo = 39 74 0
    
    Robot:  ROBOTS NEED ERROR
    Human:
    


    Tues.29.DEC.2009 -- Cleaning up the what-do-X-do Mechanisms

    Although the 28dec09A.F version of MindForth was able to answer questions
    in the "What-do-X-do?" format, an "ERROR" was found when an answer was
    supposed to be a personal pronoun. We need to go into the AI source code
    and permit both nouns and pronouns to be used as an answer. After the change,
    we obtain much better results.

    Transcript of AI Mind interview at 6 9 16 o'clock on 29 December 2009.
    what do
    OldC: qu svo = 0 0 0 robots
    OldC: qu svo = 39 0 0 need
    
    OldC: qu svo = 39 74 0
    Encog: whdsv & whdsvo = 0 1
    EnCog: whdsvdunno & dunno = 0 0
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: Potential q-verb = 74
    EnCog: query object = 65 ROBOTS
    OldC: qu svo = 39 74 65 NEED
    OldC: qu svo = 39 74 65 ME
    OldC: qu svo = 39 74 65
    
    Robot:  ROBOTS NEED ME
    
    Then a very arcane bug tends to manifest itself. Instead of
    taking the "ME" concept and making a correct statement in relation
    to "ME", the AI starts saying "I DO NOT KNOW KNOW ME", as if it
    were really trying to say, "I KNOW ME." Obviously, the "I DO NOT KNOW"
    is coming from the SelfRef module, which is generally accessed if
    a flag has been turned on, or kept on when it really should have
    been turned off. So we need to find the spot where a SelfRef-calling
    flag should be reset to zero.

    We may be using too many flags and variables in our treatment of queries
    put to the AI Mind. Gradually we ought to eliminate unnecessary items.


    Wed.30.DEC.2009 -- Creating the EnPronoun Mind-Module

    It is a mere formality now to create the EnPronoun module and to
    position it appropriately within the MindForth sequence of modules.
    The handling of "what-do-X-do?" queries is a fitting occasion for
    the enabling of the AI to substitute an English pronoun for any
    plural noun filling the slot of "X" in the query. There are other
    occasions for using a pronoun, such as for avoiding the repetition
    of the same noun over and over again, but the query-treatment is an
    especially obvious opportunity for introducing the new functionality.

    The new mind-module ought to be introduced with an eye towards the
    expansion of its functionality in the future. We will try using
    "atcd" as a variable for the antecedent of the prounoun, and
    "gndr" as a variable for the gender of the antecedent. The Psi
    concept array does not yet contain a panel-flag for the gender
    of a noun, but we must plan for its eventual inclusion. When we
    answer an input query with the pronoun "they" in reference to
    the subject of the query, the antecedent of "they" is readily
    apparent. However, we should not introduce the use of pronouns
    without providing the "atcd" variable for keeping track of
    antecedents for as long as a conversation or a train of thought
    dwells upon a particular antecedent. We should keep in mind
    a mechanism for having the EnPronoun module let go of any particular
    antecedent within an arbitrary number of thought-generations after
    the topic of the antecedent has been abandoned as a subject of ideas.
    Then the "atcd" variable, which has been holding a Psi concept number,
    should perhaps be reset to zero. It may also be possible to include
    gender in a hybrid Psi concept number of an arbitrary length, so that
    a word like "friend" can be pronominalized as either "he" or "she",
    depending on recent context.
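    The planned EnPronoun behavior can be modeled in Python (the actual
    module will be in Forth). The decay limit, the class structure and
    the method names are assumptions for illustration, not settled
    design; only "atcd" and the idea of letting go of a stale
    antecedent come from the plan above:

    ```python
    # Sketch of EnPronoun: substitute "THEY" for a plural noun, remember
    # the antecedent in "atcd", and reset "atcd" to zero after an
    # arbitrary number of thought-generations off the topic.

    class EnPronoun:
        def __init__(self, decay_limit=3):
            self.atcd = 0            # Psi concept number of the antecedent
            self.age = 0             # thought-generations since it was set
            self.decay_limit = decay_limit

        def substitute(self, psi):
            """Adopt psi as the antecedent and return the pronoun THEY."""
            self.atcd, self.age = psi, 0
            return "THEY"

        def tick(self):
            """Called once per thought-generation; forget stale antecedents."""
            self.age += 1
            if self.age > self.decay_limit:
                self.atcd = 0        # topic abandoned; release antecedent

    p = EnPronoun()
    word = p.substitute(39)          # e.g. 39 as a plural query-subject
    for _ in range(4):
        p.tick()
    # word is "THEY"; p.atcd has been reset to 0 past the decay limit.
    ```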

    http://www.scn.org/~mentifex/mindforth.txt

    http://groups.google.com/group/comp.sys.super/browse_thread/thread/9e8654540b7126fc#

    A Mentifex artificial intelligence breakthrough treats of machine reasoning with inference.
