Tutoring agent behavior will be adaptive in two senses. First, each agent will own a small set of sub-topics and related cases. As a learner operates in the synthetic environment, they will be building a case of their own, and the relevant agent will be alerted to offer remediation whenever the learner's case becomes similar to a case under that agent's control. As learner behavior changes, so will their profile and the nature of the cases they match against. In this way, agents will gain and lose interest in a player as the learner's profile changes.
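This similarity-driven alerting can be sketched as follows. The representation here is an assumption: cases and learner profiles are modeled as feature sets, compared with Jaccard overlap, and the class and threshold names (`TutorAgent`, `threshold`) are hypothetical stand-ins for whatever matching the real system uses.

```python
# Hypothetical sketch: agents own cases and "gain interest" in a learner
# when the learner's evolving profile grows similar to an owned case.

def similarity(profile_a, profile_b):
    """Jaccard overlap between two feature-set profiles (one simple choice)."""
    a, b = set(profile_a), set(profile_b)
    return len(a & b) / len(a | b) if a | b else 0.0

class TutorAgent:
    def __init__(self, name, cases, threshold=0.5):
        self.name = name
        self.cases = cases          # the sub-topic cases this agent owns
        self.threshold = threshold  # how close a match triggers remediation

    def interested_in(self, learner_profile):
        """Return a matching owned case when the learner's case becomes
        similar enough to warrant this agent's attention, else None."""
        for case in self.cases:
            if similarity(learner_profile, case) >= self.threshold:
                return case
        return None
```

As the learner's profile changes between checks, the same agent can move in and out of interest, which is all the adaptivity described above requires.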
Second, learner state will be preserved throughout the course of their involvement with the synthetic environment. As learners leave the game, whether as successful or unsuccessful players, their state and experience will be saved as a new case. These saved cases will, according to their profiles, become part of the inventory assigned to one or more of the tutorial agents. As later players enter the synthetic environment, the tutorial agents will have these additional cases of success and failure to present as part of their remediation package. In other words, tutorial agents will begin the game armed with prototypical case studies, but they will accumulate additional student case studies as players enter and leave the game over time.
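The accumulation step can be sketched as a routing function: when a player exits, their saved state becomes a new case assigned to every agent whose topics it touches. The dict-based case structure, topic keywords, and names (`CaseAgent`, `save_exit_case`) are all illustrative assumptions, not the system's actual data model.

```python
# Hypothetical sketch: a departing learner's experience is stored as a
# new case and routed to topically relevant agents' inventories.

class CaseAgent:
    def __init__(self, topics):
        self.topics = set(topics)
        self.inventory = []   # starts with prototypical cases; grows over time

def save_exit_case(agents, learner_profile, succeeded):
    """Save the departing learner's state as a case and assign it to
    each agent whose topics overlap the learner's profile."""
    case = {"profile": frozenset(learner_profile), "success": succeeded}
    for agent in agents:
        if agent.topics & case["profile"]:   # any topical overlap
            agent.inventory.append(case)
    return case
```

Because both successes and failures are saved, later players can be shown real examples of either outcome alongside the prototypical cases.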
When a rule fires, the client program is notified, and the player sees the tutor pop up and warn them (via QuickTime). Given this warning, the player can ask for more information (which brings them to the Advice Network), or they can ignore it and carry on at their own risk.
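The notify-and-choose flow reads roughly as follows. The callback names and the Advice Network hook are placeholders for the client's real interface, which the text does not specify.

```python
# Hypothetical sketch of the client-side flow when a tutoring rule fires:
# pop up the tutor's warning, then honor the player's choice.

def on_rule_fired(rule, show_warning, open_advice_network):
    """Notify the player of a fired rule and route their response:
    seek more information, or carry on at their own risk."""
    choice = show_warning(rule)        # e.g. plays the tutor's warning clip
    if choice == "more_info":
        open_advice_network(rule)      # player asked for remediation
        return "advised"
    return "ignored"                   # player continues at their own risk
```

The key design point is that the warning is advisory: the player's choice, not the rule engine, decides whether remediation actually happens.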
The idea is that the Proactive Tutor is the mentor looking over your shoulder as you play. Your mentor should be there when you need help, but when you know what you're doing (or when you think you know), you can ignore the mentor and do whatever you want (at your own risk, of course).