
Mocking Eye

'Tis all in vain?

Processing User Goals and Narratives


To model a strategy for reaching a goal, we need to parse some user input. A goal is a particular frame with particular arguments. Each step in a strategy is, in fact, also a goal, though some goals are stubs. This feature means that the system comes to understand the underlying details better and better: if "make a salad" is first specified as a step in "make a dinner", and a narrative of salad-making is later supplied, then the next time "make a dinner" is undertaken, the details of salad-making can be taken into account.

So, how does one undertake processing a narrative? Each sentence is examined separately; it is an underlying assumption of the system that sentences will be kept simple. The goal is one statement, and each sentence in the narrative is likewise a statement. I am adopting the method described in "A Maximum Entropy Approach to FrameNet Tagging" (2003) by Michael Fleischman and Eduard Hovy. According to that model, the MaxEnt classifier (I'll be using an NLTK implementation) will take these features:
  • Phrase type: PP, NP, etc.
  • Voice
  • Position: position in the sentence
  • Grammatical function: external argument, object argument, etc.
  • Head word: the verb in question
and decide which frame element (Agent, Cause, etc.) each word in the sentence fills. In addition to those features, an n-gram model may be applied, in which each subsequent word processed is supplied the classifications of some of the previous words, since once one word is classified as an Agent, another word is unlikely to be one as well. So, a user tells a simple story, and what do we get? We get a frame tagged with the head word. That is, a Motion frame, for example, would also include the particular verb lemma:
    The boy walked to school
  • Theme: "the boy"
  • Direction: "to"
  • Goal: "school"
  • Head: "WALK"
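As a concrete sketch, the tagged output above might be held in a plain mapping, with each element produced by classifying a constituent's five features. The key names and the `features` dict here are illustrative assumptions, not the system's actual data structures:

```python
# Illustrative sketch: the tagged Motion frame from the example sentence.
# Keys and structure are assumptions, not the system's actual API.
tagged = {
    "frame": "Motion",
    "head": "WALK",                  # the particular verb lemma
    "elements": {
        "Theme": "the boy",
        "Direction": "to",
        "Goal": "school",
    },
}

# Each element would have been classified from the five features listed
# above; for "the boy" the feature bundle might look like:
features = {
    "phrase_type": "NP",    # phrase type
    "voice": "active",      # voice of the clause
    "position": 0,          # position in the sentence
    "gf": "ext",            # grammatical function: external argument
    "head": "walk",         # head word (the verb in question)
}

# With NLTK installed, a MaxEnt classifier trained on (features, label)
# pairs could then map such bundles to frame elements, e.g.:
#   from nltk.classify import MaxentClassifier
#   clf = MaxentClassifier.train(train_pairs, algorithm="iis")
#   clf.classify(features)
```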
The space of strategies is basically a graph of frames. As each frame gets defined in terms of its possible subsequent frames, a Hidden Markov Model of narratives emerges, and a wide variety of techniques is then available for leveraging HMMs to get us better strategies!
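The graph-of-frames idea can be sketched by estimating HMM-style transition probabilities between frames from observed narratives. The frame names and narratives below are made up for illustration; this is a minimal sketch, not the system's implementation:

```python
from collections import Counter, defaultdict

# Hypothetical narratives, each a sequence of frames narrated by a user,
# e.g. steps of "make a salad".
narratives = [
    ["Grasping", "Cutting", "Placing"],
    ["Grasping", "Cutting", "Cause_motion"],
]

# Count frame-to-frame transitions across all narratives.
transitions = defaultdict(Counter)
for story in narratives:
    for prev, nxt in zip(story, story[1:]):
        transitions[prev][nxt] += 1

def transition_prob(prev, nxt):
    """P(next frame | previous frame), estimated from raw counts."""
    total = sum(transitions[prev].values())
    return transitions[prev][nxt] / total if total else 0.0

print(transition_prob("Grasping", "Cutting"))  # 1.0: Grasping always led to Cutting
print(transition_prob("Cutting", "Placing"))   # 0.5: Cutting led to Placing half the time
```

With transition probabilities in hand, standard HMM machinery (Viterbi decoding, likelihood scoring of candidate strategies) can be applied to prefer frame sequences that resemble narratives already seen.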