Elicitation

Contrastive Analysis: Finding the Submodality Driver

NLP contrastive analysis is the diagnostic procedure that makes all other submodality work precise. Without it, practitioners guess which submodality to shift. With it, they know. The technique compares two internal representations that differ in emotional quality, maps their submodality profiles side by side, and identifies which specific sensory differences account for the emotional difference. Those differences are the drivers, and they are the only submodalities worth changing.

The principle is simple: if two representations of similar content produce different emotional responses, the difference must be in the coding, not the content. A memory of public speaking that produces anxiety and a memory of public speaking that produces confidence contain the same kind of activity: speaking in public. What differs is the coding, that is, the visual, auditory, and kinaesthetic submodalities of each representation. Find those coding differences, and you have found the control panel for the emotional response.

This procedure is foundational to the entire submodalities framework. Every technique, from the swish pattern and belief change to mapping across and the compulsion blowout, depends on knowing which submodalities to target. Contrastive analysis provides that knowledge. Skip it, and you are running interventions blind.
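The core logic of contrastive analysis can be sketched in code: treat each elicited representation as a mapping from submodality to value, and the candidate drivers are simply the entries where the two profiles disagree. A minimal sketch follows; the submodality names and example values are illustrative, not a clinical record.

```python
def find_drivers(profile_a, profile_b):
    """Return the submodalities whose values differ between two profiles.

    Each profile maps a submodality name to its elicited value. Entries
    with matching values are ignored; only the differences are candidate
    drivers worth testing.
    """
    keys = set(profile_a) | set(profile_b)
    return {k: (profile_a.get(k), profile_b.get(k))
            for k in keys
            if profile_a.get(k) != profile_b.get(k)}

# Illustrative values for a compulsive versus a neutral food image
compulsive = {"location": "center", "distance": "close", "size": "large",
              "brightness": "bright", "association": "associated"}
neutral = {"location": "left", "distance": "far", "size": "small",
           "brightness": "bright", "association": "dissociated"}

drivers = find_drivers(compulsive, neutral)
# "brightness" matches in both profiles, so it is not a candidate driver
```

The diff deliberately discards everything the two representations share: shared values cannot explain the difference in emotional response, so only the mismatches are candidates for testing.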

Selecting the Two Representations

The two representations must share enough content similarity that the comparison is valid. You are isolating the variable of emotional response, so everything else should be as similar as possible.

Good pairings:

A food the client compulsively craves versus a food in the same category they feel neutral about. Chocolate (compulsive) versus crackers (neutral). Both are snack foods. The content category is matched. The emotional response differs.

A person the client feels intimidated by versus a person of similar status they feel comfortable with. Both are authority figures. The interpersonal dynamic differs.

A task the client procrastinates on versus a similar task they complete without resistance. Both require similar effort. The motivational response differs.

Bad pairings: comparing a phobic response to spiders with a positive feeling about puppies. The content categories are too different to isolate the submodality variable. Any differences you find might reflect “animal coding” differences rather than emotional response differences.

Running the Elicitation

Elicit each representation separately. Have the client bring up the first experience and hold it while you systematically map every submodality across all three channels.

Visual channel: Location in the visual field (left/right/center, up/down). Distance from the client. Size of the image. Brightness. Color saturation. Focus (sharp/blurred). Movement (still/movie). Association/dissociation (looking through own eyes versus watching self). Border (framed/panoramic). Dimensionality (flat/3D).

Auditory channel: Presence of internal dialogue or sounds. Location of the sound source. Volume. Pitch. Tempo. Tone quality. Whose voice. Continuous or intermittent.

Kinaesthetic channel: Presence of a body sensation. Location in the body. Intensity. Temperature. Movement direction. Texture. Pressure. Duration.

Record every value. Then break the client’s state (look around the room, count backward) and elicit the second representation with the same systematic approach.

The elicitation takes fifteen to twenty minutes for both representations. Rushing it produces incomplete maps, which produce unreliable driver identification.