Task Type: Conversation

The Conversation task type allows researchers to create dynamic, one-on-one chat experiences that feel natural and personalized, helping teams uncover deeper insights faster without compromising on quality or context.

By simply defining the objective, tone, and instructions for the task, researchers enable the AI moderator to guide each participant through an interactive, comprehensive discussion that adapts to their responses in real time.

For participants, the discussion feels like a genuine, human conversation that draws out richer, more reflective responses. For researchers, it delivers deep qualitative insights with the seamless convenience and flexibility of asynchronous data capture and review.

The Conversation task can be used on its own or as a complement to other task types. This article covers:

  • Creating a Conversation Task
  • Previewing the Conversation
  • Understanding the Participant Experience
  • Automatic Summaries and Translations
  • Analysis, Reporting and Exports
  • Additional Settings

Creating a Conversation Task

When configuring a Standard or Journal activity, Analysts and Moderators can choose the Conversation task type from the Task Type menu.

Conversation tasks are configured with three core elements:

  • Task name
  • Introduction
  • Conversation Objective

The Introduction appears at the top of the conversation and provides participants with essential context throughout. This is the place to set expectations, share instructions, or attach Photos, Videos, or Files that support the discussion. 

Admins must also provide a Conversation Objective, which defines the goal, tone, and behaviour of the AI moderator and shapes how it leads the conversation. The Conversation Objective is visible only to administrators.

To ensure the Conversation Objective meets their desired outcome, researchers can specify exactly what they want to learn from participants, suggest steps, or provide moderator tips and essential background context.

While admins can also include specific questions they would like the AI to ask, the true power of the Conversation task lies in letting the AI adapt and guide the flow naturally toward the objective. With this objective in mind, the AI will continuously monitor progress throughout the exchange and only conclude once the objective has been met. 

To support task creation, sample objectives and writing tips are available in-platform. Learn more by reviewing our guide on Writing Effective Conversation Objectives.

To create a more dynamic experience, Admins can pipe participant responses from previous tasks directly into the Introduction and Conversation Objective. When included in the Introduction (e.g. “You mentioned [insert response] was your preferred grocery brand”), piped responses can be leveraged to set a contextual stage. When included in the objective (e.g. “Learn why the participant chose [insert response]”), researchers can ensure the interactions are tailored to the individual participant.

Previewing the Conversation

Prior to launch, Admins can preview the Conversation task to observe the AI moderator in action and iteratively adjust the setup as needed. They can preview the entire activity or a single task. If the Conversation Objective includes piped participant responses, Admins must provide sample values to simulate the earlier tasks.

Learn more about previewing activities and tasks

Understanding the Participant Experience

When participants complete a Conversation task, they engage in a multi-turn, text-based conversation with an AI moderator. The AI begins by introducing the topic and then adapts its follow-up questions in real time based on participant responses. Conversations are automatically saved as they progress, allowing participants to exit and resume at any time. Once the objective is reached and the conversation is fully completed, it cannot be restarted unless an administrator has rejected the task response.

Participants can view their own conversation summaries alongside the transcript, providing a clear record of what was captured during the exchange. If the activity is socialized, participants may also access the transcripts and summaries of other participants.

Automatic Summaries and Translations

When a Conversation task is completed, the transcript is automatically saved and summarized to highlight the key insights, with the summary structured around the original conversation objective. Both the transcript and summary are automatically translated into all enabled study languages, making insights instantly accessible to multilingual teams. 

Analysis, Reporting and Exports

Conversation summaries appear in the Stream and are included in all Activity Transcripts. Each task response is linked to its corresponding transcript, which can be exported for further analysis.

  • HTML transcripts include both the conversation summary and full transcript, allowing researchers to review the entire exchange in context.
  • Excel and CSV transcripts include only the conversation summary, providing a concise view of the key insights for easy comparison across participants.

All conversation transcripts and summaries are available within Recollective’s Ask AI feature, and text from both summaries and transcripts can be highlighted and saved as verbatim excerpts. Participant responses are also captured in the Words summary for quick reference.

To understand why a specific question was asked and how it relates to the defined conversation objective, administrators can review the AI moderator’s internal reasoning by hovering over the brain icon located beneath each moderator message. This option is visible only to administrators during conversation previews and while reviewing completed conversations.

Additional Settings

As with other task types in Recollective, researchers can take advantage of several additional settings when configuring a Conversation task:

  • Completion Requirement: Beneath the task name, a Required Task toggle can be enabled to make the task mandatory. When the toggle is off, the task is considered optional and can be skipped by participants. If the task is optional, the Ask for reason if skipped setting appears and can be enabled to prompt participants to provide a skip reason.

  • Block Pasting: To discourage participants from using generative AI tools (e.g., ChatGPT) to answer questions, admins can enable Block pasting of external content to prevent users from pasting text from outside sources.
  • Point Customization: If the Points feature is enabled for the study, the option to Customize points awarded for task completion becomes available. Once enabled, a custom point value can be specified that is higher or lower than the default points awarded for that task type.
  • Task Visibility: If necessary, Admins can limit which participants are permitted to view and complete the task. Task Visibility settings allow visibility to be restricted by segment, previous task responses, previous task completion, or overall activity completion.