Conversation: Task-Level Summaries

This feature is currently being rolled out to customers.

Task-Level Summaries allow researchers to generate concise, narrative-style summaries of participant responses for each Conversation Task within a Standard or Journal activity. 

Instead of manually scanning large volumes of responses to understand what participants are saying, researchers can quickly view a synthesized summary of each task’s Conversation transcripts directly within the platform. These summaries highlight key themes, emerging patterns, and overall sentiment, helping teams gain an immediate high-level understanding of the data.

Accessing Task-Level Summaries

Researchers can access task-level Conversation summaries from both the Activity Summary page and the Summary Stream when filtered to a Conversation Task. Conversation Task cards and related filter options also display an icon to indicate when a task-level summary is available, making it easy to identify which tasks include a synthesized view of participant responses.

Generating Task-Level Summaries

Task-level summaries become available once at least two participant responses have been completed. To ensure both scale and accuracy, the platform uses two different generation approaches based on the volume of data collected.

For 20 or fewer responses, summaries are generated directly from full Conversation transcripts to preserve nuance and include representative participant quotes. For tasks with more than 20 responses, the platform uses a summaries-of-summaries approach, synthesizing insights across individual AI-generated participant summaries to provide a comprehensive view of the dataset.
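The routing described above can be sketched as follows. This is an illustrative sketch only; the function names and the placeholder summarization steps are hypothetical and do not reflect Recollective's actual implementation.

```python
# Illustrative sketch of the two-tier generation logic described above.
# All names here are hypothetical, not part of any Recollective API.
THRESHOLD = 20  # tasks with this many responses or fewer use full transcripts

def summarize_task(responses):
    """Pick a generation strategy based on response volume."""
    if len(responses) < 2:
        # Summaries become available only after at least two completed responses.
        return None
    if len(responses) <= THRESHOLD:
        # Small tasks: synthesize directly from full transcripts,
        # preserving nuance and representative participant quotes.
        return {"strategy": "full-transcripts", "inputs": len(responses)}
    # Larger tasks: summarize each response individually first, then
    # synthesize across those per-participant summaries
    # (the "summaries-of-summaries" approach).
    per_participant = [f"summary of {r}" for r in responses]
    return {"strategy": "summaries-of-summaries", "inputs": len(per_participant)}
```

For example, a task with 5 responses would follow the full-transcript path, while one with 25 responses would be routed through per-participant summaries first.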

Note: Summaries can only be generated when viewing the full set of responses. If Participant or Segment filters are applied, the Task Summary tab remains visible, but summary generation is disabled to ensure insights reflect the complete dataset.

Summary Refresh Notifications

Each generated summary displays the specific number of responses used at the time of creation, providing clear context on what the synthesis reflects. 
When the underlying data changes (new responses being submitted, existing responses being deleted or rejected, or individual participant summaries being regenerated), the platform surfaces several indicators to notify the researcher:
  • A count of new responses since the last summary was generated
  • A red dot indicator to signal that the summary may be out of date
  • The option to Regenerate the summary to include the latest data
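The staleness indicators above amount to comparing the response count recorded at generation time with the current count. The sketch below illustrates that idea; the field names are illustrative assumptions, not the platform's data model.

```python
# Hypothetical sketch of the refresh indicators listed above.
# "generated_from" is the response count recorded when the summary was created;
# "current_responses" is the count now. Names are illustrative only.
def summary_status(generated_from: int, current_responses: int) -> dict:
    """Report whether a summary may be out of date and why."""
    delta = current_responses - generated_from
    return {
        # Count of new responses since the summary was generated
        # (deletions/rejections can make delta negative, so clamp at zero).
        "new_responses": max(delta, 0),
        # Red-dot indicator: any divergence means the summary may be stale.
        "out_of_date": delta != 0,
        # Regenerate becomes relevant whenever the data has changed.
        "offer_regenerate": delta != 0,
    }
```

A summary generated from 10 responses with 13 now submitted would show 3 new responses and the out-of-date indicator; one whose counts still match would show neither.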

Refining Insights with Ask AI

Task-level summaries are designed to be a springboard for deeper investigation. By surfacing what participants are saying at a high level, researchers can move from initial review to deeper analysis faster. At the end of each summary, researchers are encouraged to launch Ask AI to explore specific questions and further refine their insights.

🗣️ Still have questions or want to leave feedback on how we can do better? Contact us at helpdesk@recollective.com.