TemperStack
Intermediate · 8 min read · Updated Mar 18, 2026

How to aggregate multiple data sources on Make

Quick Answer

Aggregate multiple data sources on Make by building a scenario that starts with a single trigger module for your primary source, adds search or get modules for each additional source, combines the results with Array Aggregator or Text Aggregator modules, and maps fields consistently between sources. This lets you merge data from different platforms into a unified output.

Prerequisites

  1. Active Make account
  2. Connected data sources (APIs, databases, or apps)
  3. Basic understanding of Make scenarios
  4. Data mapping knowledge
Step 1: Create a New Scenario

Log into your Make dashboard and click Create a new scenario. Select the first data source you want to aggregate from the available apps. Configure the trigger module by connecting your account and setting up the appropriate filters or search parameters for the data you need.
Tip: Start with your primary data source as the trigger to establish the main data flow.
Step 2: Add Additional Data Source Modules

Click the + button after your trigger module and search for your second data source. Add modules like Search, List, or Get depending on your data retrieval needs. Repeat this process for each additional data source you want to include in your aggregation.
Step 3: Configure Data Retrieval Parameters

For each data source module, set up the connection parameters, filters, and field mappings. Use Dynamic Fields to link data between modules when possible. Configure date ranges, search criteria, or specific record IDs to ensure you're pulling the correct data from each source.
Tip: Use consistent identifiers like email addresses or user IDs to match records across different data sources.
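Conceptually, matching records on a shared identifier works like the following Python sketch (the source names and fields such as "email" and "plan" are illustrative, not actual Make module names):

```python
# Merge records from two sources by a shared identifier (email).
# Field names ("email", "name", "plan") are illustrative assumptions.

crm_contacts = [
    {"email": "ada@example.com", "name": "Ada"},
    {"email": "bob@example.com", "name": "Bob"},
]
billing_accounts = [
    {"email": "ada@example.com", "plan": "pro"},
]

# Index the second source by the shared key for quick lookups.
billing_by_email = {rec["email"]: rec for rec in billing_accounts}

merged = []
for contact in crm_contacts:
    billing = billing_by_email.get(contact["email"], {})
    merged.append({**contact, "plan": billing.get("plan", "free")})

print(merged)
# First record picks up plan "pro"; the unmatched one falls back to "free".
```

The same idea applies in Make: the value you map between modules acts as the lookup key, so it must be formatted identically in every source.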
Step 4: Add an Aggregator Module

Insert an Array Aggregator or Text Aggregator module after your data source modules. In the Source Module dropdown, select the module that processes multiple items. Configure the Target structure type to define how you want your aggregated data structured.
Tip: Use Array Aggregator for structured data objects and Text Aggregator for simple text concatenation.
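The difference between the two aggregators can be illustrated with a small Python sketch (the item fields are made up for the example):

```python
# Two ways to collapse a list of items into one bundle, mirroring
# Make's Array Aggregator vs Text Aggregator behavior.
items = [
    {"name": "Ada", "score": 90},
    {"name": "Bob", "score": 75},
]

# Array Aggregator: one structured array of objects,
# suitable for feeding a database or spreadsheet module.
array_output = [{"name": i["name"], "score": i["score"]} for i in items]

# Text Aggregator: one string with items joined by a separator,
# suitable for an email body or a log line.
text_output = ", ".join(f'{i["name"]} ({i["score"]})' for i in items)

print(array_output)  # [{'name': 'Ada', 'score': 90}, {'name': 'Bob', 'score': 75}]
print(text_output)   # Ada (90), Bob (75)
```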
Step 5: Map and Structure Aggregated Data

In the aggregator settings, click Add item to define the structure of your aggregated output. Map fields from different data sources to create unified records. Use functions such as if() or ifempty() to supply fallback values for missing data, and formatting functions to smooth inconsistencies between sources.
Tip: Create standardized field names in your aggregated output to maintain consistency across different data sources.
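The fallback logic this step describes can be sketched in Python as a first-non-empty chain (the source field names and the `coalesce` helper are hypothetical, used only to illustrate the idea):

```python
# Build unified records with standardized field names and fallbacks
# for missing data. Source field names are illustrative assumptions.

def coalesce(*values):
    """Return the first non-empty value, mimicking a fallback chain."""
    for v in values:
        if v not in (None, ""):
            return v
    return ""

source_a = {"Email": "ada@example.com", "Full Name": ""}
source_b = {"email_address": "ada@example.com", "display_name": "Ada L."}

unified = {
    "email": coalesce(source_a.get("Email"), source_b.get("email_address")),
    "name": coalesce(source_a.get("Full Name"), source_b.get("display_name")),
}
print(unified)  # {'email': 'ada@example.com', 'name': 'Ada L.'}
```

In Make itself you would express the same fallback inline in the mapping panel rather than as a separate function definition.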
Step 6: Add Data Transformation Logic

Insert Set Variable or Iterator modules if you need to process the aggregated data further. Use the Math or Tools modules to perform calculations, data cleaning, or formatting operations on your combined dataset before final output.
Step 7: Configure Output Destination

Add a final module to send your aggregated data to its destination (Google Sheets, database, webhook, etc.). Map the aggregated data fields to the appropriate columns or fields in your output system. Set up any required formatting or data validation rules.
Tip: Test with a small data set first to ensure proper mapping before processing large volumes.
Step 8: Test and Schedule the Scenario

Click Run once to test your aggregation scenario with live data. Review the execution log to verify data is being pulled and combined correctly. Once satisfied, set up scheduling by clicking Scheduling and configuring your preferred frequency (every 15 minutes, hourly, daily, etc.).
Tip: Monitor the first few scheduled runs to ensure consistent performance and data quality.

Troubleshooting

Data fields not matching between sources
Use mapping functions in the aggregator module to standardize field formats. Apply formatDate(), trim(), or replace() functions to normalize data before aggregation.
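The normalization those functions perform is equivalent to this Python sketch (the raw field names and date format are assumptions for the example):

```python
# Normalize field formats before aggregation, analogous to applying
# trim(), replace(), and formatDate() in Make's mapping panel.
from datetime import datetime

raw = {"email": "  Ada@Example.COM ", "signup": "18/03/2026"}

normalized = {
    # trim whitespace and lowercase so keys match across sources
    "email": raw["email"].strip().lower(),
    # reformat a DD/MM/YYYY date into ISO YYYY-MM-DD
    "signup": datetime.strptime(raw["signup"], "%d/%m/%Y").strftime("%Y-%m-%d"),
}
print(normalized)  # {'email': 'ada@example.com', 'signup': '2026-03-18'}
```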
Scenario timing out with large datasets
Implement pagination in your data source modules and add Sleep modules between API calls. Consider breaking large datasets into smaller batches using date ranges or record limits.
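Breaking a dataset into batches works like this minimal Python sketch (the batch size of 100 is an arbitrary illustrative choice):

```python
# Split a large record set into smaller batches to stay within
# timeout limits; batch size of 100 is an illustrative choice.
def batches(records, size=100):
    for start in range(0, len(records), size):
        yield records[start:start + size]

records = list(range(250))
sizes = [len(b) for b in batches(records)]
print(sizes)  # [100, 100, 50]
```

In Make, the equivalent lever is the module's record limit or a date-range filter, with each scenario run handling one batch.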
Duplicate records in aggregated output
Filter out records whose unique identifier (such as an email or ID field) has already been seen, for example by staging records in a data store and checking for existing keys before aggregation, so each unique record is aggregated only once.
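The keep-first deduplication logic can be sketched in Python (the "email" key is an illustrative assumption):

```python
# Remove duplicate records by a unique identifier, keeping the
# first occurrence; the "email" key is an illustrative assumption.
records = [
    {"email": "ada@example.com", "source": "crm"},
    {"email": "bob@example.com", "source": "crm"},
    {"email": "ada@example.com", "source": "billing"},  # duplicate
]

seen = set()
deduped = []
for rec in records:
    if rec["email"] not in seen:
        seen.add(rec["email"])
        deduped.append(rec)

print(len(deduped))  # 2
```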
Missing data from one of the sources
Add error handling routes and use Ignore directives for modules that might fail. Implement fallback logic with if() functions to handle missing data gracefully.

