News API v3 offers extensive historical news data dating back to 2019, but retrieving large volumes efficiently requires an understanding of how that data is stored and queried. This guide explains how our data is structured and provides best practices for working with historical data efficiently.

Understanding data indexing structure

Our system stores data in monthly indexes. This architecture optimizes the search and retrieval of large volumes of news content.

Key implications of this structure:

  • Data is organized into separate monthly indexes.
  • Queries spanning multiple months need to access multiple indexes.
  • Performance is optimal when querying within a single monthly index.
  • Queries across very long time periods (e.g., 5+ years) can cause performance issues.

Technical limitations

While you technically can query data across our entire historical range (2019 to present), doing so in a single request is not recommended for several reasons:

Performance degradation

Queries spanning multiple years require searching across numerous indexes, significantly increasing response time.

Request timeouts

Complex queries combined with long time ranges may time out before completion (default: 30 seconds).

Multi-index complexity

Long time ranges require coordinating searches across multiple monthly indexes.

Limited result access

Queries over long time ranges may miss much of the relevant historical data, because the API limits responses to 10,000 articles per request.

❌ Incorrect approach

q=financial crisis&from_=2019-01-01&to_=2025-01-01

This query attempts to search approximately 72 monthly indexes at once, which may lead to poor performance or timeout errors (408 Request Timeout).

To retrieve historical data efficiently while maintaining performance, follow this systematic approach:

Step 1: Estimate data volume using aggregation

Before retrieving actual articles, use the /aggregation_count endpoint to understand the volume of data matching your query across time periods.

Example request:

{
    "q": "your search query",
    "aggregation_by": "month",
    "from_": "2020-01-01",
    "to_": "2020-12-31",
    "lang": "en"
}

Example response:

{
    "aggregations": [
        {
            "aggregation_count": [
                {
                    "time_frame": "2020-01-01 00:00:00",
                    "article_count": 2450
                },
                {
                    "time_frame": "2020-02-01 00:00:00",
                    "article_count": 3120
                }
                // Additional months...
            ]
        }
    ]
}

This step helps you:

  • Identify which time periods have the most relevant content.
  • Determine if your query is too broad or too narrow.
  • Plan your time-chunking retrieval strategy.
  • Calculate if time chunks need further subdivision (if >10,000 articles per chunk), as shown in the sketch below.
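
For example, a small helper like the following sketch (Python, using the requests library) can scan the aggregation response and flag months that exceed the 10,000-article cap and therefore need smaller windows. The base URL and authentication header are placeholders, and the request is assumed to be a JSON body sent via POST, mirroring the example bodies above; adapt these details to your account's configuration.

import requests

# Placeholder values: substitute your actual API host, auth header, and key.
API_BASE = "https://api.example-newsapi.com"
HEADERS = {"x-api-token": "YOUR_API_KEY"}

MAX_ARTICLES_PER_QUERY = 10_000  # per-request result cap described above


def months_needing_subdivision(query, from_, to_, lang="en"):
    """Call /aggregation_count and return months whose volume exceeds the cap."""
    payload = {
        "q": query,
        "aggregation_by": "month",
        "from_": from_,
        "to_": to_,
        "lang": lang,
    }
    response = requests.post(f"{API_BASE}/aggregation_count", json=payload, headers=HEADERS)
    response.raise_for_status()
    counts = response.json()["aggregations"][0]["aggregation_count"]
    # Keep only the months that would overflow a single chunked query.
    return [
        (bucket["time_frame"], bucket["article_count"])
        for bucket in counts
        if bucket["article_count"] > MAX_ARTICLES_PER_QUERY
    ]


for time_frame, count in months_needing_subdivision("your search query", "2020-01-01", "2020-12-31"):
    print(f"{time_frame}: {count} articles - split this month into smaller windows")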

Step 2: Process data in time chunks

Once you understand the data volume, retrieve articles in appropriately sized time chunks to avoid timeouts. Longer ranges can work, but complex queries spanning 30+ days risk 408 timeout errors.

{
    "q": "your search query",
    "from_": "2020-01-01",
    "to_": "2020-01-31",
    "page_size": 100,
    "page": 1
    // Additional parameters as needed
}

For each time chunk:

  1. Implement pagination to retrieve all articles for the period.
  2. Process and store the data for that period.
  3. Move to the next time chunk only after completely retrieving the current period’s data.

For detailed guidance on implementing pagination, refer to our guide on How to paginate large datasets.
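
As a rough illustration of this pattern, the sketch below splits a longer date range into monthly windows and fully paginates each window before moving to the next. The base URL, authentication header, the /search endpoint name, and the articles and total_pages response fields are assumptions made for this example; substitute the actual values documented for your plan.

import datetime
import requests

# Placeholder values: substitute your actual API host, auth header, and key.
API_BASE = "https://api.example-newsapi.com"
HEADERS = {"x-api-token": "YOUR_API_KEY"}


def month_windows(start, end):
    """Yield (from_, to_) date pairs covering [start, end] one calendar month at a time."""
    current = start
    while current <= end:
        # First day of the following month, then step back one day for the chunk end.
        next_month = (current.replace(day=1) + datetime.timedelta(days=32)).replace(day=1)
        chunk_end = min(next_month - datetime.timedelta(days=1), end)
        yield current.isoformat(), chunk_end.isoformat()
        current = next_month


def fetch_chunk(query, from_, to_, page_size=100):
    """Retrieve every page of results for one time chunk before moving on."""
    articles, page = [], 1
    while True:
        payload = {"q": query, "from_": from_, "to_": to_,
                   "page_size": page_size, "page": page}
        response = requests.post(f"{API_BASE}/search", json=payload, headers=HEADERS)
        response.raise_for_status()
        data = response.json()
        # "articles" and "total_pages" are assumed response fields; adjust them
        # to match the search response documented for your plan.
        articles.extend(data.get("articles", []))
        if page >= data.get("total_pages", 1):
            return articles
        page += 1


# Process one monthly chunk at a time, finishing each before moving to the next.
for from_, to_ in month_windows(datetime.date(2020, 1, 1), datetime.date(2020, 6, 30)):
    chunk = fetch_chunk("your search query", from_, to_)
    print(f"{from_} to {to_}: retrieved {len(chunk)} articles")
    # Store or process `chunk` here (e.g., write to a database or file).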

Example: Retrieving historical data

Here’s a practical example showing how to retrieve a week of data using the recommended approach. The same logic scales to retrieve months or years by adjusting the date ranges and aggregation period (day/month):
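
A minimal sketch of that workflow is shown below: it first aggregates the week by day, then retrieves each day's articles with pagination before moving to the next day. As in the previous sketches, the base URL, authentication header, the /search endpoint name, and the articles and total_pages response fields are placeholder assumptions.

import datetime
import requests

# Placeholder values: substitute your actual API host, auth header, and key.
API_BASE = "https://api.example-newsapi.com"
HEADERS = {"x-api-token": "YOUR_API_KEY"}
QUERY = "your search query"
WEEK_START = datetime.date(2020, 3, 2)
WEEK_END = datetime.date(2020, 3, 8)


def post(endpoint, payload):
    """Small helper for JSON-over-POST calls, mirroring the request bodies above."""
    response = requests.post(f"{API_BASE}{endpoint}", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()


# Step 1: estimate per-day volume for the week with /aggregation_count.
estimate = post("/aggregation_count", {
    "q": QUERY,
    "aggregation_by": "day",
    "from_": WEEK_START.isoformat(),
    "to_": WEEK_END.isoformat(),
    "lang": "en",
})
for bucket in estimate["aggregations"][0]["aggregation_count"]:
    print(bucket["time_frame"], bucket["article_count"])

# Step 2: retrieve the articles one day at a time, fully paginating each day
# before moving to the next. "/search", "articles", and "total_pages" are
# assumed names; substitute the actual search endpoint and response fields.
all_articles = []
day = WEEK_START
while day <= WEEK_END:
    page = 1
    while True:
        data = post("/search", {
            "q": QUERY,
            "from_": day.isoformat(),
            "to_": day.isoformat(),
            "page_size": 100,
            "page": page,
        })
        all_articles.extend(data.get("articles", []))
        if page >= data.get("total_pages", 1):
            break
        page += 1
    day += datetime.timedelta(days=1)

print(f"Retrieved {len(all_articles)} articles for the week")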

Best practices

  • Use specific queries: Narrow your search terms to reduce result volume.
  • Prioritize recent data first: Start with recent periods and work backward if needed.
  • Implement rate limiting: Space out requests to avoid hitting concurrency limits.
  • Handle timeouts gracefully: Implement retry logic with exponential backoff (see the sketch after this list).
  • Monitor performance: Track query response times and adjust your approach as needed.
  • Consider data storage: For large historical analyses, store retrieved data in a database or file system.
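
For the timeout and rate-limiting points above, a retry wrapper along these lines (again with a placeholder base URL and auth header) is one way to add exponential backoff with jitter:

import random
import time

import requests

# Placeholder values: substitute your actual API host, auth header, and key.
API_BASE = "https://api.example-newsapi.com"
HEADERS = {"x-api-token": "YOUR_API_KEY"}


def post_with_retries(endpoint, payload, max_attempts=5):
    """Retry timed-out or rate-limited requests with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            response = requests.post(f"{API_BASE}{endpoint}", json=payload,
                                     headers=HEADERS, timeout=30)
            # Retry on timeout (408) and rate limiting (429); raise on other errors.
            if response.status_code not in (408, 429):
                response.raise_for_status()
                return response.json()
        except requests.exceptions.Timeout:
            pass  # treat a client-side timeout as a retryable failure

        # Retryable failure: wait 1s, 2s, 4s, ... plus jitter before trying again.
        time.sleep(2 ** attempt + random.random())

    raise RuntimeError(f"{endpoint} still failing after {max_attempts} attempts")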

Common pitfalls to avoid

Pitfall | Impact | Solution
Querying multiple years at once | Slow performance, timeouts (408 errors) | Break queries into monthly chunks
Using overly broad search terms | Excessive result volume | Refine query terms to be more specific
Insufficient error handling | Failed data retrieval | Implement robust retry and error handling
Underestimating data volume | Resource constraints | Use aggregation endpoint to estimate volume first
Requesting too many results per page | Slow response times | Use reasonable page sizes (100-1000)
Improper pagination implementation | Incomplete data retrieval | Follow our pagination guide