In this article, we'll cover how to perform basic checks for determining the source of an error, common issue types, and data latency. We'll also describe the kind of information the Criteo team needs to investigate an error.
This article doesn't only refer to errors that the API can produce, but also provides information about its expected behavior.
These simple checks will help you to separate client-side issues from server bugs.
If the request is not formed according to the specification or an invalid request header is provided, the server will respond with an error in the 400 code series.
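The triage rule above can be sketched as a small helper. This is an illustrative function, not part of any Criteo SDK: it maps a status code to a first place to look.

```python
# Hypothetical triage helper: map an HTTP status code to where to look first.
def classify_status(code: int) -> str:
    """Return a rough triage bucket for an HTTP response status code."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client-side: check the request against the API specification"
    if 500 <= code < 600:
        return "server-side: check the status page and retry with backoff"
    return "other: inspect the response manually"

print(classify_status(404))
```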
Part 1 of this tutorial, How to handle API errors, covers the 4xx error series in more detail.
Requests failing at the origin or blocked by firewall configuration are surprisingly common. Check your connection by attempting to open a page in your web browser or by running a curl command in the terminal. Make sure your personal or organizational firewall is set up to allow connections to Criteo services.
Here are the Criteo sub-networks to be whitelisted by region:
For best results, we recommend whitelisting the ranges for all data centers, even if your server is located in only one of the regions.
Incorrect request URLs
If you are using variables or path parameters with your request, make sure that the final address is structured correctly. Uninitialized variables can result in the response code 404.
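One way to avoid this class of 404 is to fail fast before the request is sent. The sketch below uses an assumed base URL and an illustrative endpoint path; it rejects path parameters that are unset rather than letting a literal `None` end up in the URL.

```python
# Sketch: validate path parameters before building the request URL, so an
# uninitialized variable raises locally instead of producing a 404 remotely.
BASE_URL = "https://api.criteo.com"  # assumed base URL, for illustration

def build_url(template: str, **params: str) -> str:
    """Fill a URL template, refusing unset or stringified-None values."""
    missing = [k for k, v in params.items() if v in (None, "", "None")]
    if missing:
        raise ValueError(f"Uninitialized path parameters: {missing}")
    return BASE_URL + template.format(**params)

# A well-formed call (the path itself is hypothetical):
print(build_url("/advertisers/{advertiser_id}/campaigns", advertiser_id="1234"))
```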
Transient issues can usually be resolved by a properly implemented retry mechanism, such as exponential backoff.
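A minimal exponential-backoff sketch, independent of any particular HTTP client: the delay doubles after each failed attempt, with a little jitter so concurrent clients do not retry in lockstep. `TransientError` stands in for an HTTP 5xx response here.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient failure, e.g. an HTTP 5xx response."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Call fn(), retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # Delay doubles per attempt; jitter spreads out concurrent retries.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Demo: a function that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("HTTP 503")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # "ok"
```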
If you are getting an error in the 500 series consistently this could indicate an issue on the Criteo server-side. You can always check the latest service availability status by visiting https://status.criteo.com/.
If the status page does not explain your issue, we encourage you to report it on our discussions page.
In your post, please share the following information:
- RequestId or TraceId, if available
- Endpoint and its version
- App name or app ID
- Payload without any identifiers or sensitive information
- Response message without any identifiers or sensitive information
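Before posting a payload publicly, it helps to redact identifiers programmatically rather than by hand. The key names below (`advertiserId`, `accessToken`, `email`) are examples only; adjust the set to whatever identifiers your payloads actually contain.

```python
# Sketch: recursively replace sensitive values in a payload before sharing it.
SENSITIVE_KEYS = {"advertiserId", "accessToken", "email"}  # illustrative names

def redact(obj):
    """Return a copy of obj with values under sensitive keys masked."""
    if isinstance(obj, dict):
        return {k: ("<REDACTED>" if k in SENSITIVE_KEYS else redact(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    return obj

payload = {
    "advertiserId": "1234",
    "metrics": ["Clicks"],
    "filters": [{"email": "user@example.com"}],
}
print(redact(payload))
```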
If your query contains private or sensitive information, contact us via email instead, and we will strive to get back to you in a timely fashion. In this case, we will also need the advertiser ID, the full payload, and the full response message.
If you would like to share your application output with us, please consider turning on verbose output. For example, here is how to turn on debug mode for the Python 3 requests library:
```python
import requests
import logging

# Enabling debugging at http.client level (requests -> urllib3 -> http.client):
# you will see the REQUEST, including HEADERS and DATA, and the RESPONSE with
# HEADERS but without DATA. The only thing missing will be the response.body,
# which is not logged.
try:  # for Python 3
    from http.client import HTTPConnection
except ImportError:
    from httplib import HTTPConnection
HTTPConnection.debuglevel = 1

# You need to initialize logging, otherwise you will not see anything from requests
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

requests.get('https://httpbin.org/headers')
```
Java and other languages have similar mechanisms of invoking debug mode in basic HTTP clients.
So far in this series, starting with How to handle API errors - Part 1, we have focused on common issues that can arise with the API.
Let's talk about data and metrics. Data latency can be mistaken for an API issue: some metrics may have several hours of latency, while others may lag considerably longer. The following table provides the maximum and median data delays for different metric groups.
Revenue and basic metrics, including:
Retention for sales data is 4 years, unless queried with the hourly dimension; hourly data is retained for 3 months.
Data freshness guidelines on this page are not official SLAs and therefore maximum delay values are not guaranteed. They reflect the 95th and 50th percentile latency based on the last two weeks of import data. It is your responsibility to assess the precision-error trade-off of your application based on the snapshot of historical data provided above. Official SLAs will be shared at a later date.
Whether you are building an aggregator service with Criteo as one of the sources or simply trying to compare the numbers in the Management Center Analytics UI and the API, you may have encountered a situation where the reported numbers in the UI and API are not the same.
As a first step, we need to ensure an apples-to-apples comparison by closely aligning the two views.
Aligning the timezones and the date ranges in the API request body and the platform UI (whether Criteo Management Center or a third-party platform) will resolve the issue most of the time. Check this document for the list of supported timezones.
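The alignment step can be made explicit in code. This is a sketch with a hypothetical request-body shape (the field names `startDate`, `endDate`, and `timezone` are illustrative, not confirmed API fields): pin the report window to the same timezone the UI displays, rather than relying on the local machine's default.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Must match the timezone selected in the UI; see the supported-timezones doc.
tz = "America/New_York"
start = datetime(2024, 3, 1, tzinfo=ZoneInfo(tz))
end = datetime(2024, 3, 31, tzinfo=ZoneInfo(tz))

# Hypothetical request body: explicit dates and timezone travel together.
report_request = {
    "startDate": start.date().isoformat(),
    "endDate": end.date().isoformat(),
    "timezone": tz,
}
print(report_request)
```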
Please view this document for the list of supported metrics in the Criteo API.
The naming pattern provided on the metrics page will help you understand how you're comparing the data in different systems. Ask yourself:
- [Devices]: Am I comparing cross-device sales or same-device sales?
- [AttributionModel]: Am I comparing custom attribution sales or default attribution sales?
- [LookbackWindow]: Am I selecting the same attribution lookback period?
In general, the Criteo API allows you to retrieve all possible metrics, while the Management Center's Analytics module focuses on the metrics deemed relevant to a specific account and its settings.
Due to the specifics of the billing mechanism, the cost data for the day immediately preceding the current day may be adjusted within a 5% threshold soon after midnight. Please take this into account when determining the refresh window of your connector.
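One common way to account for this is to have the connector re-pull a short trailing window on every run instead of only the latest day. A minimal sketch, assuming a two-day lookback is acceptable for your use case:

```python
from datetime import date, timedelta

# Sketch: since yesterday's cost can still be adjusted shortly after midnight,
# re-import the most recent full days on each connector run.
def refresh_window(today: date, lookback_days: int = 2):
    """Return (start, end) of the date range to re-import, ending yesterday."""
    end = today - timedelta(days=1)               # most recent full day
    start = end - timedelta(days=lookback_days - 1)
    return start, end

print(refresh_window(date(2024, 5, 10)))  # re-pulls May 8 and May 9
```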
In some cases, the metrics could be updated several days after a click occurs. For example, this can happen after removing IVT (invalid traffic).
If you still have questions or suspect you've found a bug, please report it here.