Caladan is very impressed with the team’s work! At last, all of their raw data is in one centralized location and normalized data is available in the ODS. Excitement is growing; they’re so close to making sense of it all!
The team will now do the hard work of making a recommendation for policy implementation. To do so, they will need to design a Data Warehouse that serves the data to ML and reporting workloads. They will also need to measure the effectiveness of each policy on the sample countries by calculating a daily growth percentage change and aggregating that growth percentage on a weekly basis.
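As a rough sketch of that calculation (not a prescribed implementation), the snippet below computes a day-over-day growth percentage per country with pandas and rolls it up to a weekly average. The column names `country_code`, `record_date`, and `metric_value` are assumptions for illustration; the real ODS schema may differ.

```python
import pandas as pd

def add_growth_metrics(ods_df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Compute daily growth % per country, then aggregate it by ISO week.

    Assumes (hypothetical) columns: country_code, record_date (datetime), metric_value.
    """
    df = ods_df.sort_values(["country_code", "record_date"]).copy()

    # Day-over-day growth percentage, computed independently per country.
    df["growth_pct"] = df.groupby("country_code")["metric_value"].pct_change() * 100

    # Weekly roll-up: average daily growth % per country and ISO week.
    iso = df["record_date"].dt.isocalendar()
    df["iso_year"], df["iso_week"] = iso["year"], iso["week"]
    weekly = (
        df.groupby(["country_code", "iso_year", "iso_week"], as_index=False)["growth_pct"]
          .mean()
          .rename(columns={"growth_pct": "avg_weekly_growth_pct"})
    )
    return df, weekly
```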
Looking toward the future, if new data becomes available, the marketing team would like to be able to access it within an hour of its creation.
Caladan also wants the team to explore how to efficiently determine whether changes to their solution will work, preferably before deploying them to production. Currently, the process for ensuring the data import solution is correct is to manually analyze the results of a test execution. Considering the time and effort this process requires, and the ever-present chance of human error, they would like the team to automate it.
The team can create a fairly simple Star/Snowflake schema or make it more elaborate as they see fit. The growth calculation should be performed as the data is loaded into the Data Warehouse.
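For reference, one possible shape of a simple star schema is sketched below using SQLAlchemy Core. Every table and column name here is an illustrative assumption, and the team may structure theirs differently.

```python
from sqlalchemy import (
    Column, Date, Float, ForeignKey, Integer, MetaData, String, Table,
)

metadata = MetaData()

# Type 1 dimensions: attributes are overwritten in place when they change.
dim_country = Table(
    "dim_country", metadata,
    Column("country_key", Integer, primary_key=True),
    Column("country_code", String(3), nullable=False, unique=True),
    Column("country_name", String(100)),
)

dim_policy = Table(
    "dim_policy", metadata,
    Column("policy_key", Integer, primary_key=True),
    Column("policy_name", String(100), nullable=False),
    Column("policy_category", String(50)),
)

dim_date = Table(
    "dim_date", metadata,
    Column("date_key", Integer, primary_key=True),
    Column("full_date", Date, nullable=False),
    Column("iso_year", Integer),
    Column("iso_week", Integer),
)

# Fact table holding the pre-computed daily growth percentage.
fact_policy_effect = Table(
    "fact_policy_effect", metadata,
    Column("country_key", Integer, ForeignKey("dim_country.country_key"), nullable=False),
    Column("policy_key", Integer, ForeignKey("dim_policy.policy_key"), nullable=False),
    Column("date_key", Integer, ForeignKey("dim_date.date_key"), nullable=False),
    Column("metric_value", Float),
    Column("growth_pct", Float),
)
```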
Note that the dimensions should all be considered Type 1 dimensions.
Reference: Type 2 dimension
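In practice, a Type 1 dimension simply overwrites changed attributes and keeps no history, whereas a Type 2 dimension versions rows to preserve history. A minimal pandas sketch of the Type 1 behaviour, assuming a hypothetical `dim_country` frame keyed by `country_code`:

```python
import pandas as pd

def apply_type1_update(dim_country: pd.DataFrame, incoming: pd.DataFrame) -> pd.DataFrame:
    """Overwrite existing rows and append new ones (Type 1: no history kept).

    Both frames are assumed to share the natural key 'country_code'.
    """
    merged = pd.concat([dim_country, incoming], ignore_index=True)
    # keep='last' means the newest version of each country wins outright.
    return merged.drop_duplicates(subset="country_code", keep="last").reset_index(drop=True)
```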
Unit tests avoid using external resources such as remote services or HTTP requests. External resources can be mocked or stubbed as appropriate.
Additional tests (e.g., integration, end-to-end, synthetic monitoring) would make use of these external resources to ensure that the system components are properly connected. However, this challenge deals only with unit tests, which assert the more granular behaviors or logic of a single functional component.
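As a hedged illustration of such a unit test, the sketch below exercises a hypothetical `load_daily_growth` step entirely in memory: the growth logic is asserted directly, and the warehouse client (an external resource) is replaced with a `MagicMock`. The function and its `load_fact_table` call are assumptions made for the example, not part of the team's actual code.

```python
import unittest
from unittest.mock import MagicMock

import pandas as pd


def load_daily_growth(ods_df: pd.DataFrame, warehouse_client) -> int:
    """Hypothetical unit under test: compute daily growth % and hand it to a loader."""
    df = ods_df.sort_values(["country_code", "record_date"]).copy()
    df["growth_pct"] = df.groupby("country_code")["metric_value"].pct_change() * 100
    warehouse_client.load_fact_table(df)  # external call, mocked in the tests below
    return len(df)


class LoadDailyGrowthTests(unittest.TestCase):
    def setUp(self):
        # Small in-memory fixture; no database or HTTP call involved.
        self.ods_df = pd.DataFrame({
            "country_code": ["AAA", "AAA", "BBB", "BBB"],
            "record_date": pd.to_datetime(
                ["2020-03-02", "2020-03-03", "2020-03-02", "2020-03-03"]
            ),
            "metric_value": [100.0, 110.0, 200.0, 150.0],
        })

    def test_growth_pct_is_computed_per_country(self):
        client = MagicMock()  # stands in for the warehouse connection
        load_daily_growth(self.ods_df, client)

        loaded = client.load_fact_table.call_args.args[0]
        aaa_growth = loaded[loaded.country_code == "AAA"]["growth_pct"].dropna()
        self.assertAlmostEqual(aaa_growth.iloc[0], 10.0)

    def test_warehouse_loader_called_exactly_once(self):
        client = MagicMock()
        load_daily_growth(self.ods_df, client)
        client.load_fact_table.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```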