Study Notes
There are a number of things to think about when processing data.
Understanding which raw data is valid, relevant, and clean — pure data, in effect — is very important, as is distinguishing good data from bad.
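As an illustration (not part of the original notes), here is a minimal Python sketch of this kind of validity filtering, assuming a hypothetical customer table with `age` and `email` columns:

```python
import pandas as pd

# Hypothetical raw customer records; column names are illustrative only.
raw = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "age": [34, -5, 51, None],
    "email": ["a@example.com", "b@example.com", "not-an-email", "d@example.com"],
})

# Simple validity rules: age must be present and plausible,
# and the email must at least contain an "@" sign.
valid = raw[
    raw["age"].between(18, 110)
    & raw["email"].str.contains("@", na=False)
]
print(valid)  # only the rows that pass both checks survive
```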
How do you do that? How do you cluster your different data sets across different variables? At the top level, organizations will usually sort data into customer-level data or product-level data, and grouping by those kinds of variables can be quite helpful. Unfortunately, the two views don’t cross-pollinate very well.
What does the data mean? What, in aggregate, is it trying to say?
At a high level you need to do the aggregation, but at a lower level you need to be able to dig down into deeper pockets of your data to understand what key target segments may be doing and how they are responding in different ways.
So for instance, in the insurance world we would have channel splits. You could see your overall retention rate — the proportion of people retaining an insurance product — but if you dug deeper you could also understand it on a channel basis: people coming in by telephone, via the web, or through an aggregator. You could then drill down and ask: what are the core insights here? Why did these people leave and why did those people stay? So it’s very important to do a high-level view as well as a deep dive.
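A short sketch of this high-level-plus-drill-down pattern, using invented policy data (the channel names and figures are assumptions for illustration):

```python
import pandas as pd

# Hypothetical policy-level data: one row per renewal decision.
policies = pd.DataFrame({
    "channel": ["phone", "web", "aggregator", "web", "phone", "aggregator"],
    "retained": [1, 0, 0, 1, 1, 0],
})

# High-level view: the overall retention rate.
print(f"Overall retention: {policies['retained'].mean():.0%}")

# Deep dive: the same KPI split by acquisition channel.
print(policies.groupby("channel")["retained"].agg(["mean", "count"]))
```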
Now the problem with high level versus deep dive is statistical relevance and accuracy. If you go too deep, you may not have the volumes to make statistically accurate predictions, whereas at the more general level you may. So you have to mix and match your approach, and be conscious that an element of pragmatism comes into the mix when you’re processing your data as well.
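A worked example of why volume matters, using the standard margin-of-error formula for a proportion (the retention figures here are invented):

```python
import math

# Approximate 95% margin of error for an observed retention rate p
# measured on n policies: 1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(p: float, n: int) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The same 80% retention rate is far less certain in a small
# deep-dive segment than at the overall portfolio level.
print(f"n=10,000: +/-{margin_of_error(0.8, 10_000):.1%}")  # about +/-0.8%
print(f"n=50:     +/-{margin_of_error(0.8, 50):.1%}")      # about +/-11.1%
```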
What do you then do with your data? How do you transform your raw data into something meaningful? The way to do this is by asking key questions of the dataset. Once you understand which key questions you want to ask, the work becomes more focused and you can go after what you’re actually seeking. Otherwise it’s like looking for a needle in a haystack: if you’re not asking the right questions, you’re unlikely to get many meaningful answers. Asking the right questions, and setting them up in the preparation phase as we talked about in stage one, will give you far more robust answers.
Once you’ve done the analysis and understood what you’re going after, you need to report back on it. Standardized reporting is something people do on a regular basis; it enables you to churn out monthly MI reports in a consistent way from newly updated data sets. This has a huge advantage: your organization gets comfortable with what it is reading and what the KPIs are, and can then monitor how those KPIs change. It is essential, of course, to define those KPIs from the outset, so you need to think about that early.
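One way to picture this — a sketch only, with invented KPI names — is a report function where the KPI definitions are fixed up front and only the monthly data extract changes:

```python
import pandas as pd

# KPI definitions agreed from the outset; only the data changes each month.
KPIS = {"retained": "mean", "premium": "sum"}  # illustrative KPIs

def monthly_report(policies: pd.DataFrame) -> pd.DataFrame:
    """Produce the same KPI table for whichever month's extract is passed in."""
    return policies.groupby("month").agg(KPIS).rename(
        columns={"retained": "retention_rate", "premium": "total_premium"}
    )

data = pd.DataFrame({
    "month": ["2024-01", "2024-01", "2024-02"],
    "retained": [1, 0, 1],
    "premium": [320.0, 410.0, 275.0],
})
print(monthly_report(data))
```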
And then finally, how you classify your data into different segments and build it up is very important as well. Classifying your data, understanding what it means, and deciding how to report on and analyze it are some of the key facets to think about in these processes.
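For instance — again a hypothetical sketch, with invented spend bands — classification into segments can be as simple as bucketing a variable into named ranges:

```python
import pandas as pd

# Hypothetical annual spend per customer, classified into named segments.
spend = pd.Series([120, 950, 4300, 60, 1800], name="annual_spend")
segments = pd.cut(
    spend,
    bins=[0, 500, 2000, float("inf")],
    labels=["low-value", "mid-value", "high-value"],
)
print(segments.value_counts())
```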
So when processing your data, what are some of the best practices that you can use?
Let’s think about some of the challenges of data processing. These fall into two core buckets.
On the other side, you’re thinking about security challenges.
Ritchie Mehta has had an eight-year corporate career with a number of leading organizations such as HSBC, RBS, and Direct Line Group. He then went on to set up a number of businesses.
Data protection regulations affect almost all aspects of digital marketing. Therefore, DMI has produced a short course on GDPR for all of our students. If you wish to learn more about GDPR, you can do so here:
If you are interested in learning more about Big Data and Analytics, the DMI has produced a short course on the subject for all of our students. You can access this content here:
The following pieces of content from the Digital Marketing Institute's Membership Library have been chosen to offer additional material that you might find interesting or insightful.
You can find more information and content like this on the Digital Marketing Institute's Membership Library
You will not be assessed on the content in these short courses in your final exam.
ABOUT THIS DIGITAL MARKETING MODULE
This module dives deep into data and analytics – two critical facets of digital marketing and digital strategy. It begins with topics on data cleansing and preparation, the different types of data, the differences between data, information, and knowledge, and data management systems. It covers best practices on collecting and processing data, big data, machine learning, open and private data, data uploads, and data storage. The module concludes with topics on data-driven decision making, artificial intelligence, data visualization and reporting, and the key topic of data protection.