Are you a PhD- or Master's-level researcher who has already collected data (quantitative, qualitative or both) and is about to analyse it with a view to addressing a research gap by answering research questions or confirming/refuting research hypotheses? Are you curious to detect hidden patterns, establish meaningful connections or figure out relationships between different variables with the help of the collected data? If you answered ‘yes’ to the above questions, do glance through this article, as it attempts to offer deep insights into methods that can be used to analyse both quantitative and qualitative data.


Let us start by figuring out, in the space below, how best to analyse quantitative data.

Before we move on to the various methods used to analyse quantitative data, it is imperative that we take a look at how to prepare the data for analysis.


Data preparation is the first and foremost task any researcher should undertake in order to derive meaningful insights from the collected data. Below is a four-step process that you can use to convert raw data into something that holds value and meaning:

Data Validation:

In this stage, the researcher is supposed to validate the data using the following four-step model, in order to make sure that it was collected in line with pre-set standards and without any bias.

  • Fraud: The first step of data validation requires the researcher to determine whether each of the target respondents was actually interviewed.
  • Screening: After establishing the authenticity of the raw data in the first step, the researcher has to screen the respondents to ensure that they were targeted in line with the research criteria.
  • Procedure: Moving on, the researcher is to determine whether the data was collected in accordance with the data collection procedures laid down in the methodology chapter.
  • Completeness: Last but not least, the completeness of the collected data has to be established, meaning the researcher needs to check whether each target respondent was asked all the questions.

In order to validate the data, the interviewer or researcher has to randomly select a number of surveys and validate the collected responses by calling or emailing the target respondents and asking them a particular set of questions (from the survey) again to find out whether the recorded responses are authentic. Let us say that there were 100 respondents from two different cities. To validate the data, the researcher would call up or email 10 respondents from each of the two cities and ask them to re-answer a couple of questions from the survey. It should be noted that data validation is a time-consuming process and can become very lengthy when dealing with a large number of survey responses.
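The random-sampling step described above can be sketched in a few lines of Python. This is only an illustrative sketch: the list of responses, the city names and the function name are all hypothetical, and a real study would draw its sample from however the survey tool stores its data.

```python
import random

# Hypothetical survey data: 100 respondents split across two cities.
responses = [
    {"id": i, "city": "CityA" if i < 50 else "CityB", "answer": "Yes"}
    for i in range(100)
]

def pick_validation_sample(responses, per_city=10, seed=42):
    """Randomly pick a fixed number of respondents per city to re-contact."""
    rng = random.Random(seed)  # fixed seed so the audit sample is reproducible
    by_city = {}
    for r in responses:
        by_city.setdefault(r["city"], []).append(r)
    sample = []
    for city, group in by_city.items():
        sample.extend(rng.sample(group, min(per_city, len(group))))
    return sample

sample = pick_validation_sample(responses)
print(len(sample))  # 20 respondents to call or email for re-validation
```

Fixing the random seed is a deliberate choice here: it lets a supervisor or second researcher reproduce exactly the same audit sample.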

Data Editing:

Close on the heels of data validation comes data editing, which refers to the process of dealing with empty fields or incorrect information in the raw data set with a view to maintaining the accuracy of the results. As part of the data editing process, one has to run basic checks on the raw data (matching the total number of questions to the total number of columns, matching the total number of respondents to the total number of rows, and ensuring the consistency and validity of time stamps), deal with outliers (i.e. data points that differ significantly from the other data points in the data set) and edit the raw data to identify and remove any data point that could call the authenticity of the results into question.

For instance, some fields may have been left empty by target respondents. Such empty fields need to be either removed or duly filled in when editing the raw data.
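The two editing tasks just described, removing empty fields and flagging outliers, can be sketched as follows. The data is made up, and the interquartile-range (IQR) rule used below is one common convention for flagging outliers, not the only one; which rule fits your study depends on your methodology.

```python
from statistics import quantiles

ages = [21, 23, 22, None, 24, 25, 230, 22]  # hypothetical raw 'age' column

# 1. Handle empty fields: drop None entries (or impute, per the study design).
cleaned = [a for a in ages if a is not None]

# 2. Flag outliers with the IQR rule: anything beyond 1.5 * IQR from the
#    quartiles is treated as suspicious.
q1, _, q3 = quantiles(cleaned, n=4)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [a for a in cleaned if not (low <= a <= high)]
edited = [a for a in cleaned if low <= a <= high]
print(outliers)  # [230] -- almost certainly a data-entry error for 23
```

Note that a flagged value should be investigated before it is deleted: 230 here looks like a typo, but in another data set an extreme value might be genuine.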

Data Coding:

When it comes to preparing data for the purpose of analysis, data coding plays a crucial role. The task of a researcher during this stage is to group survey responses and assign values to the same.

Let us say, for instance, that 2,000 respondents have been surveyed by a researcher. In order to find the average age of the surveyed respondents, the researcher will first create age groups and then assign a code to the age of each respondent. Any respondent aged between 14 and 16 would, for example, be assigned a code of 1, anyone falling into the 17-19 age category would be assigned a code of 2, and the process goes on until every respondent in the data set has been assigned a code.

This kind of data coding helps the researcher simplify the data analysis later on, as he or she will be dealing with a handful of easy-to-manage brackets rather than a massive range of individual ages.


After the raw data has been validated, edited and coded, it is now ready to be analysed. There are two popular methods of analysing quantitative data, namely, descriptive statistics and inferential statistics.

Descriptive Statistics:

When a researcher digs into the collected data simply to describe the population through numerical calculations, graphs or tables, he or she is conducting data analysis using descriptive statistics. It is often the first phase of data analysis and can be performed using the following methods:

  • Mean – Also known as the average, it refers to the central value of a set of numbers, calculated by dividing the sum of the values by the number of values.
  • Median – It is the middle value of a data set, separating the higher half from the lower half.
  • Mode – It refers to the most frequently occurring value in a data set.
  • Percentage – It is used to express a subset of respondents as a proportion of the whole data set.
  • Frequency – It denotes the number of times a specific event occurred in an experiment or study.
  • Range – It represents the difference between the highest and the lowest value in a data set.
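All of the measures listed above are available in Python's standard `statistics` module, so a quick sketch needs no external libraries. The scores below are a made-up set of survey ratings used only to show each calculation.

```python
from statistics import mean, median, mode

scores = [4, 8, 6, 5, 3, 8, 9, 8, 7]  # hypothetical survey ratings (1-10)

print(mean(scores))               # arithmetic average, roughly 6.44
print(median(scores))             # 7, the middle value once sorted
print(mode(scores))               # 8, the most frequent value
print(max(scores) - min(scores))  # range: 9 - 3 = 6
# Percentage of respondents rating 8 or higher:
print(100 * sum(s >= 8 for s in scores) / len(scores))
```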

Before you make use of descriptive statistics, you need to determine which of the above-mentioned methods can best answer your research question and what it is that you would like to show your target audience. For example, the gender distribution of respondents in a given data set is best represented using percentages.

Also known as univariate analysis for its focus on a single variable, descriptive statistics is a method of data analysis that is most useful when research is based on a small sample that does not require generalisation to a larger population. For instance, a comparison of the percentages of children that have been vaccinated in two separate cities could very well be handled using descriptive statistics.

Inferential Statistics:

Also referred to as bivariate or multivariate analysis for its ability to deal with two or more variables, inferential statistics is a method of data analysis that is used to generalise results and make predictions about a larger population.

This is often the second phase of quantitative data analysis, used by researchers to depict the relationship between two or more variables rather than to describe a single variable. A researcher is engaging in inferential statistics when he or she uses the following:

  • Correlation – This statistical technique demonstrates the degree (or absence) of association between two variables. For instance, taller people tend to weigh more; hence, height and weight are correlated. That said, there might not be a causal relationship between the two, meaning gaining weight does not cause one to become taller.
  • Regression – This form of statistical analysis has to do with predicting the value of a dependent variable on the basis of the known value(s) of one or more independent variables. For example, we can estimate a person's weight on the basis of his or her height.
  • Analysis of Variance (ANOVA) – This statistical technique is used to analyse the differences among group means in a given sample. To put it simply, ANOVA helps a researcher test the degree to which two or more groups differ in an experiment. For instance, if you are trying to gauge the relationship between the total number of children in a family and socio-economic status, you might begin by recruiting a sample of families from each socio-economic status and then asking them about their ideal number of children. ANOVA, in this case, would be used to check whether the difference between the groups' responses is statistically significant or due to random chance.
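The article's height-and-weight example for correlation and regression can be worked through with plain Python. The paired observations below are invented, and the formulas implemented are the standard Pearson correlation coefficient and least-squares line; in practice researchers would usually reach for a library such as SciPy (e.g. `scipy.stats.pearsonr`, `scipy.stats.linregress` and `scipy.stats.f_oneway` for ANOVA) instead of hand-rolling them.

```python
from statistics import mean

# Hypothetical paired observations: height (cm) and weight (kg).
heights = [150, 160, 165, 170, 180, 185]
weights = [50, 56, 61, 65, 74, 79]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def linear_regression(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

r = pearson_r(heights, weights)      # close to +1: strong positive association
slope, intercept = linear_regression(heights, weights)
predicted = slope * 175 + intercept  # predicted weight for a 175 cm person
```

Note how the two techniques relate: correlation only reports the strength of the association, while regression turns that association into a prediction rule.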

While the statistical techniques given above are what researchers most often resort to for conducting inferential analysis, other methods are available too, such as t-tests and chi-square tests. (Standard deviation and variance, by contrast, are descriptive measures of how spread out a data set is.)

This brings us to the end of the discussion on quantitative data analysis. Let us now move on to discuss, in the space below, how best to analyse qualitative data:


Mostly used for exploratory research, qualitative data analysis refers to the process of analysing data collected in the form of words, observations, images and symbols in order to answer research questions. Unlike quantitative data analysis, where the researcher is required to validate, edit and code the data before carrying out the actual analysis, qualitative data analysis begins as soon as the data is collected.


In fact, data preparation and data analysis are often carried out by the researcher simultaneously. The following steps of qualitative data analysis throw more light on this:

  • Familiarity with data – The very first step for any researcher conducting qualitative data analysis is to get familiar with the collected word-rich data by transcribing it and reading through it over and over with the intention of seeking out basic patterns or observations.
  • Review of research questions – Following the first step, the researcher reviews the research questions to identify those that the collected data can best answer.
  • Development of framework – After reviewing the research questions, the researcher develops a framework by coding or indexing the collected data. All the broad ideas, phrases, concepts or behaviours are first identified and then assigned a value. This step helps with structuring and labelling the data for the purpose of drawing meaningful insights.
  • Identification of patterns and connections – The last step has to do with discovering themes that best address the research questions, identifying patterns and finding areas for future research.
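The framework-development step above, assigning codes to recurring ideas, can be sketched in miniature. The codes, keywords and sample response below are entirely made up; a real coding framework would emerge from repeated reading of the transcripts, and dedicated software (such as NVivo or ATLAS.ti) is commonly used for this at scale.

```python
# Hypothetical coding framework: each code maps to keywords that signal it.
framework = {
    "cost": ["expensive", "price", "afford"],
    "trust": ["reliable", "honest", "doubt"],
}

def code_response(response, framework):
    """Tag a free-text response with every code whose keywords appear in it."""
    text = response.lower()
    return [code for code, keywords in framework.items()
            if any(k in text for k in keywords)]

print(code_response("It felt too expensive, and I doubt their claims.",
                    framework))
# ['cost', 'trust']
```

Simple keyword matching like this is only a starting point; human judgement remains central to qualitative coding.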

Let us now expand on what it takes to identify patterns when you have qualitative data at hand. Also known as themes, patterns in qualitative data can be discovered in several ways.

The first one worth mentioning here is the word-based method, wherein the researcher resorts to techniques such as word repetitions or key words in context. In the case of word repetition, the researcher looks at the various words that are repeatedly used in the text to be analysed. For example, a researcher studying what people think about Brexit may come across words such as change, negotiations, politics and deal on numerous occasions. Such words could then be used by the researcher for the purpose of qualitative data analysis.
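Counting word repetitions is straightforward with Python's `collections.Counter`. The excerpt below is an invented snippet echoing the article's Brexit example, and the small stop-word list is likewise illustrative; a real analysis would use a fuller stop-word list and, ideally, lemmatisation.

```python
import re
from collections import Counter

# Hypothetical interview excerpt on Brexit.
text = """The negotiations over the deal dragged on. Many felt the deal
reflected politics more than change, though change was what the
negotiations had promised."""

words = re.findall(r"[a-z']+", text.lower())
stopwords = {"the", "over", "on", "many", "felt", "more", "than",
             "though", "was", "what", "had"}  # tiny illustrative list
counts = Counter(w for w in words if w not in stopwords)
print(counts.most_common(3))  # repeated words hint at candidate themes
```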

Where a researcher chooses the second technique, i.e. key words in context, he or she basically tries to dig into a particular concept by examining the context around it. For example, a researcher studying anxiety among target respondents is likely to study the context in which anxiety was reported by each respondent. It could be reported when working under a lot of pressure at work, suffering from a poor financial condition, fighting with a family member, and so on.
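A simple key-words-in-context (KWIC) concordance, which pulls out each occurrence of a keyword along with a few words either side of it, can be sketched as follows. The transcript string and the function name are hypothetical, made up to mirror the anxiety example above.

```python
import re

def kwic(text, keyword, window=3):
    """Return each occurrence of `keyword` with `window` words of context."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

transcript = ("I felt real anxiety under pressure at work, and the anxiety "
              "got worse with money trouble at home.")
for line in kwic(transcript, "anxiety"):
    print(line)
```

Reading the keyword alongside its neighbours is what lets the researcher see that anxiety surfaces in both work-pressure and financial contexts here.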

The second method for identifying themes involves a set of scrutiny-based techniques. ‘Compare and contrast’ is one such technique, wherein the researcher reviews the collected data and arrives at a theme on the basis of the various pieces of text that are either in favour of it or against it. Let us suppose that the researcher has discovered a theme, say ‘the significance of social media in today’s world’, on the basis of the arguments for and against it found in the text. While one school of thought may support the presence of social media, another may speak against it. The researcher could then analyse the data further to unearth the real causes behind the thinking of each group involved.

Finding metaphors and analogies, and discovering connectors (words depicting relationships between different ideas) in the text, are some of the other ways a researcher could scrutinise qualitative data for patterns.


Let us now review, in the space provided below, some of the popular methods that researchers often resort to when conducting qualitative data analysis:

  • Content analysis – Often used to analyse documented information such as text, media or even physical items, this method of qualitative data analysis is quite common among researchers. Interview transcripts, for instance, are often analysed using this particular method.
  • Narrative analysis – When a researcher analyses data collected from a number of sources (including interviews, observations or surveys) to answer the research questions by drawing insights from the stories and experiences shared by target respondents, he or she is engaged in narrative analysis.
  • Discourse analysis – Similar to narrative analysis, this method attempts to analyse data obtained from various interactions with target respondents in order to answer the research questions. In addition, discourse analysis takes into consideration the social context in which conversations between the researcher and respondents take place, meaning this method also takes into account the respondents’ day-to-day environment when analysing the collected data.
  • Grounded theory – Using this method, a researcher analyses qualitative data in the form of various case studies in an attempt to find out why a certain phenomenon took place. The researcher has to go through a process of altering and creating new explanations as he or she develops knowledge about the phenomenon by regularly reading new cases. The process comes to an end when the researcher arrives at an explanation that fits all the case studies.

While other qualitative data analysis methods, such as conversation analysis, exist, the above-given methods are the ones researchers most often make use of to analyse qualitative data.

When done appropriately, data analysis provides solid grounds for the ‘Results’ and ‘Discussion’ chapters of your PhD thesis or dissertation, which, in turn, gives rise to a credible piece of research. Hence, you must take the utmost care when conducting data analysis so as to derive meaningful and insightful findings from the data.

Hopefully, this article has served its purpose of clarifying your doubts related to quantitative and qualitative data analysis. Please feel free to share it with your connections who might benefit from it.

Last but not least, if you are still struggling with any aspect of data analysis while working on your PhD thesis or dissertation, please feel free to write to us or fill out our Contact Us form, and our team of native English-speaking UK-university professors and senior lecturers would be more than happy to assist you further.
