Data Collection Methods
Data collection methods are important for any study because they provide the data the study requires. The collected data and information should be valid and reliable for the purposes of the study and its objectives; validity and reliability are the measures used to establish the soundness of a survey. This paper examines the various types of validity and reliability, along with the common data collection methods and instruments.
The performance of any tool used in research is usually described in terms of its psychometric properties, which include reliability and validity. Most of these properties are evaluated statistically. Reliability refers to the degree to which an instrument produces consistent and stable results over time and under similar conditions. According to Miles & Huberman (1994), reliability as applied in research means repeatability or consistency: a measure is considered reliable if it gives the same result repeatedly. There are various types of reliability, such as test-retest reliability and parallel-forms reliability (Social Research Methods, 2010). However, reliability cannot be calculated exactly, which is why it has to be estimated.
Estimating internal consistency reliability entails administering a single measurement instrument to a group of respondents on one occasion. The reliability of the instrument is judged by how well the items that reflect the same construct yield similar results; the consistency of responses across the different items within the measure is central to this judgment. A wide variety of internal consistency measures can be used.
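As a minimal sketch of one such measure, the following Python function computes Cronbach's alpha, a widely used internal consistency coefficient; the item scores are hypothetical and serve only to illustrate the calculation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: internal consistency of a set of items.

    items: 2-D array with one row per respondent and one column
    per item, all items intended to measure the same construct.
    """
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item scale answered by four respondents
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values close to 1 suggest that the items hang together as a measure of a single construct.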
Test-retest reliability is estimated by administering the same test to the same sample on two different occasions. The assumption is that there is no substantial change in the construct being measured between the two occasions. The time interval between administrations is critical: if the interval is short, the correlation between the two occasions will tend to be higher, and if the interval is longer, the correlation will tend to be lower. This is because the two observations are related over time, and the closer they are in time, the more similar the factors contributing to error will be (Waliman, 2005). The test-retest estimator is mostly applied in experimental and quasi-experimental designs that use a no-treatment control group. In these designs, the researcher has a control group that is measured on two occasions (pretest and posttest). The main disadvantage of this approach is that there is no information about reliability until the posttest is collected, and if the reliability estimate turns out to be low, the researcher is left with little recourse.
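A minimal sketch of the estimate itself, assuming hypothetical scores for the same ten respondents on two occasions:

```python
import numpy as np

# Hypothetical scores for the same ten respondents at two points in time
time_1 = np.array([12, 18, 15, 22, 9, 17, 20, 14, 11, 19])
time_2 = np.array([13, 17, 16, 21, 10, 18, 19, 15, 12, 20])

# Test-retest reliability is estimated as the correlation between
# the two administrations; values near 1 indicate stable scores
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```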
Test-retest reliability can also be estimated by having a single rater code the same videos on two different occasions. The inter-rater approach, discussed below, can be used when one is interested in using a team of raters and needs to establish that they yield consistent results; a suitably high inter-rater reliability justifies allowing them to work independently on coding different videos. The test-retest approach fits when there is only a single rater and no need to train others. In some studies it is reasonable to do both to help establish the reliability of the raters or observers.
Parallel-forms reliability first creates two parallel forms by formulating many questions that address the same construct and then dividing the questions randomly into two sets. The two sets are administered to the same respondents, and the estimate of reliability is the correlation between the two parallel forms. In split-half reliability, all items that measure the same construct are randomly divided into two sets; the instrument is administered to a sample of respondents and the total score for each half is calculated. The split-half reliability estimate is the correlation between the two total scores.
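As an illustrative sketch, the function below computes a split-half estimate and applies the Spearman-Brown correction, a standard step-up not mentioned above, to project the half-test correlation to the full-length instrument; the random split and the data are assumptions.

```python
import numpy as np

def split_half_reliability(items: np.ndarray, seed: int = 0) -> float:
    """Split-half reliability with the Spearman-Brown correction.

    items: 2-D array, rows = respondents, columns = items that all
    measure the same construct.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(items.shape[1])      # random split of items
    half_a = items[:, order[::2]].sum(axis=1)    # total score, half A
    half_b = items[:, order[1::2]].sum(axis=1)   # total score, half B
    r = np.corrcoef(half_a, half_b)[0, 1]        # half-test correlation
    return 2 * r / (1 + r)                       # Spearman-Brown step-up

# Hypothetical 6-item instrument answered by five respondents
items = np.array([
    [4, 4, 5, 4, 5, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 5],
    [3, 2, 3, 3, 2, 3],
    [4, 5, 4, 5, 4, 4],
])
print(f"split-half reliability = {split_half_reliability(items):.2f}")
```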
The consistency with which two or more people administer the tool is called inter-rater reliability. Establishing inter-rater reliability outside the context of the study's measurement determines whether two observers are consistent in their observations. Inter-rater reliability is one of the best ways to estimate reliability when the measure is an observation, although it requires multiple raters or observers. As an alternative, one could look at the correlation of ratings from the same single observer repeated on two different occasions. For instance, in a study of mother-child interactions, the researcher could collect videotapes of the interactions and have a rater code the videos for how often the mother smiled at the child. To establish inter-rater reliability, a sample of videos would be coded independently by two raters.
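One common agreement statistic for two raters is Cohen's kappa, which corrects raw agreement for chance; the sketch below assumes hypothetical binary codes (smile present or absent) from two raters on eight video clips.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b) -> float:
    """Cohen's kappa: agreement between two raters beyond chance."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)                 # raw agreement rate
    # Chance agreement from each rater's marginal category frequencies
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes (1 = smile present, 0 = absent) from two raters
rater_1 = [1, 0, 1, 1, 0, 1, 0, 1]
rater_2 = [1, 0, 1, 0, 0, 1, 0, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```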
Validity is another property used to judge the usefulness of a study. Validity refers to the extent to which an instrument measures what it is designed to measure. Validity can vary from one sample or setting to another and with how the instrument is used; an instrument that is valid in one situation may not be valid in another. A study is valid if its measures actually measure what they claim to and if there are no logical errors in drawing conclusions from the data. There are various types of validity (Social Research Methods, 2010).
Campbell and Stanley (1963) use the term internal validity to refer to the extent to which it is possible to infer that the independent variable truly influences the dependent variable and that the relationship is not spurious; in other words, that there is a causal relationship between the variables under study. The term external validity refers to the extent to which research findings can be generalized to other settings or samples.
Concurrent validity is assessed by measuring the ability of an instrument's operationalization to distinguish between groups that it should theoretically be able to distinguish between. In an assessment of manic depression, for example, the measure should be able to distinguish between people diagnosed with manic depression and those diagnosed with paranoid schizophrenia. To assess the concurrent validity of a new measure of empowerment, both migrant farm workers and farm owners might be measured, the theory being that the measure should show the farm owners to be higher in empowerment.
Predictive validity is another form of validity, in which the instrument's ability to predict something it should theoretically be able to predict is measured. For instance, it might be theorized that a measure of math ability should predict how well a person will do in an engineering-based profession. The measure could be given to experienced engineers to see whether there is a high correlation between their scores and their salaries as engineers. A high correlation would provide evidence for predictive validity: it would show that the measure can correctly predict something we theoretically think it should be able to predict.
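A minimal sketch of this check, assuming hypothetical math-ability scores and a later performance criterion for ten engineers:

```python
import numpy as np

# Hypothetical data: math-ability scores and a later job-performance
# criterion for the same ten engineers (illustrative numbers only)
math_scores = np.array([52, 61, 70, 45, 80, 66, 58, 73, 49, 77])
performance = np.array([3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 3.2, 4.2, 2.9, 4.3])

# Predictive validity is estimated as the correlation between the
# measure and the criterion it should predict
r = np.corrcoef(math_scores, performance)[0, 1]
print(f"predictive validity coefficient r = {r:.2f}")
```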
Discriminant validity examines the degree to which the measured entity differs from other constructs that it theoretically should not be similar to. For instance, to show the discriminant validity of a Head Start program, we might gather evidence that the program is not similar to other early childhood programs that do not label themselves as Head Start programs. Alternatively, to show the discriminant validity of a test of arithmetic skills, we might correlate the scores on our test with scores on tests of verbal ability, where low correlations would be evidence of discriminant validity.
Construct validity is the extent to which an assessment device may be said to measure a theoretical construct (Miles & Huberman, 1994). This is related to how well the experiment is operationalized: a good experiment turns constructs into actual measurable things, so construct validity serves as a check on the experimental design of a study. Content validity establishes how well the instrument covers the subject under study; the study samples should be large enough and drawn from appropriate targets. Convergent validity compares the score obtained with one instrument with that from another that measures the same concept; the scores should be related. Divergent validity compares instruments that measure opposite concepts (e.g., hope and hopelessness). The predictive validity of an instrument measures how accurately it allows the user to predict the future, while criterion-related validity compares test scores with the subject's performance on a particular criterion (Waliman, 2005).
Data Collection Methods
Data is important in any study. Several methods are used to collect data, such as experiments, observations and administering surveys. The methods used usually depend on the research method of the survey. The two most common methods of collecting data are conducting interviews and using questionnaires.
Interviews are a common method used to collect data in both quantitative and qualitative surveys. According to Bryman & Bell (2007), the interviews used in quantitative research are usually more structured than those used in qualitative surveys. An interview is used to obtain information from one person about particular situations, problems or topics, and it can be structured or semi-structured. Standardized questions are common in structured interviews. According to Yin (2003), interviews can be conducted face to face, over the telephone or over the internet. The structured interview is designed to elicit specific responses to specific questions; these responses are normally easier to quantify and interpret, since uniform questions tend to yield a narrower range of responses. The semi-structured interview uses open-ended questions to explore broad issues in a non-directive, non-threatening manner.
Just like face-to-face interviews, telephone interviews are one-on-one methods of gathering information about the phenomenon of study. They are often used in highly structured surveys to gather non-sensitive responses to specific, tightly focused questions. They can also be used to validate information gathered by written surveys and questionnaires, and for follow-up to focus groups and face-to-face interviews (Yin, 2003).
Questionnaires are data collection methods used to gather specific information from large populations about what respondents think, feel or do regarding specific job responsibilities, problems or situations. Surveys and questionnaires can be either paper-based or electronic. According to Miles & Huberman (1994), questionnaires provide a convenient way of collecting data from a known population. Questionnaires usually contain closed-ended questions, open-ended questions and Likert-scale questions. They are easy to analyze, and most statistical software can process them readily. Compared to face-to-face interviews, questionnaires are cost-effective because the cost the researcher would incur to travel and meet the respondents is saved (Waliman, 2005).
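To illustrate how readily such data can be processed, the short sketch below summarizes hypothetical Likert-scale responses with pandas; the item names and values are assumptions.

```python
import pandas as pd

# Hypothetical Likert-scale responses (1 = strongly disagree,
# 5 = strongly agree) to three closed-ended questionnaire items
responses = pd.DataFrame({
    "q1_useful":   [4, 5, 3, 4, 2, 5],
    "q2_easy":     [3, 4, 3, 5, 2, 4],
    "q3_reliable": [5, 5, 4, 4, 3, 5],
})

# Basic descriptive summary of each item
print(responses.describe().loc[["mean", "std"]])

# Frequency table for a single item
print(responses["q1_useful"].value_counts().sort_index())
```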
Data Collection Instruments
There are two ways to approach any study: qualitatively or quantitatively. However, a study can combine the two approaches by employing both qualitative and quantitative methods. The distinction between these two approaches arises in how data is collected, treated and analyzed, and the data collection methods used in a study depend on the research methods being used.
A quantitative study emphasizes transforming data into quantities and applying statistical models for the purpose of measuring and analyzing it. Researchers who use this method have a clear idea of what they are looking for, and tools such as questionnaires are used to collect data. The quantitative approach focuses on precision, understood in terms of measurement, and involves statistical data analysis simply because the approach is based on measuring quantity or amount (Harvey, 2006).
By contrast, a qualitative approach shifts the researcher's focus to gaining a better understanding of the research problem by giving detailed information about the main theme of the study topic. The major goal of a qualitative study is a complete and detailed description of the study phenomenon arrived at by applying reasoning. Qualitative research involves the use of interviews and observations to collect data, with no formal measurements involved. The qualitative approach enables the researcher to understand the phenomenon under study and to describe the whole situation as it is; it consists of descriptions, quotes, observations and excerpts from books and other documents (Quinn, 2002, p. 308).
Data collection is an important aspect of any type of research study. Inaccurate data collection can affect the results of a study and ultimately lead to invalid conclusions. Methods of data collection for impact evaluation vary along a continuum: at one end are quantitative methods and at the other are qualitative methods. This study, however, examines only the two data collection approaches employed by any study, qualitative and quantitative research. According to Abeyasekera (n.d.), quantitative methods of collecting data usually rely on random sampling and structured data collection instruments that fit diverse experiences into predetermined response categories. These methods produce results that are easy for the researcher to summarize, compare and generalize.
The main concern of such a study is to test the hypothesis derived from theory and to estimate the size of the research phenomenon. The participants of the study may be randomly picked and assigned to different conditions. Where this is not feasible, the researcher collects data on participant and situational characteristics so that their influence on the dependent variable, or outcome, can be controlled statistically. If the researcher intends to generalize findings from the respondents to a larger population, probability sampling is used. The main strategies for collecting data include experiments, observing and recording well-defined events, obtaining relevant data from management information systems, and administering surveys with closed-ended questions.
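A minimal sketch of random assignment under these assumptions (hypothetical participant IDs and an even split into two conditions):

```python
import numpy as np

def random_assignment(participant_ids, seed: int = 42) -> dict:
    """Randomly assign participants to treatment and control groups.

    Participants are shuffled with a seeded generator, then split
    evenly between the two conditions.
    """
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(list(participant_ids))
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half].tolist(),
            "control": shuffled[half:].tolist()}

# Hypothetical pool of ten participants
groups = random_assignment(range(1, 11))
print(groups)
```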
The data collection instruments that a study employs usually depend on the research methods, the data collection method and the type of data to be collected. Despite these differences, two instruments are common to many studies (Yin, 2003): interviews and questionnaires. Interviews, as explained above, are used to collect data from a person in particular situations, and can be structured or semi-structured. Interviews are recommended as data collection instruments because they help build rapport with the respondent, they are flexible, and interview guides are easy to develop. Interviews can be shown to be a reliable instrument for collecting data if the same result is obtained from using the method on repeated occasions; they are valid if they measure what they intend to measure (Miles & Huberman, 1994).
Questionnaires are also good data collection instruments that have found use in various surveys and studies. They are used to collect specific information from a large group of respondents drawn from a large population, and they are important because they are a cheap instrument for collecting large amounts of specific information. Evaluating the content validity of a questionnaire requires the researcher to establish whether the collected data match the purpose and objectives of the study; in other words, the researcher must evaluate whether the objectives of the study can be met by the collected data and information (Bryman & Bell, 2007).
Validity and reliability are important measures that determine the relevance of a study, and each comes in several types. For instance, construct validity is the extent to which an assessment device may be said to measure a theoretical construct, while test-retest reliability is estimated by administering the same test to the same sample on different occasions. Two common methods and instruments of collecting data are interviews and questionnaires. Though interviews are quite expensive, they can collect specific information from individual respondents; questionnaires are important because they enable the collection of a large amount of data from a large population.