Friday, April 3, 2009

Week 11: Historiography

Vitanza, "Notes Towards Historiographies of Rhetoric . . .": Here Victor Vitanza considers whether history writing can be practiced ideologically, as something separate from fidelity to historical fact, especially since facts are themselves basically interpretations.

He notes 3 schools of thought:

1. The Aristotelian school, which argues that poetry is separate from history, and that history is limited to the particulars of human action.
2. The 19th c. German philosophers school, which argues that history should be interpreted through poetry, as intuition and imagination help uncover the "truth" of history. Objectivity is impossible.
3. The Science of History school, which argues, whether through econometrics, narrative-structuralist logic, or Marxism, either that one must always historicize (the Marxist position) or that history is beyond ideology.

Vitanza pushes for a balancing of the counter-statement, for identifying ever shifting ground in relation to the "question of history," and for postponing all decisions in developing and identifying any scheme that allows a finite permutation of stases and tropes that could end the conversation (71).

The historical methods he notes are the

I. Traditional Historiography, which either takes TIME/narrative as a major category or does not emphasize time/man as a major category (either naive and unselfconscious, or highly self-conscious and positivistic). In the first view, authors attempt to document and catalog/orize methodically; in the second, they attempt to process a series of facts in the computer. Omniscient POV. Most histories of rhetoric fit into this first model, particularly the archival model.
II. Revisionary Historiography with either (A) full disclosure or (B) self-conscious critical practices. In (A), historians of rhetoric attempt to correct misinformed or misunderstood views, primarily on the sophists. In (B), Marx, Nietzsche, and Freud, among others, attempt to account for distortion, i.e., the "hermeneutics of suspicion" (99). They wish to destroy a "false reality."
III. Sub/Versive Historiography: Here the traditional views of history are subverted. It attempts to situate itself outside political positions so that there is no longer a "capitulation to the eventual recapitulation." It is concerned with pedagogical politics that are non-fascist. It seeks a non-authoritarian education (down with Critical Authority and clarity) and asks for a non-disciplinary nonalignment.

The objective is establishing a category that destroys categories and "unnames the naming" (84). He attempts to be counter-ideology (85).

Corbett, "What Classical Rhetoric has to Offer . . ." -

Well, it is definitely not Sub/Versive (which opposes the history of Knowledge), as it, for example, methodically outlines the manner in which Aristotle teaches his students to appeal directly to the emotions of the audience. The historiographic method that best fits his writing is the Traditional, and within that category, the first style, which is naive and unselfconscious and attempts to categorize methodically. An example of its naiveté is the following: "Aristotle was certainly an amateur psychologist, as most of us are; but because he was a keen observer of human behavior, he learned some valuable lessons about what makes the human animal tick" (68). Nothing new is added to the conversation by a recapitulation of the basic components of verbal communication.

Zappen, "Francis Bacon and the Historiography of Scientific Rhetoric":

Okay, this work is not nearly as naive as the last. Here the author reviews three 20th century interpretations of Bacon's science and rhetoric (1. positivistic science/plain style, 2. institutionalized science/high-figured style, and 3. democratic science) then draws upon the traditions of Puritan reformers as an example of plain style. It ends by examining the different ideologies within each view and offers an alternative to how scientific rhetoric should be viewed. What is interesting is that it describes three views, and then offers a fourth (Zappen's), but never specifically notes what is wrong with the other three views.

For example, when Zappen examines the historical view of Bacon through the lens of various academics, including Kuhn, Boas and Stephens, among others, and their view of Bacon as helping develop scientific communities, he notes that scholars have challenged this view of institutionalized science by showing Bacon as sympathetic to an alternative ideology of science. These democratic historians instead emphasize that Bacon believed in a more democratic and humanitarian science, and they question the attitudes usually attributed to Bacon. Yet with neither the first school (institutionalized science) nor the second school (democratic science) does Zappen specifically mention what he takes issue with; he only counters them with an additional reading. Such a format implies the Revisionary Historiography school of thought, and specifically one that uses self-conscious critical practices. Oddly, he points out the misunderstandings that each school of thought has about the other but never really takes a stand as to which is best, holding with his first tenet that each "reflects a different ideology" or alternative vision. He's the nice kid on the playground. I like him.

Howard, "Who 'Owns' Electric Texts?":

Well, now I understand why Victor wants to copyright the letter "V." I assume he wants the capitalized version. This article segues nicely with the fair use/copyright issues that we just discussed in 802. Here Dr. Howard explains the juggernaut that protects copyright holders' profits; bear in mind that the state is also supposed to promote the public good with said laws. He gives five modern scenarios for how electronic environments require writers to juggle intellectual property laws and copyright laws, though these writers are hardly experts in either.

So it all began with the printing press, and its cheap(er) production of books/texts. The article delves into historical changes in copyright laws and notes that authors cannot expect to profit from a "monopoly of truth." Too bad, it's a good racket. Dr. Howard does note, however, that even though writers may have to deal with changes to intellectual property & copyright laws, they should familiarize themselves with these laws to avoid court costs and damage to their reputations. This article seems Revisionary Historiography/full disclosure, though at first it seemed Traditional Historiography. However, it does attempt to account for distortion of views on copyright laws - primarily due to the great swings of ownership over time - and it attempts to correct misunderstood views of said laws.


I do not know whether we are required to analyze these readings with a certain methodology in mind (quantitative/qualitative), other than Victor's general overview, so I've read them with an analysis of method in mind.

Sunday, March 29, 2009

Week 9 & 10: Quantitative Descriptive Studies & T/Q Experiments

Week 9: Quantitative Descriptive Studies

Lauer & Asher:

Q1: The appropriate purposes for quantitative descriptive studies include "to generate variables, to operationally define them, and to develop an early understanding of their relationships" (L&A 82). It is an attempt to understand and explain the phenomena under study. These studies go beyond case studies/ethnographies "to isolate important variables developed by these studies, to define them further, and to quantify them at least roughly, if not with some accuracy, and to interrelate them" (L&A 82).
Q2: Subjects are selected after considering the number/type of variables the researcher wishes to study. A large # of subjects is necessary - at least 10X's as many subjects as variables.
Q3: Data is collected and analyzed by multiple methods, including any methods that give "researchers the data that they need to quantify and interrelate the variables they wish to study" (L&A 85). The choice of variables to observe and intercorrelate is important, and variables are broken into independent/dependent. To relate the variables, a variety of statistical analyses can be used. For nominal data: frequency counts, proportions and percentages, chi-square, and the phi coefficient. To relate interval with interval variables: correlational analysis. To relate interval variables to nominal variables: analysis of variance/F-test, t-test, or point-biserial correlation. To relate nominal to rank-ordered variables: Wilcoxon T or Mann-Whitney U. To relate rank-ordered variables to other rank-ordered variables in one group: Spearman's rho.
Q4: The kinds of generalizations possible include building theories about the composing process, the contexts of writing, and the pedagogy of writing, but such studies usually cannot establish cause-and-effect relationships among variables (L&A 102).
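L&A's pairing of variable types to statistical techniques (Q3 above) can be sketched as a simple lookup table. This is my own toy helper for keeping the pairings straight - the function name and level labels are mine, not L&A's:

```python
# Toy lookup of the Lauer & Asher pairings: variable-type combinations
# mapped to the statistical techniques their chapter lists.
def choose_test(var_a, var_b):
    table = {
        frozenset(["interval"]): "correlational analysis (e.g., Pearson r)",
        frozenset(["interval", "nominal"]): "ANOVA/F-test, t-test, or point-biserial correlation",
        frozenset(["nominal"]): "chi-square or phi coefficient",
        frozenset(["nominal", "rank"]): "Wilcoxon T or Mann-Whitney U",
        frozenset(["rank"]): "Spearman's rho",
    }
    # frozenset makes the lookup order-independent ("interval", "nominal")
    # and ("nominal", "interval") hit the same entry
    return table[frozenset([var_a, var_b])]

print(choose_test("interval", "nominal"))  # ANOVA/F-test, t-test, or point-biserial correlation
```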


Q1: The purpose of this quantitative descriptive study is to examine the initial construct of nanoscale science and technology in written popular media and thereby explore the emergence of one new scientific concept and how it has endured in public discourse. Farber discusses his hope to study how science and technology are represented in public discourse and to build public recognition and awareness of science and technology.
Q2: Subject selection came from studying writings in the popular press that specifically examined how science (nanoscience) emerged. Using these popular accounts of science, this study sought to understand how new representations in science and technology emerged and endured in popular media, and it predicted that new representations would emerge similar to those described by theories of change in scientific communities.
Q3: Data is collected and analyzed by searching key words, which at first generated 885 articles, and then eliminating and categorizing them – so he studied the propositional content, the grammatical structure, and the discourse. Overall he looked for similar themes, rhemes (information presented after the verb), and topics. He found 39 representations of nanotechnology and nanoscience across 262 occurrences, each representation occurring an average of 6.89 times. Since this is a composition study, there should be correlation between two or more raters or coders and a determination of internal-consistency reliability, but there is not.
Q4: Farber does make generalizations, as when he says that although predictions about nanotechnology are broad-ranging, actual initial applications of nanotechnology have been most prominent in electronic computer-memory technology and in polymer coatings. But the methods of acquiring the findings seem flawed, as when he broadly states that the “process of presenting technical information for general audiences can be enabled by combining social and technical approaches” (161). The solid research that would allow such generalizations seems missing.
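Since I keep dinging these studies for missing intercoder checks, here is what a minimal one could look like: percent agreement corrected for chance agreement (Cohen's kappa), in plain Python. The two coders, the labels, and the excerpts are all invented for illustration - this is not Farber's data:

```python
# Cohen's kappa: agreement between two coders, corrected for the
# agreement expected by chance. Stdlib-only sketch.
def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each coder's marginal label frequencies
    categories = set(rater1) | set(rater2)
    expected = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two hypothetical coders labeling the same ten article excerpts:
r1 = ["nano", "nano", "other", "nano", "other", "nano", "nano", "other", "nano", "other"]
r2 = ["nano", "nano", "other", "other", "other", "nano", "nano", "other", "nano", "nano"]
print(round(cohens_kappa(r1, r2), 2))  # 0.58
```

A kappa near 1 would have let Farber claim the 39 representations were coded reliably; reporting nothing leaves the reader guessing.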


Q1: The purpose of this quantitative descriptive study is to determine the barriers that most frequently affected listening effectiveness in business college students, as given in the Watson and Smeltzer (1984) study (expanding the barriers from 14 to 25). Golen mentions there are a large # of potential barriers. Noting that no studies have determined the factors underlying barriers to effective listening, he briefly examines several studies on listening behavior.
Q2: Using the backdrop of a major southwest state university, he examined 3 large business communication lecture sections of approx. 400 students, with 33 breakout sections of 35 students each. Golen used a random sample of 10 sections, and only those students who attended the large lecture were included in the study. 279 questionnaires were collected and analyzed – each containing questions on 25 barriers to effective listening; the SAS was used to analyze the data. The barriers themselves were selected by identifying the common barriers listed in a review of listening literature.
Q3: Data collection: 279 students were in the study. The questionnaire contained a 5-point Likert-type scale, and an internal consistency measure – coefficient alpha – “indicated that the questionnaire was reliable (alpha equal to .79)” (28). The questionnaires were broken into 3 areas: the relative frequency of listening barriers, the results of a factor analysis, and the relationship between demographic variables and factors. Then six independent listening-barrier factors were generated, and it was determined whether there were any significant differences among the factors based on demographic information. The study noted that “the results revealed no significant interaction effects”; however, “there was a significant main effect for the students’ sex for two out of the six factors” (33). No further explanation was given.
Q4: Using the research from other studies, Golen generalized about their findings. I am not quite sure why, all of a sudden, these studies became so crucial to his own study. The end of his study was basically filled with the findings of others.
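For reference, the coefficient alpha Golen reports (.79) can be computed from the item scores alone. A stdlib sketch with invented 5-point Likert data - the real questionnaire had 25 items and 279 respondents, not the three items and five respondents I use here:

```python
# Cronbach's (coefficient) alpha: internal-consistency reliability of a
# multi-item scale. alpha = k/(k-1) * (1 - sum(item variances)/var(totals)).
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item (same respondents,
    # same order, in each list)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# Three hypothetical barrier items answered by five respondents:
items = [
    [5, 4, 3, 2, 1],
    [4, 4, 3, 2, 2],
    [5, 3, 3, 1, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.95
```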

Week 10: Experiments
Lauer & Asher, "True Experiments":

The great advantage of true experiments is that they can suggest cause-and-effect relationships without threats to internal validity (other than mortality). They never, however, suggest certainty.
A. A true experiment seeks to demonstrate that "the researcher actively intervenes systematically to change or withhold or apply a treatment (or several) to determine what its effect is on criterion variables" (152). Uses randomization in which subjects are allocated to treatment and control groups. One treatment method applied to experimental groups and another (traditional method) to control group. All are evaluated using at least one common measure.
B. The control group affects subject selection in that they help establish randomization, or a situation in which groups can be “not unequal”: “If subjects are randomly allocated to two or more groups, the researcher can generalize that, over a large number of random allocations, all the groups of subjects will be expected to be identical on all variables” (155). The control group acts as a standard, typical condition, a foil in which the special treatment is absent.
C. Independent variables (those manipulated or introduced by researchers) and dependent variables (the criterion measures) impact data collection and analysis in that, in true experiments, ALL variables, known and unknown, upon which humans can differ will be expected to be EQUAL in all treatment and control groups, both at the time of randomization and for the future, unless there is unequal intervention treatment.

True experiments must be done in natural environments. Thus researchers must have enough thick description of their treatment conditions to increase theoretical knowledge and allow for replication. The cause-and-effect relationship among new variables is more evident in this type of experiment, but a limitation on true experiment is its focus on limited variables and structured situations in order to determine possible cause/effect relationships.
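The randomization step L&A describe amounts to shuffling the subject pool and dealing it into groups. A quick stdlib sketch - the subject IDs, group count, and seed are invented for illustration:

```python
# Random allocation of subjects to treatment/control groups, as in a
# true experiment: shuffle the pool, then deal it out round-robin.
import random

def randomize(subjects, n_groups=2, seed=42):
    pool = list(subjects)
    random.Random(seed).shuffle(pool)          # seeded for reproducibility
    return [pool[i::n_groups] for i in range(n_groups)]

treatment, control = randomize(range(20))
print(len(treatment), len(control))  # 10 10
```

Over many such allocations the groups are expected to be equal on all variables, known and unknown - which is exactly the property the quasi-experiments below give up.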

Lauer & Asher, "Quasi Experiments":

A. A quasi-experiment seeks to demonstrate cause-and-effect inferences using subjects, treatments and criterion measurements. Can be classified into strong or weak categories, based on the equality or inequality of the groups as established by the pretest.
B. The control group affects subject selection in that there is no randomization of subjects to groups, so there is the possibility of internal validity threats (because already established groups stay in place).
C. Data collection and analysis of independent/dependent variables depends upon whether the quasi-experiment is classified as strong or weak. In strong quasi-experiments, the researcher can consider equal only those explicit variables measured by the pretests. (Not of great concern because of the relatively small # of variables that correlate with the criterion variables). Weak quasi-experiments rely upon initially unequal groups, so they differ on one or more variables. Unequal groups acted upon by equal treatments will still be equally unequal on the criterion variables.

Carroll, Smith-Kerker, Ford & Mazur-Rimetz:

This study is broken into two sections: Minimal Manual and Learn by Doing/by Book.
A. (Experiment 1). Minimal Manual: This experiment seeks to address self-instruction manuals for people learning computing devices. Specifically, the Minimal Manual is mentioned, which apparently affords a more efficient learning progress (it addresses the preference to start immediately on real tasks, the preference to skip reading, and errors as a consequence of active orientation in learning). The Minimal Manual seeks to capitalize on learning styles and strategies. (As an aside, this experiment bores me and it is possible that the authors may lack a personality – just one will do.) The manual is designed to be bare-bones (45 pages), without undue verbiage and is tested using Guided Exploration. They analyzed the training situation, including the errors made, and then re-worked the manual to be more open-ended, thus better resembling real work and maintaining learner motivation.
B. I could not find the subject selection, much less any mention of a randomization of subjects. Hence, this study is not a true experiment. No mention is made of using intact groups, so I doubt it is a quasi-experiment either.
C. As the study is poorly designed, it makes no difference whether they use independent or dependent variables and how these impact data collection and analysis. Criterion variables are unclear. There also should be correlation between two or more raters or coders and a determination of internal-consistency reliability, but there is no mention of such.

A. (Experiment 2). Learn while Doing (LWD)/Learn by Book (LBB): The focus is a contrast between a commercially developed, standard self-instruction manual (SS) and the Minimal Manual (MM). As there is no randomization of subjects, this is not a true experiment. As there is no use of intact groups, this study is not a quasi-experiment – though the authors may believe that it is because it does seek to establish cause-and-effect relationships.
B. There is no control group or randomization of subjects. Subjects (see p. 90) were sent from a temporary agency (thus, the study falls into neither the true nor the quasi category) and consisted of 32 subjects.
C. It mentions two independent variables – the manual (either SS or MM) and the instructions (either LWD or LBB). The LWD learners were given 5 hrs. to perform a series of tasks using the system, and the LBB learners were given 3 hrs. to use the manual in order to learn the system; they were then separately given 2 hrs. to perform a series of tasks using the system. I have to wonder if allowing a variance of time between the LWD and LBB groups is consistent with the quasi-experiment. Again, no inter-rater reliability, no clear criterion variables, no established reliability.

Notarantonio & Cohen:

A. This study on the effect of the Open and Dominant communication styles on sales effectiveness seeks to demonstrate the reliability and validity of the communicator style construct. According to the introduction, respondents to a 42-item questionnaire rated the high Dominant/low Open and low Dominant/high Open salesperson more favorably than the high Dominant/high Open or low Dominant/low Open salesperson. The study also considers various studies (Williams and Spiro; Norton, Bednar; Sheth, etc.) in terms of perceived communicator styles and their effect. This study was a quasi-experiment, though again the subjects were not really an already-established group that must stay in place. It did use the same college and the same major, but a quasi-experiment implies, for example, a teacher who must study all the students in a particular class because he or she cannot send away the students they do not wish to study. In this study, these majors could be assembled into various groups and were not necessarily required to be tested together.
B. Subject selection consisted of 80 undergraduate business administration students at Bryant College. Demographics of sex, age range and college year were given. Subjects were randomly assigned to conditions, and to time and day (I wonder if they felt that was the same as the randomization of subjects as required in true experiments).
C. Data collection and analysis consisted of asking the aforementioned subjects to watch four videotapes (which were pretested by showing them to small groups of subjects – 10 per group), and then giving respondents a communicator style measure re: openness and dominance, in which they replied using a 7-point scale of agreement regarding the traits. The pretest consisted of a self-report of Norton’s communicator style measure. The authors’ hypothesis (that the more open a salesperson was, the more effective the individual would be in selling the product and the more positively she or he would be perceived by others), which they set out to test empirically, proved accurate. After viewing the tape, subjects completed the 42-item questionnaire on communicator style measures. Six composite scores were identified: general attitude toward helping, product perceptions, customer/salesperson interactions, respondent purchasing behavior, product purchase probability, and salesperson perceptions. The analysis of this data consisted of two-way ANOVAs run with Openness and Dominance as independent variables for the measures of openness and dominance and the six composite measures. All measures ranged from 1 to 7, with 7 = strongly agree.
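What the two-way design decomposes can be seen from the four cell means of a balanced 2x2 (Openness x Dominance): two main effects plus an interaction. The cell means below are invented to mimic the crossover pattern the introduction reports (mixed-style salespeople rated highest), not the article's actual numbers:

```python
# Main effects and interaction from the cell means of a balanced
# 2x2 design, keyed by (openness, dominance) level.
def effects_2x2(m):
    grand = sum(m.values()) / 4
    # main effect of Openness: high-Open average minus low-Open average
    open_eff = (m["high", "low"] + m["high", "high"]) / 2 \
             - (m["low", "low"] + m["low", "high"]) / 2
    # main effect of Dominance: high-Dominant average minus low-Dominant
    dom_eff = (m["low", "high"] + m["high", "high"]) / 2 \
            - (m["low", "low"] + m["high", "low"]) / 2
    # interaction: does the Dominance effect differ across Openness levels?
    interaction = (m["high", "high"] - m["high", "low"]) \
                - (m["low", "high"] - m["low", "low"])
    return grand, open_eff, dom_eff, interaction

cells = {("low", "low"): 3.0, ("low", "high"): 5.0,
         ("high", "low"): 5.0, ("high", "high"): 3.0}
print(effects_2x2(cells))  # (4.0, 0.0, 0.0, -4.0)
```

With this crossover pattern both main effects vanish and only the interaction carries the story - which is why a two-way ANOVA, not two separate one-way tests, is the right tool for the reported finding.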


A. Here students in grades 5, 7, 9, 11, and college were taught a board game, after which they watched a demonstration film and then wrote directions for the game. The study provided a rich description for the growth of students’ informative writing skills at the aforementioned grade levels. This study included a review of three different procedures that have been employed in eliciting children’s explanations of games, with this study following the third method – asking students to explain a new game that they have just learned to play themselves.
B. Subject selection: students came from 4 schools in a small city in central Iowa and freshmen at Iowa State University (24 in grade 5, 26 in grade 7, 19 in grade 9, 27 in grade 11, 27 college freshmen). No mention is made of random selection, but the ratio in Lauer & Asher (10X’s as many subjects as variables) for a descriptive quantitative study seems to be met. A quasi-experiment has already-established groups that stay in place – and perhaps they consider the grouping at each grade level to have been already established, but no evidence is given of such. It is not a randomized study, so it is not a true experiment, nor is it a true quasi-experiment. Students were “selected” based on their ability to follow the rules, to meet “normal” testing requirements, and to speak English. (Also, students had to watch the film a second time 3 days later and qualified for the study by earning 90% on the quiz, with 16 being disqualified. College students were particularly mistreated because they were never allowed to really watch the game, were tested anyway, did poorly, and no one seemed bothered about it.)
C. Data collection and analysis: the effect of grade level on game information scores was analyzed with a one-way ANOVA. The statistical significance of differences between mean information scores for adjacent grade levels was assessed with Scheffe post hoc tests. In addition to total scores for overall informativeness, scores for each of the ten individual game elements were given in order to assess the extent to which the scores reflected a developmental trend. A nonparametric trend analysis was employed, and the resulting z statistics were used to rank the ten game elements from those exhibiting the strongest developmental trends to those with the weakest. A chi-square test was performed to assess the statistical significance of the association between grade level and students’ listing of the game pieces, between grade level and students’ mentioning the object of the game, and between grade level and students’ use of the three explanatory approaches. The strength of the relationship between the variables was assessed with Cramer’s statistic. Geez, and all of this for a game!

The results demonstrated a strong effect of grade level on the informativeness of students’ explanations. The researchers also used a series of nonparametric trend analyses to determine the degree to which subjects’ explanations of each of the ten game elements improved across grade levels.
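For reference, the chi-square statistic and Cramer's V used above can be computed by hand. A stdlib sketch on an invented 2x2 table (say, grade level, collapsed to younger/older, against whether students mention the object of the game - the counts are made up, not the study's):

```python
# Pearson chi-square statistic for a contingency table, plus Cramer's V
# (chi2 scaled to 0..1; for a 2x2 table V equals the phi coefficient).
def chi_square(table):
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = sum((obs - rt * ct / n) ** 2 / (rt * ct / n)
               for row, rt in zip(table, row_totals)
               for obs, ct in zip(row, col_totals))
    return chi2, n

def cramers_v(table):
    chi2, n = chi_square(table)
    k = min(len(table), len(table[0])) - 1   # min(rows, cols) - 1
    return (chi2 / (n * k)) ** 0.5

# rows: younger/older; cols: mentions object of game yes/no (invented):
print(round(cramers_v([[18, 6], [9, 18]]), 2))  # 0.42
```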

Friday, February 27, 2009

Week 8: Ethnographic Studies

Lauer & Asher:

Q1: In contrast with case studies, ethnographies observe many areas of their subjects, looking at the entire environment over long periods of time in order to identify, define and interrelate variables in context.
Q2: Triangulation impacts data collection and analysis because it involves a multiplicity of observations, including identifying & defining the whole environment researchers plan to study, planning ways of varying their observations, mapping the setting, selecting observers & developing a relationship with them, and establishing a long period of investigation. And then, because this method involves a variety of techniques, it contributes to challenges in coding data from a variety of sources for analysis.
Q3: Validity comes from a "continual reciprocity between developing hypotheses about the nature and patterns of the environment under study and in regrounding of these hypotheses in repeated observation, further interviews, and the search for disconfirming evidence" (40). In order to ensure that data analysis is reliable, there must be "reliability among coders and the testing of schemas in new environments" (43). Beaufort states that it is not possible to measure reliability in ethnography (194). Researchers must also consider the common problems of interpreting qualitative data, as per Sadler (data overload, first impressions, confidence in judgment, etc.). Researchers must also contend with whether the data will be replicable and stable over time.

Q1: In keeping with observing the entire environment over a long period of time, D&F focused on a year-long process of collaboration among a group of computer software company executives (Microware, Inc. @ NSU) during the writing process of a vital company document.
Q2: The multiplicity of observations of triangulation in this study included participant observation, open-ended interviews, and discourse-based interviews (he also used field notes and tape-recorded meetings). Data collection came from 3 to 5 days a week of visits to the company for eight months. The data was analyzed chronologically using analytical categories, which linked to major themes.
Q3: D&F ensured that his research, which yielded results particular to a single case and "as such can be generalized only very tentatively," was valid by comparing his results with "the results of research cited at the outset of this report" (183). He also mentioned that there had been little research into the relationships between writing and the evolution of an organization. Although I did not find any evidence of how he coded all of these interviews, it appears that he concluded his research was valid because it met the same standards as the research he quoted.

Q1: Here the writer focuses on a segment of the ethnography of writing in a workplace setting (a medium non-profit agency, the Job Resources Center, with 50 employees, 300 students and an annual budget of $3 million) by showing how the goals of a discourse community connect to writers' roles and the socialization process for writers new to a community (in this case, Pam and Ursula). It examines this "window of culture" over a long period of time (for one year) (L&A 40). The data she reports is drawn from a larger ethnography of writing in a nonprofit center and reveals the socialization process of two writers new to the organization (187).
Q2: The triangulation impacts data collection and analysis because of the sheer volume of data that is collected and the coding from a variety of sources. Here she looked at field notes, interview transcripts, writing samples, and social roles within JRC. She collected data weekly, observed client programs and the full range of activities at the agency, interviewed the executive directors, interviewed each woman weekly, audiotaped conversations, and photocopied all of the writing.
Q3: She uses the data to offer "a view of an informal process of acquisition of writing skills among college graduates who were learning new genres and the norms of discourse," and from this data she speculates on potential implications for managers with workplace communications responsibilities and for educators charged with writing curriculum and staff development programs (191). Using the areas of triangulation mentioned in Q2, she assured validity by "comparing different data sources to one another, compared different responses of the informants over time, and solicited the informants' responses to drafts of the research reports" (194). She states that an N of 4 is not sufficient for generalizing from the data, nor is it possible for ethnography to measure reliability, but the findings were still rigorous and systematic.

Q1: This ethnography seeks to understand the standardization processes involved in the writing done (the Building Project) by a class of 7th grade students in Crayton, a city of 1 million, during an 8-week project. It became a political project, though not by design, as the survey results "must have caused concern." Apparently, after the school board voted to close the school, the survey of the Sanders community revealed that parents, students and staff of SMS did not want a new school.
Q2: Acting as a participant observer in a classroom, the author collected audiotaped classroom and group discussions, student writing, field notes, and texts, including a history booklet, a neighborhood planning book, two videos, and census data. She examined the exchange of ideas to see how standardization as a process of production, consumption, and distribution played into students' writings.
Q3: Validity comes from a "continual reciprocity between developing hypotheses about the nature and patterns of the environment under study and in regrounding of these hypotheses in repeated observation, further interviews, and the search for disconfirming evidence" (40). Sheehy hypothesized that standardized testing is dominated by testing rhetoric, and is a "play of power that can be charted both in the act of writing and in written forms' effects" (334). At the end of her study, she concluded that the standardizing forces "involved direct teaching, genre memory, and several strategies employed to bring cohesion and unity to diverse ideas" (333). The findings of this research are "influenced by a major limitation of the research design, a limitation which Grossberg (1992) cautioned can happen with analyses of power" (366). As research becomes more abstract, it moves away from observable data to how power operates at the concrete level. She found she could not draw many connections to more abstract data, as her study was focused on the specific contexts of the Building Project. She could not make connections between the visible context and the history of the essay in schools, and her data, "as framed, had only one teaching connection to it: Audrie's relationship to the school board" (366). She was, however, able to demonstrate the strategies the 7th graders employed in recontextualizing via "emotional appeal, veiling contradiction, and intertextual and interdiscursive alliances" (366). She concludes that her research is valid in the sense that it did shed light on the tensions of "centrifugal and centripetal forces in students' writings" (368).

Q1: Using autoethnography of her own experience on a plane headed to Dulles Airport on 9/11, Ellis shows that everyday stories merit telling, even from those not directly involved in the attacks. She uses this story (her "chaos narrative") to find personal and collective meaning in the tragedy (377). Her goal is to have others put their stories into words and "compare your experience to mine, and find companionship for your sorrow" (378). A case study, in contrast, would closely examine subjects in a natural setting and would use content analysis to develop and quantify variables in communication data (L&A 31).
Q2: The only triangulation in this tale appears to come from the multiple stories that intersect around the experience of 9/11. Her aim is understanding, which "offers the possibility of turning something chaotic into something potentially meaningful." She wishes to heal personally and collectively through personal narrative: "As a qualitative methodologist and civic-minded person, I feel a calling to bring personal stories to cultural situations. I offer this approach as a way to understand and heal ourselves and the nation" (401).
Q3: This researcher does not attempt, other than seeking legitimate 9/11 stories, to determine whether the stories are reliable and valid.

Q1: Using the analytic ethnographic paradigm, Anderson seeks to "clarify an approach to autoethnography that is consistent with traditional symbolic interactionist epistemological assumptions and goals, rather than reject them" (378). He identifies five key features of analytic autoethnography, draws upon several realist ethnographic texts that exemplify these impulses (for example, Robert Murphy's The Body Silent, an "illness ethnography" that, unlike a case study, features a sustained focus on this tradition), and lastly draws several examples from his own research on recreational skydivers.
Q2: In keeping with the triangulation aspect of ethnographies, Anderson mentions that the "autoethnographer...must also record events and conversations, at times making fieldwork 'near[ly] schizophrenic in its frenzied multiple focus'" (380). While other skydivers prepare for a jump by mentally rehearsing it, conducting checks on gear, and joking with one another, he uses the time to consciously observe and mentally etch conversations. He accommodates the problem of "multiple foci" by alternating simpler jumps, where he can be more attentive, with fully jump-focused rides (380). He also notes that such multitasking creates "potential pitfalls, exacerbating certain problems endemic to field research" (389).
Q3: The limitations of analytic autoethnography include that most researchers do not have research interests deeply intertwined with their personal lives, as "autoethnography requires" (390). This research is undergirded by a "quest for self-understanding" and an opportunity to explore aspects of the researcher's social life in a "deeper and more sustained manner" (390). The methodological advantages "relate to the ways in which being a CMR facilitates the availability of data" (389). Although Anderson gives few specifics on how validity and reliability were instituted in the studies he summarizes, he does state that "the ethnographic imperative calls for dialogue with 'data' or 'others.'" He also mentions David Karp, who in Speaking of Sadness was "always disciplined by the data collected in in-depth interviews" (386). Specifics, however, were not given.

Saturday, February 21, 2009

Week 7: Sampling & Surveys

Appropriate Purposes for Surveys
Surveys can appropriately be used to make large descriptive tasks possible with a minimum of cost and effort. Sample surveys describe a large group (the population) in terms of a sample (a smaller part of that group), concentrating on a few variables that can be representative of the larger group. Surveys allow researchers to obtain readily observed or recalled behavior that can be related to several major demographic characteristics. Sampling procedures help reduce large populations to a manageable size, and surveys provide a valuable means of obtaining representative descriptive information (not cause and effect).

Surveys also make the research effort more reasonable. Researchers should look into the question of feasibility: the number of units from which they can collect good data and also adequately analyze it.

Subject Selection
The simplest and best strategy for subject selection is random selection, in which the members of the population to be studied are listed (often alphabetically) and then selected randomly, either by hand (long and involved) or, ideally, by calculator or computer (using a random number function). Data is collected using questionnaires (scored and open-ended questions). Other types of sampling include systematic random sampling (useful when the population to be studied is already organized in a sequence); quota sampling (helpful when a researcher knows the percentage of specific features of the population); stratified sampling (when some parts of the population are of more immediate interest); and cluster sampling (when the researcher wishes to study individual units within a large population; this type should be avoided unless the researcher is, or is working with, a strong statistician).
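To make the first two strategies concrete, here is a minimal sketch in Python using only the standard library. The population roster, sample size, and seed are all invented for the illustration:

```python
import random

# Hypothetical population: an alphabetized roster of 500 student names.
population = [f"student_{i:03d}" for i in range(500)]

# Simple random sampling: every member has an equal chance of selection.
random.seed(42)  # fixed seed so the illustration is reproducible
simple_sample = random.sample(population, k=50)

# Systematic random sampling: useful when the population is already
# organized in a sequence -- pick a random start within the first
# interval, then take every k-th member after it.
interval = len(population) // 50           # sampling interval of 10
start = random.randrange(interval)         # random start in [0, interval)
systematic_sample = population[start::interval][:50]
```

Either way the researcher ends up with 50 subjects; the systematic version just rides the ordering the population already has, which is why it only works when that ordering is unrelated to the variables under study.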

Collecting and Analyzing Data
Questionnaires (scored and open-ended questions) and surveys can be used. Prime consideration should be given to eliciting a high response rate. Response rates are part of analyzing data and depend on sample size and on attending to nonresponse (remember to chase the data: make phone calls, and try two or three times to obtain it). Three types of data can be collected in a survey: nominal (simple counts and percentages); interval (usually from test scores with large numbers of items); and rank (subjects placed in hierarchical order, with the intervals between ranks assumed equal) (L&A 70-74).

Possible Generalizations
This is descriptive research, so one must be careful drawing conclusions from the results, as it is often difficult to make cause-and-effect statements. Generalizations should be made only to the population from which the sample was drawn (L&A 78).

(Material taken from Lauer & Asher)

Friday, February 13, 2009

Week 6: Case Studies

At the very heart of a descriptive study is the observation of an environment, or the analysis of data, without affecting the nature of the situation under scrutiny. Surveys, ethnographies, case studies, prediction studies, and the like are all qualitative descriptive studies. The strength of this kind of study comes from observing a specific situation with clearly defined variables. Its weakness comes from analysis that cannot be replicated (because, for example, it lacks explicit instructions), from coders who do not agree beforehand on the importance of category development, or from researchers who do not question the theoretical assumptions underlying their research, all of which weakens the significance of the findings.

The purpose of case studies is to identify important aspects or variables in natural conditions through the study of individuals, small groups, or whole environments. This is not cause and effect; rather, it allows researchers to form theories and hypotheses from observing a small number of subjects in their natural environment. Subjects can be selected, for example, from the classroom of a researcher or from researchers' interviews. In Lauer & Asher, Emig chose 8 subjects from various types of schools, and Graves selected 8 students from a larger study. Even a broad project can narrow to this scale: Deborah Brandt interviewed over eighty people born between 1895 and 1985 and then chose two subjects born sixty-eight years apart to study, both of whom had striking parallels in their lives.

The data is collected in a variety of ways, including letters, speeches, TV shows, essays, etc. Emig, for example, collected conversations, tape recordings of students composing aloud, accounts of processes, discrete observations of composing, writing samples, and school records (L&A 26). Graves used folders (containing tests of intelligence and reading, assigned and unassigned writing, study records, and observations of the student both in and out of school) and observations of 53 writing episodes, in addition to interviews. Overall, the researcher in this type of study uses the data to seek out patterns and identify operationally definable variables, and then asks how they relate to one another (L&A 27).

A major aspect of case study analysis is content analysis (which includes coding, or the setting up and labeling of categories and variables). Content analysis, according to Lauer & Asher, is a "major measurement procedure" that allows researchers to substantiate that materials and observations are ultimately quantifiable (27). Here communication data is analyzed, patterns are noticed, and variables are identified. The kinds of generalizations possible include extensive descriptive accounts. These studies should optimally "relate their findings to the research of others," thus increasing their ability to generalize (L&A 33), and, as with Graves, furnish questions for subsequent research, which is an appropriate and valuable feature of qualitative study. Deborah Brandt notes that her aim is to "extract from their experience lessons that can be applied to literacy learning in other economic configurations - not in order to predict particular outcomes, but to understand better the struggles that economic transformations bring to the pursuit of literacy" (377). In essence, she notes that her study is not a cause-and-effect study but an observation that allows a hypothesis and creates further understanding.
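The "ultimately quantifiable" claim is easy to picture with a toy coding pass. The category labels and segments below are invented; the point is just that once coders agree on a scheme, the observations reduce to counts and proportions:

```python
from collections import Counter

# Hypothetical coding pass: each segment of a composing-aloud transcript
# has been labeled with one category from an agreed-upon scheme.
coded_segments = [
    "planning", "revising", "planning", "transcribing",
    "revising", "planning", "pausing", "transcribing",
]

# Content analysis makes the observations quantifiable: tally each
# category and report its share of all coded segments.
counts = Counter(coded_segments)
total = len(coded_segments)
for category, n in counts.most_common():
    print(f"{category}: {n} of {total}")
```

From tallies like these the researcher can start asking how the variables relate to one another, which is exactly the move L&A describe on p. 27.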

Friday, February 6, 2009

Week 5: Internet Research/Ethics

One obvious issue with internet research and its impact upon human subjects is that there is little anonymity. Once information is entered on a computer, it becomes a permanent feature of the machine's memory, needing only a skilled hacker to find it, and the same can be said of internet storage and the myth of anonymity for research subjects. The memory banks are endless and long-lasting. Curtis might like to know, for example, that when I researched CITI, his blog entry was listed 4th in the Google results. It is uncomfortable to realize that the internet helps us retrieve information but also takes away our privacy.

I have to laugh at the notion that CITI has created a site to train us in the ethics of working with human subjects. Really? A computer training site that teaches ethics? If Aristotle had known computers were coming, he could have taken a vacation instead of writing the Nicomachean Ethics. I understand that its objective is to guide researchers in what is and isn't permissible, but we all know that the ones willing to listen are probably not going to be the problem. I'm worried about the guys out there who took the CITI training and are secretly attempting to clone humans, or the ones out in the open checking on how genetic mutations of seeds affect humans, or the ones cloning animal parts to be used in humans. There is no way that CITI can train scientists to stop hurting mankind, as science has never acknowledged the existence of the soul (indeed, it couldn't find it, measure it, or statistically calibrate it). I'm not anti-science, but we sure do spend a great deal of time cleaning up the mess of the crazy ones, and I'm not sure that we can turn ourselves around from the worst bits (nuclear weapons, manipulating crops and animals without long-term investigation of the consequences, pollution from plastics, coal refineries - the list is endless). So I'll take the training, but at the end of the day, if this is all we've got, why bother?

Friday, January 30, 2009

Week 4: Measurement

Both quantitative and qualitative methods are empirical methodologies involving "systematic research of contemporary phenomenon to answer questions and solve problems" (Morgan 25).

Quantitative design is experimental, with "randomized subject selection, treatment and control groups" (Goubil-Gambrell 584). According to Goubil-Gambrell, quantitative research covers experimental and quasi-experimental writing research.

Quantitative research does the following:
• Quantifies key aspects
• Manipulates variables (the manipulation is called treatment)
• Measures and statistically analyzes
• Establishes cause and effect relationships
A quantitative study of user experience with online manuals might, for example, identify how much experience the readers have with these books, and how quickly they could solve the problem.

Qualitative design involves case or ethnographic study with representative subjects in natural settings (Goubil-Gambrell 587). The greatest strength of this type of research is its in-depth depiction of subjects in an actual setting. Morgan states that “qualitative research enables researchers to investigate the process of the problem situation or describe features of the problem” and adds descriptive to the types of case studies (27).

According to Goubil-Gambrell, this type of research does the following:
• Is descriptive in nature and process
• Observes a specific situation
• Identifies key variables
• Frames questions
A qualitative study of user experience with online manuals might, for example, describe how the manuals were used, who used them, how often, and whether or not people liked using them. Morgan lists six sources of data for qualitative researchers: documentation, archival records, interviews and surveys, direct observation, participant observation, and physical artifacts (29).

There are two requirements of measurement: reliability and validity. According to Lauer & Asher, validity and reliability influence each other, but while measurement can be reliable without being valid, it must be reliable to be valid.

Reliability is one of the main requirements of all measurement instruments: the ability of independent observers or measurements to agree. It is largely socially constructed and involves a collaborative interpretation of data influenced by researchers' tacit knowledge. It is generally reported as a decimal fraction. The three types of reliability are equivalency, stability, and internal consistency. The data – whether it is nominal or interval – governs the kind of analysis used to determine reliability.
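The "decimal fraction" bit is easiest to see with the simplest reliability estimate, percent agreement between two independent coders. The ratings below are invented for the sketch, and published studies usually go further and correct for chance agreement (e.g., Cohen's kappa):

```python
# Hypothetical ratings: two independent coders scoring the same ten
# essays on a 1-4 holistic scale.
coder_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
coder_b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 4]

# Percent agreement: the fraction of cases where the two observers
# assign the same score, reported as a decimal fraction.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
reliability = agreements / len(coder_a)
print(f"agreement = {reliability:.2f}")  # 8 of 10 scores match -> 0.80
```

A figure like 0.80 is the kind of decimal fraction a study would report; whether that is acceptable depends on the field's conventions and the stakes of the measurement.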

In public education, the emphasis upon state and federal testing of students has brought me some familiarity with the stability needed to judge whether test results will hold across time with some accuracy, but what it fails to judge is whether students will give a flip and actually try with the same intensity from one 3 month period on MAP tests, for example, to another.

Validity is an instrument's ability to measure whatever it is intended to assess, and it rests on the "soundness of interpretation of the measurement system" (L&A 141). Writing tests are valid if they have "congruence with major components of writing behavior" (L&A 140). Four kinds of validity can be determined: content, concurrent, predictive, and construct. While the question of reliability can be definitively answered, the question of validity "can never be fully solved": writing theories continually grow, change, and evolve (L&A 141).

The terms probability and significance run throughout the Williams article on statistics and research, with percentages and samples endlessly plotted; the article deals with the likelihood that something will or will not happen with any statistically definable result. Williams points out, for example, that a distribution can be seen as a probability distribution rather than a frequency distribution. It seems more useful to take the extra step (and extra work) of treating a distribution as a probability, and Williams later encourages us, when dealing with the null hypothesis, to use probability as the basis for our decision; it is apparently the sounder model.

Significance refers to whether a researcher can call the difference or relationship he or she is studying "statistically significant" (Williams 61). By convention, a result "significant at the p<.05 level" is one where the calculated probability is low enough that the null hypothesis is rejected; so the two terms work together (Williams 61).
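One way to see the two terms working together is a small permutation test, using only the standard library. The scores and group sizes below are invented for the sketch; the idea is to assume the null hypothesis (group labels don't matter), shuffle the labels many times, and count how often chance alone produces a difference as large as the one observed:

```python
import random

# Hypothetical scores from two small groups (say, two instruction methods).
group_a = [78, 85, 90, 72, 88, 95, 81, 84]
group_b = [70, 65, 80, 74, 68, 72, 77, 69]

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Under the null hypothesis the labels are arbitrary: pool the scores,
# reshuffle them repeatedly, and measure the difference each time.
pooled = group_a + group_b
random.seed(0)  # fixed seed so the illustration is reproducible
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:8]) / 8 - sum(pooled[8:]) / 8
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"p = {p_value:.4f}")  # reject the null hypothesis if p < .05
```

Here probability does the work directly: the p-value is just the proportion of chance shuffles at least as extreme as the observed difference, and "significant at the p<.05 level" means that proportion falls below .05.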

And that’s my book report.