Friday, 25 July 2014

Dark Cloud over Academic Freedom

The US Supreme Court's ruling upholding the subpoena issued on the basis of the Mutual Legal Assistance Treaty (MLAT) between the US and the UK makes researchers vulnerable to the same reprisals and targeting as informants and spies.  The work of researchers, the interviews they collect, the analysis they provide, can be invaluable in conflicts, and this ruling changes them from a resource for peace into a tool for destruction.

What was the case that tipped the balance?  A murder in Belfast over 30 years ago allegedly committed by Gerry Adams.  (Prof. Robert White of Indiana University's sociology department gives a nice timeline of The Troubles; The News Letter, The Pride of Northern Ireland, gives a timeline of the events of the case; and Boston College gives a timeline of the legal proceedings.)  The reason a murder case in Northern Ireland touches research in the US is the MLAT: documents collected by researchers at Boston College were subpoenaed as evidence, and, similar to the conundrum courts face with journalists who know details of a crime, the court in Northern Ireland felt there was important evidence in the interviews that could be shared via this treaty.  The Belfast Project documented many hours of interviews with participants on both sides of the conflict on the strict written understanding of confidentiality until their deaths unless otherwise granted.  This kind of precaution was and still is felt to be necessary to protect interviewees' lives and those of their families.  In fact, this case is about the kidnapping and murder of a woman by the IRA.

Now there is ample meat on this bone of contention for legal and social science scholars.  First, should we consider the researchers at Boston College researchers, or were they journalists, or even IRA-affiliated persons (which gave them the trust needed to interview those communities) with no academic credentials?  Does their categorization even matter when the real issue is the breach of confidentiality of their sources?  That breach of trust strikes at the foundation of interview-based research.

Another issue, not emphasized during any of the court hearings, is that after 1972, anyone arrested and imprisoned in Northern Ireland was considered a participant in the conflict rather than a criminal.  There was a conceptual and legal change in standing for crimes committed thereafter, treating them as part of a larger battle.  As I understand it (from speaking to experts in this area), if this murder charge had been brought in 1972 and Adams had been arrested then, it would not have been a murder charge but rather a political one (called Special Category Status).   And this is a key distinction both during and after a conflict for rebuilding because, to take a different example like Egypt after the street revolution of 2011: would it be helpful to go back and prosecute every person they could find for vandalism for breaking a window, and every person for assault and battery for protest-related violence?  (Certainly some post-conflict resolutions choose to pursue justice for key leaders, such as through the ICC, but this is not always the case.)  At some point, most post-conflict societies decide to draw a line of forgiveness (such as truth and reconciliation) in order to move forward.  The forgive-and-move-on method is in no way easy; the point is simply that there is something unusual about a murder case in these circumstances.

The researchers ceded their interview data to the Boston College Library, where anonymized data could be used by other researchers.  The data was held by a third party, not unlike how we use cloud data storage or email or other digital storage resources to facilitate data collection and security.  Ultimately, it became the university's decision, not the researchers', to comply with the subpoena, because they had given up the data.  How we store our data, who controls it, and who has access to it is ever more important given the implications of this ruling.

As argued in the Massachusetts ACLU's amicus brief, described here by their executive director Carol Rose, “It is alarming that the trial court opinion suggests that the Constitution surrenders US citizens to foreign powers with fewer safeguards than are afforded to citizens subpoenaed by domestic law enforcement agencies.  If the government has its way, it would straightjacket judicial review of investigations and prosecutions by any foreign country party to this treaty, including Russia and China.”

The examples given in the amicus brief illustrate how information sharing (or not sharing) was a factor in recent legal actions in countries subject to the MLAT:
The prosecution of Nobel Prize winner Liu Xiaobo by the Chinese government for, “inciting subversion of state power.”
The recent arrest and prosecutions of non-governmental organizations, including civil rights groups, by the Egyptian government.
The sex discrimination case recently dismissed by a Russian judge who stated that, “If we had no sexual harassment we would have no children.”

This begs the question: why did the US Supreme Court grant this subpoena request now?  The support and 'special relationship' between the US and the UK has developed a unique flavor as a result of the war on terror.  A kind of complicity.  Despite pressure from senators and then-Secretary of State Clinton, who warned it would be a politically destabilizing move, this ruling opens the door wider for governments to pressure researchers for data.  For me and my colleagues, I can only imagine the consequences.  We are the ones hiking into the hills to ask former child soldiers about their experience, to ask suspected Taliban about their motivations, to ask corrupt drug enforcement police about their allegiances.  What could possibly go wrong for us, or for the people we interview, if we are no longer seen as purely academic researchers?

And what of the critics who say that social science provides no concrete results toward solving war and conflict?  Just because the effects of research informing policy-making are too complex to throw up on a PowerPoint slide does not mean they do not exist.  The knowledge gained by investigating the nature of conflict, its intricacies and ramifications, its participants and their motivations, certainly leads to better planning for preventing conflict and better policy-making when embroiled in the unstoppable ones.  What is the alternative?  Not understanding the nature of the thing and guessing in the dark about policies for troops and sanctions and alliances?

Finally, the amicus brief written by a group of concerned social scientists does a wonderful job of outlining several key reasons why this ruling was egregious; it should be added as a point of review for ethics panels so that all researchers understand how their data will be protected at their institutions.  In fact, if you've never read a legal brief (or tried it and hated it), this is the one for you.  It tells a story, makes a compelling argument, and stays well clear of jargon and things like 'pursuant to code 3.1.c.-3, blah blah.'  Enjoy.




 




Friday, 25 April 2014

The Blind Spot for Big Data

The New York Times has been doing a series of pieces on the uses and limitations of Big Data.   While I do not specifically focus on big data, I look at some of the ways we collect it; therefore, I am interested in the downstream implications once it's aggregated.  How could small distortions at the scale I study become much larger? 

Since I look at conflict, the piece by Somini Sengupta, 'Spreadsheets and Global Mayhem,' certainly caught my eye.  The title of the opinion piece, about all the ways we are trying to mine data for conflict prevention, pits the term 'spreadsheets,' a feeble and not very advanced technology for organizing stuff, against the description 'global mayhem' (for me it evokes Microsoft Excel battling the Palestinian-Israeli conflict).  The title conveys the incongruence of strategies centered on big data.   Collecting information and aggregating it isn't enough.  The sheer weight of it, the potential, feels powerful.  Surely, answers must be in there somewhere?  But finding patterns, asking the right questions, creating really good models with complex information such as communications data (much of it translated)... that's a long way off.  We don't really know what to do with what we have, and we don't really know what the answers from the models we build mean.  That's where I think we are.  Most marketing firms vehemently disagree (see sentiment analysis).  And certainly the types of conflict prediction machines Sengupta references, such as the GDELT Project and the University of Sydney's Atrocity Forecasting, believe fortune-telling is within our digital grasp.

Another piece, by Gary Marcus and Ernest Davis, 'Eight (No, Nine!) Problems With Big Data,' addresses some of these issues, including translation.  They remind the reader how often the data collected has been 'washed' or 'homogenized' by translation tools such as the ubiquitous Google Translate.  The original data may appear several times over in new forms because of this tool.  And there is a growing industry of writing about the flaws of big data.  The debate has made many who work within the field weary or intensely frustrated, because it is fueled largely by popular misunderstandings of a very complex undertaking.

From my perspective, there remains a giant blind spot, what I call the invisible variable of culture. Most acutely, this involves the languages now coming online, the languages spoken in regions experiencing a tech boom.  Individuals in these areas must either participate online and with mobile communication technology in a European language or muddle through a transliteration of their own local language, which will not be part of this big data mining.  My research looks at the distortions in the narratives they produce in both instances.  The distortion over computer-mediated communication such as SMS or smartphone apps, which compartmentalize narrative, is a problem of how we organize what we want to say before we say it.  This pre-language process varies by culture and structures how we connect information such as sensory perception.  At the moment, our technology primarily reflects one culture's notion of how to connect information, how to organize it conceptually.  This has implications both for how information technology collects data and for how questions about that data are posed and understood.

What if other cultures have a fantastically different concept of organizing information?  How do you know the data you've collected means what you think it means?

[math example: your math is base 10... but other groups might use base 12 or base 2, etc... so when you see their numbers and analyze them with your base 10... they make sense to you but don't mean what they meant originally.]
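To make the math example concrete, here is a minimal sketch (my own hypothetical illustration, not drawn from any real dataset): the same written digits interpreted under two different bases look equally plausible to the analyst, yet only one reading matches what the writer meant.

```python
# Hypothetical illustration: a count recorded in base 12 but
# naively read in base 10 still "makes sense" -- it is just wrong.

def digits_to_value(digits, base):
    """Interpret a list of digit values under the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

digits = [2, 5]                           # written down as "25"
original = digits_to_value(digits, 12)    # what the writer meant: 29
assumed = digits_to_value(digits, 10)     # what the analyst reads: 25

# The analyst's number is perfectly coherent within base 10;
# nothing in the data itself signals the misinterpretation.
```

The point of the sketch is that the error is invisible from inside the analyst's own framework, which is exactly the worry with culturally shaped data categories.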

We haven't cracked the code yet on how to incorporate a variable like culture into software applications.  It's more than translation.  It's not as easy as word replacement.  It's deeper than that.  It's context.  It's at the level of concepts and categories, the way we see things before we use language.  That's not to say we can't unravel these things with algorithms... but those are often based (even unconsciously) on our understanding of communication.  And there is massively insufficient research on most languages out there.   If there are around 6,800 languages, Evans and Levinson (2009) figure that:
Less than 10% of these languages have decent descriptions (full grammars and dictionaries). Consequently, nearly all generalizations about what is possible in human languages are based on a maximal 500 language sample (in practice, usually much smaller – Greenberg’s famous universals of language were based on 30), and almost every new language description still guarantees substantial surprises.
And the languages within the tech boom regions such as Africa and Southeast Asia are certainly part of the knowledge void.  We aren't prepared to collect this data yet.  The data we do collect are basically shoehorned into a format meant for English and for western concepts (like our notions of cause and effect, or even time).  Data from these language groups, including usage patterns such as the flu or pregnancy predictor algorithms we've read about, won't be any good without further cultural adaptation.  And when it comes to crunching the data, we have a lot to learn about asking context-specific questions and understanding the data from a non-western framework. (My own research results have shown me it's the difference between thinking you've identified a victim or a villain.)

While not yet widely understood, these cultural differences in the Big Data story are a dazzling challenge to consider.


The Global Database of Events, Language, and Tone (GDELT) is an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what's happening around the world, what its context is and who's involved, and how the world is feeling about it, every single day. (gdeltproject.org)

 



Friday, 21 March 2014

Another war with privacy

In a recent piece in the NYT titled 'Talking Out Loud About War, and Coming Home,' Karen Zraick described a troubling feature of contemporary American culture-- the failure to discuss the experience of war.  I characterize it as troubling (and although perhaps not uniquely American, I argue it is culturally rooted) because the debate about being at war lacks the voices of veterans.

What intrigues me most about this phenomenon, this silence, is how it can happen at all in the culture of new media.  While we capture, catalog, share, comment, repost, and remix every second of our existence, we still avoid this topic.  What is it in our cultural DNA that compels us so strongly to keep the experience of war private?

According to Zraick, returning veterans today are seen, but they are not heard, and they are not even asked.  They remain isolated, which is detrimental to them and to the nation. She explains the reasons for the reluctance to engage with veterans:
Civilians said they were reluctant to bring up what the veterans had experienced in combat, for fear of reopening old wounds. One mentioned guilt at not having served, another of growing up with a distant father who had been scarred by war. Some spoke of wanting to reconcile opposition to war with support for those who had fought, or anger about what the veterans had gone through. 
Without including the realities of veterans' experiences, the national discussion about whether to be in a war relies on abstractions, rhetoric about broad ideas of freedom and democracy; it does not include the details of what it means to the human beings doing the fighting.  Should this be part of the debate?  Why do we turn away from it?


www.1914.org

In her compilation, How We Are Changed By War: A Study of Letters and Diaries from Colonial Conflicts to Operation Iraqi Freedom (2011) (review), Diana Gill approaches this question by examining the words of soldiers, support staff, and families over almost a century.  She finds there has often been a willing denial of the intensity of the situation we find ourselves in.  She offers several reasons for this avoidance, all of them culturally rooted: we don't complain; we put country first; we don't want to worry family; we stay positive... and many more.  It is a rich trove for sociologists and psychologists to trace all the factors.  What stands out for me is the contrast with other approaches to addressing painful and violent events-- a contrast in terms of communication style and cultural attitude.

Take the District Six Museum in South Africa for example.  There the approach is to bring everything out into the light. 
Gone
Buried
Covered by the dust of defeat –
Or so the conquerors believed
But there is nothing that can
Be hidden from the mind
Nothing that memory cannot
Reach or touch or call back.         Don Mattera, 1987 (District Six Museum website)
Not only can you walk through the houses, you can write your story on the wall, you can add your living memory to everyone else's.  The idea (paralleled in other museums in post-conflict zones in Africa) is to provide a space (a sensory environment) to heal and to use the memory of the pain to prevent similar conflicts.  It is the opposite approach in many ways to the American style.  However, I hesitate to generalize one style as open and another as closed.  I think each style is concerned with trust because of the sensitive nature of the topic and is therefore still aimed at an internal or intragroup audience. 

From the observations of Susan Sontag in her 2003 essay, Regarding the Pain of Others: the visualization of pain, the horrors of war and suffering, have played a role historically and certainly have a psychological effect.  Among the cultural turning points for the US would be the televised war in Vietnam and the subsequent decisions not to broadcast violent content.  Watching programming today, compare the American Al Jazeera and the Arabic version of the same news story; the coverage of war and violence is strikingly different.  Is the American version sanitized because no Americans are shown to be harmed (as in Sontag's title, only the pain of others is shown)?  Or is it more humane?  Is it part of the phenomenon of denial of the details of the war experience?  I have more questions than conclusions, but a taboo is a rare bird these days.  It deserves to be investigated more thoroughly.




Wednesday, 12 February 2014

For your eyes only



While I write constantly about adapting technology to other cultures, to make software more useful as a tool for information gathering and analysis, especially when the information comes in the form of communication or narratives, I may not write enough about how these cultures have already adapted.  One of my colleagues, who researches security in the Great Lakes Region (DRC/Rwanda), reminded me about the frighteningly sophisticated system which grazes on western-made media sources but does not itself rely on them to organize, store, or analyze what it finds.
I had experienced similar things in North Africa when giving a police report: an efficiency of information collection and tracking untethered to computers or, for that matter, text-based record keeping of any kind.  The speed was jaw-dropping to witness.  And a bit scary.  The role of computers, mobile phones, and online platforms was purely to connect with the outside, the west, as an audience.  It was a strategic and sophisticated media manipulation.  I have written a bit about this as a political tactic during the revolutions in Tunisia and Egypt during 2011, a time when group leaders were rapidly improving their skills at targeting audiences and crafting language-specific messages.  I was also part of projects aimed at using new media to target domestic audiences.  By comparison, these were lackluster in the amount of political energy they generated.  Photo-sharing and film were much more popular than text-based mediums.  I didn't find the 'revolutionary' media strategies very rousing for domestic audiences because they didn't work within preferred communication modes, i.e., orality.  In sub-Saharan Africa, where more languages are tonal, I predict this phenomenon would be even more pronounced (hence my research), and it is perhaps one reason there has not been an African Spring similar to the Arab Spring (worth considering among the myriad of reasons...).

In Uganda, where I recently did fieldwork, the profusion of mobile phones is hard to ignore.  If everyone has one and is eager to use it in some fashion, why not get the most out of it rather than remain a data donor?  The responses from participants in my experiment reflected an attitude toward technology as though they engaged with it as partial selves.  As bilinguals, they are able to choose their mode of communication, and for them ICT was not connected to their Acholi-selves. 
“The Europeans are the ones who brought all this. It was not ours,” said a skilled laborer, male age 40+
Using a novel approach from the field of cognitive linguistics, I was able to highlight deficiencies of ICT software from a perspective that could change this sentiment.  Indigenous software could be developed which felt like it spoke to and worked with users' Acholi side rather than forcing them to switch over to their English side.  The advantages of this type of adaptation in design have implications for economic development, information security, and political participation.  Besides retaining non-technology-based channels for information, which are already efficient, it is imperative that cultural groups address the inherent power imbalance created by perpetually importing foreign methods for capturing information by developing their own.  Controlling the information (by controlling the software code) could mean changing the power dynamics behind how that information is leveraged in policy-making. 

Tuesday, 14 January 2014

Negative Measure


Built to solve problems.  List deficiencies.  Map crises.  The ICTs for conflict management aggregate the negative and forget to leave a space for the positive.

In the survey I devised to collect descriptive information about a video scene my experiment participants watched, I followed the models of several other ICTs for conflict management and collected information about the individuals perpetrating the actions, the location, the level of damage inflicted, and the level of insecurity participants observed.  However, when I compared the structured answers with oral descriptions, participants often spent time detailing the involvement of bystanders.  Did people offer help?  There seemed to be an expectation of community intervention to calm a situation.  Also, participants were measured in their consideration of the guilt or innocence of the perpetrator.  They offered more than one explanation for the scenario they watched so as to place the motivations, culpability, or even the justification for involvement in mild violence into doubt.

The categories of perpetrator and victim, villain and target were not delineated in the same way as I expected.  The core conceptualization of the event as a 'problem' may in fact be the problem.  This is the initial premise for taking the report.  We want to learn more about it (the problem), about its components, its actors, locations, moving parts, so we can design a solution and prevent its re-occurrence.  What if the local population doesn't perceive a problem?  Or what if they understand the maladjusted components in a manner that is undetectable, or conceptually invisible, with the current ICT approach?  I think it's a matter of the wrong model, not the wrong impulse to improve.

In my experiment, I asked individuals to identify 'the attacker,' the person hitting another man about the head and chasing him through the scene.  Most took this to mean 'who is causing the problem?'  And they identified the man I would have called 'the victim.'  Moreover, several individuals told me they came to this conclusion because this problem-causer/victim was not fighting back but being chased and hit while offering no defense.  This meant he was guilty.  For me, this meant he was in need of help.  This model for recognizing justice, cause and effect, and culpability was foreign to me.  It would be worth doing more experiments around just this concept (perhaps conceptual transfer experiments) and sampling more than just me.

Is it just a matter of thinking up better names for categories?  Better questions to ask on surveys?  It's more than an issue of gathering information quantitatively or qualitatively.

Take for example the issue of the egg, the spear, and the egg-water.  Context is key for meaning.  This is true for any language.  In Luo, tong means spear and egg depending on tone, depending on context.  (The phrase tong pii means 'clean water' in Ethiopian Anuak, but 'egg water' in Kenyan Luo, two closely related Nilotic languages.  Although with negotiation, the phrases could mean water for eggs in either language.  So that's funny.)  The thing about tone and context is that they are reliant on a speaker-listener interaction, a volley, an exchange, a non-solo communication act.  Not like writing.
"Hope" is the thing with feathers—
That perches in the soul—
And sings the tune without the words—
And never stops—at all—.... E. Dickinson, (not a Luo).  
Much of our communication tech has evolved to capture, facilitate, speed and streamline writing, an individualistic form of expression.  It simply can't convey a type of communication with an essential, reverberative quality in which semantic content is as much (or more?) tied to the speaker-listener relationship as it is to anything that can be captured with text. 

Yes, these tools are meant to increase our ability to go into the field and gather information from 1,000 individuals instead of 30.  This is great for researchers and polling and participatory governance and all sorts of reasons, but only if the tool is a good tool, that is, if it assists us in doing a task we are already doing and makes it simpler, faster, or easier in some way... but if instead it brings us speedily to the wrong results, then what good is it?

The problem comes from the fact that the tools being used now were built to capture western narratives (or logical constructs, conceptualizations of events) and to communicate among NGO staff after disasters.  These same tools have been only slightly modified and then redeployed for conflict use, such as post-reconstruction governance surveys and violence reports for election monitoring, all the while not recognizing that the new users have new needs.  New conceptualizations of the events they are describing (and new ways of linking them) may be the key to empowering locally driven solutions and disengaging externally mandated ones.

Thursday, 19 December 2013

Original and Extra...Complicated


Field experience as a cultural mediator led me to the hypothesis about how ICT software was capturing narratives.  In the experiment, participants watched a video and described it three times:
stage 1 out loud in Acholi
stage 2 written on a mobile device in Acholi
stage 3 out loud in English
I predicted a pattern of similarities between the written Acholi version and the oral English version.  I thought that because the software had been developed from a western cultural perspective, even when the interface was translated for users, they would have to get into an English mindset to engage with the logical organization of the software.  And because of this effort at a cognitive level, the narrative they produced via the ICT would be more similar to the oral English version and less similar to the oral Acholi version.  In this way it could be said that the Acholi narrative was disrupted.
hypothesis: the narrative given via ICT in Acholi will be more similar to the performance in English than to oral Acholi.  [stated as a null hypothesis: the oral Acholi version will resemble the ICT Acholi version]
The apples-and-oranges bit that makes statistical modeling a challenge is two-fold.  First, the experiments I adapted compared only speech (similar to my stage 1 and stage 3)... more like apples to apples.  And second, each of their participants provided similar narrative content, at least similar enough to compare... so here, oranges to oranges.  In my case, even though I controlled the topic (the video), one participant remembered seeing a dispute instigated by a woman while another remembered seeing a group of government thugs enforcing a curfew.  Furthermore, some participants provided only a few phrases while others talked for several minutes, producing many paragraphs of data.  Their descriptions of the video clip are barely in the same galaxy. 

The plan was to compare each narrative stage by analyzing frames.  A frame here refers to a fragment or phrase constructing the narrative of, in this case, an event.  Take for example the frames 'I saw fighting' and 'I saw two men fighting.'  While similar, these would be considered two different ways of framing because of the added detail of the number of actors, the two men. 

Frames such as a general overview followed by a reference to who was in the scene-- 'I saw fighting. I saw people fighting'-- would be considered two frames.  If the details in these frames are not markedly different between any two stages, then they are counted as the same even if the syntax differs, as in 'I saw a fight' vs. 'They are fighting.'

When differences occur, they are not only counted but ranked for importance.  Based on the narrative structure in Acholi (used as a baseline/norm and derived from this study as well as the literature), the general category, then the participants, then the details of actions and location are given hierarchical importance.  So, for example, if two versions match until a detail of location, where one mentions a location and the other does not, they may still be considered a match if the general scene category and the descriptions of participants and actions match.  If the locations differ entirely, one event recalled as being in a market and the other at a taxi stage, then they will likely be considered different narratives.  For shorter narratives, most if not all frames and details must match.  
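The hierarchical matching described above can be sketched in code.  This is my own minimal illustration of the ranking logic (it is not the actual coding scheme used in the study): frames are compared on scene category first, then participants, then actions, with location ranked lowest, so a missing location is tolerated while a conflicting one is not.

```python
# A sketch of hierarchical frame matching: higher-ranked fields
# must agree when both versions supply them; a detail absent from
# one version does not by itself break the match.

def frames_match(a, b):
    """a, b: dicts with optional keys 'category', 'participants',
    'actions', 'location'.  Returns True if the two framings would
    be counted as the same narrative under the ranking above."""
    # Top-ranked fields: a conflict on any of these breaks the match.
    for key in ("category", "participants", "actions"):
        if key in a and key in b and a[key] != b[key]:
            return False
    # Location (lowest rank): unmentioned is fine, conflicting is not.
    if "location" in a and "location" in b and a["location"] != b["location"]:
        return False
    return True

v1 = {"category": "fight", "participants": "two men", "location": "market"}
v2 = {"category": "fight", "participants": "two men"}            # no location
v3 = {"category": "fight", "participants": "two men", "location": "taxi stage"}

frames_match(v1, v2)   # match: location simply unmentioned
frames_match(v1, v3)   # no match: locations differ entirely
```

A real implementation would also need the rule for shorter narratives (most if not all frames must match) and the tone/doubt comparison discussed below; the sketch only captures the ranked-field idea.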

Another consideration is tone.  Throughout stages 1 and 3, the space that an oral modality gives for adding extra words, words which convey the (un)certainty of the speaker about his/her memory, plays a role in the strength of the witness statement.  However, stages 2a and 2b do not afford this same linguistic wiggle room.  There were built-in points to express doubt in stage 2b, and they are a point of focus in the analysis.  Stage 2a was an open SMS format, so it was up to the participant to inject the intended level of (un)certainty.  Divergence in tone, in the expressed level of doubt, constitutes a means for differentiating narratives as well. 
 
For stage 2 (a and b), it gets a bit trickier: comparing written results with spoken ones... and sometimes with only a ticked box for yes or no.  But this is the meat of the question.  Does the ICT format adequately capture the categories and concepts of the event frames?  (Many participants in my experiment were reluctant to hold the device and type themselves, and often spoke their answers aloud, which I recorded with an audio device and typed for them.  The data collected via the mobile device in stage 2 was therefore not strictly written; much of it was still spoken.)

For stage 2b to be judged a match with another stage, the answers should convey similar information to that given in one of the oral narratives without contradicting it or adding new information.  If the reader had only the information from stage 2b, could s/he reasonably imagine the scene as told in one of the other narrative versions?  If one of the oral narrative versions has several frames or details not captured by stage 2b, then no; but if the sense of the event framing, such as 'theft with violence,' was captured by the answers to stage 2b's question series, then it can be reasonably concluded that stage 2b adequately captured that narrative.  (If it sounds complicated, you're not wrong.  As I have said before, this was a first attempt at bringing a problem to the surface so it can be better understood, studied, and addressed... so far, lots of good indicators plus heaps of ideas for improving future trials.)
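The stage 2b criterion can be sketched the same way.  Again, this is hypothetical Python of my own, with assumed names and an assumed cutoff for how many omitted frames are 'several'; it only illustrates the three-part test above: no contradictions, no new information, and only a limited amount left out.

```python
# Sketch of the stage-2b adequacy test described above: the structured
# answers match an oral narrative if they contradict nothing, add nothing
# new, and omit only a limited share of the oral version's frames.
# The names and the max_missing cutoff are my assumptions.

def stage2b_captures(oral_frames, answers, max_missing=2):
    oral = set(oral_frames)
    given = set(answers)
    if given - oral:                 # contradiction or new information
        return False
    missing = oral - given           # frames the 2b answers left out
    return len(missing) <= max_missing

oral_version = {"theft", "violence", "two men", "market"}
answers_2b   = {"theft", "violence", "two men"}
print(stage2b_captures(oral_version, answers_2b))   # one frame missing -> True
```

If the 2b answers introduced a frame absent from every oral version, or left out several frames, the function returns False, just as the prose criterion would reject the match.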

Because of the ordering of the stages, stage 2a often repeated much of what was said in stage 1.  In psychology experiments, ordering effects are considerable and are usually countered by mixing up the stages; in this case, however, the role of priming was anticipated.  The structure of the experiment mirrors the real-life field experience in which an individual recalls an event orally and then creates a written record of that event with ICT software, often with the help of a trained information collection specialist.  Priming effects would work against my hypothesis; in other words, I all but stacked the deck in favor of my own hypothesis failing.  So if the results show my hypothesis still has value, there must be a very strong force at work.

Of the instances that were considered 'ICT success,' most came from participants who followed the stage 2a path.  This may have been because it was a quick experiment and it was easy to simply repeat in written form what you had just said out loud; it is fresh in your mind.  The differences came in the form of tone as well as more obvious detail and frame shifts.  The tone change is interesting: in the oral account, the speaker might say, I saw some fighting, like maybe there was this guy who caused a problem, or maybe it's a taxi conductor wanting change....  Doubt and alternatives were expressed.  In stage 2a, the version was clear and direct: I saw one guy hit another guy in the street.  The written versions often make it sound as though the participant is sure of the identity and ready to accuse someone to the police, whereas the oral version is not accusatory and gives a couple of possible scenarios for what prompted the chaos.  If I read only the ICT version, I would believe there was an actionable threat, a report of violence that needed a response.  If I heard the oral version, I would not be as concerned.  The aggregation of this type of report, stripped of the measured tone of the speaker, can heighten perceived threat levels unnecessarily.

I will spend more time on this topic in further posts, as well as talk about the results of my frame analysis.  (Two other colleagues are reviewing my results: independent evaluation.)  But the preliminary conclusion is that my hypothesis was correct.  So that's cool.  Other results point towards the strong division between oral and written narrative structures, something we all kinda know intuitively, and there is a ton of research on how speaking differs from writing, so it's reassuring that the results stayed with this pattern... the interesting parts come in looking at