December 6th Response

The reading was a Twitter thread that highlighted the use and misuse of social media data. It was a very insightful text. I especially liked that the author addressed what runs through people's minds regarding the anonymous release of this data: in fact, by releasing different pieces of private information, they give the public the tools to trace the data back to its origin. I follow MG's thought process on whether making accounts "private" could change the amount of information gathered, or whether the user has no control over any of this at all.

Thursday, November 29

Today's article on the FAIR principles of data use gave four broad methods of effectively producing fair data and explained each of them further. It tells us that data should aim to be Findable, Accessible, Interoperable and Reusable. It is, however, slightly difficult to achieve this, as there is no universal data store that ensures all recorded data carries unique, non-repeated identifiers and is presented in a language and format accessible to all. Also, technology advances at different velocities in different parts of the world, and the advanced techniques that make data FAIR in some areas could also make it harder to understand or translate for those in other areas. ZC also describes another problem with "accessible" data: the sheer abundance of data could make it harder to discover actually useful information.

November 27th Response

Today's article, Debates in the Digital Humanities, is a very interesting and insightful one. It initially points out the strong biases that develop against more technological approaches: printed media is received as a more reliable source of information than work published online because it is, to some, the more conventional form; when faced with an "easier" alternative, the older form becomes the "true" one. The same ideology shapes the acceptance of Digital Humanities as a legitimate form of scholarly work. The article is well written, with in-text citations that help with references, but many of the claims made are open to question, as some of my peers pointed out in their comments on Hypothesis, even though the reasoning is logical and can be followed.

November 15th Response

The article published in Political Geography for today's class on mapping discusses the relationships between the demographics of Los Angeles and the political trends in the various areas within the city. The maps made to demonstrate these trends, however, do not communicate them effectively to the audience: some use percentages rather than actual figures, some lack borders distinguishing the individual areas, and the color coding of one map used colors that were too close to each other. KL makes a great point about how such misinterpretations can occur, with an example from the most recent congressional race in Maine.

Identifying Geographical information in text – November 13

The brief article describes the introduction of data-analysis software that helps distinguish geographical locations mentioned in text based on their context and their proximity to other words that describe the unique locations. This was used to map the locations of interviews as well as the locations mentioned in the interviews. As MS-C stated, Named Entity Recognition seems like an important tool for collecting data from text. This is a useful way to understand and analyse migration patterns; however, the visualizations that explain this data are quite confusing. By showing only a section of what seems to be some landmass, I am unable to identify the location as a continent, country or state. Also, the labels are too few to offer any form of guidance, and the only thing I am able to deduce from this data presentation is that people moved around and lived in an area that seems to be the west coast of some larger region.
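To make the idea concrete, here is a minimal sketch of the simplest form of place-name extraction: matching text against a gazetteer (a list of known place names). The place names and sentence below are invented examples for illustration, not data from the article, and this is far simpler than the software the article describes, which also uses surrounding context.

```python
import re

# Hypothetical gazetteer of known place names (invented for this sketch).
GAZETTEER = {"Accra", "Kumasi", "Lagos"}

def extract_places(text):
    """Return known place names mentioned in the text, in order of appearance."""
    # Find capitalized words as candidates, then keep only known places.
    candidates = re.findall(r"\b[A-Z][a-z]+\b", text)
    return [word for word in candidates if word in GAZETTEER]

sentence = "She moved from Kumasi to Accra before settling near Lagos."
print(extract_places(sentence))  # ['Kumasi', 'Accra', 'Lagos']
```

Real Named Entity Recognition tools go beyond lookup like this by using context to decide whether a word is a place at all, which is what makes them useful on interview transcripts.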

November 8 – Lynching, Visualization and Visibility

The reading for today, by Lincoln A. Mullen, is an example of how presenting visual data in ways the general public can easily understand and absorb is not only effective communication of information but also a way of making knowledge and history that would otherwise be ignored accessible to all. Some parts of the text explain how many governmental offenses against the basic human rights of individuals are legalized simply because the law refuses to acknowledge them, or because they are "invisible". I think data visualization is especially useful in cases like these: by presenting such masses of data, the public gets to see and acknowledge what is being done, and the scale at which it is done, and then choose what to do with that knowledge. They could become part of the movement against it. MS-A also makes a fair point that such visualizations rely on complete and accurate data to avoid miscommunication.

Response Post 10/04

I found two of the articles in this week's selection quite interesting. The extract on psychohistory offers a bit of a solution to making data collection and use less biased. If studying the human sources and collectors of data alongside data analysis were the norm, then maybe the individual perspectives and opinions that generate bias could be understood, and the extremity of these biased views could determine whether or not the data collected is valid enough for public use. My colleague ZC makes a good point about the lack of clarity regarding the methodology: is it primarily math based? The article by Michael Martell on the wage gap between men of different sexual orientations was also very insightful. The steps the author took to ensure that other factors worth considering in the overall analysis were examined alongside the data are further ways we could move away from collecting and using biased and overly general data.
