Postgraduate Course: Managing Digital Influence (PGSP11441)
Course Outline
School | School of Social and Political Science |
College | College of Humanities and Social Science |
Credit level (Normal year taken) | SCQF Level 11 (Postgraduate) |
Course type | Online Distance Learning |
Availability | Available to all students |
SCQF Credits | 10 |
ECTS Credits | 5 |
Summary | Please note that this course is only available to students of the Data Science, Technology and Innovation (DSTI) online distance learning programme
One of the most significant effects of easier access to ever more data on an increasing range of phenomena is the use of rankings, based on customer feedback, to assess every aspect of the performance of products and organizations. This 10-credit course provides students with the skills to (1) develop a comprehensive understanding of how organizational reputation indices are made; (2) compare different methods for collecting data on digital influence; (3) capture the effects of rankings on organizations; and (4) manage reputation risk in the light of new social-media-based ranking systems. Our analysis will start from media rankings and progressively extend to automated ranking systems. The course also offers a tutorial on using NodeXL as a tool for measuring online influence. |
Course description |
Week 1 - Media Rankings
In the first week, we will provide an overview of popular media rankings and discuss aspects of their making.
Study Session 1 - Introduction
In the first study session, we define what a ranking is and provide an overview of media (on-line and off-line) reputation indices in different domains. A ranking is a judgment of a firm, an individual or a product made by a set of audiences on the basis of perceptions and assessments that are assembled and made available via a ranking system, which defines, assesses and compares firms, individuals or products according to certain predefined criteria. US-based Fortune magazine's ranking, published since 1982, is one of the best-known rankings of firms (Fombrun & Shanley, 1990). A different set of rankings is the ranking of universities. Generating similar furor among their readership, university rankings such as the U.S. News and World Report (USN) ranking are meant to provide accessible information to educational consumers. The case of USN (Espeland & Sauder, 2007), which has ranked law schools since 1987, is interesting because it is the dominant ranker of law schools, virtually monopolizing legal education. Business schools, in contrast, attend to at least five high-profile rankings (Sauder & Espeland, 2006).
Reflective Discussion Topic
Add a ranking that is not covered by the examples given in Study Session 1 and observe its evolution over the last four years.
Numerous commentators have raised severe criticisms of Fortune's AMAC ranking. Search on-line for one such criticism and say what you think of it.
Study Session 2 - The making of a ranking
In this study session we will discuss aspects of the making of a ranking, conducting a closer inspection of some of the media rankings studied in the literature. We will address how the criteria used to define reputation, and the statistical methods applied, change over time.
In the previous Study Session we addressed the annual Fortune survey of America's Most Admired Companies (AMAC) as one of the most well-known reputation ranking systems. But there are many other regional ranking systems that share its principles, like Britain's Most Admired Companies, published by Management Today. Schultz, Mouritsen & Gabrielsen (2001) address, for example, the case of a 14-year-old Danish ranking of firms.
Inspecting the making of a ranking is often a difficult endeavour because, while the main findings are published in magazines, the underlying database and the more detailed measurements are not always available, or are available only on purchase.
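To make the mechanics concrete, below is a minimal sketch in Python of how survey ratings on predefined criteria might be aggregated into an ordered list. The firms, criteria and weights are invented for illustration and do not reproduce any actual ranker's methodology.

    # Illustrative only: invented firms, criteria and weights,
    # not any magazine's actual methodology.

    # Average survey ratings (1-10) per firm on predefined criteria.
    ratings = {
        "Firm A": {"management": 8.1, "products": 7.4, "innovation": 6.9},
        "Firm B": {"management": 7.2, "products": 8.0, "innovation": 7.8},
        "Firm C": {"management": 6.5, "products": 6.8, "innovation": 8.5},
    }

    # Criterion weights chosen by the ranker; changing them reorders the firms.
    weights = {"management": 0.4, "products": 0.35, "innovation": 0.25}

    def composite_score(criteria_scores):
        """Weighted average of a firm's criterion ratings."""
        return sum(weights[c] * score for c, score in criteria_scores.items())

    ranking = sorted(ratings, key=lambda firm: composite_score(ratings[firm]),
                     reverse=True)

    for position, firm in enumerate(ranking, start=1):
        print(position, firm, round(composite_score(ratings[firm]), 2))

Even this toy example shows why access to the underlying data matters: the final ordering depends entirely on the chosen weights, which published league tables rarely disclose in full.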
Reflective Discussion Topic
Take the ranking systems "The RedMonk Programming Language Ranking" and the above-mentioned "Management Today Most Admired Companies Ranking" and tell whether and how they make their data and measurement techniques available. Comment on the differences in their models.
Week 2 - Assessing Digital Influence
We will then describe the different ways of assessing digital influence. One is a social, interaction-based model, whilst the other relies heavily on the use of social media data. The model of assessing influence based on personal familiarity and informal networking channels is considered in relation to important recent developments based on the use of data analytics tools (like Klout, PeerIndex, Kred etc.). We will ask: Is the new model of measuring influence by counting numbers of 'followers' and re-tweets going to replace more traditional 'qualitative' methods of assessing influence? Are these two wholly differentiated processes, or is there a degree of cross-over between these practices?
Study Session 1 - Survey-based assessment methods
Survey-based reputation ranking methods have been criticised for ranking 'intuitively': it is impossible to discriminate between firm-specific criteria because they all correlate (Fombrun & Shanley, 1990). Critics emphasise that:
(i) respondents can 'cheat and lie' (Reingold & Habal, 1998);
(ii) respondents show limited capacity to make fine distinctions between the criteria used in assembling the overall rank; and
(iii) reputation is largely underdetermined by the criteria suggested to account for it.
(iv) Furthermore, in Schultz et al.'s study of "sticky reputation" (2001), size appears to be an important correlate of reputation, and may therefore be related to awareness of the firm based on its presence in the world.
Size is what gets firms into business magazines and periodicals. A positive correlation between ranking and size is also noted by Downes (2000) and Welch (2002). This observation shifts our attention away from ranking criteria (which are the basic feature of survey-based methods) and gives weight to issues related to the media presence of a firm. This is an opportunity for us to discuss existing forms of assessing reputation in relation to new developments based on making use of social media data (like Klout, PeerIndex, Kred etc.).
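To see what "they all correlate" means in practice, here is a small sketch in Python (the ratings and firm sizes are invented, and the pandas library is assumed to be available) that computes the pairwise correlations between criteria scores and firm size:

    import pandas as pd

    # Invented survey ratings (1-10) and firm sizes (revenue, in billions);
    # purely illustrative data, not drawn from any actual survey.
    data = pd.DataFrame({
        "management": [8.1, 7.2, 6.5, 7.9, 6.1],
        "products":   [7.8, 7.0, 6.2, 7.7, 5.9],
        "innovation": [7.5, 6.9, 6.0, 7.6, 5.8],
        "size":       [92,  60,  35,  88,  20],
    })

    # If the criteria (and size) are all highly correlated, they carry little
    # independent information about what actually drives reputation.
    print(data.corr().round(2))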
Reflective Discussion
"Snowballing" from source to source was once a 'social network' issue, to speak in terms of method. Who else should I speak to? That's the question at the conclusion of the interview, if trust has been built. Now that much of the information we retrieve derives from the Internet, the issue of trust becomes digital. Discuss the trustworthiness of search engines as a source for recommending information and comment on what are your ways to check the reality of information that your retrieve from the web?
Study Session 2 - Social Media Based Assessment Methods
In this study session we will reflect upon the consequences of 'automating' the measurement of influence, where algorithms are created to crawl through blogs and social networking sites and automatically return a 'score' based on various weighted metrics concerning the impact of particular market actors. As an example, we will discuss the implications of social media for the field of Industry Analysis.
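As a purely illustrative sketch (in Python, with invented metrics and weights that bear no relation to how Klout, PeerIndex or Kred actually compute their scores), the snippet below shows how crawled activity counts could be combined into a single weighted influence 'score':

    import math

    # Hypothetical crawled activity counts for one account; real tools
    # combine many more signals, with undisclosed weights.
    activity = {"followers": 12000, "retweets": 340, "mentions": 95, "replies": 60}

    # Arbitrary weights applied to log-scaled counts, then rescaled to 0-100.
    weights = {"followers": 0.4, "retweets": 0.3, "mentions": 0.2, "replies": 0.1}

    def influence_score(counts, max_log=7.0):
        """Weighted sum of log-scaled metrics, capped and rescaled to 0-100."""
        weighted = sum(weights[m] * math.log10(1 + counts[m]) for m in weights)
        return round(100 * min(weighted / max_log, 1.0), 1)

    print(influence_score(activity))

The opacity that the reflective discussion below asks about lies precisely in these choices: which signals are crawled, how they are scaled, and how they are weighted.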
Reflective Discussion Topic
Influence measurement tools are becoming increasingly popular but are also highly opaque. It is far from clear, for instance, how their scores are calculated or how the algorithms themselves are put together. Describe in as much detail as you can the workings of a social media monitoring algorithm of your choice.
Week 3 - Using Gephi to Measure Digital Influence
The course offers a tutorial on using NodeXL and Gephi as tools for measuring online influence. This gives students an opportunity to engage with social network modelling in order to explain and visualise influence in specific networks. Using third-party interfaces, Twitter data will be filtered and imported into Gephi in order to measure and visualise influence on (1) specific networks, e.g. 'who are your most influential followers?', and (2) specific keywords, e.g. 'who are the most influential users on a specific keyword?'. To answer these questions, different methods will be used and illustrated to give students an overview of the existing possibilities.
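For students who want to experiment programmatically ahead of the assisted Gephi tutorial, the sketch below uses Python with the networkx library on a small invented retweet network (the accounts and edges are made up) to flag influential users via in-degree and PageRank, two of the centrality statistics Gephi also exposes:

    import networkx as nx

    # Invented directed network: an edge A -> B means A retweeted (or follows) B,
    # so influence flows towards B.
    edges = [
        ("alice", "dana"), ("bob", "dana"), ("carol", "dana"),
        ("dana", "erin"), ("bob", "erin"), ("frank", "alice"),
    ]
    graph = nx.DiGraph(edges)

    # Two common influence proxies: how many accounts point at you (in-degree),
    # and PageRank, which also rewards being pointed at by influential accounts.
    in_degree = dict(graph.in_degree())
    pagerank = nx.pagerank(graph)

    for user in sorted(graph, key=pagerank.get, reverse=True):
        print(f"{user:6s}  in-degree={in_degree[user]}  pagerank={pagerank[user]:.3f}")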
Assisted Tutorial - Gephi
Week 4 - New ranking methods based on social media data: are they successful?
We will then extend our discussion to include the new 'entrants' in the ranking ecosystem. Internet and social media technologies have lowered the barriers to entry into the ranking market. A plurality of rankers may in theory make a difference, and the power of existing rankers may be weakened. We will host a guest lecture by a member of the Institute of Industry Analyst Relations (IIAR) on whether IT organisations have changed their attitude towards rankers. The guest lecturer will discuss hands-on experience of the effects of rankings in the IT sector, with particular reference to consultancy services and to whether new methods of analysis have affected the market for supplementary knowledge about on-line influence measures.
Virtual Classroom - Guest lecture from Member of IIAR
Week 5 - What rankings do to organizations
We will discuss what rankings do to organizations: how external audiences react to rankings, how prior rankings influence the surveys that determine future ranks, how rankings are used to make funding decisions, and how activities within organizations come to conform to ranking criteria. We will present studies that demonstrate how on-line rankings can have complex effects on organizational behavior, creating processes of 'reactivity' whereby, in an effort to improve their position, organizations begin to conform to these measures.
Study Session 1 - Moving up
Moving firms onto (and up in) the list takes fairly dramatic changes
"As there is limited set of positions in the top 20, the implication is that small improvements envisaged for a firm have no influence on its ranking position. Often new and small firms only experience 'small improvements'. Thus, there is relatively little visibility of 'rising firms' within such a ranking system. The consequence is that it takes fairly dramatic changes to move a firm onto the list." (Schutz et al., 2001: 36). Espeland & Sauder (2007: 34) add that the limited set of positions in rankings creates a prisoner's dilemma for organizations since they are strictly relative and punish organizations who fail to conform. The fate of organizations expressed in rankings is not simply intertwined, but is zero sum: one school's success may come at the expense of many others.
Study Session 2 - Boycott rankings
Espeland & Sauder (2007:34) suggest that ranking measures are imposed and therefore coercion can be considered a further mechanism of reactivity. Studying coercion as a mechanism of reactivity reveals why forms of resistance based on boycotts fail. Boycotts are for example those described by Yee (2004), when Harvard Business School and Wharton Business School announced that they would no longer fully cooperate with BusinessWeek refusing to release data for their biennial survey of MBA programs (Yee, 2004).
Reflective Discussion
Organizations may be subject to a number of different assessments. Some are official (e.g. audits and required performance measures) and some are unofficial, web-based and TripAdvisor-like (e.g. ratemyprofessor.com in the case of Higher Education). Pick an organization in a sector of your choice and try to identify how many assessment criteria it has to respond to. Discuss the possible consequences of this multiplicity.
|
Entry Requirements (not applicable to Visiting Students)
Pre-requisites | |
Co-requisites | |
Prohibited Combinations | |
Other requirements | Please note that this course is only available to students of the Data Science, Technology and Innovation (DSTI) online distance learning programme |
Information for Visiting Students
Pre-requisites | None |
High Demand Course? | Yes |
Course Delivery Information
Not being delivered |
Learning Outcomes
On completion of this course, the student will be able to:
- assess evidence deriving from monitoring digitally derived internet data, recognizing its strengths and limitations in comparison to other ways of apprehending customer needs;
- make best use of the results of digital data analytics for service design, marketing and institutional reputation management;
- appreciate the practical benefits and limitations of digital data for organizational decision-making;
- identify, access and commission on-line data analytics tools and services appropriate to their needs;
- understand when and how to procure social media data analytics services and how to combine them with existing knowledge practices.
|
Reading List
Readings:
Downes, D. (2000). Does BusinessWeek ranking matter? The MBA Newsletter, 8(9), 5-10.
Espeland, W., & Sauder, M. (2009). Rating the Rankers. Contexts, 8(2), 16-21.
Espeland, W. N., & Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds1. American Journal of Sociology, 113(1), 1-40.
Fombrun, C., & Shanley, M. (1990). What's in a Name? Reputation Building and Corporate Strategy. The Academy of Management Journal, 33(2), 233-258.
Reingold, J., & Habal, H. (1998). How we kept the data unsullied. Business Week, 19(October), 94.
Hillis, K., Petit, M., Jarrett, K. (2013). Google and the Culture of Search. New York, Routledge.
Schultz, M., Mouritsen, J., & Grabielsen, G. (2001). Sticky reputation: Analyzing a ranking system. Corporate Reputation Review, 22, 24¿41.
Welch, I. (2002). The 2000 Business Week rankings of business schools: Why they are both harmful and wrong. Available at: http://welch.som.yale.edu/academics/bweek.html. Accessed 7.11.2006.
Yee, C. (2004). Ranking methods under fire. University Wire, 4(September), 1.
Weblinks:
Interview with Espeland and Sauder on Context Podcast (from minute 5): http://thesocietypages.org/officehours/2009/07/16/ranking-colleges-and-supernatural-beliefs/
Webinar on The Impact of Social on the Analyst Industry: A Roundtable w/ Jonny Bentwood, Barbara French, Carter Lusher, and Jeremiah Owyang: http://vimeo.com/13520800
Cornell University study on how Amazon book reviews are manufactured:
http://www.freelunch.me
University of Oxford research on online rankings:
http://www.howsmyfeedback.org/
|
Additional Information
Graduate Attributes and Skills |
Not entered |
Keywords | Not entered |
Contacts
Course organiser | Dr Gian Campagnolo
Tel: (0131 6)51 4273
Email: g.campagnolo@ed.ac.uk |
Course secretary | Mr Jason Andreas
Tel: (0131 6)50 3937
Email: Jason.Andreas@ed.ac.uk |
|
|