The Global Go to Think Tank Index (Part 4): Longitudinal Analysis

The GGTTIR’s contribution to the creation and stratification of the field of think tanks

By Dieter Plehwe and Moritz Neujeffski

This final blog post in our series is devoted to the way the think tank index and ranking works and to what we may think of as its performative effects. By rating and ranking a wide range of think tanks across the globe according to a range of criteria, the Global Go To Think Tank Index and Ranking (GGTTIR) has purposefully contributed to the creation of a global field of think tanks since 2006: organizations that can be assessed and compared, no matter how great the differences between them, are considered to belong to the same type of organization. With indices and ranking devices, it is not only important who comes out on top, who ends up at the bottom, and who does not show up in the listing at all. Apart from serving as a device of stratification and hierarchy, the GGTTIR must be considered a tool of unification, even if important differences and variations are glossed over by the common branding as think tanks. Organizations are held to have more in common than what sets them apart, in spite of highly relevant differences in funding (public, private, commercial etc.), output (academic research, partisan policy papers, advocacy material etc.) or function (for-profit expertise, scientific research, lobbying etc.).

Structure of information in the GGTTIR

In order to assess such achievements and ambiguities, we will now look more closely at various dimensions and developments of the GGTTIR and attempt to answer a number of questions: How is the GGTTIR constructed, and how did it change over time? What can we see if we look at the fate of think tanks in the ranking over time? What strengths and weaknesses show up when the ranking is subjected to closer statistical scrutiny over time? And finally, more broadly, what is the possible impact of the GGTTIR in light of contemporary changes in modes of (global, national, local, sectoral etc.) governance?

The making of the GGTTIR

In principle we can distinguish between indices based on measurable evidence (such as numbers of students, grants, patents etc.) and those based on the assessments and perceptions of “experts” (respondents to a survey selected for attributed competence, not at random). In the latter case “hard evidence” may certainly also play a role, but it is not immediately and “by itself” used as the basis for judgement. Due to difficulties in obtaining hard figures, such survey efforts rely on expert judgement as a proxy: unlike random surveys designed to neutralize bias, expert surveys ask for the judgement of people who are supposed to have good insight into the subject matter. The best-known and probably most influential example of such an effort is Transparency International’s Corruption Perceptions Index. Results generated this way are by no means objective, of course. But the presentation of results in the form of numbers makes it easy for journalists to draw on the report and to compare figures from one country to another and over time. Regardless of the weak evidence base, TI’s index is widely reported and cited: a classic example of the authority of numbers.

The GGTTIR has been created as an intermediate form, a “controlled perception” index combining fact-based assessment and perceptions not rooted in verifiable “facts”. Observers, many of whom are think tank insiders in addition to journalists, academics etc., provide answers to a questionnaire. Respondents are supposed to base their judgement on evidence collected from think tank websites and similar sources, but it is not clear whether this will, let alone can, be done, given the data limitations. In order to iron out inconsistencies and oversights, a team of experts takes a second look at the results and has a mandate to adjust the rankings. Because of the controls Prof. McGann and his team have built into the rating and ranking process after the survey data are returned, the GGTTIR might best be considered an expert-controlled think tank perception index. Miguel Antonio Lim (2018) has described a more solidly fact-based instrument, the university ranking, as “weak expertise” because of the constant need to negotiate the methods employed with the audience [1]. McGann and his team clearly follow an even more pragmatic approach, which is problematic in terms of the scientific criteria needed for a systematic and standardized rating and ranking system. An unknown number of think tanks remains either a) outside the database or b) outside the ranking. Compared to university rankings, the approach is highly selective, which cannot be seen by looking at the results alone. We therefore need to present the methodology in somewhat greater detail.

The GGTTIR relies on a continuously updated repository of think tanks. The ranking exercise proceeds in three steps.

Step one: For the latest (2019) report, more than 8,000 think tanks in the updated database and about 12,000 individuals were approached to nominate think tanks for the ranking exercise. Think tanks nominated at least ten times by respondents become part of the ranking exercise, in addition to the top-ranked think tanks of the previous year. The nomination process can be considered an attempt to be inclusive, but it is unclear how many think tanks remain outside the database because they are not nominated ten times by the existing in-group. The nomination process was opened to a larger think tank community in 2010 in order to reduce the American bias and the exclusion of many think tanks from the Global South. This can be considered an important step in negotiating the GGTTIR with an expanding audience. In addition to opening the nomination process, an outreach effort was launched in 2010 to reach think tanks in Latin America, Asia and the MENA region.
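Read operationally, the inclusion rule described above amounts to a simple filter. The following minimal sketch is only our paraphrase of that rule; the names and counts are hypothetical, not the TTCSP's actual data:

```python
# Hypothetical nomination counts per think tank from the step-one survey.
nominations = {
    "Think Tank A": 14,
    "Think Tank B": 9,
    "Think Tank C": 23,
}
# Top-ranked think tanks carried over from the previous year's report (hypothetical).
previous_top_ranked = {"Think Tank D"}

# A think tank enters the ranking exercise if it receives at least ten
# nominations, or if it was among the top-ranked think tanks the year before.
ranking_pool = {name for name, count in nominations.items() if count >= 10}
ranking_pool |= previous_top_ranked

print(sorted(ranking_pool))  # ['Think Tank A', 'Think Tank C', 'Think Tank D']
```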

Step two: In addition to the organizations involved in the nomination process, more than 68,000 “journalists, policy makers, public and private donors, and functional and regional area specialists on the TTCSP listserv were invited to participate and over (sic!) 3974 fully or partially completed the nomination and ranking surveys”, writes James McGann in 2020 [2]. Unfortunately, we are not told about the composition of this group of almost 4,000 participants. Including public and private donors in the list of people charged with the rating and ranking can be considered unfortunate, as these individuals are likely to have conflicts of interest. In a letter to the participants in the survey, experts are reminded that they are not supposed to rank their own organization and that this will be monitored. But donors of think tanks can rank the think tanks they sponsor? Likewise, regional experts, policy makers and journalists with links to specific think tanks might also be considered likely to be biased in their judgement. Reviewers are told to consult the list of criteria on pages 31-34 of the 2018 ranking to make up their minds. Consulting these pages, we find a rather comprehensive set of criteria related to leadership, staff, academic reputation, publications, demonstrated commitment to independent research, access to key institutions and policy makers, media reputation (number of mentions etc.), social media use, funding, management, cooperation and networks. A Think Tank Assessment Tool is described, composed of Resource Indicators, Utilization Indicators, Output Indicators and Impact Indicators. Given the amount of work necessary to obtain and verify the diverse sets of information asked for, it is questionable whether survey participants meet such a high standard. The relatively low number of respondents (about 4,000 of almost 70,000, just over five percent) and the unspecified number of incomplete responses cast doubt on the results obtained in such a survey. While the criteria suggest an evidence-based survey, the question is why the rating and ranking is not more securely based on such evidence. Prof. McGann himself emphasizes that he and his staff, composed of several hundred interns [3] in Pennsylvania and in different world regions, are not sufficiently funded. The “collection, research, and analysis for this project, as in previous years, were conducted without the benefit of field research, a budget, or staff.” [4]

Given hundreds of interns, we might still ask why some of the evidence-based criteria are not researched systematically. Counting staff members and the academic papers they publish, for example, should be feasible, and some other output data might also be established more systematically. But anybody who has done research on think tanks knows quite well that the information offered by think tanks can be quite selective and difficult to assess. Few think tanks are willing to offer precise information on funding and staff, for example, and it can be very complicated to count research documents published on websites due to quality issues. Measurable information such as peer-reviewed publications, on the other hand, plays no role for most think tanks, except for those that count among academic institutes or a few that have a dual existence as policy institute and academic institute, such as Bruegel in Brussels, or some of the larger think tanks in the Global North in particular, which can afford academic personnel. A lot of the information obtained on and from think tanks could thus serve to build a more solid perception of the work of some think tanks, but it would also document the lack of information available on the work of many others.

Given such resource and object limitations, the effort to produce a reasonable result by mitigating all kinds of shortcomings and biases needs to be recognized. But it must remain open, and subject to critical analysis, whether the claims made by Prof. McGann with regard to the result of the rating and ranking can be corroborated: “Despite these limitations we are confident of the quality and integrity of the findings given the number of peers and experts from around the world who have taken the time to participate in the process.” [5] Some of the findings presented below on the outcome of the yearly ranking over time will allow us to revisit this question. Ex ante we can only bemoan a lack of transparency regarding the database and the participating rankers. We are not informed, for example, how many surveys were only partially completed, and more information on the composition of the final rating community (age, gender, country, occupation/group etc.) would clearly be welcome.

Step three: Expert panels review and correct the survey results. These panels consist of participating experts from the previous year and newly nominated individuals, and are composed of “hundreds of members from a wide variety of backgrounds and disciplines”. It would be helpful if the report expanded on this information. It is unclear, for example, if and to what extent “corrections” matter in the different regions and policy areas on which the GGTTIR reports. On what basis are corrections made? Do the expert teams take a closer look at the criteria listed in the report and at the numbers (of staff, papers etc.) researched to compare think tanks in order to change the pecking order? How frequent and how far-reaching are such changes?

McGann claims that, “Given the rigor and scope of the nomination and selection processes, the rankings produced thus far have been described as the insider’s guide to the global marketplace of ideas.” [6] In terms of procedure it is clear, however, that there has not been, and cannot be, a common standard of evaluation. The methodology adopted is highly pragmatic and does not meet academic standards of traceability and reproducibility. No documentation of the comparative rating data is provided, and the results are deliberately subject to revision by the expert panels. The exercise apparently places more trust in the expert panels than in the survey results.

On the other hand, McGann and his team have made important efforts to remedy stark misrepresentations. Most importantly, an audit of the think tank database was conducted in 2018, and a new section was introduced in the same year for respondents to report gross omissions or errors. The team received more than “650 comments and suggestions” and claims “we have done our best to correct them.” [7] Again, it would be useful to learn more about such alterations, in support of the claims to rigor or even in support of critical self-reflection on this difficult exercise.

Modifications over time

The history of changes made to the indexing and ranking exercise over the years offers additional insights. Changes have been made a) to address regional asymmetries and aggregation problems and b) to offer new or additional utility of the ranking (and to attract more think tanks and additional audiences to support the effort).

Regarding a) regions and countries, the first step was to separate the U.S. from the rest of the world in order to allow non-U.S. think tanks to enter the upper ranks, probably a response to the much greater participation of U.S. respondents in the survey effort in the early years. Other regions were also subdivided in order to recognize a greater variety of think tanks from different countries, namely Latin America and Asia (in 2012, 2013 and 2014). More numerous were b) new categories to recognize particular groups of think tanks, such as those with a budget below US$5 million (2010), and best advocacy campaign, best policy study/report, best for-profit think tank, top energy, top education and best new think tank of the past 24 months (instead of 18 months), all in 2012. In 2013, the security/international affairs category was subdivided into domestic defence and foreign policy/international affairs. In the same year, another whole set of new categories was added: think tanks to watch, best use of social networks, best collaboration, best think tank network, best think tank conference, best managed think tank, best new idea or paradigm developed, and best transdisciplinary think tank. The latter categories speak to the growing recognition of think tanks operating in all kinds of formal and informal networks. In 2014 we again see the subdivision of a category, this time health, split into global health policy and domestic health policy think tanks. In 2015, best regional studies (non-university affiliated), best regional studies (university affiliated) and think tanks with best practices (policies and procedures) related to quality, independence and integrity were added. Two more categories followed in 2017: water security and food security. In 2019, a category for top think tanks working on artificial intelligence was added, and every think tank ranked number one for three consecutive years was removed from the ranking. Such top think tanks were moved into a new class of centers of excellence, now playing in a think tank “champions league”, so to speak. [8]

What can we make of such changes? The ranking exercise has become more interesting for a greater variety of readers as a result. More than 20 new categories produce a large number of additional think tanks in top positions, which are likely to use the GGTTIR for their own marketing purposes. The removal of the top tier from the ranking (after holding the #1 position for three consecutive years) may be a response to complaints about hierarchical conservatism: if think tanks see no chance of rising to the top, they may lose interest. At the same time, new categories like “best for profit” or the distinction between university-affiliated and non-university-affiliated think tanks studying regions recognize the difficulty of comparing apples and oranges, much like the category for think tanks with small pockets.

Preliminary conclusions

McGann offers his results with great confidence on the one hand, but has also been keen to offer strong disclaimers in general. A listing and a good ranking in the GGTTIR is not supposed to provide “a seal of approval or endorsement for the institution[…] Conversely, an organization’s exclusion from the rankings does not necessarily indicate poor quality, effectiveness, or performance.” [9] The numerical hierarchy supplied by the ranking in so many categories of course suggests otherwise, and the ranking has certainly been used as a sign of recognition. But the ambiguity of such a rating and ranking device has also led quite a number of think tank professionals to be critical and careful about the GGTTIR, as we have seen in the use of the index on think tank websites. Contrary to expectations, even among the highly ranked think tanks there are quite a few that do not use the index to promote themselves. While many may think they do not need the GGTTIR for their own recognition, a fair number of think tankers may also have decided not to be complicit in such a peculiar and pragmatic design. Some experts, like Daniel Florian, the publisher of a think tank directory in Germany [10], dropped out of the rating and ranking effort when the survey became ever more diverse, because he felt he could not possibly be knowledgeable about all the different fields and because he had doubts about the composition of the expert panel community [11]. Florian’s point is well taken. Given the weight of expert opinion from what has to be considered a rating and ranking in-group, the GGTTIR is best seen as an insider assessment of developments in the think tank field. The resulting restrictions and contradictions become visible once we look at the results of our database analysis in greater detail below.

Altogether we are left with the only global and fairly comprehensive index and ranking of think tanks, which provides a valuable tool for obtaining first-step information on all kinds of think tanks, but which falls far short of the requirements of a systematic effort meeting the academic and more sophisticated practices and standards of comparable index productions, as clarified by Lim (2018), for example. If Lim nevertheless calls the university ranking an exercise in “weak expertise”, the GGTTIR think tank ranking, the way it has been constructed, would probably have to be considered murky expertise. Even a scientifically more reliable tool would still be problematic in terms of turning all kinds of qualitative information into quantitative variables and numerical orders, and because of its complicity in the social reproduction of hierarchy based on both facts and illusions generated by numerical difference. But a better rating and ranking exercise might force think tanks to provide more reliable information, and to provide consistent information more reliably.

Regardless of the doubts about the methods and the largely invisible communities and audiences involved, we can nevertheless look at the results to obtain additional insights: not about the approximation of an objective reproduction of think tank strengths and weaknesses, but about the way in which a certain in-group of think tank community members and observers wants the public to see the think tank landscape. In order to see how this picture has changed, we can look at the data presented by the McGann team as a whole and over time, which yields insights that cannot be obtained by looking at an individual ranking alone.

Key results:

  • Reproduction of hierarchy: Top 20 Think Tanks (listed in GGTTIR rankings until 2019)

Top 20: Think Tanks ranked most often (1-10)

Rank | Think tank | Country | Nr. of positions in the ranking
1 | Brookings Institution | United States | 267
2 | Heritage Foundation | United States | 190
3 | RAND Corporation | United States | 187
4 | Cato Institute | United States | 173
5 | Carnegie Endowment for International Peace | United States | 164
6 | Council on Foreign Relations | United States | 161
7 | Center for Strategic and International Studies | United States | 160
8 | Fraser Institute | Canada | 158
9 | Chatham House | United Kingdom | 147
10 | Bruegel | Belgium | 144

Top 20: Think Tanks ranked most often (11-20)

Rank | Think tank | Country | Nr. of positions in the ranking
11 | Fundação Getulio Vargas | Brazil | 142
12 | Centre for European Policy Studies | Belgium | 134
13 | Center for Economic and Social Development | Azerbaijan | 128
14 | International Crisis Group | United States | 127
15 | Peterson Institute for International Economics | United States | 127
16 | Konrad Adenauer Foundation | Germany | 126
17 | American Enterprise Institute for Public Policy | United States | 124
18 | Center for American Progress | United States | 122
19 | Transparency International | Germany | 121
20 | African Technology Policy Studies Network | Tanzania | 118

It is not difficult to recognize the weight of the Western superpower in McGann’s think tank ranking. The top seven are all from the U.S., and another four U.S.-based think tanks occupy positions 11-20. Interestingly, the neoliberal powerhouse from Canada, the Fraser Institute (Economic Freedom of the World Index etc.), comes in at number 8. Fraser complements two other powerhouses of the conservative and neoliberal movement, the Heritage Foundation and the Cato Institute, among the top 10 think tanks. The first partisan think tank set up to counter the weight of the neoliberal think tanks, the Center for American Progress in the United States, comes in at number 18. The European think tanks Bruegel (10) and CEPS (12) speak to the increasing relevance of the EU. Three think tanks from the Global South (Brazil, Azerbaijan, Tanzania) speak to the effort of McGann and his team to increase the visibility and recognition of think tanks outside the dominant centers of the world economy. The first UK think tank is the archetypal foreign policy think tank, Chatham House (9). The first German think tank is the Christian Democratic party foundation, the Konrad Adenauer Stiftung, with offices all over the world.
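To make the “Nr. of positions in the ranking” column concrete: it counts how often an organization appears in any category list in any year. A minimal sketch of such a tally, using purely illustrative records rather than the actual GGTTIR tables:

```python
from collections import Counter

# Illustrative records: one entry per (year, category, rank, think tank) as it
# would appear in a published GGTTIR list; values are made up for the example.
ranking_entries = [
    {"year": 2018, "category": "Top Think Tanks Worldwide", "rank": 1,
     "think_tank": "Brookings Institution"},
    {"year": 2018, "category": "Top Defense and National Security", "rank": 4,
     "think_tank": "RAND Corporation"},
    {"year": 2019, "category": "Top Think Tanks Worldwide", "rank": 1,
     "think_tank": "Brookings Institution"},
    # ... one entry per listed position across all years and categories
]

# Number of positions = appearances in any category in any year,
# regardless of the rank obtained there.
positions_per_think_tank = Counter(e["think_tank"] for e in ranking_entries)

for name, count in positions_per_think_tank.most_common(20):
    print(f"{name}: {count} positions")
```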

The top hierarchy is much more stable than the lower ranks. We divided the sample into deciles (blocks of ten ranking positions) to observe mobility across ranks. The following table shows the results.

  • Reproduction of Hierarchy: Limited competition at the top, more competition below

Heterogeneity per Decile: Top Think Tanks Worldwide

Deciles | Nr. of consecutive years | Unique think tanks per decile | Heterogeneity ratio
1 | 10 | 21 | 2.1
2 | 10 | 34 | 3.4
3 | 8 | 33 | 4.1
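The heterogeneity ratio, as we read the table, is simply the number of unique think tanks observed in a decile divided by the number of years for which that decile exists (21/10 = 2.1 and so on). A short sketch, using the table’s own figures and assuming each decile covers ten ranking positions:

```python
# Figures taken from the table above.
deciles = {
    1: {"years": 10, "unique_think_tanks": 21},
    2: {"years": 10, "unique_think_tanks": 34},
    3: {"years": 8, "unique_think_tanks": 33},
}

for decile, d in deciles.items():
    ratio = d["unique_think_tanks"] / d["years"]
    print(f"Decile {decile}: heterogeneity ratio {ratio:.1f}")

# A ratio of 1.0 would mean the same ten organizations occupy the decile's ten
# slots every year; higher values indicate more turnover further down the list.
```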

The Brookings Institution and the RAND Corporation stay among the top 10 for the entire period. The biggest winner we identified has been the Institute for International Political Studies (Italy), with a big leap from position 118 to 47. The biggest loser in turn was the British-American Center for Global Development, falling from position 17 to 162.

If we look only at the regional categories, we get the following picture.

Heterogeneity per Decile: All Regional Categories

  • Reproduction of hierarchy: think tank scope

Think Tanks constantly in Top 20 positions

Think tank | Nr. of categories | Part of top-20 all stars
Brookings Institution | 32 | yes
Center for Strategic and International Studies | 17 | yes
RAND Corporation | 15 | yes
Carnegie Endowment for International Peace | 14 | yes
Chatham House | 13 | yes
Amnesty International | 10 | no
Bruegel | 8 | yes
Council on Foreign Relations | 7 | yes
Transparency International | 7 | yes
Heritage Foundation | 7 | yes

Think Tanks constantly in Top 20 positions (continued)

Think tank | Nr. of categories | Part of top-20 all stars
Human Rights Watch | 6 | no
Center for American Progress | 6 | yes
Institute for International Political Studies | 4 | no
Fraser Institute | 4 | yes
BRICS Policy Center | 3 | no
Asian Development Bank Institute | 3 | no
Konrad Adenauer Foundation | 3 | yes
Urban Institute | 3 | yes
Friedrich Ebert Foundation | 3 | no
Center for Economic and Social Development | 3 | yes

Looking at the think tanks top-listed across several categories reveals another feature of hierarchy. Nine out of the top ten think tanks are also constantly listed at the top in many different categories, ranging from seven at the low end (Heritage, TI and the CFR) to 32 in the case of Brookings. Considering top listings across a larger number of categories brings the Friedrich Ebert Foundation, another globally operating political party foundation from Germany (Social Democratic), into this top 20. Two think tanks from BRIC countries come in when top listings in different categories are considered: the BRICS Policy Center and the Asian Development Bank Institute. Interestingly, Amnesty International and Human Rights Watch also gain visibility if their ranking in different categories is considered rather than the overall listing among the top think tanks.

  • Reproduction of Hierarchy: Regional disparity shrinking

Which Think Tanks Get Into the Rankings?

At first glance, we can easily observe the success of the GGTTIR in redressing the regional participation of think tanks in the rating and ranking exercise when it comes to North America and Europe. But Latin America and Africa remain marginal across time; only South East Asia gains at the expense of both Europe and North America. Furthermore, the arguably most important area of leadership is leadership in policy fields. Once we look at some of the most important policy areas, we can see yet another dimension of the hierarchical stratification represented in the GGTTIR over time.

  • Reproduction of hierarchy: Think Tanks in Policy Field according to regions

Geographical Distribution per Policy Areas

Continents | Environmental policy (%) | Energy policy (%) | Health policy (%) | International Economic policy (%) | Social policy (%)
North America | 34.93 | 40.97 | 65.62 | 39.60 | 32.85
Europe | 28.09 | 24.31 | 16.67 | 32.12 | 29.17
Asia | 19.94 | 26.74 | 10.42 | 14.78 | 15.38
Sub-Saharan Africa | 6.84 | 2.78 | 5.21 | 2.74 | 7.53
MENA | 3.06 | 5.21 | 2.08 | 2.74 | 1.44
Central and South America | 7.13 | 0.00 | 0.00 | 8.03 | 13.62

It is quite striking to see, for example, how large the North American share of think tanks in health policy has been according to the perception of the experts responsible for the GGTTIR. In many policy fields the combined share of North American and European think tanks reaches two thirds to three quarters, and many think tanks in the Global South receive funding from their brethren in the North.
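The shares in the table are straightforward to reproduce from a long-format list of ranked think tanks with a region and a policy field attached to each entry. A minimal sketch with hypothetical data (the labels mirror the table, the records do not):

```python
import pandas as pd

# Hypothetical long-format data: one row per ranked think tank and policy category.
df = pd.DataFrame({
    "region": ["North America", "Europe", "North America", "Asia", "MENA"],
    "policy_field": ["Health policy", "Health policy", "Energy policy",
                     "Energy policy", "Energy policy"],
})

# Percentage share of each region within each policy field (columns sum to 100).
shares = (pd.crosstab(df["region"], df["policy_field"], normalize="columns") * 100).round(2)
print(shares)
```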

Conclusions

Studying the GGTTIR has yielded tangible results with regard to the stratification of think tanks, the reproduction of perceived hierarchies by rating and ranking devices, and some of the considerable asymmetries of the contemporary think tank landscape. Despite the strong effort of the McGann team to reach out, it is obvious how strong the dominance of the Global North is, at least in the perception of mainstream think tank experts. Only South East Asia has gained some ground on the think tank landscape since the beginning of the GGTTIR effort. American hegemony in general and neoliberal hegemony in particular are represented in the dominance of think tanks from the United States and in the strong representation of neoliberal and conservative partisan think tanks in the top ranks. Notably absent from the top are radical think tanks like the Rosa Luxemburg Stiftung.

Many of the smaller radical think tanks are not even included in the ranking because they do not matter from the perspective of the participants in the GGTTIR survey, even though the European Memorandum Group, the Canadian Centre for Policy Alternatives and many other small think tanks exist that fight an uphill battle against what has become a neoliberal mainstream, in addition to the bulwark of radical neoliberalism promoted by a global network of partisan think tanks. A think tank rating and ranking effort could help to increase transparency across the think tank landscape, but it would have to take sorting and assessing more seriously. Instead of treating the large variety of think tanks, composed of academic institutes, commercial think tanks, advocacy think tanks, party-related and partisan think tanks, all alike, a careful analysis and measurement of think tank finance and think tank output would go a long way toward clarifying the extent to which different species of think tanks populate the increasingly crowded landscape.

Types of Organizations in the Ranking

Types of think tanks | Average ranking | Nr. of think tanks | Nr. of positions
Independent | 30.82 | 951 | 3882
Pol. party affiliated | 36.6 | 85 | 517
Government affiliated | 37.1 | 105 | 2492
University affiliated | 46.3 | 139 | 855
For profit | 55.9 | 31 | 386
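A table like the one above can be derived with a simple group-by over the positions data, provided each ranked position carries an affiliation type. A minimal sketch with hypothetical records (the type labels follow the table, the numbers do not):

```python
import pandas as pd

# Hypothetical records: one row per ranked position, with the rank obtained
# and the organization's affiliation type.
positions = pd.DataFrame({
    "think_tank": ["Brookings Institution", "Konrad Adenauer Foundation",
                   "Brookings Institution", "Urban Institute"],
    "type": ["Independent", "Pol. party affiliated", "Independent", "Independent"],
    "rank": [1, 14, 2, 37],
})

summary = positions.groupby("type").agg(
    average_ranking=("rank", "mean"),             # mean rank across all positions
    nr_of_think_tanks=("think_tank", "nunique"),  # distinct organizations per type
    nr_of_positions=("rank", "size"),             # total listed positions per type
).round(2)
print(summary)
```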

If think tanks do not offer the information required to understand their relationships with funding agencies and interest groups (constituencies), they could be grouped in a special category of shady tanks. Even with limited resources it would also be possible to assess output: quality-controlled research almost exclusively goes to academic journals. While many think tanks claim to produce original research, most of the think tanks in the ranking rarely publish in quality journals. This does not discredit the research contribution of many think tanks, but it would help not to put organizations like the climate-denial leader Heartland in the same bracket with academic research institutes or academically versed think tanks like Brookings. It would help avoid confusion among journalists and the public at large, because some of the shady think tanks involved in contemporary governance and policy networks would no longer benefit from the prestige of think tanks that invest time, money and personnel to seriously contribute to solving policy problems rather than engaging in strategies of sabotage, now prominent in the field of climate and energy politics in particular. If the GGTTIR ranking continues, a major reform effort should get underway to help policy makers, analysts and funding agencies in the field with a much greater degree of transparency.


[1] Miguel Antonio Lim (2018): The building of weak expertise: the work of global university rankers.
[2] 2019 Global Go To Think Tank Index Report, p. 10.
[3] Internships at the Think Tanks and Civil Society Program.
[4] 2019 Global Go To Think Tank Index Report, p. 10.
[5] 2019 Global Go To Think Tank Index Report, p. 10.
[6] 2019 Global Go To Think Tank Index Report, p. 10.
[7] 2019 Global Go To Think Tank Index Report, p. 32.
[8] 2019 Global Go To Think Tank Index Report, p. 33.
[9] 2019 Global Go To Think Tank Index Report, p. 33.
[10] www.thinktankdirectory.org
[11] Daniel Florian: Was bringen Think Tanks? [What do think tanks deliver?]