DOI: 10.1089/dst.2013.0013

Carson, L., Bartneck, C., & Voges, K. (2013). Over-competitiveness in Academia: A literature review. Disruptive Science and Technology, 1(4), 183-190.

Over-competitiveness in Academia: A literature review

Lydia Carson, Christoph Bartneck, Kevin Voges

University of Canterbury
PO Box 4800, 8410 Christchurch
New Zealand
christoph@bartneck.de

Abstract - This study focuses on the negative effects of the highly competitive academic environment. We summarize the literature on the consequences that an over-competitive system has for the people involved and for the productivity of the system as a whole. We conclude that the negative effects outweigh the potential gains that competitive systems bring about. The literature suggests that not only do constant rejections demotivate the majority of academics, but the funding allocation process itself appears to be inefficient. The pressure on academics is so high that we tend to systematically overestimate the chances of success of our funding proposals, manuscripts and promotion requests.

Keywords: competition, academia, negative effect, rejection


Introduction

Competition is a constant condition in modern society. We compete not only in sports, but also for better jobs, influence and status. Companies compete with their products and politicians compete for votes. The assumption is that through competition people in general work harder and that the better person, or product, can be identified and rewarded accordingly.
The value of competition in economic and social mechanisms has been largely unquestioned since the development of modern neoliberalism in the 1980s, based on the economic and philosophical writings of Hayek and Friedman (e.g., Friedman, 1962; Hayek, 1973). The philosophy of neoliberalism led to a range of legislative reforms reducing government control of the economy, including support for free trade, privatization, and deregulation.

In Australia, this neoliberal agenda found its way into higher education in 1987 through reforms initiated by the Labor Education Minister John Dawkins, with a shift towards non-public funding and a market orientation in the tertiary sector (Lacy & Sheehan, 1997). The election of the conservative Howard government in 1996 continued this sector reform, sustaining the real funding cuts initiated by the previous government, refocusing the purpose of universities, and changing management practices. A similar path was followed in New Zealand (Roberts, 2005, 2007).

“Tertiary education institutions would, neoliberals believed, be better served by a ‘Board of Directors’ style of governance, with full competition between public and private institutions, lower government subsidies, and stronger (managerialist) accountability mechanisms…. The model of the market, in New Zealand as elsewhere, provided the basis for the whole organization of society: the ideal was one in which different individuals would strive for advantage over others in an environment of largely unfettered competition, with minimal state interference and a heavy emphasis on "the bottom line" in all policy and decision-making processes” (Roberts, 2007, p. 351).

Academia is clearly no exception to this new emphasis on competition. The often declared state of “publish-or-perish” hints at what pressure academics are exposed to. It does appear, however, that the process and goal of the competition in academia is more ambiguous than in other fields of human endeavor. It often reminds us of the Caucus-Race as described in Alice In Wonderland (Carroll, 1865):

There was no “One, two, three, and away!” but they began running when they liked and left off when they liked, so that it was not easy to know when the race was over. However, when they had been running half an hour or so, and were quite dry again, the Dodo suddenly called out “The race is over!” and they all crowded round it panting, and asking, “But who has won?”

Not only can the academic environment be confusing as to how and why people compete, but many participants are also under the impression that the often-ignored negative side effects counterbalance the positive effects that competition might bring about. At a point where the negative effects completely outweigh the positive effects, competition turns sour and the system is in a state of over-competitiveness.

This paper presents a literature overview of over-competitiveness in an attempt to gain a better insight into the underlying processes and their consequences. We focus on the potentially most significant factors that may be related to over-competitiveness. More specifically, we intend to highlight the negative effects of competition. These factors are often neglected since we prefer to think of ourselves as winners and need to maintain an aura of success (Day, 2011). Furthermore, it is always dangerous to question the rules of the game while playing it. We structured the factors roughly by how a stereotypical academic, let us call him Brian S. Smith, would make his way through the academic world, starting as a student and ending as Professor Smith.

Becoming an academic

Competition in academia for recognition and advancement is common. For a new academic it begins at graduate study level with competition to get into a good university, where a student will develop his or her knowledge, skills, and competencies. It is also in this environment that, by osmosis, Brian S. Smith and other academics develop the attitudes and values that will serve them in their academic profession (Wood, 1990, p. 89). When reaching the postgraduate level, Brian S. Smith will seek out a supervisor who will act as a mentor and role model, encouraging him in his own publications and starting him off well in his career (Wood, 1990, p. 91). In that career, academics will then engage in continual competition until they are employed in a prestigious institution with a high caliber of doctoral training, colleagueship, and access to resources.

In the competitive environment of scientifically advanced countries, academics are encouraged to compete against one another to become specialists in their field and to concentrate all of their efforts toward gaining promotion and increasing salaries (Roberts, 2007, p. 360). In a paper analyzing the outcome of a reform of the academic career structure in Norway, Olsen et al. (2005) compared the common competition model with the competence model adopted there, in which associate professors can apply for promotion to full professorships on the basis of individual research competence, irrespective of vacant professorships. The competition model, in comparison, is a conventional promotional system in which those seeking promotion compete with other applicants to fill a vacant position (Olsen, Kyvik, & Hovdhaugen, 2005, p. 300). In the United Kingdom, unlike in countries where the government has made one system compulsory, it is entirely up to each university which system it uses (Olsen, Kyvik, & Hovdhaugen, 2005, p. 300). The competition model is still used in most countries (Olsen, Kyvik, & Hovdhaugen, 2005, p. 300), although it is less frequent in the USA. Still, some universities in the US use quotas to regulate the number of tenure-track staff (Trower, 2002, p. 39), and the proportion of tenured staff has decreased from around 50% in the mid 1970s (Roey & Rak, 1998) to 21% in 2010 (Knapp, Kelly-Reid, & Ginder, 2011). Altbach (2002) has pointed out that “without question, the most important development is the diversification of the types of appointment related to teaching and research. Among the most significant changes are the increase in the proportion of academic staff without permanent appointments, even in countries that retain tenure arrangements, and the greater use of part time teachers”. In computer science, for example, the number of post-doc positions has exploded in the last decade while the number of available tenure-track positions has dramatically decreased (Jones, 2013). Brian S. Smith will have applied to many universities before acquiring his first position.

Job security is decreasing for academics as there is much competition for promotion and often there are few vacancies, making prospects seem gloomy (Gilliot, Overlaet, & Verdin, 2002, p. 278). There are far more traineeships than there are positions for academics, and consequently many researchers are left completely discouraged or drop out of academia (Osmond, 2004, p. 101). Working under the competition system, Brian S. Smith would often have to apply to fill vacancies in other universities (and perhaps even in other cities or countries) in order to be promoted. However, academics with commitments such as family are less mobile than others. As a consequence, many academics postpone parenthood until their career prospects are established (Kemkes-Grottenthaler, 2003). In particular, women are often forced to compromise on their careers for the benefit of their families and children (N. H. Wolfinger, M. A. Mason, & M. Goulden, 2008). Women in academia have even fewer children compared to other professional women, primarily because it takes longer to achieve the job security of tenure (Nicholas H. Wolfinger, Mary Ann Mason, & Marc Goulden, 2008). In addition, it is almost impossible for researchers to return to academia after having been out of the university for a few years. Because there is such a high emphasis on publishing, returnee researchers would have little hope of obtaining a position in competition with ‘true’ academics who have been in academia since they finished their PhD and have developed a long list of publications (Gilliot, Overlaet, & Verdin, 2002, p. 208). Since Brian S. Smith remained in university, he remained a ‘true’ academic and will have built up a good publication and teaching record. There are usually far more competent academics than there are professorships to fill (Olsen, Kyvik, & Hovdhaugen, 2005, p. 301), and the internal competition for these limited placements can lead to less collaboration between staff members (Olsen, Kyvik, & Hovdhaugen, 2005, p. 302).

High levels of competition, lack of time, and lack of funding are identified in the literature as some of the most stressful aspects of academia (Gmelch, Lovrich, & Wilke, 1984). Brian S. Smith will have to write many funding proposals in order to carry out his research and to improve his performance as an academic. At least fourteen (of the 34) OECD countries use Performance-Based Research Funding Systems (PBFS): the research performance of universities is assessed and they are then funded on the basis of that performance. In New Zealand, 60% of this assessment is based on “quality ratings of each individual academic researcher” calculated from the submission of their evidence portfolios (Roberts, 2007, p. 354). Brian S. Smith will receive a grade, just as he did in high school, for his performance as a researcher. Roberts (2007) asserts that these funding systems only increase time pressures, and that the investment of money, time and energy into PBFS is counterproductive to a vigorous research environment. Time for reflective writing is given over to writing proposals and obtaining funding (Osmond, 2004, p. 101), and that time is only made worthwhile if it aids the process of getting funding for research. Brian S. Smith will have wasted a considerable amount of time on funding proposals that were eventually rejected. Geard and Noble (2010) argue that this system leads individual researchers to deplete overall resources, as “their individually rational efforts to write a convincing proposal that gains them a slice of the funding pie lead to an equilibrium in which the research output of the system as a whole goes down” (Geard & Noble, 2010, p. 7). As each academic strives to write and assemble the best evidence portfolio, to secure a credible rating and draw more funds to their institution, any benefits involved are counteracted by the enormous cost of assigning rankings to academics, and consequently the distribution of funding is rendered inefficient. The costs related to choosing grant applications in the US and Canada are more than $100 million annually (Osmond, 2004, p. 101). Osmond further suggests that the presumed gain in quality is less than that amount, and that the money spent choosing grant applications should instead be distributed directly to researchers.
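The logic of Geard and Noble's argument can be made concrete with a small back-of-the-envelope model. The sketch below is our own minimal illustration, not their actual model: every researcher has a fixed time budget, each proposal consumes a fixed share of it, and a fixed pot of funding is shared among the researchers. At the symmetric point where everyone submits the same number of proposals, writing more proposals does not change anyone's share of the pot; it only consumes time that could have gone into research, so the output of the system as a whole falls.

# Minimal sketch (our illustration, not Geard & Noble's actual model) of how
# individually rational proposal-writing can lower system-wide research output.
# Assumptions: a fixed time budget per researcher, a fixed cost per proposal,
# and a fixed funding pool shared equally at the symmetric equilibrium.

def system_output(proposals_each, n_researchers=100, time_budget=1.0,
                  cost_per_proposal=0.05, funding_pool=50.0):
    """Total output when every researcher submits the same number of proposals."""
    research_time = max(time_budget - proposals_each * cost_per_proposal, 0.0)
    funding_share = funding_pool / n_researchers  # identical shares, however many proposals
    return n_researchers * research_time * funding_share

for k in (0, 5, 10, 15, 20):
    print(f"{k:2d} proposals each -> total research output {system_output(k):.1f}")

With these illustrative numbers, output falls linearly from 50 to 0 as everyone moves from writing no proposals to spending all of their time writing them.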

Those who are continually successful at writing funding proposals may simply have mastered the art; their success may be more related to ‘slick grantsmanship’ (Osmond, 2004, p. 98) than to their proposed research addressing the questions that matter most to society. Most funding proposals these days require extensive argumentation for the social and economic benefits of a project. It is no longer sufficient to describe a clean research methodology. It is also necessary to have a sense for what the “hot topics” are. These may change frequently depending on the agenda of the currently leading political party. Grantsmanship can even become an alternative to scholarship on the path to promotion (Osmond, 2004, p. 102). Let us assume that Brian S. Smith has mastered all these challenges and finally secured a permanent position at a university.

Institutional Reward Structures

Brian S. Smith is now a permanent faculty member and he is exposed to the reward structures implemented at his university. Investigating the views of academic staff from one Australian university in the late 1980s, Wood (1990) reported that research activity was highly variable and influenced by a number of factors including access to funds and the promotional reward structure. Although this research was conducted some time ago and some aspects might have changed, the findings are still applicable today.

Benner and Sandström (2000) argue that funding is a key incentive in the academic system since its reward structure influences the performance and evaluation of research. This provides an incentive for academics to choose research topics that are more likely to draw outside funding (Wood, 1990, p. 95). Academics generally prefer to research the topics they find most intellectually challenging, but questions of funding influence the choices they make (Wood, 1990, p. 92). When there is fierce competition for funding and academics adjust their choice of topic accordingly, the autonomy of the individual academic is diminished, which is counterproductive. Jorge Cham captured this absurdity in his cartoon “Intellectual Freedom” (see Figure 1). Brian S. Smith will frequently face this dilemma: many of the topics he considers interesting are unlikely to attract external funding and are therefore pushed back in favor of topics that are of less interest but that could potentially receive funding. Brian will spend a considerable amount of time researching topics that are not close to his heart, which decreases his passion for his work.
[Image: http://www.phdcomics.com/comics/archive/phd072011s.gif]
Figure 1: The evolution of intellectual freedom by Jorge Cham (with permission from the author)

Good research requires freedom of inquiry. It can therefore be expected that academics researching areas they are not interested in will produce a lower standard of work, as they may lack the enthusiasm, motivation, and commitment a more intellectually stimulating topic would provide (Wood, 1990, p. 92). More importantly, economic constraints can not only lower the standard of academic work, they can even fundamentally corrupt research, as the examples of Kern and Keller have shown (Washburn, 2011). Commercial interest can prevent certain studies from being conducted or even motivate academic misconduct. In an over-competitive environment the temptation to give in to such commercial interest is increased.
In a survey of faculty in Management departments, Miller et al. (2011) found that faculty are motivated by the possible opportunity of “enhancing their professional reputation, leaving a permanent mark on their profession, and increasing their salary and job mobility” (p. 422). By the middle of the 1980s, tenure in many universities in the United States and Canada was determined on the basis of an academic’s publications (De Rond & Miller, 2005, p. 323). Publishing “not only plays a crucial role in determining the fate of ideas, but also influences the career advancement of individual scholars” (Bedeian, Van Fleet, & Hyman, 2009, p. 211). Successful publishing is crucial for academic promotion and salary increases; it also has spillover effects for gaining funding for further research (Bedeian, Van Fleet, & Hyman, 2009, p. 211). Prolific publishers gain a good reputation and are more likely to gain recognition through promotion (Bedeian, Van Fleet, & Hyman, 2009, pp. 211-212).

In an interview study at an Australian university, Moses (1986) found that many perceive the university as more interested in publications than in scholarship, and that some adjust their activities accordingly. Brian S. Smith will emphasize his research over his teaching. The idea of “getting the outcomes that are measured” is broadly recognized in a range of corporate and institutional environments; academics likewise follow the incentives provided by their institution, recognizing what is valued in their department. What earns a promotion will be seen as what is valued by the university: staff will notice the attributes of those being promoted and strive to emulate them (Moses, 1986, p. 147). Though some universities have made an effort to increase their emphasis on teaching performance in promotion rounds, this does not seem to have had an effect on the ground. “What actually happens on promotions committees is inaccessible data; what staff believe happens is not” (Moses, 1986, p. 140). In the absence of explicit changes to policy and actual process, academics who are seeking promotion still have a strong incentive to focus on publishing.

Academics responding to such incentives can start to rely on gamesmanship. These academics become devoted to prioritizing research, pursuing publishing goals at the cost of other responsibilities. Brian S. Smith is no exception to this tendency. Academic gamesmanship encourages strategies such as producing numerous quick-to-publish articles, also known as Least Publishable Units (LPUs) (Huth, 1986). Academics complement this torrent of publications with a sprinkling of papers of higher quality that make a contribution to the discipline. Promotional requirements are a significant enticement to employ gamesmanship in research (Wood, 1990, p. 84). Heads of departments often dictate who will carry out which academic tasks. Most academics want to spend the bulk of their time researching, so it is in their interest to protect the spare time they have as research time. Academics may use their academic ranking outcomes as arguments for differentiation of research time, and may employ negotiation strategies to obtain a buy-out from teaching and administrative duties (Paye, 2011, p. 13).

In the highly competitive academic environment, it is easy for academics to gear everything toward publishing. With “intense intra and inter-institutional competition, production overrides all else. Production matters more, and indeed comes to stand in for creativity, critical thought and collegiality. Having a love of learning, a passion for teaching, and a commitment to intellectual integrity become relevant only insofar as they can be harnessed for the production process” (Roberts, 2007, p. 359). Brian S. Smith will have to become a writing machine to get ahead in the publication race. Academics striving for tenure in universities in the United States and Canada are under the most pressure to publish, whatever the quality. Without tenure academics are less likely to undertake risky or divisive research (Wood, 1990, p. 90). An over-competitive environment favors mainstream ideas, and suppresses novel and opposing views (Fang, 2011).

It is not just publishing that is vital for survival: publishing in peer-reviewed journals is more highly recognized, and increasingly it is publishing in journals considered top-tier in their field that counts (Miller, Taylor, & Bedeian, 2011, p. 428). Some departments only value publishing in a small number of journals without considering others, which is very stressful for academics since top-tier journals have very low acceptance rates (Miller, Taylor, & Bedeian, 2011, p. 432; Starbuck, 2005).
Research assessments, evaluations, and rankings of individual academics are the foundation for research funding allocation by governments to universities and are used by heads of departments in promotion decisions. Brian S. Smith’s past performance will have an impact on his ability to secure future funding. Adler and Harzing (2009) question whether these measures encourage scholarship or simply encourage publication. Rankings allegedly measure research quality, but depending on the system used, quality may be measured by no more than “counting publications in high impact-factor journals along with citations in the limited set of journals that such systems recognize” (Adler & Harzing, 2009, p. 74). The only research that is included in ranking systems is measurable research, but in the words of William Bruce Cameron (1963): “Not everything that can be counted counts, and not everything that counts can be counted.” Brian S. Smith would be wise to focus his attention on tasks that are being counted by his department. He might check his own and his competitors’ h-index scores on Google Scholar frequently to monitor the state of the game.
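For readers unfamiliar with the metric, the h-index that Brian S. Smith keeps checking is the largest number h such that he has h papers each cited at least h times. A minimal computation, using made-up citation counts purely for illustration:

# Compute the h-index from a list of per-paper citation counts.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3: three papers with at least 3 citations each

The crudeness of the measure is apparent: it ignores author order, field-specific citation rates, and everything that cannot be counted.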

These crude research measures are now so widely used that they can determine whether someone is promoted and whether they are judged a success or a failure (Adler & Harzing, 2009, p. 72). The prevalent use of ranking systems provides incentives for academics to direct their productivity toward obtaining favorable assessment results, leading scholarship away from addressing the questions that matter most in the various fields (Adler & Harzing, 2009, p. 72). Academic assessment thus undermines rather than fosters scholarship that matters (Adler & Harzing, 2009, p. 73).

Promotion as an incentive increases competitiveness, and a heavy emphasis on evaluation is likely to encourage academics to meet the expected requirements for promotion without striving further (Moses, 1986, p. 138). This can result in a loss of efficiency, and once again researchers are likely to choose topics based on the ease with which they can be published rather than on what stimulates them as researchers (Wood, 1990, p. 87).

In the highly competitive environment of scientifically advanced countries, the incentive to improve one’s rankings will always be more motivating than pursuing research driven by curiosity. Those with finesse in academic gamesmanship will generate just the right amount of output, with all the right people, in exactly the right places. They will learn to work the system with efficiency, producing more and more of what is valued and rewarded by the institution (Roberts, 2007, pp. 359-360). Roberts predicts they will become conformists valuing prestige over the quest for knowledge (Roberts, 2007, p. 359). Let us hope that Brian S. Smith has not become a complete conformist and that he has managed to produce a sufficient number of good publications.

Assessments and ranking

Brian S. Smith is now an established academic and has to face the ranking systems used in academia, which serve in hiring decisions to provide a basis on which to differentiate between candidates (Roberts, 2007, p. 361). But academics are not running a simple race; rather, each academic has different research goals, with varying levels of difficulty, each taking varying lengths of time to achieve through varying paths (Osmond, 2004, p. 98). Academics are therefore largely incomparable as winners and losers. Brian S. Smith will have a very different profile than, for example, “Professor Rivera”. It will be hard to compare them directly, and with both competing for the same resources, it will be hard for the decision-making committee to favor one over the other.

Research assessments and rankings of academics depend on an array of arbitrary decisions, including those “related to choice of publication outlet, choice of time period, weighting of data, and aggregation of individuals to an institutional level” (Adler & Harzing, 2009, p. 74). Adler and Harzing (2009) outline key problems with academic ranking systems. One of the topics they discuss is the differing parameters governing which publications should be included in a system, noting that some systems only include articles published in journals, in English, from top-tier journals, and either internationally published or nationally published (not both). This may lead to a situation in which Brian S. Smith published one of his best articles at a conference and hence it is not counted towards his further promotion.

In relation to the weighting of data, Adler and Harzing discuss decisions on what is valuable, the invisibility of specialized journals, and whether or not to allocate extra weight to the first author of a multi-authored article. Given all of the possible combinations of measures any given system could use, it seems that any one system would fall short of adequacy for use in ranking academics, allocating research funding, or making promotion decisions (Adler & Harzing, 2009, p. 75).

Consequences of rejection

In his academic life, Professor Smith has written many papers and funding proposals. Not all of them have been accepted. Given the low acceptance rates at most journals, almost all academics experience manuscript rejection. Health Research Council chief executive Robin Olds, the head of one of the biggest science funding organizations in New Zealand, said in a recent radio interview that the number of applicants receiving funding has dwindled in the past decade. The National Institutes of Health (USA) reports that its acceptance rate has fallen from 31% to 17% in the last ten years. Using social identity and rejection sensitivity theories, Day (2011) explains in her landmark paper why negative emotional responses to manuscript rejection are normal and predictable. She describes how, for some scholars, continual rejections may be “emotionally difficult and lead to decrements in creativity, productivity, and professional satisfaction” (p. 704). Professor Smith will often have to deal with his motivation being drained by frequent rejections. Because of the many avenues for rejection that regularly occur, such as funding proposals, manuscript rejection, and rankings and promotion, academics are susceptible to becoming rejection sensitive. To belong to the academic community you must publish; when academics have their manuscripts rejected, those who are rejection sensitive may feel as though they do not meet the requirements to be in the academic community (Day, 2011, p. 704). Sometimes these academics can feel alienated, leading to further feelings of disillusionment and discouragement (Day, 2011; Miner, 2003). When experiencing persistent rejection, “authors may become isolated, expend too little energy on research, produce little meaningful work, avoid research projects, and perhaps even ultimately withdraw from scholarly activities” (Day, 2011, p. 705). Researchers may try to avoid any interaction or discussion concerning publishing (Day, 2011, p. 707). An academic’s social identity is at stake when they feel as though this membership is threatened (Day, 2011, p. 707). To maintain a sense of belonging to the academic community, researchers need a steady stream of publications (Day, 2011; Miner, 2003; Starbuck, 2005). Day (2011, p. 708) describes how the concept of stigma applies to academics:

“Invisible stigma theory suggests that keeping the stigma hidden requires that in each relevant social interaction the rejected scholar make a decision as to whether to reveal the rejection. To retain secrecy about rejections and others’ perceptions of his or her membership in the social identity, an underachieving scholar must repeatedly decide whether to retain secrecy through such means as avoiding conversations with colleagues concerning research productivity, declining research collaboration with others, or “hiding” his or her vita by keeping it off the university’s Web site. The cost of these repeated disclosure decisions is heightened fear and anxiety. An academic in this scenario may stop researching altogether.”

Junior researchers submitting their first manuscript can get such a shock that it affects them for their entire career (Miner, 2003, p. 4). According to Day (2011), research on how rejection-sensitive people respond to repeated rejection sheds light on the impact of rejection on scholars. Firstly, she suggests that rejected scholars may dismiss the comments and concerns of the reviewers in order to dissociate themselves from the people involved and from the review process itself (Day, 2011, p. 709). Secondly, she notes how rejection often leads to antisocial behavior and may therefore lead to an avoidance of collaborative research or healthy colleagueship (Day, 2011, p. 709). Thirdly, because of the effect of rejection on self-esteem, scholars often will not talk about their rejections to their peers in order to save face (Day, 2011, p. 709). In addition, rejection is connected to low academic performance and is likely to cause procrastination (Day, 2011, p. 709). Because the experience of rejection causes people to avoid similar circumstances, they may postpone research until they are no longer submitting manuscripts at all (Day, 2011, p. 710). All of these effects are likely to cause scholars to submit fewer papers for review, if not drop out altogether. Bedeian (2004) also claims that dissatisfaction with the publishing game may cause researchers to discard their manuscripts or drop out of the publishing process altogether (Bedeian, Van Fleet, & Hyman, 2009; Day, 2011; Miner, 2003), which means that research that was potentially ahead of its time gets lost to the discipline. These reactions may also apply to rejected promotion applications: Moses’ (1986) interview study at an Australian university found six senior lecturers who had applied for promotion fruitlessly and had since given up. We hope that Professor Smith has not given up on his research and that he has continued to be a productive academic.

A Drop in Standards

Professor Smith is not the only one affected by the over-competitive academic environment. The negative effects do not only affect him personally; they also affect the standards of the community. The overly competitive environment of scientifically advanced countries can lead to a drop in research standards. Due to the high competition for research funding, many academics are heavily restricted by the inadequacy of the funding they receive (Wood, 1990, p. 94). Having funds available to travel to conduct field-work or to hire student researchers will make a significant difference to the scope of a project (Wood, 1990, p. 88). Having student researchers is an important part of academia: “these students enrich the environment through their enthusiasm and new ideas” (Wood, 1990, p. 90).

Given how competitive it is to get a manuscript accepted, academics are adopting tactics that lower the standard of their research in various ways. Excessive competition is a threat to integrity as researchers become less likely to follow scientific ideals (Fanelli, 2010, p. 2; Washburn, 2011). Miller et al. (2011) found that the consequences of the pressure to publish included “heightened stress levels; the marginalization of teaching; and research that may lack relevance, creativity, and innovation” (p. 422). It is those very consequences of rejection discussed by Day (2011) that lead to the lowering of standards (p. 705). Feeling rejected and having less energy can lead academics to procrastinate and avoid their work (Day, 2011, p. 711). In an interview study by Moses, staff admitted the temptation to produce below-standard manuscripts so they could produce them at a faster rate; they also admitted to an inclination towards short-term research and towards publishing insignificant data (Moses, 1986, p. 146). This pressure to publish lowers standards, depriving academics of the opportunity to conduct creative and non-traditional research (Miller, Taylor, & Bedeian, 2011, p. 433). The focus has shifted from making discoveries to the quantity of papers published and, in some instances, the journals in which they are published; this distortion directly degrades the quality and utility of the articles. In circumstances where the pressure is to publish in top-tier journals, academics are more willing to adjust their manuscripts to fit editor preferences. This reduces those academics’ ability to leave a permanent mark on their discipline and decreases the overall standard of their work (Adler & Harzing, 2009, p. 73). “Burnout, turnover, decreased innovation through risk-aversion, decreased productivity of post-tenured professors, and abandonment of potentially fruitful but unappreciated lines of research” all lower the overall standard of research in a discipline (Day, 2011, p. 711).

There may not be a high correlation between reviewers’ judgments of an article and its later citations. Using a statistical theory of review processes, Starbuck (2005) found that about half of the articles published were not the best manuscripts submitted to those particular journals, and that some of the highest-ranking articles had been rejected by as many as five journals: “29% to 77% of the articles in the first quintile of journals [in terms of quality] do not belong in the highest 20% of the manuscripts . . .” (Starbuck, 2005, p. 197). Many articles that get rejected are just as good as, if not better than, those that get published. Miner (2003) claims this is because an acceptance rate as low as 10% makes it infeasible to discriminate effectively (p. 4). There is therefore a considerable level of randomness in editorial selection. “Highly prestigious journals publish quite a few low-value articles, low-prestige journals publish some excellent articles, and excellent manuscripts may receive successive rejections from several journals” (Starbuck, 2005, p. 196). Some of Professor Smith’s best work was rejected several times before its final acceptance. For that reason, articles cannot be judged principally by the journal in which they are published. Even Tim Berners-Lee’s first paper on the World Wide Web was initially rejected by the Hypertext conference in 1991.
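Starbuck's point about randomness can be illustrated with a small simulation. The sketch below is our own illustration, not Starbuck's model: each manuscript is given a latent "true quality", its review score equals that quality plus an equal amount of noise, and the journal accepts the top 10% of scores. With these illustrative parameters, only about half of the accepted manuscripts actually belong to the truly best 10%, in line with the range Starbuck reports.

# Our own illustration (not Starbuck's model) of how noisy reviews plus a
# low acceptance rate make editorial selection partly random.
import random

random.seed(1)
n = 10_000
true_quality = [random.gauss(0, 1) for _ in range(n)]
review_score = [q + random.gauss(0, 1) for q in true_quality]   # quality + reviewer noise

accepted = sorted(range(n), key=lambda i: review_score[i], reverse=True)[: n // 10]
truly_best = set(sorted(range(n), key=lambda i: true_quality[i], reverse=True)[: n // 10])

overlap = sum(1 for i in accepted if i in truly_best) / len(accepted)
print(f"Accepted manuscripts that are truly in the top 10%: {overlap:.0%}")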
Studies of scientists tend to concentrate on the elite and successful (Gulbrandsen & Smeby, 2005, p. 948). Even in Miner’s (2003) analysis of criticisms of the review process (with original data from Arthur Bedeian), all the criticisms were from submissions that resulted in publication. If there is a significant problem in the publishing process, as we hear from those still in academia, what are the feelings of those who have dropped out completely? In his conclusions, Miner noted the need for a similar study collecting feedback from academics whose manuscripts were rejected (Miner, 2003, p. 3).

The system is self-perpetuating. Brian S. Smith sits in his office sending out funding proposals to compete for outside funding, manuscript submissions to compete for published articles, evidence portfolios to compete for his academic ranking, and applications to Deans to compete for promotion. Each of these submissions has a high chance of rejection. For the small percentage of those who are not rejected, the funding they receive improves their chance of being published, which will improve their academic ranking and chances for promotion. For those who manage to get published in a top-tier journal, this will also improve their academic ranking and chances for promotion. For those who receive a decent ranking, promotion is more likely. And those who get promoted will receive a better ranking next time and better opportunities overall. On the other hand, if an academic is not receiving funding, manuscript acceptances, quality rankings, and promotions, they are receiving rejections. Rejection leads to doubt in their abilities, which leads to procrastination and a lack of productivity (Day, 2011, p. 711). In a field or discipline, certain ways of doing things are routinized, accepted, and sustained by the discipline’s members. The cycle continues: not only does the system perpetuate opportunities (or failures) for academics within it, but procedures become standardized and the existing competitive structure of university research is continually reproduced (Benner & Sandström, 2000, p. 292). “Funding agencies contribute to constructing, reproducing, and changing the institutional order of academic research. … Thus, research sponsors influence the framework for research performance and the networks which form part of the research environment” (Benner & Sandström, 2000, p. 293). Funding applications are accepted based on criteria and research is produced within those same criteria, facilitating the reproduction of the organization (Benner & Sandström, 2000, p. 293). If an institution makes it into, for example, a top 50 ranking, pressure and expectations will be put on academics within the institution to maintain and improve their position (Miller, Taylor, & Bedeian, 2011, p. 434). Junior researchers coming into academic institutions will pick up on what is valued and prioritize their own research agendas accordingly (Moses, 1986, p. 146).

One of the traditional roles of research has been to question prevailing views (Roberts, 2007, p. 363), yet there are compelling incentives in the current university system to conform. There is therefore less research that tests existing practices to see whether they accomplish what is proposed (Miner, 2003, p. 4).

Conclusion

This study focused on the negative effects of highly competitive systems. A recent study of staff dissatisfaction levels in tertiary institutions in Victoria (Australia) found that, along with the dissatisfaction generated from management practices directly impacting on the respondent, a “general discontent with neo-liberal change across the economic and social spheres” (Fredman & Doughney, 2012, p. 55) was also a contributor to work dissatisfaction among academics. The incorporation of an over-competitive environment into academia is a clear outcome of this neoliberal change. Green (2006) argues that the two main drivers of work dissatisfaction under neo-liberalism are workloads and the perceived loss of control. There is a general feeling that market-based reforms have not delivered freedom and flexibility, but further managerialism and control.
By summarizing this dark side of academia, we hope to stimulate a discussion on how we conduct our academic lives. The integrity and values of institutional research face many threats. The particular concern discussed in this paper was the effect of over-competitiveness in funding, publishing and promotion. Combined with the increasing use of academic rankings to evaluate academics, this environment reinforces reward systems that work as incentives for academics to produce measurable results. Competition is encouraged in order to increase efficiency. But when the level of competition is such that many academics have to deal with persistent rejection, the negative spillover effect is a drop in standards, and survival by publishing becomes more important than the pursuit of knowledge.

Besides the negative psychological effects that over-competitiveness brings about, it can even be argued that the current competitive funding distribution process is inefficient. Of course, the peer review process will at least filter out the worst proposals and is likely to favor ‘better’ proposals. These proposals are likely to result in more and better research output, so the peer review process certainly has some success as a filter. But the effort necessary to run the process, and the negative side effects described in this paper, seem to outweigh this advantage. We speculate that a research funding lottery might be at least as successful as the current system.

Despite the extremely low acceptance rates, for example 7.7% for the 2012 Marsden funding round in New Zealand, hundreds of researchers surprisingly still submit their proposals. Let us assume that it might have taken Brian S. Smith a week to write such a proposal at an hourly rate of $200. This would amount to an investment of $8,000. If we offered Professor Smith a choice, either to keep the $8,000 or to bet it on one number in roulette with the chance of winning $280,000, he would probably choose not to play. But the pressure on academics is so high that we overestimate our chances of success and keep on writing proposals. And while the available research funding remains stable, we dramatically increase the number of proposals, which increasingly renders the system inefficient. Moreover, the hours Professor Smith invests in the proposal and the time the reviewers spend judging it do not appear in any “official” recordkeeping, and hence the organizers of the funding distribution process can maintain the illusion that their process is efficient.
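A back-of-the-envelope calculation makes the comparison concrete. The figures below use the $200 hourly rate, the 40-hour week and the 7.7% Marsden success rate mentioned above, together with the standard 35-to-1 payout for a single roulette number; everything else is our own illustrative arithmetic.

# Back-of-the-envelope arithmetic for the example above (our own illustration).
hours, hourly_rate = 40, 200
stake = hours * hourly_rate                       # $8,000 of proposal-writing time
print(f"Cost of one proposal: ${stake:,}")

# Single-number bet in European roulette: pays 35x the stake, wins with p = 1/37.
p_win, winnings = 1 / 37, 35 * stake
print(f"Expected roulette winnings: ${p_win * winnings:,.0f} on an $8,000 stake")

# Marsden-style proposal with a 7.7% success rate: the grant must be worth more
# than stake / 0.077 (about $104,000) to the applicant for the expected benefit
# of writing one proposal to exceed its cost.
p_funded = 0.077
print(f"Break-even grant value: ${stake / p_funded:,.0f}")

The roulette bet has a negative expected value, and the proposal only has a positive one if the grant is worth well over $100,000 to the applicant; under pressure, we tend to skip this calculation and overestimate our chances instead.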

It was also interesting to observe how few studies have been published on the negative effects of competition in academia. We discussed the papers of Nancy Day and Peter Roberts at length, partly because, to our knowledge, few other studies have investigated these negative side effects. What prevents us from opening our eyes to the absurdity of the academic situation? Why are we so convinced that our proposal will be accepted? Why do we believe that we will achieve a tenured position and that our paper will be accepted by Nature? Coming back to Alice’s question “Who won the race?”, we conclude that currently we are all losing.

Limitations and future work

This study focuses on a review of the relevant literature on the negative effects of over-competitiveness. We are not able to offer a solution for the problem of how to allocate limited research funds, although we have previously offered some ideas on how to improve the publication process (Bartneck, 2010). Unfortunately, we cannot just “try out” different allocation processes and compare them to others. Changing a research funding process so fundamentally and comparing its results to other processes is impractical, since it would take years to observe the impact that any individual funded project had.

We are therefore currently in the process of developing a computer simulation that tests the efficiency of different funding allocation processes. Until this simulation is ready we do not have any suggestions other than our personal opinions and those solutions already suggested in the literature. One possible alternative for distributing funding would be a simple lottery. It would cut the overhead dramatically while still being able to give larger chunks of money to researchers. An alternative to over-competitiveness in publishing would be to adopt an “all-in publication policy” as described by Bartneck (2010). These processes and policies would disrupt the process of “normal” science since they fundamentally change the incentives for researchers and the spirit in which resources are distributed. Finally, we acknowledge that science is largely unpredictable. Knowing up front which scientific discoveries will have a lasting positive impact is difficult if not impossible.
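To give a flavor of the kind of comparison such a simulation could make, the sketch below is our own minimal illustration, not the simulation we are developing. It assumes that proposals have a latent quality, that review scores are that quality plus noise, that both schemes fund the same number of proposals, and that peer review consumes a fixed amount of reviewer time per proposal while a lottery consumes none. Whether peer review's selection advantage outweighs its overhead then depends entirely on how noisy the reviews are and how costly the reviewing is.

# Our own minimal sketch comparing peer review with a funding lottery
# (illustrative assumptions only; not the simulation described above).
import random

random.seed(2)
n_proposals, n_funded = 1000, 77         # ~7.7% success rate, as in the Marsden example
review_cost = 0.1                        # reviewer time per proposal, in "project units"

quality = [random.gauss(0, 1) for _ in range(n_proposals)]
score = [q + random.gauss(0, 1) for q in quality]    # noisy peer-review score

# Peer review: fund the highest-scoring proposals, but pay the reviewing overhead.
ranked = sorted(range(n_proposals), key=lambda i: score[i], reverse=True)
peer_review_net = sum(quality[i] for i in ranked[:n_funded]) - review_cost * n_proposals

# Lottery: fund a uniformly random subset, with no reviewing overhead.
lottery_net = sum(quality[i] for i in random.sample(range(n_proposals), n_funded))

print(f"Net funded quality, peer review: {peer_review_net:.1f}   lottery: {lottery_net:.1f}")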

Albert Einstein famously said in 1918 that “In the temple of science are many mansions, and various indeed are they that dwell therein and the motives that have led them thither. Many take to science out of a joyful sense of superior intellectual power; science is their own special sport to which they look for vivid experience and the satisfaction of ambition; many others are to be found in the temple who have offered the products of their brains on this altar for purely utilitarian purposes. Were an angel of the Lord to come and drive all the people belonging to these two categories out of the temple, the assemblage would be seriously depleted, but there would still be some men, of both present and past times, left inside. … If the types we have just expelled were the only types there were, the temple would never have come to be, any more than a forest can grow which consists of nothing but creepers.” Being interested in science for no other purpose than science is and will always be the heart of science. What we still observe is the battle between the intellectual and societal levels of quality (Pirsig, 1991). The superiority of the intellectual quality level is still being challenged by societal needs.

References


This is a pre-print version | last updated January 27, 2014