Rambles around computer science

Diverting trains of thought, wasting precious time

Tue, 29 Aug 2023

Teaching loads in computer science: it's not just lectures

About four years ago, when I had spent a year at the University of Kent in my first job as a lecturer (or “Assistant Professor” for North American-style readers), I had to fill in a probation review form. One question asked me to reflect on my teaching load over the previous year, which had consisted of 1.4 courses or “modules”. One module lasts for a 12-week teaching term, with one reading week, and students typically take four modules at a time. I said that in my view, 1.4 modules had been a heavy load for a new lecturer trying also to do research at the same time. I still think I was correct to say so.

My head of department's response was unsympathetic. “You only had to give thirty lectures. Normally it'd be thirty-three!” This made me unhappy, partly on a personal level—why ask for a reflection if you already know the “right answer”?—but also because counting lectures is “obviously” not a good measure of load. Since then, my move from Kent to King's has provided another study in this.

(It's been a couple of years since I wrote about my then-impending move. I realise I haven't yet written anything directly about my teaching experiences at King's. I'll get into that a little in this piece, but not a lot.)

To set the record straight, here I'll enumerate some other dimensions by which teaching loads differ. These might be useful for those weighing up job offers: it's worth getting a feel for how the institutions compare on as many of the dimensions as you can.

All this deserves some caveats. Of course there will be plenty of other ways in which institutions differ. Also, the loads at any given department are a moving target... I'm told my department at Kent has managed to reduce loads significantly since I left, while King's had been reducing them significantly immediately prior to my joining. At King's right now, the load is generally favourable compared to my time at Kent, both on a lecture-count basis and on many but not all of the other factors. However, there is one big exception, bringing me to the first of my many dimensions....

Number of students, as reflected in assessment and marking

This is the obvious one but it needs saying. All else being equal, marking is usually linear in the number of students. All else is never equal, however. Only slightly less obviously....

Amount and manner of assessment, marked by whom

If there is less coursework, there is less marking. If exams are done electronically and can be marked wholly or partly automatically, there is less marking (but this comes with downsides). If teaching assistants (TAs) can mark coursework, there is less marking for the lecturer, provided that directing the TAs and moderating the results is not too much work... this can be significant. All this varies hugely between institutions (and indeed departments).

Use of coursework

Although this is a special case of the previous one, it bears repeating that departments and programmes vary hugely in both how and how much they use coursework for assessment. The weighting of coursework can be a red herring load-wise, since just because something has a low weighting doesn't mean it's a small piece of work. Bigger coursework means more marking, and often disproportionately since coursework is usually harder to mark than exams. Also, whereas exam-marking happens in a consolidated, frenzied period during which most other business is paused and external commitments can easily be blocked out, coursework comes in amid other stuff going on.

Complex rubrics and submission schedules can also ramp up the work. For example, at Kent, when marking an undergraduate or master's project, the submission consists of not only a dissertation, but a corpus (code and/or data), one or more individual reports (many in the case of a group project), and various progress reports along the way. As someone with very limited desk- and head-space, I found that this plethora of materials made the process more time-consuming than it needed to be.

Student level

Again this is “obvious”, but a preponderance of relatively weak students, who therefore need a lot of help, can become a major ingredient of load. At the other extreme, the most able students can be needy in their way, though it's a nice problem to have. In theory, the course should adapt to stretch students “just enough” whatever their level, but this falls down when the spread of abilities is large.

Absolute class size matters too, because when the number of students is large enough, the spread of abilities is always large, almost independent of entry requirements. Put differently: if the invariant is that 10% of admitted students slip through the “equippedness net” that entry or progression requirements were supposed to take care of, it matters whether that 10% adds up to 4 students or 40. Critical mass effects mean the latter can draw much more attention, not just in absolute terms but also per capita.

The amount of direct contact with students

If you have few or no TAs, you as lecturer are the TA. You are the only human interface the student has with the course. This has consequences far beyond marking. There were few or no TAs on all my courses at Kent, although some other courses did have decent provision; it often reflected the age of the module spec, with higher levels of provision a vestige of bygone, less penny-pinching days. At King's there generally is decent TA provision, but scale can make this hard to manage, the TA pool is a very mixed bag (many are themselves undergraduates; some very good, in fairness, but not all), and there are also rumours that the money might get taken away. Some days at Kent I spent hours answering student e-mails or dealing with spontaneous drop-ins. This brings me to....

Cultural distantness of the lecturer

Institutional culture affects the threshold that students feel they must cross before approaching a lecturer. It is related to the TA issue, in that students might approach TAs first (only if they have them!). But it varies even aside from that. If students are told there's an open-door policy and they can drop in to their lecturers' offices whenever (as was the case at Kent), they will do so! Conversely, if they absorb a sense that lecturers are busy and distinguished people who have limited time for them, they will be more conservative. (It also helps if, as at Cambridge, lecturers are kept behind a locked door....) I'm not saying which of these is “better”—obviously, access to lecturers can be a boon to students, but it comes at a price.

Student dependence and learned helplessness

Over and above the combination of student level, TA provision and lecturer distantness-or-not, a department's culture can make students more or less self-reliant. At Kent there was an oft-remarked vicious circle of expectations around hand-holding and “being nice”, which amplified both the office drop-in tendency and frequent generalised demands for “help” of unspecified nature. At King's there are similar effects in evidence. It continues to be a bit mysterious to me how exactly these expectations came about and what might lessen them... I suspect the style of first-year teaching has an outsized influence, but it can't be just that.

Late submission, resubmission, resitting

A disproportionate marking-related load comes not from exam scripts or coursework submissions per se, but from handling those scripts and submissions that do not arrive or do not pass. When departments have more generous policies around late submissions and extensions, the fallout on academics is greater: they are forced either to mark in multiple batches or to work with a narrower marking window overall. Reassessment multiplies the number of assessments to be written, and adds to the number of context switches (getting into marking mode and out of it).

In Cambridge—the most pared-down, research-focused regime I know—these simply don't happen, at least for computer science students (and, er, with the caveat that my knowledge is possibly out-of-date). If you miss a deadline, the late-submission penalty is severe and you don't get another chance. There is only one exam period a year, and if you're not fit to sit your exams, you either get a special result that year (“deemed to deserve honours” a.k.a. DDH) or you fail, depending on whether your term-time work has been up to scratch. At other universities I've worked at, any module's assessment is done at least twice per year, with all the doubling that entails.

Per-student casework, as it falls on the academic

Things like submission, marking, extensions, mitigating circumstances, release of marks, appeals and so on can generate a surprising amount of student-by-student “casework”. Mainly this is when students end up in odd or difficult situations. In addition to the inherent task of marking resit papers or what-have-you, there is the added task of regularising the student's situation, for example by obtaining exemptions (or advising the student how they can do this), or simply by forwarding information that has got lost owing to the student's unusual path. Academics invariably do get pulled into this, to a varying extent; the lurking variables, aside from the number of students, are about processes themselves (which can be heavy or light) but also about how they are administered and communicated.

If support staff are adequately resourced, academically literate, communicative and procedurally empowered to solve problems by themselves, they can take the casework off the academic. If information such as internal web pages communicates effectively with students how they can help themselves, demand can be dissipated at source. However, when these are not the case, academics get sucked in, under either our lecturer or tutorial hats. We can end up as a hapless e-mail go-between, forwarding student queries to the admin office, unable to help but also unable to extricate ourselves. If the bureaucratic machine appears to be stonewalling a student, perhaps from underresourcing and/or lacking the right systems (e.g. a ticketing system for student requests), often they will re-approach the academic rather than wait for their case to reach the front of the invisible queue. This generates more requests on the system in general and more tasks for the academic; last summer it was a major problem for me at King's.

(I can't fault the student's tactics here: an academic advocate can help cut through bureaucracy and sometimes prevent unfair outcomes. Nevertheless, this is a high-cost way for the institution to enable this outcome; a more empowered and better-trained administration is a better option.)

Misconduct levels and processes

Misconduct cases are a particular kind of casework, but worth covering specially. A Kent colleague used to bemoan that “we incentivise cheating” because the penalty if caught is a mark of zero, which is no worse than not submitting at all. So the expected return on a plagiarised submission is positive academic credit. At King's, the incentives are not too different, although perhaps slightly less bonkers because repeat offences are (I'm told) treated especially seriously. I'll believe that when I see it. In any case, reporting and seeing through misconduct cases places a large burden on the academic; the incentives are either to let it slide or to design assessments so that cheating is undetectable (note: undetectable, not impossible! the former is feasible but the latter usually isn't).
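
To make the arithmetic concrete (with made-up numbers): if a plagiarised submission scores, say, 60% when it goes undetected, then even at a detection rate of one in two its expected mark is 0.5 × 60% + 0.5 × 0% = 30%, strictly better than the zero that comes from submitting nothing. Under a zero-if-caught penalty, cheating beats non-submission at any detection rate below 100%.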

It seems universal across (at least) CS departments that the cost of dealing with increased misconduct levels has not been adequately “priced in” as student numbers have been ramped up. On top of that, the greater use of automated marking, to deal with greater numbers of students, has, I suspect, allowed a lot of undetected collusion (and worse). Pandemic-related changes have also increased these tendencies, through greater use of online assessments (open-book exams are an especially terrible idea) and through general depersonalisation. I hypothesise that the tendency to depersonalisation—particularly strong at King's since in Informatics we no longer give in-person lectures—has led some students to fall more into a “system-gaming” mindset, rather than seeing their degree as the matter of personal development that it is supposed to be.

Institutional resistance to academic-as-admin

Beyond student casework, various non-academic tasks can be intentionally pressed upon academics. These include submitting attendance data, mechanically diddling marksheets, physically sorting exam papers, signing paperwork for this or that, and so on. How much this happens is a cultural variable of the institution. During my time at Kent the institution attached considerable virtue to form-filling and other paperwork, to the extent that a colleague was rebuked for suggesting (albeit tactlessly) that academics might not be the right people to carry out sorting of exam scripts. At King's, processes and admin work are much less feted, but there is a flip side: a greater prevalence of cases where “nobody knows what to do”, in which obtaining a sensible outcome can be a stressful process requiring much ad-hoc effort, possibly dragging out over a long period.

Churn in lecturing assignments

It's well-known that teaching something for the first time is hugely more work than subsequent iterations. Therefore, churn in teaching assignments is a hidden source of load. In my three academic years at Kent I lectured on five distinct modules. In each of those years I was lecturing something for the first time. In fairness, if I'd stuck around one more year I probably would have kept a stable assignment at last. But some churn was built in: the department had a very fine-grained workload allocation model which attempted to treat fairly a large variation in research-activeness in the academic body. Although most lecturers were nominally research-active, in practice the extent of that varied widely, so if you got a strong run of papers or a grant success, you would expect to shed a chunk of teaching in the next year, to be picked up by someone else. At King's, there is no such fine granularity. Instead there is a vaguely standard load and two exceptions: “workload reduction” for those with major buy-out (50% or over), and a separate, explicitly teaching-focused career path for some academic staff. Although the Kent system is “fairer”, the King's system seems to generate a lot less churn.

Vulnerability to market conditions

In the UK's current marketised approach to higher education, there is much pressure to constantly revise the teaching offering, for example to chase new segments of the student market. I've written before about how foolish and counterproductive a trend-chasing approach can be. But this pressure is greater in some institutions than others, usually because of market position and a lack of security therein.

When in 2020 calamitous university finances at Kent led to a purge of modules, the knee-jerk measure was to discard “unpopular” modules. Of course this inevitably also discards student opportunity and staff investment; I found this exasperating, because in my view duplication of teaching was a far bigger problem, yet consolidation of duplicates was not being considered. The C++ module I taught was cancelled even though clearly there was enough demand for C++—just not among the needlessly narrow sliver of the student body to which the course was being offered. (In fact I had been asked to teach extra C++ content to a different cohort of students in another department. I said no, because there was no way of getting any workload credit for doing it, and because my teaching load was already too high.) The module went away and, of course, I was given something new to lecture the next year.

While it's true that smaller-cohort modules are higher-cost, they often provide higher value for the students that choose them. The compilers module that I taught during 2019–21 was one that, in both years, some students declared the highlight of their degree. But in 2020, this module would also by rights have been cancelled if the standard numbers test had been applied; rumour was that it was “safe” only because cancelling it would have caused embarrassment in a department where “Programming Languages and Systems” was arguably the most successful research group.

Duplication of teaching effort

Although I just covered the “what” of this, it's worth considering how a department can end up with internal duplication in its teaching offering. One ingredient is what we just discussed: courting many market segments, leading to many formally distinct programmes, each with formally distinct modules. The mistake is letting formal distinctions become actual distinctions. It takes wise management to stop this from happening. Sadly the unwisdom tends to be on the academics' part in this case.

At Kent, as a department we used to teach introductory computer science four times over: to Bachelor's students of “Computer Science” at Canterbury, to Bachelor's students of “Computing” at Medway, to “conversion Master's” students and to “Year in Computing” students (Bachelor's students studying an extra year in an otherwise non-CS degree). This could have been fine, but there was some, though very little, sharing between the four programmes. Sometimes there were good reasons for distinct provision, but at other times there definitely were not. (Once I was told we can't merge two near-identical courses “because that module code starts with a 3 and that with an 8”.)

Sometimes the reasons were more logistical, such as the programmes' teaching being timetabled in incompatible ways. However, even timetabling barriers don't excuse a fully separate offering: the same people delivering the same course in two places at two times is much less work than different people delivering two formally unrelated courses, each with their own materials (and no guarantee of mutual awareness!).

At King's there is less duplication, and proposals to introduce new modules seem to be treated more circumspectly. However, we are just starting up a new “Artificial Intelligence” programme that certainly has the capacity for adding duplication (allow me to report back). I'd expect departments' care in avoiding duplication to be roughly inversely proportional to their teaching-orientedness: if spawning a new course is viewed as an expensive undertaking because of the other things the academics could be doing, more effort is made to avoid it, while if the attitude is “teaching is what we pay academics to do” then of course they can always be made to do more! At Kent, prevalence of the latter attitude was one of the factors pushing me away from the institution.

Over-use of crude metrics like staff/student ratio doesn't help things, because this phenomenon literally doesn't show up in them: same staff numbers, same student numbers, but many ways in which the teaching work can be structured. Although these structurings measure as equal, they are far from equal in reality: with many modules on the go, I found that the “divided attention” factor degraded how effective and conscientious I could be in any one part of my teaching.

Academic atomisation

Continuing the broad issue of duplication, a further variable is the extent to which academics talk to each other. Departments differ a lot on how readily they build community. Even pre-pandemic, King's was reportedly not great on this, because central London is expensive and time-consuming to get to, so working from home is more common. (Also, academics not getting their own offices doesn't incentivise working from the department.) At Kent, we were pretty good at talking to each other, with a supportive day-to-day culture, although I don't know how well this has now been revived since the pandemic.

Academics talking to each other, forming a support network, is an important way in which the academic body finds out the good ways to deal with a task or to minimise the work generated by a given issue. One hazard is that with many meetings now continuing to be virtual, there is less opportunity for informal chat, such as in the spaces before and after physical meetings, yet it's largely in these spaces that support networks are sustained.

Bad technology

Ill-fitting technology pushed down from above can easily make simple teaching-related tasks complex to perform. The converse is also true—good technology does exist—although for some reason this phenomenon mostly seems to be going one way. To pick an example continuing the duplication and atomisation themes: nowadays Moodle and similar systems have brought new levels of secrecy and obscurity to teaching within a department. Whereas not long ago, course materials went on a public or internal web page, or could be picked up in hard copy (gasp!) from an office, nowadays you can't even look at your colleagues' materials without requesting access one-by-one, unless they have had the foresight to make their page public—which is not the default. This inappropriately corporate “need-to-know” access control policy is an imposition from above, through the imposition of poorly chosen technology.

As another example, at Kent we had a very nice shared teaching server into which all students had ssh access. The students quickly learned the basics of how to use this system (it was used by many modules) and it proved a very flexible environment. For example, within it I wrote a gated submission system that could give students pre-submission feedback (helping them) and stop them submitting ill-formed assignments that would foil my automatic marking (helping me). It also ran an “anonymous Q&A” CGI script that had been used by several modules over many years. At King's there is no such server and for modules wishing to have a shared machine, a bespoke set-up needs to be provisioned each time, at much academic cost. It turns out that this can be done, at a cost of explaining the requirement from scratch several times, and continuing to fight for admin attention when (for example) the initial provisioning level is wrong or the well-running of the service is otherwise overlooked. These problems would not happen with a single widely-used service of recognised importance to the department's teaching, as was the case at Kent.
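
(For the curious: the “gate” in such a system needn't be anything elaborate. Below is a minimal sketch in Python, a hypothetical reconstruction of the idea rather than the actual Kent system; the file names and the compile check are assumptions for illustration. It assumes a submission arrives as a directory whose contents must pass some well-formedness checks before being accepted for automatic marking.)

    #!/usr/bin/env python3
    # Hypothetical sketch of a gated submission check (not the real Kent
    # system): accept a submission directory only if it is well-formed
    # enough for automatic marking, and explain the problems if not.
    import subprocess
    import sys
    from pathlib import Path

    REQUIRED = ["README.txt", "answers.c"]   # assumed assignment layout

    def problems(sub):
        """Return a list of human-readable problems; empty means OK."""
        found = [f"missing required file: {name}"
                 for name in REQUIRED if not (sub / name).is_file()]
        src = sub / "answers.c"
        if src.is_file():
            # The marking scripts need compilable code; catch failures
            # now, while the student can still fix them.
            r = subprocess.run(["cc", "-c", "-o", "/dev/null", str(src)],
                               capture_output=True, text=True)
            if r.returncode != 0:
                found.append("answers.c does not compile:\n" + r.stderr)
        return found

    if __name__ == "__main__":
        issues = problems(Path(sys.argv[1]))
        for issue in issues:
            print("NOT ACCEPTED:", issue)
        sys.exit(1 if issues else 0)   # nonzero exit keeps the gate shut

Run at submission time, a check like this pays off twice over: the student gets immediate pre-submission feedback, and the marker never receives a submission that would foil the automatic marking.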

Correspondingly, here at King's even more faith is put in bought-in solutions like Padlet, Gradescope and the like (though these were by no means absent at Kent). My view is that in a computer science department these tend to be relatively high-cost and low-value.

Academic autonomy in respect of teaching

The IT problems I mentioned aren't really problems if IT systems are provided for discretionary use, if plenty of alternatives are available, and if academics are free to choose. It's when their use is mandated that the problems start to bite. Reading David Evans's excellent (but understandably US-centric) “How to live in paradise” left me suspecting that many top-level US public research universities leave quite a lot of control in the hands of academics about how to teach, compared to ostensibly “peer” institutions over here (if one includes King's in that, say; one can certainly question this!). If this is true, perhaps the US tenure system and some kind of fractionally more academic-led culture can explain this. (Or I could be completely wrong; this may be a greener-grass effect.)

My experience of UK departments is that there is more pressure, and sometimes a hard requirement, to teach in centrally agreed ways. Correspondingly, this could perhaps be explained by the tight grip here of UK government policies and their knock-on centralising influence on institutions. For example, at King's we have a default formula for teaching delivery, adopted at department level. One can ask to follow an alternative model, but that's it; the menu has exactly two options. Even worse, the alternative model comes pre-torpedoed for undesirability; it seems to be designed so that nobody will choose it. I'm still not sure why this is considered a good idea.

In fairness, institutions do vary on this even within the UK. To pick my favourite outlier again, in Cambridge a few years back, a colleague decided to teach C programming using a flipped-classroom approach, recording lecture videos and then using the timetabled lecture slots to help students with examples. (This in turn was possible only because the class size was below 100.) No other course in the department was taught this way at the time. I'm not sure whether he did this unilaterally or following some permission-seeking, but I'm fairly sure it wasn't an arduous approval process. This kind of autonomy is rare in the other UK departments I know. Compounding this, although deviation might (or might not) be tolerated on paper, at many institutions it would likely cause the academic many admin headaches if even the first round of student feedback were less than glowing.

Attitudes to student feedback

This is my last point and is one that has taken me a bit longer to tune in to than the others. My experience has been that departments vary a lot in their attitudes to student feedback. This can have significant consequences for workload: a department that takes every student comment as a knee-jerk “action point”, which the academic must resolve, will create a lot more work than one that simply uses student feedback as a crude aggregate barometer. This can be thought of as an internal dual to the “external” churn of one's assigned courses or modules; it is churn occurring internally to each of those courses.

Pressure to improve can be helpful, but not all the pressure actually does improve. For example, tacit “disengagement compacts” between students and lecturers (and I thank a nameless colleague for bringing that phrase to my attention recently) create pressure instead to lower standards. I wager that most of my colleagues at King's, at least those who've been around a while, would privately agree that this effect has become embedded in our department. Some have already privately said so; as one put it (I paraphrase slightly), “the approach here is to give them a gold star just for turning up”.

Workload-wise, there are other problems arising from a high internal pressure to change. The over-use of video-based delivery interacts especially badly with it, because a unit change is much more expensive when made to recorded video than to in-the-room delivery. My department currently has a policy I regard as a nonsense, which is that lecturers should “re-do 30% of videos each year”. Aside from the madness of video lecturing, it's a silly notion to enforce yearly incremental change, since changes in one place invariably have ripple effects elsewhere. This is why academics prefer to improve their courses every few years in a “big bang”, rather than try to focus changes to a small portion every year. Changes simply don't focus nicely like that! And video delivery amplifies this effect: something as simple as slide numbers visible in a video can make it a Herculean task to achieve a given change to an acceptable standard.

What that standard is also varies. Video-based delivery tends to coincide with an overemphasis on production value. As someone who was often lectured from hand-drawn OHP slides, my view is that any big emphasis on production values means the time isn't being spent on the important stuff. Yet now, if your materials look a bit raggedy, students are sure to complain, while lecturers are given gold stars for producing something slick but superficial. Different departments can be infected by this madness to greater or lesser extents.

The end

I think that's all I have. I hope it was useful to someone! I've not been particularly guarded about what I see as structural problems in how teaching is done, both in my current department and at others I know. However, no department gets away unscathed... I think that says a lot about the general state of UK higher education right now, and no doubt the same problems exist elsewhere in the world too. Reflecting more widely on “the job” (which isn't the same job everywhere), above I made a reference to David Evans's “How to live in paradise”. This is a piece I recommend and that I intend to revisit more broadly in a future post.

[/highered] permanent link contact

Fri, 01 Apr 2022

Anti-prosperity in the UK, part one

(I thought about calling this “Never Mind the Bullshit Jobs”... but that would apply better to a future part.)

Various rhetoric about government-funded academic research here in the UK seems increasingly to use the word “prosperity”. For me this has become a major red flag. Despite the positive-sounding line that “prosperity” exists as an easily recognised and obviously good thing, it is often invoked by interested parties attempting to whitewash issues that in fact are subtle and, usually, deeply political. In many cases, claims of “prosperity” are used as cover for outright harm.

(This post will be overtly political. However, I don't believe much of it is controversial.)

For examples of anti-prosperity, we only have to look at ad-funded social media “prospering” from monetised hatred and insecurity, or ad-funded cloud computing “prospering” from mass surveillance. Alternatively we can look at the countless corporations “prospering” from the constant private and public investment in unsustainable high-energy technologies, from electric cars to data centres to cryptocurrency, deliberately disregarding both the carbon cost and the other emitted pollutants (whether particulates, carbon or coinbros) while starving out the adoption of obvious low-energy alternatives. We could look more generally at the epidemic of “crap automation” afflicting our society—lower-cost but fundamentally less effective “solutions” to all manner of formerly human service roles, most obviously in customer services and retail, but nowadays increasingly in knowledge work (transcription, translation, tuition) and even in healthcare. All of these bring opportunities to make profit for someone, but also to lose value for people at large. Nobody in their right mind would call that “prosperity”, yet that is what is too often meant. As Illich argued, our very definitions of progress are controlled by a select few—whom he called “self-certifying professional elites”—who are very often engaged or invested in programmes of work that once arguably did good, but have shifted towards doing outright harm.

Public and private harm

When public harm is profitable to private interests, we should expect those private interests to pursue it. What we should also expect is for academics and government—those who have the public interest in mind—to act against that harm. Instead, today's private harmdoers are helped on their way, by complicit academics and wrong-headed government initiatives that have created an extreme dependency on external funding and extreme contingency of that funding on indicators of “success” defined as benefit to private interests.

Here in the UK, recent governments have worked hard to more closely align public fundability of university research (whether by project grants or REF-dependent block grants) with benefit to private-sector industry. This is called “partnership” and “impact”. In the other direction, private harmdoers have worked hard to embed their interests and attitudes within universities. This is called “engagement” and “collaboration”. In the middle of all this we find the hapless individual academic. The issue is both institutional and individual. It's easy to criticise institutions, who certainly try very hard to portray this state of affairs as “the system” and therefore “the job”, in which academics are duty-bound to do whatever it takes to bring in money. Our institutions are so desperately, increasingly hungry for outside money. Promotion and other rewards are offered for “impact” and “engagement” but, for some reason, never for public good as an end in itself. Instead, “good for some business” and “good for the public” are assumed equivalent.

While the incentives on academics have never been more toxic or less conducive to good societal outcomes, academic pushback on these bad incentives has been about as effective as a hard-boiled frog. Once upon a time, one might have expected academics to speak out en masse. Somehow, individuals have become rotten too. To borrow two phrases from Jeff Schmidt's sizzling polemic “Disciplined Minds”, increasingly it is the individuals who have “assignable curiosity” and a capacity for “alienated labour” who can succeed in the academic system. If you actually care about something, the system is destined to frustrate and contain you. The changes of recent decades—to a more bureaucratic, managerial university system in general, and a more short-term impact-centric research system specifically—could hardly have been tailor-made any better to further this effect.

Cosiness and carbon

Let's consider perhaps the most glaring example, of how cosiness with the fossil fuel industry continues to be rife within our universities. While I was a PhD student at Christ's College, Cambridge, a cheerful man named Reg introduced himself to me as the new Bursar, explaining that he had recently retired from British Petroleum (BP). Here was a company that for decades had been roundly thought of as comprising jolly good trustworthy establishment sorts who were bringing “prosperity” to the great British nation. More recently it is better known for its core business of bringing about the climate disaster, as well as its criminal liability for the largest marine oil spill in history. It does so while frantically trying to greenwash its image by touting a tiny fraction of its subsidiary activities. In 2018 the Christ's College Council voted against terminating the College's investments in fossil fuel companies, despite what was at the time a conspicuous lack of coherent argument supporting that decision. A little earlier, still in Cambridge, in a debate about university investments, an academic funded by the fossil fuels industry stood up and pronounced these industry players as “the good guys” of decarbonisation. That claim was not plausible at the time, and even in a few short years, it hasn't aged well.

Implausible though they were, I don't doubt that these beliefs were well-meant. Let's say your research is about combustion efficiency. Great! If it succeeds, we can burn less fuel per unit useful energy. Isn't that helping? And if BP (or whoever) is funding this, maybe they're doing good? It's easy to rationalise yourself into believing this sort of argument, particularly if your career, wealth and standing all benefit from it. Unfortunately it doesn't stand up to systems thinking. It's well accepted that humanity's goal now must be to keep as much carbon in the ground as possible. Increasing the efficiency of a destructive technology has many indirect effects that work against that. It is likely to prolong these technologies' lifespans, maintain for longer their positions in the market, give fodder to lobbyists' efforts to stall government regulation, bolster the fortunes of companies invested in pursuing them further, excuse and enable superficial non-fixes (hydrogen to replace natural gas, anyone? how much of that is really going to come from green electrolysis?), slow the writedown of fossil-fuel plant investments (anyone heard of stranded assets?), hence lengthen the operating life of these plants and practices, and ultimately lead to more fossil fuel being burnt. And that is no accident; it is what these companies want because it ekes out more returns from the money they have sunk into existing activities, plant and expertise. It benefits their shareholders, to humanity's detriment.

Academics less reliant on funding from or “partnership with” giant corporates might be more awake to all this. Put differently, externally dependent research programmes are inherently corruptible. Thanks to prevailing bastardised notions of “impact” and “engagement”, even publicly-funded “academic” programmes are now largely in that category. If what we call “industry” were on balance doing good in the world, this might not be so bad. But the consumption-based industries underpinning modern society have mostly long since crossed Illich's second threshold: they are doing active harm, while passing it off as progress. Incumbent corporates' fingers reach into most or all influential institutions of the state. They spend good money on appearing like the good guys, and for them it is well worth it.

Capture and complicity

Quite often I can ascribe academic complicity to benign naivety... but not always. One non-executive director of BP is a certain Ann Dowling, a professor of considerable standing at Cambridge, who has also happened to wield much influence over funding nationally, as chair of one of the four REF panels in 2014. Besides her, many other academics in Cambridge have deep relationships with BP and similar companies. (The same is likely true in many leading institutions; I'm picking on Cambridge because I know it the best.) Not so coincidentally, divestment from fossil fuels was vehemently resisted by much of that university's establishment in the face of increasing grass-roots pressure during (especially) 2016–18. The idea that “fossil fuels are our trusty financial friend” was heavily ingrained, not just as an ambient belief but by concrete financial interests, both institutional and personal. These were enough to bring out dirty tricks and “arguments” that would shame any free-thinking academic. As these companies know, thinking is more easily captured than carbon.

This was never more blindingly apparent than in 2018, with the issue of a non-report by a so-called “Divestment Working Group”. The group was ostensibly answering a grass-roots call for the university to assess the benefits of divestment. However, it was actually given vague and irrelevant terms of reference and delivered a similarly vague and irrelevant report. Later, in 2019, it was reported in The Guardian that professors Simon Redfern and John Shakeshaft, both members of the group, failed to declare an apparent conflict of interest from their role in processes around accepting funding from BP. The linked “conflict of interest register” (not public) indeed does not mention such interests. Perhaps dealing with BP money was so routine for them that it slipped their minds. Perhaps there is another explanation. But either way, this situation represents a problem. Even crediting those involved with the best will in the world, it does not seem plausible that these financial relationships were fully isolated from other decision-making.

The same document also reveals that the chair of that working group, Athene Donald, was decent enough to declare that she personally held stock in three well-known fossil fuel companies. (The holdings are described as “small”... but as any good physicist knows, small is relative.) It says a lot that this wasn't enough to red-flag her as chair; one doesn't have to be too cynical to suspect it may even have green-flagged her, given the evidently low risk of her putting her mouth where her money wasn't. The Vice-Chancellor at the time the working group was formed, Leszek Borysiewicz, had told me in person that he opposed divestment because his overriding concern was the apparent hypocrisy of holding research collaborations with companies like BP in which the university would meanwhile be declining to invest. The same Guardian article quotes a university spokesperson stating falsely to the press that the working group's recommendation of (essentially) inaction was “backed by the full Council”. In fact, a note of dissent had been submitted by two academic Council members in addition to the two student members. The relevant press release shamelessly attempted to bury the academic dissent under an obscure heading that appeared to conflate it with the student voice. Perhaps, again being charitable, the spokesperson simply absent-mindedly overlooked the dissent; “of course” there was consensus that supporting our fossil fuels multinationals, the good guys who so generously fund our research, is the path to prosperity. Again, that's charitable in the extreme, but it's a problem even if accurate.

Cash and (career) currency

It's true that some progress has since been made on the fossil fuel issue. Both Christ's and the University of Cambridge have since decided to divest from fossil fuels, at least ostensibly and perhaps also for real. But the rot still goes deep. From where I've been standing, this change does not appear driven by any sudden re-evaluation of the nature of prosperity. It was pursued only once not doing so became toxic for PR in general and for alumni relations in particular—thanks in part to press coverage, as seen above. In short, the university acted to safeguard its own income stream, rather than out of public-spiritedness.

This isn't about fossil fuels per se. There is nothing to stop complacent institutions blundering into blinkered support for the next industries of anti-prosperity, despite their stated do-gooder aspirations. In fact this is already happening. My own field is not short of examples: where once Google or even Facebook might have seemed non-evil dynamic young collaborators, now their business models and operating practices are obviously beyond redemption, yet in the intervening time, their hooks have become deeply embedded in our universities. As more and more of academics' jobs involve fundraising for those self-same jobs, universities' resistance to blunders and infiltration can only decline. Academics' eyes light up the moment corporates dangle what, for them, are quite puny sums but to an academic are career currency. Career success selects for those who have the least compunction.

Change?

What can reverse this dynamic? Currently I don't see any tendency for self-correction—neither in general nor even (unbelievably) in the specific case of fossil fuels. At a September 2021 departmental away afternoon here at King's, we were told that the best kind of impact is financially quantifiable impact, and (with a straight face) that a really great success story at King's recently was enabling Schlumberger to drill for oil and gas in quantifiably more affordable ways. This is the core activity of the fossil fuel industry—not even its greenwashed fringe. We were so proud to be directly helping with the out-and-out dirty work.

Here in Cambridge, where I live, Schlumberger is also very present, and there is a nascent campaign to get the university to dissociate from it. However, reactive single-issue campaigning is not the answer. For a few headline issues, it might bear some fruit, as it did with the student campaign groups who tirelessly pushed the debate on the fossil fuel issue. (Many staff members did so too, of whom I would single out Jeremy Caddick for his heroic tenacity.) However, such campaigning takes an awfully long time to get results even in the best case. For many issues, campaigning just never reaches critical mass.

Since so many individual academics evidently can't be relied on to shun anti-prosperity as a matter of course, the only hope is if we can somehow change the incentives in a more blanket fashion. If we are to talk about real prosperity, and public research funds' ability to deliver it, we need to ask: where's our incentive for not pursuing anti-prosperity? Hatred, surveillance, fossil fuels, high-energy technologies in general (once again following Illich): how can we ramp all of these down to zero? Where are the institutional and governmental policies offering reward for work on doing so? Where are the penalties for work that goes the other way? Given the corrupt state of both government and universities, I'd even settle for a weaker version: how can we not be penalised for working on real prosperity? In a regime where big-number “financially quantifiable impact” and funding contributions from “industry partnerships” are given so much weight, there is a de facto penalty against work that isn't assisting the big-money corporates. Yet such work is exactly what real prosperity must involve. If any institution should be incentivised against anti-prosperity, surely universities should. Yet all the incentives point completely the wrong way.

Relatively small tweaks could easily help with this. For example, benefit to charities or other not-for-profit partners could be favoured over private-sector “impact”, and such partnerships could explicitly be promoted. Currently, they are latently discouraged since the usual word “industry” does not even bring these options to mind, and because financially quantified “impact” numbers are inevitably either smaller or unavailable. Conservative governments are unlikely to deliver funding schemes that reward these, but it doesn't stop local efforts to prioritise them. I suggested above that public good—as distinct from “engagement” or “impact”—could easily be an explicit criterion in promotion, for example. Local funding pots could also prioritise public good. Small financial incentives can exert a large sway on academics. Unfortunately, as we saw above, the higher strata of universities have been captured by a particularly dangerous gaggle who do harm while earnestly believing they are doing good. Combined with institutional inertia—fossil fuels are still seen as a trusty financial friend—the chance of reform in local institutional systems and processes seems negligible.

So ends my first rant about anti-prosperity. By picking on fossil fuels, I've chosen a headline issue that is an easy target. With its being rather outside my area, I could perhaps be accused of missing nuance. In one or more future posts I'll cover two further anti-prosperity issues rather more local to computer science. One is how the popular notion of “automation” (sorry, I mean “AI”) is being missold based on industrial notions of automation that ignore issues of human value and quality. The other is how aggregation has acquired an overwhelming mindshare, to the point that even bright people forget that aggregation or “scale-up” is not the only technical means of creating human value in industry (indeed, it readily destroys value). All three issues are related, in that the same small minorities benefit from all three, and each reinforces the others. It is a vicious cycle which universities should be countering, not amplifying.

[/highered] permanent link contact

Tue, 11 Jan 2022

(Tell me why) I don't like funding {applications, bodies, policies}

I was recently successful in my first non-trivial grant application, for a New Investigator Award from EPSRC. Great! Well, sort of. It's great because it will hopefully enable a talented person (who isn't me) to spend a few years doing some interesting research. In other ways, it's not so great. I have been grimacing at every well-meaning “congratulation” I've received. The process is a horrible, contorted mess that makes me sad, angry and anxious. I hate it and it is literally making me ill. I have one more proposal in the pipeline and another planned for just over the horizon. It is already too much for my blood pressure to handle. I am hoping this blog post will unburden me somewhat; if you read it, please read it in that spirit.

A small factor is that proposals trigger a social anxiety I have about “asking people for stuff”. I spend my life thinking differently from most other people I meet. So, psychologically, I much prefer to be independent and not need approval from others in what I set out to do. Writing papers is the culmination of this process: I get to elaborate precisely why my “different belief” is justified. Writing a funding proposal, on the other hand, is inherently speculative and so is a much less secure enterprise. I find myself having to “make stuff up”, or write things that “sound good” but don't stand up to scrutiny. How can they, when the research isn't done yet? It's an exercise in double-talk, appealing to some nebulous reader of whom I have little picture and no reason to respect. It literally stresses me out, mentally and physically. (I suspect this may be partly an autism-spectrum reaction, of being forced to meet expectations of a perceived “normal” reader.)

But it gets worse. A further aspect is that whenever I talk to funders, or to institutional support staff trained in funders' ways, I quickly get the sense I'm talking to someone with a wacky, bureaucratised and dangerously inaccurate model of how research works. My distaste and/or rage at this discourse has been stoked by many years as a ‘postdoc’—a concept fully deserving scorn-quotes, and one whose ever-growing prevalence itself owes to this inaccurate model. But now, as a nominally grown-up academic citizen, I am forced nose-holdingly to claim allegiance to all the nonsense. I feel complicit in the perpetuation of this idiot system and the resultingly horrible life situations and dire value-for-money that arise from today's publicly funded research.

All that would be bad enough even if the system were otherwise well-conceived and well-executed. Of course, it isn't. Its practices are founded on many wrong beliefs and flawed agendas. Here is a run-down of those as I see them.

The “project” agenda. Funders love the word “project”. They love to say that they fund projects. But in reality, almost all meaningful research projects both predate and outlive the funding vehicles that support them. A grant can be a vehicle for making progress with a worthwhile project, but it seldom delimits it.

The “impact” agenda. This needs no introduction. It suits my work well, but it's pushed to an inappropriate degree. To espouse it is to support misguided government beliefs that the only research worth doing is the immediately impactful. It's impossible to overlook the parallel with another wrong message that is rife in UK universities today, that the only degree-level education worth having is one that teaches vocational “skills”, preferably in “STEM”.

The “investigator vs researcher” agenda. Any project grant I might be awarded is partly or largely a vehicle for giving others time to explore what they want to explore. It may be that that is not specifically advancing “my” work, i.e. the work hypothesised in the proposal. Yet the proposal process requires me to ignore the former and pretend only the latter will happen. We can call the former “covert independent working”. It is regrettably necessary, because public policy allocates smaller and smaller shares of funding for overtly independent work (basically personal fellowships and a small fraction of PhD studentships). I have personally relied on this pattern of “covert independence” earlier in my career (albeit not without large doses of compromise and frustration). I've disbenefited from its absence (“one out of three postdoc positions ain't bad”). In disciplines without huge lab overheads, any researcher worth that name is also an independent investigator to an extent. That extent does vary. Although it might be workable to leave it down to each PI/researcher pairing to negotiate a mutually satisfying mix, regrettably, power imbalances skew this process and it fails a fair fraction of the time. It happens increasingly often when “successful” academics style themselves as “managers”. Negotiation is necessary only because funders' offerings have not adopted anything like the right mix of funding people (or institutions) versus funding projects, as I've ranted in the past.

The “national importance” agenda. National Importance should be a non-issue for any research worth funding. Inventing the future is always in the national interest. Feeding the business interests of Locally Invested Corporations may or may not be. If the work in question is not also in the global interest, it's simply advancing a race to the bottom, which shouldn't be rewarded by public funds. Examples of such races include financialisation, much work under “defence”, blockchain, fossil fuels, and so on. All this is not to be confused with the fact that some national environments might make a good (or poor) place to do a given piece of research. If the exercise were simply about this, I could get on board.

The “training” agenda. Funders' calls and particulars are shot through with the belief that a bottleneck to academics' research success is their lack of “training” and “personal development”. In reality, the main bottleneck—besides raw talent and willingness to work self-harmingly hard, of course—is almost always firstly time and secondly consolidated time. That is not to say that all training is a waste of effort. But somehow, the present machine yields actually-useful training in at most 10% of cases. The greater part of that machine's function is much more sinister: to deflect responsibility, in the process gaslighting people into thinking that when their high-quality work or skills meet with limited success or limited opportunity, the problem is them—and certainly not the underresourced, over-competitive, incentive-broken system they inhabit. “The system works great for people who are better than you! Look at all the training on offer! If you don't succeed, you've only yourself to blame.” Meanwhile, public and institutional policy continues to ensure that no matter how strong the field, only a slim few garner the markers of “success”. The training agenda is about denying this political reality. It is most strongly in evidence in the situation facing postdocs who are searching for a “proper” academic position: funders and governments constantly dream up “training” or “career development” initiatives that they claim will “solve” this problem. It obviously cannot, because the problem is structurally embedded in (current) public policy. Training can at best move pain around and at worst just waste everyone's time. The added gaslighting just inflicts more pain.

The “token innovation” agenda. Faced with an incentive to “stand out” in a competitive field, or to appear “relevant” amid the initiatives and priorities du jour, the proposal-writing exercise gains an ugly new side: hoop-jumping, box-ticking tokenism. Thankfully I did not have to engage in this, but I've definitely seen it in action. As one very senior and successful colleague once put it to a visiting EPSRC bod: “you are just giving us incentives to lie”.

The planning agenda. Writing a “case for support” is a festival of bizarre over-planning. One has to tacitly claim to anticipate the future to an absurd degree, detailing “deliverables”, timelines on Gantt charts and the like. I try to make peace with this on the grounds that it is “a plausible story” and about “planning, not plans”. But one needs to keep up the mask.

The ‘excellence’ agenda. An open call from our esteemed European colleagues states proudly that “excellence is the sole criterion”. But excellence is not a criterion at all. Excellence with respect to what? The word has been abused to a point that is almost beyond satire, although this tour de force by Matthias Binswanger makes a compelling go of it.

The (metaphorical) suit-wearing agenda. Reviewers do their best to assess proposals' technical merit. But given the inherently speculative nature of any proposal, even with the best will in the world a substantial component of the “criticism” a proposal receives at review will be founded on superstition, impression and “feelings”. This is only natural. Proposals just don't have space to explain anything serious in scrutiny-supporting detail. Human beings faced with this underconstrained task fall back on a latent sense of “what a good proposal looks like”, involving large components of folklore and personal taste. This is witnessed by the huge amount of contradictory feedback I've received on draft proposals to date. “You have too few citations” but also “you have too many citations”. “Only include ‘management and risk’ in large grants,” but also “include more about ‘management and risk’”. “More verbiage” but also “less verbiage”. The whole process resembles the blind leading the blind. This is not because reviewers don't know about good research (many do), but because there is no reliable shared understanding of two mutually interdependent things: (1) how to detect a proposal that is likely to lead to successful research, and (2) how to write a proposal in a way that signals that likely success convincingly. So at last we reach the suit metaphor. The art of writing a proposal is like wearing a suit. It's an exercise in “looking the part” far more than demonstrating substance. The associated practices persist not on their established merit, but because there is a self-perpetuating belief that engaging in the ceremony, and doing so with an air of conviction, is a predictor of better outcomes. As a staunch suit non-wearer, I also don't believe in this ceremony. There are plenty of studies suggesting that far less effortful and more equitable ways to distribute money would deliver equal or better outcomes. Unfortunately, suit-wearers tend to become gatekeepers.

The funder-in-control agenda. This is an interesting one and is possibly the reason why academics find the system tolerable. I levelled up slightly when I realised that the whole process is set up to deceive. Government funders act like they are in control. The application process is fronted by bureaucrats who know little about research or its culture, but are instead trained up in the image of the funder's management (civil servants) and paymasters (politicians). Thankfully, these people are largely not in control. Researchers on the ground have a very different culture from all this. It is these researchers who actually review proposals and take part in panels (the meta-review that makes funding decisions). So, the funder's communications are at best in need of a pinch of salt, and at worst form a layer of active misdirection. Funders and their agents often advise in certain directions, such as “say lots about training and personal development”, “say why this should be funded this year and not next year” (I kid you not) and so on. However, the success of the application lies overwhelmingly in the hands of academic reviewers, who mostly don't care for these perspectives any more than the applicant does. (I say “mostly” because some do get institutionalised.) The same is often true of institutional “advice” from support staff—they tend to echo the funder's line rather than reality, since they are trained and briefed by funders. Many do have some experience on the academic's side of the table, but it tends to be up to the ‘postdoc’-ish level at most, so typically includes only a limited taste of writing or reviewing proposals.

The “prosperity” agenda. Faced with funders' glossy documents such as EPSRC's current Delivery Plan (I almost said “Deliverance”) repeating the word “prosperity” loud and often, I can't help feeling that an Orwellian reading of the word is sadly necessary. We know that countless technological and political changes that claim to be “delivering prosperity” are actually delivering poverty, alienation, dehumanisation and despair for much of the population. “Prosperity” has been conveniently redefined as consolidating the capital of the 1%. It saddens me to see my academic peers suckered by this, and I do believe many have been (as voiced aspirationally in a departmental meeting not long ago: “wouldn't it be great if our department had an Amazon chair? a DeepMind chair?”). I have a whole other rant on this subject, coming soon.

[/highered] permanent link contact

Wed, 13 Oct 2021

No more Dr Nice Guy

[I wrote this back in April, at a point when my time at Kent was drawing to a close and the overload factor had been high for a long time. My current situation at King's is radically different! Whether that will last is less clear.]

As a graph theorist might put it, my in-degree is too high. My time and (especially) head-space are scarce resources. Access to me needs to be limited. Ironically, it requires a certain amount of big-headedness to say this.

Big-headed it may be, but failing to attend to this has proven bad for my health. Every incoming request contributes to a sense of being under attack. Avoiding this is difficult when you're a nice person, working within a culture of niceness and an institution with financial incentives to be nice to (especially) students.

From now on, I have to make an effort to put up a protective veneer of non-niceness. Here I am writing some ground rules to and for myself. It will be tough to keep to them, but as I now know, it will be impossible to do my job if I don't.

[/highered] permanent link contact

Fri, 28 May 2021

Role again

Sorry for the pun. Yes, it's supposed to be about rolling the dice. I've recently(-ish) handed in my notice here at the University of Kent. In July I'll be starting a new job (the same job, but) at King's College London.

The reasons for the move are primarily personal. Of course, that doesn't mean they are unrelated to work... in academic jobs especially, work is personal.

The move is not a promotion. In terms of accumulating money and status, if anything it will probably slow me down. So, it's just as well that those are not what motivate me.

What does motivate me? In work terms, my strongest motivation is to make progress with my research. In career terms, it's to find a home, metaphorically speaking. I will leave behind at Kent many great colleagues, and an institution that I'll remain fond of on many levels. Still, ultimately I had to admit that Canterbury didn't feel like home, and neither did the institution.

It would be wrong not to regard that as a failure. Both institution and city have many home-like qualities. I have failed to turn them into an actual home. I've also failed to repay the confidence of the institution and the senior colleagues, who invested in me on the promise that I would do so. I don't feel great about all this. Being an engineer, occasionally a scientist, I suppose I shouldn't mind failing when it teaches me something. But I prefer to succeed... and although I've learned a lot, I'm not sure it amounts to enough for me to succeed next time.

What I do know is that for most of my time in the job I've been putting a cheery face on an unhappy existence. Even before the pandemic, my move to Kent had coincided with both work and life generally becoming a lot less fun and a lot more stressful. My research usually felt like it was barely moving forwards. That does likely owe partly to my incompetence, sloth and/or feebleness of constitution. However, although I was once slapped down for saying it, I'll say it again: the teaching load has been unquestionably high. That's not a function of the number of lectures per year so much as of less quantifiable factors: the huge amount of direct contact with students (often with few TAs to absorb this), the poor design of admin processes (academics are not removed from the loop to anything like the extent they could be), and a heavy reliance on coursework and the like. (There's more; I'll save the complete list for another time.) The kinds of work that give me satisfaction are not just research—they do also include giving good lectures and explaining stuff to students. But in all cases, the satisfying kinds of work have been a small part of what I do, and the ones that feel most squeezed by other things. In all this, the institution has more often felt like an adversary than an ally. I really hadn't been expecting to apply for jobs elsewhere—my tentatively planned “drastic life change” was something like “move to Whitstable”—but when the pandemic hit, my ties to the institution felt weak. The noises coming out of the central university ‘management’ left me feeling more alienated than ever. Suddenly, moving seemed like the right thing.

Will I enjoy things better at King's? Obviously I think there's a chance. It's difficult to feel certain though. While the University of Kent has been going through an especially rough patch lately, no UK university is in great shape. My impression is that the marketisation of HE has been particularly unkind to Kent—an institution born in the optimistic sixties, and firmly of a plucky-yet-small character. It's located in the corner of a wider region not short of powerful “competitors”. Historically, public policy gave institutions room to carve out their own niche, amid an environment of only gradual change. Since the nineties but especially in the last ten years, politicians have cast aside stability in favour of artificially induced competition. To understand the present, you can do worse than look back twenty years and recognise the glacial forces in action. Kent's Research Strategy from 1999 tells the story of those times: despite the university's working harder and doing better, government policy had already placed it on the side of the curve that is deemed to deserve less, in a misguided belief that induced struggle would lead to a better-functioning institution. It hasn't.

Staying with the history theme, I enjoyed reading Graham Martin's account of the University of Kent's first twenty-five years, “From Vision to Reality”. Those years included an especially high-minded first fifteen followed by an apparently strong showing in the next ten, even when university budgets were coming under a big squeeze under Thatcher. Someone still needs to write the next book, covering 1990 onwards, which I would love to read. Somehow, those thirty years went from pragmatic getting-by to a battle for survival. A (very) senior colleague from another department mentioned that when he had started as a lecturer, for undergraduate applicants Kent was “a viable second choice to Oxbridge”. Some of its sibling “plate-glass” institutions, such as York and Warwick, have consolidated that status... Kent definitely has not. I don't know when or whether a change occurred, and perhaps the memory is a little rose-tinted. (I'm curious about what happened around 1994 in the graph on page 11 of a little report I wrote, where the research indicators started going down... not clear it can be explained simply by sudden competition from ex-polytechnics. In any case the data is noisy.)

Another major factor for me has been that Canterbury, although charming, is a small place. I tend to like smallish places and was feeling pretty optimistic when I moved. In hindsight I should have foreseen that there is a world of difference between Cambridge and Canterbury. Not only is Canterbury half the size population-wise, it has less than half of the cultural activity. In fairness, that is mainly because Cambridge punches far above its population-weight on that. Canterbury is a fine place if you're a family type, and it's great for countryside, coastline and lazy weekend charm. For a thirtysomething single person who wants culture and social life, it's not the best. London is just far enough away to be an exertion. Back when concerts were a thing that happened, I lost count of the times I pencilled in my diary “maybe go to London to see X” on some weeknight, but just wasn't up for the trip. Reverse-commuting from London might be a better option, but that entails the double-whammy of London housing costs on a non-London salary. Big-city living doesn't appeal much to me in any case. (I've already moved back to Cambridge, and will become some kind of “part-time commuter” for my King's job.)

Ironically, the university itself is surely a place where its educated thirtysomethings could find community. But the idea that academic staff can expect to find not just “work” but also community in their jobs is increasingly alien. At the departmental level, we in “Computing” (I still hate that name) do very well at being a friendly and sociable bunch. But more widely in the university there are few footholds available. (To an extent, students at Kent also seem to get siloed by subject, which can't be good.) The relatively small numbers of PhD students and research staff at Kent also mean there isn't a “long tail” to the age demographic, which might organically mix together lecturer with postdoc with graduate student with undergraduate. Such an organic long tail is of course something Cambridge thrives on... in addition to a considerable institutional culture of community-building. (In Canterbury, as I guess at most “normal” institutions, institutionally provided booze does not pour forth to anywhere near the same extent.) In former times at Kent, I imagine that the Colleges would have provided something of these kinds of community, though they appear not to have done that for some decades now. In fact rumour has it that “they never worked”... but I've also seen evidence that they once worked a lot more than they do now. The closest I get to feeling part of a wider university community is in union meetings, where I am reminded that I do have peers in the wider university, but we're all exhausted by our jobs. Indeed I'm sure I could have made a better go of things socially in Canterbury if I'd had more energy to spare.

Despite all that, I am seeing a few green shoots at Kent. In the new School-in-Division structure, we have better availability of certain kinds of support staff. At the very top of the institution there is a potentially leaner and more academic-focused structure including several fresh faces. The Senate is once again (just about) a majority-academic body that is starting to rediscover its voice. The university is unfailingly keen to reassert itself as a still-“excellent” research university, even if ‘management’ knowledge of how to do so seems patchy. In Computing I am seeing (unscientifically) a small increase in the average quality of our undergraduate applicants. We may even have made headway against the chronic understaffing and duplication of teaching work that have overloaded us during the few years I've been around. That's all good.

However, in common with many institutions, I also see signs of the opposite: the still-expanding reach of an entrenched managerialism, poor judgement in treatment of staff and finances, an ever-increasing culture of top-down decision-making, the creep of corporate culture at the expense of the academic, and an increasing disconnect between ‘management’ and the academic staff who carry out the mission of the institution. Just as some ‘management’ want to get more serious about research, so the same people seem to view this simply as a way to make academics raise an ever-greater share of the funds behind their own jobs. All UK universities are subject to these or similar forces right now. My pessimistic take is that our universities are managing to preserve academic values only to the extent that they preserve academic self-governance and self-administration—that is to say, hardly at all. Even in Cambridge, where academics have both a culture of pushing back and some mechanisms for doing so, it felt to me much like the option to take up a cudgel while the artillery rains down. It's unlikely it does more than fractionally slow the institution's progress in the sector-wide race to the bottom. That helps to preserve its advantage, but doesn't make for a well-functioning institution.

So, in my new job at King's, the most reasonable expectation is an institution with a somewhat different selection of faults. I have to roll the dice and hope that what turns up is a better fit for me personally. It's also a chance to think more strategically about what I want to do with my limited time on this planet. The past few years have been a blur. I've tried to keep too many plates spinning, with the result that all of them are teetering if they're not already on the floor. My health and self-confidence have suffered too. I'd been considering Kent the only roll of the academic dice that I was likely to get, so I'm feeling fortunate to get another one. I mustn't waste it by plodding on with “more of the same”.

I will have to let go of some of my research avenues, so that the others might actually lead somewhere. I need to focus on the ones where I'm most likely to make a difference to the world over the course of a longer stretch of my career. It takes courage to adopt a longer-term view. I will need to care less about covering my backside on short-term “impact” or “track record” issues. In caring less, I will likely trade away short-term productivity for the potential longer-term... so will probably close myself off from future rolls of the dice. I therefore need to make sure that my long-term plan can actually deliver.

I will need to be more ruthless in protecting time for the things that are important, and organising my so-called (work) life a bit more selfishly around what works for me. If I do this right, I will sadly be a less generous colleague who more often does a poor job of certain less important things. This brings an emotional cost; I enjoy doing a good job of things, even the unimportant ones, and am averse to disappointing people.

Another way to see that is that I need to choose the strategically better flavour of disappointment, and get used to it. I have repeatedly let my mind and schedule become overwhelmed by admin, teaching, marking, service, and making myself available to others. That's not a statement of how generous I am, but of how limited my capacity for those things really is. Recently it has become self-defeating, and I've been disappointing myself and others in what really matters—research very much included. It's been a downward spiral that has limited my ability to function in general. The lesson seems to be that in this job you need to declare bankruptcy early. Bad feeling is guaranteed in some form or other. I need to do what is in my own interests, and be hard-nosed about that.

“Nose of steel” is not one of my attributes. And more generally, although I consider myself a natural “classic” academic, I'm still not entirely convinced that modern academia really is where my vocation lies, or should lie. I am especially resistant to shaping myself into the mould of “fundraiser–manager”—increasingly the presumed template for “science” academics, in the eyes of universities and funders and governments. Although it might make sense in bench sciences, it is a poor fit both for the work I do and for me personally.

Despite all that, the academic role is emphatically not one thing, to the point that generalisations quickly get annoying. The regrettable “winners and losers” political set-up amplifies even more the extent to which the “same job” can be radically different for different people and situations. It also means that change is possible. It seems worth one more roll.

[/highered] permanent link contact

Wed, 03 Mar 2021

Career thoughts on academia, industry and points in between

In early 2018 I was at a crossroads in my career: academia, industrial research, or somehow go it alone? I was a postdoctoral researcher at the University of Cambridge, but feeling some pressures to move on. This post is a (somewhat edited) time capsule containing some notes-to-self I made about that decision.

What happened is “history”, of course: I became an academic at the University of Kent. However, you wouldn't necessarily have predicted that from reading these notes. I'm still not wedded to the academic path; most of the questions here are still live in my mind. What I am wedded to is my personal line of research; I'm not just going to “go and work for Google” (shudder). Being a computer scientist, the options and trade-offs I face are somewhat different from those in various other disciplines.

(I should qualify: here academia means mostly “UK academia”, which has some particular problems right now... but is, sadly, not too divergent from the rest of the world.)

I have not made that much effort to edit the notes, so they are sometimes staccato, sometimes repetitive, and don't reflect where my thinking has moved on (especially from experience at Kent; in hindsight they contain a lot of Cambridge privilege). They start here.

Industrial research obviously forms a spectrum: some has more of a development flavour, whereas in some extreme cases it resembles academic research. I would want to fall near the latter end, although not necessarily maximally so.

Problems with academia. There are many. Let me start somewhere, with the following.

Government interference, managerialism. I have become jaded and frustrated by these aspects of the academic sector. They are arguably worse than corporate nonsense in industrial research, because they are so needless. Endless political “initiatives” and fad-chasing, in academia, seem less excusable than when arising from a profit motive. See also “overheads” below.

Research councils are unreliable; it's better not to depend on them. However, as an academic one has the option of using them or not; industry funding is also an option, modulo perhaps high overheads.

Unlike researchers in most sciences, I don't need big funding from research councils. My skills are in demand; perhaps I could make enough money from consulting work? My corner of CS is one where a small group, or even a focused individual, can do great work; I don't need to pay for a huge lab or huge group.

I have always felt like an outsider in the academic world. I have a habit of rejecting established research programmes, and another habit of attacking cold problems. My work meets with bewilderment fairly often. Many of the people who seem to “get it” come from industry—thinking of my blog, Twitter and industry-talk audiences. Industry conferences are interesting—they make me feel sometimes a peer, sometimes a learner, sometimes an educator. It is an educational environment, but worlds away from undergraduate computer science; food for thought.

Skill set, part one. I see myself as both a creative individual and an individual creator. Unlike many [natural] “scientists” I do not see myself as a collaborative agent in the service of some grand research programme. To stick with academia seems to mean sticking with a system where this is frowned upon or misunderstood, partly from how my kind of CS research is different from “science” “research” in the main. (Of course creative direction doesn't preclude “collaborating”, but it has to be a collaboration on the basis of shared vision, not just nuts and bolts.)

Skill set, part two. Context-switching is not my forte. I don't know for sure, but I feel I could easily be crushed by even a moderate teaching or admin load.

Textbook teaching is a turn-off. In Cambridge I already spend a lot of teaching effort on material that I don't even know, that I've never needed to know, that is not where my expertise lies, and that has dubious educational value (never mind training value). In this instance I'm thinking mostly about the tedious way in which compilers are taught here [in Cambridge] and elsewhere. That perhaps comes down to the fact that my strength is not mathematics, yet in CS, at least in Cambridge, the educational material is desperate to make things mathematical wherever it can. I care first for education rather than training. But this is neither good training nor good education for many of the people with relevant skills/interests. (It is interesting that education may be tailored to a skill/interest profile without becoming “training”—or so I contend.)

Textbook rejoinder: I should take care to remember that the textbook stuff did teach me some things. As a beginning programmer, the mechanics of programming were fine, but I couldn't come up with how to solve certain problems: how to write a game or other reactive system (using an event loop), or how to do heuristic rather than brute-force search (I struggled with my “Countdown numbers game” program; I remember marvelling at the pathfinding abilities of later Sierra SCI games); or how to write a parser that worked systematically rather than by ad-hoc rewritings (I remember wondering how Sierra AGI games did this... noticing a theme here). Only by imbibing some textbook-ish wisdom (sometimes from lectures) could I solve these problems.

Academic, non-practical teaching of practical topics: I've wasted too much of my life staring at students' handwritten code on paper not being sure whether it works or not. A very innovative academic course might avoid this using automated testing. But the cost of redeveloping such test infrastructure from scratch is prohibitive if done on a per-institution or per-academic basis... once again, justifying MOOCs (more on MOOCs below). One could argue that a lectureship would allow me to fix that, by designing the courses differently. However, no doubt my room for variation would be limited, and it all comes out of the time budget.
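To make that concrete, here is a minimal sketch of the kind of automated checking I have in mind. Everything in it is hypothetical: the submissions/ directory layout, the sort.py exercise and the check itself are invented for illustration, not taken from any real course's infrastructure.

    # Hypothetical autograder sketch: load each student's submission and run
    # the same check against it, rather than marking handwritten code on paper.
    import importlib.util
    import pathlib

    def load_submission(path):
        """Dynamically import a student's solution file as a module."""
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    def grade_all(submissions_dir, check):
        """Apply `check` (a predicate on the loaded module) to every submission."""
        results = {}
        for path in sorted(pathlib.Path(submissions_dir).glob("*/sort.py")):
            try:
                results[path.parent.name] = check(load_submission(path))
            except Exception:  # broken code counts as a failed check, not a crash
                results[path.parent.name] = False
        return results

    if __name__ == "__main__":
        # Example check: does the student's sort() handle duplicates?
        print(grade_all("submissions", lambda m: m.sort([3, 1, 3]) == [1, 3, 3]))

Even something this crude answers the “does it actually run?” question that paper marking cannot; the expensive part is writing and maintaining good checks, which is exactly the per-course infrastructure cost mentioned above.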

Teaching repetition, teaching overstretch. Lecturers produce hastily-written slide decks over and over again. They throw away their predecessors' and start over, but throw in their own fresh set of mistakes when they do so (thanks, Philip). As a supervisor I'm tired of reading hasty, sloppy, unclear lecture materials and of seeing older materials thrown away once a new lecturer starts. The set-up of departments asking lecturers to teach courses, which the lecturer then “owns”, encourages this. Might it be better if the institution owns the course? It's hard to say. Might it be better if lecturers own courses but maintain them whether or not the institution asks them to keep giving it? Seems optimistic. Continuity seems to be the important thing: not just of running the course, but in the creative vision behind it. The current system takes liberal opportunity to break continuity.

High overheads. The academic sector has high and increasing overheads on the funding it receives, not unrelated (call me cynical) to its growing managerialism. If someone in industry has an interest in funding some exploratory research with a university, paying 130% overheads (or whatever) can quickly erase their perceived value-for-money. It doesn't help that (Cambridge) institutional policies on overheads fail to distinguish big/rich companies from small/startup ones.

Availability of eager young helpers. This is where the academic world does well. You get access to some bright young things who can contribute prodigious efforts towards your cause for little or no money, because (sad but true) they're still in a mode of paying money for the privilege of learning. Still, to date I've yet to really make student projects work for my research; I've seen others do it (e.g. Hrutvik's CakeML project), but it seems better in Cambridge than elsewhere simply because strong students are in greater supply. Industrial research does well too in this regard, usually via established programmes of internships, but that works mostly with more advanced students and requires big money.

Libraries, journal subscriptions etc. Any form of going it alone, as some mix of consultant and “gentleman scientist”, would suffer from lacking these, unless I could also keep (some fraction of) a foot in an academic institution. Or maybe physical “local resident” access to an enlightened university's library is enough.

Future of academic CS, part one: does traditional academic learning have a healthy future in general? Does it have a healthy future in CS-style learning in particular? I'm being vague because I start to doubt the value of much of CS teaching—although I should preface that “in Cambridge”. In certain other institutions, it looks more vocational, which is potentially fine, although it's not really my calling and I don't like to see it masquerade as “education”. My working hypothesis is that textbook academic-style CS content can be delivered via MOOCs fairly effectively, so there's no point investing myself in this style of teaching unless I have the appetite to innovate in that space (I don't, yet).

Future of academic CS, part two: two other aspects of “CS education” are less easily dispensed with. Firstly, practical skills; they take practice. Secondly, deeper and longer-term perspectives, which in theory are what academics are good at. These are the sorts of things that (I hypothesise) make my writings and talks appealing to audiences like Strange Loop, Hacker News, etc. These people mostly have experience under their belt and enjoy material that helps widen their viewpoint. It would be hard to teach undergraduates, meaningfully, the sort of content that I write in those articles. This could all be an artifact of how we have to teach CS “basics” at degree level. Perhaps (I'm guessing) in other subjects one can deliver a more educational degree because students know the basics and are ready to think about the deeper things.

Many things I wouldn't miss if I left academia. Two are constant pressure to overextend myself, and constant dissatisfaction from doing a mediocre job. (These have “Matt Welsh resonance”.) Another is the expectation of poor work/life balance. I'm okay with the Collegiate Cambridge model of building your life around your work, i.e. deliberately eroding any separation. In fact I think that dedicated creative work benefits from this. But it doesn't mean that one should be trying to work unhealthy hours. Creativity and clear thought don't benefit from that. Modern academic careers are so pressured that it seems hard to avoid this.

The PhD mill, and my moral objection to it: there's a sort of expectation that as an academic I would acquire funding and “acquire” people, typically PhD students (and worse, RAs) to do “my work” (as opposed to just “their work, that I supervise”). Overall I'm not sure I can get behind this modus operandi. As a PhD student I did my own work. That only created trouble for me later, since it's not the normal thing, but still I feel no need to apologise for doing it. Meanwhile, fetishising lineage in research is part of what turns mostly-spent research programmes into self-perpetuating monsters, and promotes monoculture. Even the word “mentorship” makes me feel cynical.

There are some good things about academia. One reason for wanting a steady academic job has been stated to me as a “cushion for the lean years”. But there are other ways to build a cushion; how much cushion is really necessary? Do I expect any lean years? One should expect some periods that are lean in terms of grants/funding. But if I were a consultant, should I expect years that are lean in terms of clients? One would just have to put up with less interesting work, presumably.

The state within a state. Academia offers institutional support. Sometimes, purely public services (thinking public libraries) and/or wider enterprise (thinking co-working spaces etc) provide similar support. Is that enough? An intellectually stimulating environment is something academia can provide, but probably only the top institutions do really well.

R-e-s-p-e-c-t. Respectability of the academic path is another factor; but I reject that, in principle at least (though I'm not completely immune).

Growing a group: the ability to build a group around my agenda is a potentially good thing about academia, despite the aforementioned distaste. A small group could perhaps be manageable without drawing too much time away from activity that is nominally research. But is having 2–3 PhD students, plus teaching and admin, really productive and/or enjoyable relative to what I could do by myself (say, on research time derived from consulting income)?

Scaling to more than one person: academia lets one grow small and even medium-sized teams, although not without also becoming something of a manager. A small team is still many times better than a team of one. There is also some built-in direction: a PhD supervisor has some leadership-style influence. Of course I instinctively dislike that the moment it becomes “control”. In other contexts the same dynamic might conceivably fly (or not) for my work... I can think only of open-source as a team-building tactic. Getting an equivalent extent of team-building that way seems hard. Open source has its own problems: even fewer funding avenues, potential for pulling in different directions (but this happens in academia too), project politics, unreliable (and/or physically remote) contributors, unpredictable timeframes, mob rule of online forums, etc.

What do I think about US-style CS degrees and academia? They admit some amount of liberal arts-style breadth, and take longer to teach material [than in Cambridge], including (sometimes) proper practical classes. But their research story is probably still infected by many of the things I don't like: mathematical orthodoxy, more generally the tyranny of many incumbent research programmes that I don't believe in, the gameability of modern research culture, ditto for career incentives, and the faddish government/funding initiatives (is that less bad there than here? unclear).

Some more points for going it alone. I could pursue long-term research in perhaps a more focused, lower-stress working environment—but perhaps lonely and isolating if it's only me there... ideally want a balance. I could be free to tick some personal “living as I choose” boxes in doing so, regarding lifestyle, physical environment and working environment. After years in West Cambridge on a woefully mis-designed or undesigned site, I miss what makes the centre of Cambridge such a good space: space-efficient small-town planning. The University is moving further from this model, partly for understandable reasons, yet its managerial rather than academic leadership means it is failing to develop an acceptable alternative. I like the idea of working somewhere more rural, with easier access to nature. I could vote with my feet in favour of economic localism and other values that large organisations are painfully slow to catch on to.

Reading Wikipedia's page on the “rat race”, I find it fits academia well. I am reminded that although the academic world attracts a formidable density of very bright people, bright people need not be far-sighted or big-thinking people. In fact I suspect the correlation between these is fairly weak. We shouldn't suppose that adherence to “the system” is a consequence of intelligent analysis and rational optimisation; bright people can be remarkably strongly bound to social norms and mores, even when they have the intellectual capacity to question them.

An institution in mine own image? I am somewhat inspired by the Recurse Center. And I am a believer, in principle, in “if you don't see the institution you want, create it”. Creating “my own institution” really just means doing things in ways that work for me but also in a way that might provide “a home for other people”, i.e. to fulfil the moral duty of an elder. This has some appeal. If I did create a space for others, it would be intentionally small and definitely not mainstream, so it would be fine if it didn't appeal to many people. It would mainly be about enabling people to do research, as well as enabling myself; but maybe it'd have an educational angle too. I am a fan of Cambridge-esque college-style residential establishments, as a way both to build communities and to limit costs (but this may not fly without big/expensive perks, and needs updating anyway). I'd be pleased if such an institution could contribute to a low-wage-economy area (which would keep property costs down; thinking the antithesis of San Francisco) as long as it was not too far from civilisation. My educational interests are more in substantial life-long learning than in an undergraduate hothouse or “stamping machine”. Ditto research. How could it work?

[/highered] permanent link contact

Wed, 11 Mar 2020

Fund institutions, not projects

[This post follows a previous post discussing changes to UK government research funding, which was itself a follow-up to my earlier “Postdoc myths” piece.]

In my last post I finished by mentioning Alan Kay's favoured dictum that we should “fund people not projects”, and that this has reached the attention of Dominic Cummings in his plans to create an ARPA-like research agency for the UK. In fact the dictum is itself borrowed from J.C.R. Licklider, an early and influential divisional head at ARPA, widely credited as a progenitor of the Internet. I also noted that the point of this dictum is easily misunderstood. Here I'll discuss how this is so, and what I think is a better way to capture its intention in the case of government-funded university-based research. In short: “fund institutions, not projects”.

Consider this well-meaning article which claims to be advocating such an approach (using the dictum in the title of the piece!). It totally misunderstands the idea. It seems to think it's about the question of which ‘faculty members’ should receive the ‘grant money’. Not coincidentally, the post's ideas are feeble—stuck in the paradigm of seeking relatively centralised ways to assess the merits of individuals. Tweaking these criteria is not what Kay or Licklider were talking about. Rather, they were critiquing the very notion of project grants, and consequently the very idea of nominated “leaders” following pre-approved programmes of work directing a “team” underneath. “Funding people” does not mean “funding the empire of professor X”, via grants naming X as PI! Even the article's mooted “fund everybody” assumes a fixed prior notion of who is eligible—“faculty” in American lingo. This inherently fails to address the postdoc issues I discussed in my previous posts. The very notion of “postdoc on a project” is antithetical to Kay's (or Lick's) suggestion. For them it is simply the wrong basis on which to pay people to do research (and remember that a postdoc is by definition an employed, qualified researcher—not a trainee).

My re-statement of the idea, focused on universities rather than (as Kay tends to) industrial labs, is that we should fund institutions, not projects. In other words, devolve the decision-making: universities can hire people “on merit” as they usually aspire to doing, but without a preordained project in mind. This answers a common rejoinder, of “who decides who ‘gets funded’?”. The short answer is: institutions do. They are used to making such decisions: how do you decide which postdoc to hire, or which lecturer [a.k.a. Assistant Professor]? Even in the postdoc case, we like to think that research merit is a major factor. So our answer remains mostly the same, but project-specific criteria are explicitly removed from hiring, and project-specific direction is explicitly removed from the job that even a relatively junior (but post-PhD) person is hired to do. “Fit to the institution” is still a valid criterion of course. Let the institutions attract people who want to make a career there. If the ongoing projects are any good, they'll contribute to them; otherwise, or additionally, they'll come up with their own, and more generally contribute to the “problem-finding”, whose importance Kay also often speaks of. Problem-finding is ruled out if you employ people on preordained problems.

This brings me to my next point: it is far better to spread funds out among institutions, and devolve selection, than to run relatively centralised selection exercises like fellowship schemes. The “fund people” line often encounters an attempted rejoinder, amounting to “fellowships exist”. Some people ask “isn't that funding people? And we already ‘do it’, so maybe we just need to publicise fellowships more?”. That is privilege talking. Of course, research council-funded fellowships do exist, and yes, they are “funding people”. But they are the exception not the norm, and are set up to be so. They are the “prestige case”, and are highly competitive. (And they are, anyway, awarded on the basis of a project proposal!) The vast majority of money paying for the employment of early-career researchers is not funding them on a fellowship basis; it's on someone else's grant, meaning a project someone else proposed. The extreme competition for fellowships—a phenomenon caused by policy, not nature, as I covered in previous posts—means only fellowship applications that are “fully baked” (to borrow the words of Martin Sadler from the aforementioned NCSC RIs' conference) have a chance of being funded. Only those applicants who have received substantial patronage and/or prior funding are likely to have the resources to produce a fellowship proposal that both is and appears fully baked, and get it through the narrow review funnel. The effect is inherently conservative, and again antithetical to the idea that “funding people” is how research at large is carried out.

(More generally, people are often oblivious to their privilege. The people who speak most loudly in favour of fellowships tend to be the people who've received them. That's good for them, and very often these people are great at what they do. But as is often the case with privilege, many are slow to recognise how structural factors have acted in their favour. Sadler's point was that inevitably, polish and patronage become decisive elements in many cases. The way the money is split conspires to ensure that however good the pool of eligible researchers, only a slim fraction will be funded in this manner.)

A slightly more subtle phenomenon is that under a system of funding institutions, many more people will “get funded” in their own right since it inherently involves spreading the money out more widely, building a much wider and flatter structure rather than a “fat pyramid”. (That is rather assuming institutions don't find new, internal ways to subjugate people to projects; but I don't believe our universities have yet become so unenlightened that they would do so.) The goal is not to fund “a team under lead researcher X”; it's to fund more potentially-lead researchers and fewer subordinate ones. I say “potential” because the choice of whether to lead or become a non-leading collaborative partner rests with the researcher.

Fellowships' extreme selection practices, like long proposals, postal review and panels, are far less useful in such a context. Similarly, once institutions are free to hire people as they usually do, by job application—but with more such jobs!—we eliminate a certain fraction of the (hugely effortful) grant applications made by academics, since more work will be achievable with the institution's (increased) block funding. There is nothing infeasible about this; it is exactly the way UK university research funding worked until the 1970s. The total number of research-active roles may well work out about the same; that's an orthogonal issue, in that supposing we hold the budget fixed, the number of roles could stay exactly the same or could change, as could the salary distribution. Even if the staffing level goes down (i.e. average pay goes up!), I'm confident that the effective research capacity would be much greater, since any shrinkage would be offset by eliminated costs: grant application effort, but also the wastage induced by postdoc-style person/project “compromises”, projectwise fragmentation, personnel churn and personal upheaval (“move to another city”) that I've written about previously.

Note also that funding people and institutions in this way does not mean “make everybody permanent”. That misunderstanding arises from the same myth I wrote about earlier: the opposite of “postdoc” really is not “permanent position”. It's potentially fine for early-career research appointments to be fixed-term—if the term is long enough and if the process for renewal or progression is sufficiently lightweight (i.e. definitely not “9 months' funding left; start applying for fellowships!”). Five years seems a sensible minimum for undertaking serious work while living an episode of one's life... and not coincidentally, is what established early-career researchers used to be offered in Cambridge. Going further, in fact, there is an argument that late-career appointments in research roles should also remain conditional on actually being research-productive. An oft-noted flexibility in the current system is that institutions can move academics “sideways”, into teaching and/or admin, when they're no longer research-productive. Increasing institution-centric funding would not diminish that option; it can only increase it, since greater funds would be pooled at institution level.

One more objection that might arise is: are institutions wise enough to spend this money well? My answer is “yes, for now” and again it's because the decision-making is inevitably devolved from the centre. Although many of our universities are run appallingly badly by central administration, at the departmental level academic merit often is still recognised and does still count for something. Of course our means of assessing this are not perfect, and I get frustrated when colleagues resort to “counting papers” rather than weighing contributions. Patronage is sometimes a factor too. But at least in my limited experience, most colleagues still look for mostly the right things.

Finally, it's interesting that Cummings takes inspiration from high-profile “breakthroughs” such as the moon landings, the Internet, and no doubt other things like human genomics. I'd like to sound a note of scepticism that much of the research we really want is going to take this form. In an age of technological plenty, it is wrong to assume that what we “should” work on, in the sense of research that will improve people's lives, takes the form of identifiable “breakthroughs”—and certainly not ones in “new areas” pre-selected by government, whether they be quantum computing, “AI”, or the next fixation of government technocrats. The disconnect between apparent technical progress and improving ordinary people's lives has long been present. (On the subject of moon landings, Gil Scott-Heron's “Whitey on the Moon” comes to mind.) But this seems destined to become even more pronounced. While in the biomedical technologies, true life-improving “breakthroughs” do seem more plausible, I still have an overarching feeling of scepticism—perhaps traceable to Ivan Illich's critique of late-C.20th medicine as primarily enabling survival in an unhealthy society. In general we can learn much from the writings of Illich, E.F. Schumacher and others who have questioned the axioms of “development” and its economics. I'm not a trained philosopher or economist, so if you know other work either in the spirit of these, or critiquing them, I'd love to hear your recommendations. In my actual area of training, I've already been developing my case that advances in software are not clearly helping humanity... but I'll save that topic for another time.

[/highered] permanent link contact

Wed, 26 Feb 2020

Postdoc follow-ups

[I'm on strike again at the moment, just as when I wrote my last higher-ed piece, to which this is a follow-up.]

My last higher-ed piece, about postdoc myths was read rather more widely than I expected. (Thanks for reading!) That has left me with a few things to clear up, and a few follow-up thoughts which I didn't get on to last time.

Firstly, let me qualify: my take on postdoccing is more than a little UK-centric, and certainly doesn't generalise in all possible directions. However, I do believe it generalises to many places outside the UK, perhaps in non-obvious ways. The most contentious question of generality (at least in the Hacker News discussion) was whether postdocs “formally exist”. I gathered that many US institutions offer roles like “Postdoctoral Scholar”, for example. But my point was more about how the regulations of institutions and of funders haven't adapted. Job titles are at best a weak indicator of this, and to see jobs advertised as “postdoctoral X” is not enough to infer that there is any recognised status of “postdoc” in the institution or the wider academy, beyond “paid lackey”. Even in the UK, we see jobs advertised, including at the University of Cambridge, with titles like “Postdoctoral Research Associate”. That doesn't mean the institution has any official position of “postdoctoral” anything; it doesn't. The word is simply added for illustration; it is formally meaningless. Such employees' academic standing has been more accurately summarised as “people who do not exist” (to borrow a phrase from Anthony Edwards's remarks on the history of such positions at Cambridge). The high-level point is that institutions' and funders' processes are not designed around present career structures—where one might spend an unbounded number of years as a qualified researcher of considerable independent potential but not holding a “full” “academic” “position”, however that might be recognisable locally. Advertised job titles are not a good guide to reality.

For the same reason, it's wrong to suppose what's happening is “higher supply leading to lower price”. I've been talking about a degradation of the offering—early-career research jobs being offered on shorter contracts with fewer rights and less institutional status—and it's appealing to suppose this degradation is the result of “universities extracting more value” from the labour pool. But that is factually wrong. Neither pay nor status is re-negotiated on the basis of changing supply. Pay scales are hard to change; university regulations are even harder. To redefine positions at lower pay or lower status is a political act; someone has to pull the trigger on it. That isn't what has happened. Equally, in those cases where we would expect upward pressure we also don't see upward changes: universities and academics often find it difficult to hire postdocs with certain skills they want, but that rarely creates any action to improve pay and status (beyond a regulation-limited amount of salary-bumping), because the relevant political change is mostly beyond the means of the academics who are hiring. A key example is that many institutions face a chronic difficulty in hiring research software engineers. As far as I know, this hasn't driven many universities to reform their regulations. Instead, they have shown a boundless capacity simply to limp along with the problem uncorrected. For the same reason, there's no reason to believe downward pressure actually has much effect in cases of oversupply.

So if it is not a case of rational decision-making by universities in the face of increased supply, what is causing the body of underpaid under-statused researchers to get larger? In the UK at least, the answer is simple: it's the government, stupid. What we've seen is that the relative occupancy of pre-existing pay and status levels has been changing. That change arises not from the dynamic between universities and the labour market, but from that between universities and government. It's not supply and demand; it's poorly chosen public policy, formulated by ministers and civil servants who (as far as I can tell) don't understand research. What does change far more easily than pay-scales and regulations is budgets—what government controls. Hence the degradation is arising indirectly, not via the labour-market mechanism but by external changes to distribution of money between streams, and hence of people among the distinct scales and regulations that those pots feed. In short: for a given level of spending, we are relatively funding more postdocs and relatively fewer “full” academic staff. Note, as I argued last time, it would be wrong to equate the latter with “permanent” positions (or even with “teaching” positions). Note also, as I'll return to, the problem is emphatically not one of “not enough money”.

Career-wise, what were once stopgap arrangements—“spend a couple of years on this contract before a ‘proper’ academic role comes around”—have, creepingly, become the norm. Longstanding regulations and arrangements for “contract research staff” are applied increasingly far beyond their originally conceived uses, to an ever-larger and more ill-fitting body of staff, and over longer durations for each individual. But from the universities' point of view this is a case of boiling frogs, not rational agents. Meanwhile, I don't believe government is doing this deliberately; they're just asleep at the wheel. In fact, they don't realise they have the wheel. The (limited) evidence I've seen, such as government's response to the damning 2002 report of the House of Commons Science & Technology Committee and more recently then-minister Chris Skidmore's confused remarks on the subject (Hansard; tweet with video), is that government imagines it has no role in this, and it's all the universities' doing. But universities' hands are largely tied by how money is delivered. Of course, top-end institutions including Cambridge are culpable for their complacency in failing to challenge government.

Those two streams are “core funding” and “project funding”, which in the UK are known as the “dual support” system. I have a draft of a working paper on this subject, which I wrote as an assignment for a module in my PGCHE. I am hoping to expand it into something publishable; comments are very welcome, but be aware it is very much a draft at present. It is, necessarily, very UK-specific. It argues that the changes have come about indirectly, as unintended consequences of well-intentioned (but misguided) policies going back at least as far as 1981 and the attempt to “protect science” from wider public spending cuts. Later changes, to do with funding capital costs (“sustainability”) and with fairness and transparency of funding allocation (“selectivity”) have exacerbated the problem. The foul icing on the horrid cake is a lurking confounding variable—how much core funding is de facto being used to match-fund project grants that are under-costed.

This latter effect is subtle, and is the aspect most in need of further research. Although the headline data is clear that the block/project split has flipped from 60:40 to 40:60 between 1978 and 2018, the reality is almost certainly more drastic than that because more of the block grant is used as “match” or “top-up” support for the increasing volume of projects that are funded at below full economic cost. My lone data point so far (detailed in the draft article) is that in Cambridge, nearly all of the block research funding is being spent on subsidising project funding, i.e. on allowing it to continue being costed below the full economic rate. That's something my future research must dig into, along with a cohort-tracking study of the pre-92 universities to separate out the effects of debinarification in the early 1990s. To make clear statements about career progression, it'll also be necessary to make corrections for rank inflation: early indications are that it's now easier to get to [full] Professor, but no easier to get to lecturer [a.k.a. Assistant Professor], with consequences for how spending is distributed. Figuring out how much this generalises beyond Cambridge is another goal; my article does include some study of Kent, but so far it's less conclusive. If anyone knows another UK pre-92 university that publishes (or makes available to researchers) good-quality data about its staffing and spending over the past decades, please let me know.
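To see why the headline figures understate the shift, a back-of-envelope calculation helps. The sketch below is illustrative only: the function and the match fractions are invented for the example, and only the nominal 40:60 split comes from the data above.

    # Illustrative arithmetic: how match-funding shifts the effective
    # block/project split. If a fraction of the nominal block grant is
    # de facto subsidising under-costed projects, count it as project money.
    def effective_split(block, project, match_fraction):
        """Return the (block%, project%) split after diverting match funding."""
        diverted = block * match_fraction
        total = block + project
        return (100 * (block - diverted) / total,
                100 * (project + diverted) / total)

    print(effective_split(40, 60, 0.5))   # half the block diverted -> (20.0, 80.0)
    print(effective_split(40, 60, 0.95))  # nearly all diverted -> (2.0, 98.0)

On the (hypothetical) high end, which the Cambridge data point suggests is not far-fetched, a nominal 40:60 split behaves more like 2:98.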

The final thing to remember is that real-terms government spending on research has gone up considerably. Therefore, it's doubly unforgivable that career structures are in such a mess. When people like Sir Leszek Borysiewicz say “we don't have money to create better positions”, they're either ignorant or lying. The scarcity is entirely artificial, created by how the increased spending has gone disproportionately on project funding. This is both directly harmful (projects in themselves are a poor basis both for research outcomes and for careers), and indirectly harmful (projects, being under-costed, soak up additional block funding).

To sound a note of optimism, there are multiple ongoing shake-ups of UK government research funding. One is the mooted creation of an ARPA-like agency. Another is the “rebalancing to the regions” which suggests a brake on various institutionwise preferential attachment effects (discussed in my previous post) that have harmed career structures under the project-dominated funding regime. Both of these shake-ups are being driven by Dominic Cummings—a dislikeable figure to put it mildly, but one whose influence may yet do good in this space. At the recent Research Institutes' Conference organised by the National Cyber Security Centre, the panel session involved three videos, one of which featured Cummings quoting Alan Kay's dictum that we should “fund people not projects”. I think Kay is exactly right, but it's interesting how often his words are misunderstood, and unclear whether Cummings has understood them. In a later post I'll continue this discussion with some notes on how this can go wrong.

[/highered] permanent link contact

Mon, 02 Dec 2019

Postdoc myths

[I'm on strike at the moment, largely in solidarity with my more precariously employed colleagues, whether hourly-paid or fixed-term or never-endingly “at risk of redundancy”. So it seemed a good time finally to finish and publish this post. I wrote most of it during the final couple of years of my seven as a postdoc, which ended in 2018.]

Lots of things are said, written and believed about postdoctoral researchers that are simply not true. This matters because real policies, initiatives, attitudes and actions are shaped by what people believe—true or otherwise. In this post, I'll tackle a depressingly long list of such myths. (I'm trying to keep this post snappy, but the flip side is that I have left out examples in many cases. For some, I also have hard data. So let me know if you'd like more specifics on anything.)

Myth: postdocs formally exist. In almost all universities I know, formally there is no such thing as a postdoc. In research councils' view of the world, it's the same: there are no postdocs, only “Research Assistants” and “academic staff (lecturer or equivalent)”. This matters because when practice on the ground no longer matches the ontologies on paper, systems become prone to poor outcomes and to abuse.

Myth: postdocs are homogeneous. Generalisations and stereotypes abound, both in writing about postdocs and in commonly held beliefs. This is unfortunate because postdocs are a highly heterogeneous bunch. Lumping them all together encourages wrong stereotypes. When these stereotypes hold sway with funders, institutions and departments, misguided policies result.

Myth: postdocs are all aspiring academics (lecturers). Clearly, some are. But there are many skill sets required in a healthy research environment. If you agree with “required”, then it follows that there should be a career path for all of them. Although there should be, currently there isn't. Once upon a time, the University of Cambridge did embrace this idea and had a variety of offices which had real status within the university, with titles including Senior Technical Officer and Computer Officer, as well as the research-oriented Senior Assistant in Research and Assistant Director of Research. These practices are mostly forgotten, and these posts replaced with lower-status unestablished positions: on the academic side, “Research Associate” is increasingly a catch-all, while on the other, technical officers are far fewer and computer officers are no longer the “peers of academics” that they once were.

Myth: postdoctoral work is “study” or “training”. It isn't; it's actually doing the actual research. I had to grit my teeth when applying for some funding that mentioned “postdoctoral students” in its particulars. Meanwhile, at a publication venue I plan to submit to, conflict-of-interest rules mentioning “advisor/advisee” seem to think there is a “postdoc” version of that. There isn't. At any career stage, we have people we turn to for advice, and people we work with. But by definition, someone with a PhD is a qualified researcher, not a student.

Myth: postdocs are “intermediate” between graduate students and more senior positions like research fellows and academics. The phrase “postdocs and PhD students” abounds. But in a university with many postdocs, the population of Research Associates is on average older and has more research experience than the holders of many flavours of early-career research fellowship. That's not surprising when the latter positions come with time limits (e.g. years since PhD) whereas the former don't. People can be postdoccing well into their thirties, forties and sometimes beyond. The “overgrown graduate students” caricature is wrong, disrespectful and leads to wrong-headed policies. (For a game of bingo, try this New York Times article from a few years ago.) According to University of Cambridge data current on 30th November 2017, of university-payrolled Research Associates and similar, over 40% had more than three years' service in the role, and around 10% of the total had over ten years of service. These numbers are underestimates of post-PhD research experience because they exclude postdoctoral experience at other institutions, and because the “and similar” positions include some of the aforementioned research fellowships which lower the average.

Myth: postdocs are on a journey to “research independence” (but are not there yet). This line is popular with funders, who don't seem to realise that their cause and effect are backwards. “Independence” is in practice a statement of one's funding status, not one's stage of personal development. As the mix of funding, in the UK and elsewhere, has moved further and further in favour of project-based grants, and away from institutional funding, hey presto! We have fewer “independent” researchers—on paper, but not in reality. In the UK, suppressing “independent” status is also a useful tactic for gaming the REF, as long as postdocs always co-author with their PIs. (If they don't, their publications are curiously lost into the REF-ether.) Again, the “paper ontologies” are a poor caricature of reality.

Myth: the opposite of “postdoc position” is “permanent position”. This comes up time and time again, but is completely false, at least in the UK. In all institutions I know of, academics (i.e. “lecturers or equivalent”, to borrow an RCUK phrase) may be appointed on limited tenure. They remain first-class citizens for the purposes I've been describing. Yet the justification for depriving postdocs of any given right or privilege is usually “they're not permanent” (a line often pulled out on-the-hoof, rather than reflecting any real rule). In fact, many postdocs are permanent, legally speaking, thanks to the 1999 EU Directive on Fixed-Term Work. Even those who aren't have a legal right not to be treated less favourably. Sneaky tricks that skirt the edges of these laws, or infringe them outright, are among the many ruses used by universities to keep their research staff dangling.

Myth: postdocs are itinerant, unlikely to be at the University in a few years' time, and/or are otherwise “not committed” to the university. To the extent that this is true, it is circular: funders' policies, and institutions' interpretations of them, base themselves on the assumption that postdocs will move on, and proceed to help make that assumption true. In Cambridge earlier this year, a certain fly-sheet had the temerity to claim that Research Associates did not deserve representation because they had not shown “commitment” to the institution (and that the university was not at all complicit in the underlying funding changes that had precipitated the growth in postdoc numbers; no, certainly not). An academic need not be “committed” to an institution beyond their contractual notice period. But a postdoc who spends years at an institution that can only offer them dribs and drabs of extension is showing a very strong commitment to that institution indeed.

Myth: postdocs are provided for by their PIs, so do not need representation, recognition or autonomy. There is a widespread strange belief that a postdoc's PI will “speak for them” and reliably look out for their interests. Again, this came up in certain governance debates in Cambridge. It is obviously false; a PI is only ever a partial ally, and can just as easily be an adversary. Yet these debates threw up bogus arguments hilariously reminiscent of those deployed against female suffrage—exclaiming in outrage, “they will just vote the same way as their husband!” and in another breath, equally outraged, “they might vote a different way than their husband!”. (Yes, this was a real argument of the day.)

Myth: increase in postdoc numbers is somehow a “natural” phenomenon. It's easy to encounter the belief that some sort of bumper PhD harvest, no doubt caused by a mixture of climate change and modern agriculture, has led to there being “too many people chasing too few positions”, and that is why so many people are employed on exploitative terms. This is an appealing folk theory, but it simply does not explain what is happening. Positions are not created by nature; they are created by money, spent according to policies. Suppose there are many qualified candidates competing for a fixed number of jobs. Of course, the more candidates there are, the more competition for the jobs. But it doesn't follow that the jobs' terms will become increasingly exploitative, such as being offered on shorter terms, with less pay, lower status and fewer benefits. That requires a separate decision to degrade the offering. That is exactly what's happened in early-career research, by turning the knob in favour of creating only these lesser positions. Why so? It's the same story as in the wider economy these past decades: maximising the exploitable workforce, concentrating capital (research funds) among relatively few incumbents. Anyone who tries to explain it purely as supply and demand, or even “nature”, is either ignorant or is trying to sell you something. (Perhaps they're Liz Elvidge, “Head of Postdoc Development” at Imperial College London, who has a book to sell? I haven't read it, but based on her NPM 2017 performance, I assume it's peddling nonsense like this.)

Myth: postdocs are pro-postdoc. Nobody in their right mind actually wants to be a postdoc per se, with all that that implies. People mostly become postdocs because they want to do research. If there were fewer postdoc positions but overall a better path for research careers, few current postdocs would oppose it.

Myth: “nothing can be done”, or “there isn't enough money”. This is the academic version of Theresa May's “magic money tree” line, and is equally nonsensical. The issue here is not the amount of money, but how the money is spent. Policy knobs are very obviously available, but are being turned only in the wrong directions. This is a failure at the top, since that's where the knobs are. All this is outwith the control of research councils, who (despite their many gaffes) just allocate the budget they're given in the ways they know how. The blame lies with central government. In 2002, the House of Commons Science & Technology Committee produced an astonishing report which is entirely damning of the status quo and skewers the problems of short-term research positions. Government's response was a case of fingers-in-ears. Sadly the report dates itself by its naive optimism that the EU Directive I mentioned above would help; we now know that it can be bypassed. In the 17 years since, we've had no action, beyond the creation of another worthless pile of paper.

Myth: postdocs just want to stay in the same job or city forever, but that's clearly unsustainable. It's particularly easy to encounter this belief in Cambridge. But the number of postdocs in Cambridge is a function of money, not of wishes. What's really unsustainable is piling all the research money into a small number of ever-fatter institutions, on terms that permit only junior and short-term appointments. These institutions gain a large workforce skewed towards the relatively young and exploitable. Later these workers face a vexed choice: either be spat out to make room for the next lot of eager young things, or (if you're lucky) project-hop to another exploitative job in the same city or institution. In contrast with typical PhD-age life stages, postdocs are generally old enough to have put down roots, or to want to. Special as Cambridge is, it is nonsense to credit it with what is primarily a desire for stability in one's personal life. Funnelling the bulk of research money to a select few institutions, and primarily on a project basis, is the underlying mistake.

Myth: institutions are doing what they can to support postdocs. In fact the big institutions are heavily invested in suppressing postdocs' career development. Our “leading” universities are the prime foot-draggers and non-movers in this game, and it's not hard to see why: their senior academics profit from cheap highly-skilled labour serving their research empires. Government will only change tack if academics speak to it, but those with the controlling voice have a vested interest. Of course, these already-established research agendas are not necessarily the ones most deserving of support. And even worse, project-based funding bakes in huge inefficiencies which harm outcomes.

Myth: increase in postdoc numbers is essential to creating an agile, global workforce of the future. This sort of neoliberal nonsense is popular among administrators buying the usual wrong assumptions of elasticity and fungibility of people—in short, treating people like a commodity. But on the ground, it's clear that this is a poor model of how research works. Thinly sliced short-term project-based funding not only creates poor-quality jobs, making for unhappy people, but also gives poor research outcomes. Despite the (mythical) “bumper PhD harvest”, we academics tend to bemoan how hard it is to find “a good postdoc” to work on our Highly Specific Project X, starting at Highly Specific Start Date D. With those constraints, that's hardly surprising. So begin the compromises. Many postdoc appointments are major compromises on both sides. Sometimes it even works out. But the failure modes are many: people don't fit the project or the PI; they become unhappy; they jump ship or jump career. Then someone new gets hired on an even shorter contract! No doubt the departing postdoc also spent a good chunk of their work time applying for other stuff. As a result of this churn, much time is lost and much research goes unfinished or unwritten-up; this is the “academic backlog” mentioned at length by Dorothy Bishop here and in this talk. Many small grants also push absurdly detailed administrative work onto PIs. All in all, it's an insane way to spend our research budgets.

Given all these problems, what should we be doing? I believe we need a substantial transfer back to core funding and away from research councils. In the UK, a little-known story of the last 40 years has been a series of transfers from core funding to project grants. Where forty years ago it was 60:40 in core funding's favour, now the balance is reversed, and the “match funding” demands of 80% FEC make the effective core funding level far lower. My investigations suggest that this has not been driven by policy (with one notable exception), so much as it has occurred through accidental drift (over roughly three further distinct periods). To fully reverse this, and provide true core funding, we must eliminate the de facto use of core funds as match funding for projects. Projects must be costed at 100% FEC, even if that reduces the fundable volume. In any case, that fundable volume should be reduced! The balance must be made up by increased core funds that institutions can use to hire and retain researchers on better terms, not simply to top up their projects. I'll write more about these issues in a future post.
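
To make the match-funding arithmetic concrete, here is a minimal sketch with illustrative numbers. The 40:60 split and the 80% FEC figure are from above; the 100-unit budget is an assumption purely for illustration.

  # Illustrative only: how 80%-FEC project grants eat into the block grant.
  total_budget   = 100.0
  block_grant    = 0.40 * total_budget   # core funding, per today's 40:60 split
  project_grants = 0.60 * total_budget   # project funding, paid at 80% of FEC

  # A grant paying 80% of full economic cost leaves the institution to find
  # the other 20% of FEC itself, i.e. 0.2/0.8 = 25% of the grant's face value,
  # typically out of the block grant ("match funding").
  fec_fraction_paid = 0.80
  match = project_grants * (1 - fec_fraction_paid) / fec_fraction_paid  # = 15.0

  effective_core = block_grant - match                                  # = 25.0
  print(f"effective core:project balance is {effective_core}:{total_budget - effective_core}")

On these assumed numbers, the nominal 40:60 balance is really 25:75 once match funding is netted off, which is what I mean by the effective core funding level being far lower.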

I fear another “accident” is brewing. Among UK academics disgruntled by REF, it's increasingly popular to say that we should just allocate the whole budget to research councils. No matter how flawed the REF, wholesale transfer to research councils would be a terrible mistake. REF has problems, but the dominance of project grants creates far more. It is the over-use of projects, with their thin slicing, and not limited overall spending, that has weakened career structures. In any case, academic time spent writing and reviewing low-success-rate grant proposals dwarfs that spent on REF. The only way to provide sensible careers is an institution-oriented and actively redistributive model making proper use of core funding. It does follow that REF or its successor must leave behind its current rhetoric on “excellence” and emphasis on scores (metrics), since these also exacerbate preferential attachment. Instead it must actively spread out core research funds according to a broad assessment of an institution's capacity for quality research, including for potential and planned growth. Research funding should be a question of steady nurture and wise investment, not a laissez-faire market for powerful rentiers to battle over. The UK is fortunate, in that there is (for now) no shortage of research talent wanting to work here. That talent will only flourish if we can provide not just jobs, but careers.

[/highered] permanent link contact

Wed, 15 May 2019

Research careers in UK Universities: questions few are asking, part one

(Those who follow me on Twitter may have a sense of déjà vu about this, but I thought it worth elevating to blog level. I wrote most of it over a year ago... must get better at timely blogging.)

Back in September 2017 I attended the UK's inaugural National Postdoc Meeting organised by the Postdocs of Cambridge (PdOC) Society. We were fortunate to receive a flying visit from Borys, a.k.a. Professor Leszek Borysiewicz, at that time the Vice-Chancellor of the University of Cambridge. This was fortunate in that it is always illuminating to hear the thoughts of those in a position of academic leadership. It was also unfortunate, in that what he had to say showed a distinct lack of leadership. Alarm bells sounded when he piped up from the floor during the questions in the previous session, which was about career development opportunities. His contribution struck me as astonishingly ill-considered; I would later tweet-summarise his most clueless moments, of which the first two were during this question session. Freed from character limits, here the wording is elaborated slightly.

  1. “Postdocs shouldn't be allowed to hold grants because then the institution must commit to employ them for the duration; that's too much commitment.” (tweet)
  2. “This cannot be solved by making grants more portable. This would do harm by removing incentives from smaller Universities—whose grants could be poached by competitors.” (tweet)

What was most alarming was the self-assurance with which he conveyed these completely bogus arguments. He continued the theme during his own slot, with another clanger.

  3. “We can't avoid putting research staff on short-term contracts, because we lack the funds to create more academic positions.” (tweet)

It's difficult to overstate how wrong all this is, and doubly so as an outlook for a supposed “leader”. My responses, again more-or-less as tweeted, were as follows.

To (1) (tweet): why wouldn't institutions be prepared to commit to that employment, if the grant-winning postdoc brings in their own salary, overheads, and maybe others' salaries too? (I'm aware of the alleged <100% FEC issue; is that the reason?)

To (2) (tweet), this naturally equilibrates: grant success makes the institution more attractive; some do stay, so expect a net gain in research power. The institutional incentive is still very much there.

Point (3) is (tweet) the classic canard. “We can't create more academic jobs” is neither here nor there. Improving researchers' lot is not about funding more positions; it's about how a fixed research budget is spent. Current research politics say, implicitly, that we should choose to employ many people, cheaply and precariously. This maximises the raw number of research staff, but keeps turnover high and skews towards the young and inexperienced. Is this really optimal? It seems hard to believe. But it is quite well explained by the incentives faced by established PIs.

What can we do about all this? The first step is clearly to challenge these bogus arguments. Sadly, the world of higher education is full of oft-repeated lines that talk about policy as if it were a law of nature. The “FEC issue” my tweet alluded to is one such line: that “universities lose money on research, since grants bring in less than the full economic cost”. Although of course grants do not always pay full FEC as costed, it is a nonsense to say we “lose money”. Universities are funded from the public purse precisely in order to host research activities (among others). So the worst that can be said is that each research undertaking consumes some of that block funding, and must be fairly accounted for internally. To say “consuming funding” is fine, but to frame it as “losing money” is to overlook the very mission of a university. There is some argument that the FEC issue is ultimately a construction of accounting; one that (I cynically surmise) is convenient to policymakers and administrators because it keeps academics subordinate.

(I have witnessed FEC arguments deployed by administrations to create the impression “you should be grateful” or “we're doing you a favour”, then used as a devious pretext for cutting the support offered to researchers—increasing the pressure to raise even more funds. That specific issue was around the costs of “training and personal development” which, it had been argued, should be costed on grants now that ring-fenced Roberts money for training courses and the like was no longer available. Of course, such courses had been offered before Roberts money existed, and in any case would merit consumption of general funds, since they obviously support the mission of the University. Even worse, creating pressure to raise external funds for such things hides a giant double standard: administrative functions of the university rarely face equal challenges to justify their own costs. What grants are the administration applying for, to pay for their own staff's training and development courses? That's obviously an absurd idea, but my point is that value-for-money on overheads is already highly dubious. FEC arguments allow the administration to vent budgetary pressure back out onto academics—“raise more funds”—instead of creating internal pressures. Given the patchy quality of the “training courses” and “career development” sessions I've attended, such internal pressure could hardly fail to be constructive. But instead, we're expected to raise funds for yet more of the same.)

Let's get back to Borys. There is a general pattern that those seeking to protect the status quo—whether owing to vested interests or plain unimaginative conservatism—often deploy “fear, uncertainty and doubt” tactics. They make a vague argument as to why an alternative “would never work”. I have always found this mindset particularly jarring when encountered among researchers, whose very job is exploring unproven ideas. But it is exactly the tactic Borys was deploying in his second point. To me it appears completely implausible that grant portability would take away incentives from institutions. Poaching is already rife (thanks to the REF, which remains unreformed on this point). But even the biggest institutions are not indefinitely elastic. Maybe indeed a certain fraction of grantholders would be poached, but that is a second-order effect, likely far outweighed by the first-order benefits of increased research income. Meanwhile, it's true that growing inequality among institutions is a problem, but measures to help postdocs receive grants would work to lessen this, not worsen it. That's because current grant-awarding policies contribute massively to the “rich get richer” phenomenon, owing partly to the weight placed on track record. Spreading grant money out further down the career ladder necessarily means putting greater weight on other factors (or perhaps greater intentional use of randomness), which will favour smaller institutions. All this is also presuming a lot about the overall system of project-based grants, which, as I'll note, is far from untouchable.

Borys painted the issue as one of funding more academic positions. That is not the issue at all. The real issue is this: how should we spend the amount we currently spend? It's rhetorically convenient to take Borys's approach, painting this issue as “postdocs making demands”—for more money, or more academic jobs, or the moon on a stick. Then it can easily be dismissed. Most of our “leaders”, like Borys, are invested in the status quo. This isn't a tirade against Borys: as Vice-Chancellors go I think he was not too bad. But even a “good” V-C was happy either knowingly to advance a bogus argument to protect that status quo, or more charitably, to attend a policy-oriented meeting of postdocs without engaging his brain sufficiently on the question of what better policies might be.

It's not just Borys. Lack of leadership is a sector-wide problem. At the same National Postdoc Meeting, one panel included a Liz Elvidge, apparently “Head of Postdoc Development” at Imperial College London. She claimed that the dire situation is “the nature of the beast”. But it is not nature! It is nurture. It is a consequence of policy; the policies could easily be different. Of course, policies won't change if the people with influence hold opinions like these. It is a perversity typical of the present state of affairs that a reputable institution would create a “postdoc development” role whose de-facto job is to further entrench a system that is actively hostile to such development.

(The notion of “professional development” is routinely used as a fig leaf. Institutions love to proclaim their commitment to professional development, to cover for their inaction on the core policy issues. We saw this yet again recently, in science minister Chris Skidmore's speech about “Securing the research talent of tomorrow”. It begins to acknowledge the structural problem, but rather than facing up to it, it pivots straight to patching symptoms—mental health and wellbeing, “career development” initiatives—and generating mountains of worthless paper, namely the Concordat. The Concordat is full of worthy-sounding requests, but it is doomed to be ineffective as long as there is no change to the funding regime. There's no recognition from Skidmore that how the money is spent is not only the cause of the problem, but is something government directly controls.)

Today's incumbents—the ageing distinguished professors who now have the strongest academic voice in shaping future policy—mostly grew up under a different system. Where was the mandate for changing it? I'm still looking—this is a topic I'm investigating as part of my part-time PGCHE studies here at Kent (yes, I'm a student again)—but I suspect the story is one of creeping change without any meaningful mandate. Our funding landscape today is dominated by project grants, rather than personal or institutional grants. But it need not be so, and was far less so when our current senior citizens were starting out. For example, although it is rarely remarked, in Cambridge, until the 1980s it was common for academics to start their career on five-year appointments as “Senior Assistant in Research” or “Assistant Director of Research”; or as “Assistant Lecturer”. These posts have disappeared, for reasons owing directly and indirectly to funding policy changes. I will leave a full exposition of that to a future post in this series, but this 2004 report published by HEPI is a good starting reference. The story in brief is that the split of public research funding between Funding Councils (core funding) and Research Councils (project grants) has shifted substantially over the last forty years—but not on the basis of any clear mandate or strategic recommendation that I've yet managed to find. I'll report back with more detailed findings in a future post.

[/highered] permanent link contact

Wed, 17 Jan 2018

How to be a Scrutineer (and a better one than I managed to be)

From December 2015 until October 2017 I served on the University of Cambridge's Board of Scrutiny. This is a high-level governance body that is “the University's chief internal mechanism for ensuring transparency and accountability in all aspects of University operations”. The role of the Board is to “examine the way in which the University is run and to comment on this to the University's governing body, the Regent House”, taking its lead principally (but not necessarily exclusively) from the Annual Reports of the Council and the General Board, and from reports concerning accounts and allocations. My service on the Board was hugely valuable experience to me personally. I hope it was also, in some smaller way, of value to the University.

One of the most difficult things was not so much discharging the responsibilities as figuring out how best to do that in the first place. So I've written the following retrospective notes on how, in my opinion, one might serve on the Board effectively. They are more about reach than grasp. Doing all of them perfectly is impossible. Doing even most of them pretty well is extremely difficult. But it's what I would aspire to if I were to re-join the Board.

Serving on the Board is a combination of learning, doing good (according to the University's mission and hopefully one's own principles) and doing the necessary: putting in service that others in the University will benefit from. When I joined, I rather underestimated the significance of the latter. Doing a service to fellow Regents is the most constructive spirit in which to operate. As a non-obvious example, one of the pragmatic roles of the Board, even in a well-functioning University democracy (alas, currently lacking), is to communicate to the University community things that the administration can't say, whether for legitimate political reasons or for regrettable lack of spine (often the reality is somewhere in the middle). One example is criticising the national government. Another is admitting the University's errors in PR-sensitive arenas (e.g. the North-West Cambridge debacle).

Understand the formal structure of the University. I spent many hours studying various web pages before I began to get my head around this. The “Officers Number” issue of the Reporter, published every Lent Term (currently: this one), is perhaps overkill but useful. The governance web site has a gentler introduction. The Statutes and Ordinances are worth consulting and getting to know your way around (roughly). One of the difficult things for me has been to remember which stuff is in Statutes, which in Special Ordinances and which in Ordinances... it's easy to be scratching your head about why you can't find something you saw previously, until you realise that it's in one of the others.

Understand both the Board's formal remit (as specified in Statute A chapter VII and Ordinances Chapter 1 section 6) and (separately) its de-facto role—including differing opinions on what the latter should be. This has been controversial since the early days of the Board. Perhaps the best reading material is the Remarks at the Discussion of the Fifth Report, which showcase the whole spectrum of opinions. Gordon Johnson had previously (at the Discussion of the Fourth Report) espoused the view that the Board should not write a report, but simply lead Discussions. Among his concerns was that the University's “officers... devote a tremendous amount of time and effort to meeting the Board, providing it with information, and answering its questions”. At the next year's Discussion, then-Chair Michael Potter countered that claim, after which Johnson laid into the Board's newest Report quite ferociously, reiterating his view that its approach was wrong. Richard Stibbs countered again with the view of the Board as “interested amateurs”, in both senses of the latter word. This is a view I personally find to be realistic, again in two senses. None of this is very conclusive, but it makes clear the spectrum of opinion and strength of feeling.

Understand the Board's history. Johnson was part of the Wass Syndicate on whose recommendations the Board was created. In the aforementioned Discussion, Johnson is the most forthright; he is not necessarily correct. I personally found it very odd that he considered it the Board's responsibility “to explain more in layman's terms the background to what was being proposed [in the Annual Reports of the Council, General Board and of Allocations from the Chest] and the reasonableness or otherwise of [their] content”. I see no reason why these reports cannot and should not be made readable to the lay-Regent within themselves, including their own background. And of course, the reasonableness of their content has very much been the subject matter of every Board's Report. In the present era, where these Reports contain more spin than ever, it becomes ever more important to highlight their omissions rather than explain their merits. The Board's part in the CAPSA debacle is often cited as one of its major successes to date, since the Board was partly responsible for instigating the Shattock and Finkelstein reports. (I must admit I have yet to understand by what mechanism the Board did this.) This Discussion of March 2003 is good too, covering post-CAPSA attempts at governance reforms, which were rather mishandled.

Obviously, read the Reporter every week. Slightly less obviously, read the Council's minutes every month. Talk to people who have been through the political mill. But also, talk to the rank-and-file, such as colleagues in other departments. Talk to as many different kinds of staff and student as you can. I did not do wonderfully at this. One thing in that spirit I did, for a period, was to work from the Combination Room one morning a week; this was partly for focus, and partly in the hope I would overhear information about what was concerning the rank-and-file admin staff at the Old Schools. I'm not sure I overheard very much of interest, but I did get a sense of what I could (somewhat blandly) call the “different culture” that operates in the Old Schools and UAS worlds compared to elsewhere in the University.

Obtain minutes. Where minutes are not made available (whether publicly, only to the University, or only to the Regent House), consider finding out why not. For example, in August 2017 I noticed that the Council's Resource Management Committee appeared to have a practice of publishing its minutes online, but also hadn't archived any since 2012 in its online archive. Was this because the committee did not meet? That seemed unlikely. Had it changed its policy on publishing minutes? There was no indication. I queried this with the Registrary's Office; for four months I received no reply, until I re-sent my query and (not coincidentally) spoke at a Discussion (not on anything related, I might add). I then received a reply saying roughly “we can't do anything about committees choosing not to publish their minutes”. But, coincidentally I'm sure, a big pile of RMC minutes did appear in the archive at around the same time. If there's a moral to this, it's that perseverance and a certain amount of pig-headedness are necessary to help ensure diligence is applied where needed.

It's not just minutes. Other widely-circulated documents can be telling about what is going on centrally. The University's Risk Register conveys a lot about current priorities, and is available online (internally only). The annual Planning Round guidance is also telling, and usually appears online too. (In general, “following the money” is illuminating.) There are probably others that I'm overlooking here.

Make the Reporter and collected minutes searchable, in a way that works for you. In my case this means saving them on my laptop and converting them to plain text. Automating this is not too hard, because the web pages have a fixed structure. (Ask me for my Makefile!)
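
For anyone who would rather not ask for the Makefile, here is a minimal sketch of the same idea in Python, using only the standard library. The URL below is a placeholder, not the Reporter's actual address scheme; the tag-stripping is crude, but good enough to make the text greppable.

  # Fetch a page and reduce it to searchable plain text.
  import urllib.request
  from html.parser import HTMLParser

  class TextExtractor(HTMLParser):
      """Collects the text content of an HTML page, discarding all tags."""
      def __init__(self):
          super().__init__()
          self.chunks = []
      def handle_data(self, data):
          self.chunks.append(data)

  def save_as_text(url, outfile):
      with urllib.request.urlopen(url) as resp:
          html = resp.read().decode("utf-8", errors="replace")
      extractor = TextExtractor()
      extractor.feed(html)
      with open(outfile, "w") as f:
          f.write(" ".join(extractor.chunks))

  # e.g., loop over issue URLs, then grep the resulting .txt files:
  # save_as_text("https://www.example.org/reporter/issue-6500.html", "issue-6500.txt")

Run over a directory of saved issues, this leaves a pile of .txt files that an ordinary grep (or your editor's search) handles nicely.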

Know how to juggle hats. When interacting with University staff to whom it might matter, be clear whether you're acting for the Board, or personally, or on behalf of some other group.

Keep at arm's length from the Council, but have your sources. The Board's main sources of information on Council business are of course the Proctors, but they often have reasons to be discreet. A lot of University business can easily pass you by unless you go out of your way to hear it.

Protect the Board's neutrality and perceived neutrality. Avoid appearances of conflicts of interest, even if there is no conflict. For example, in hindsight, it was a mistake of mine to nominate a candidate for election to Council while I was serving on the Board. Even though there is something of a revolving door between a subset of the Board and a subset of Council, it is best to avoid being so nonchalant about this as to plant doubts in people's minds. (As an example of something less blatant: one of my nominators for election to the Board later joined Council part-way through my term. If anybody had been paying sufficient attention, this might have been perceived as a conflict of interest, but the chances of that were much lower. I hasten to add there never has been any conflict on that account; the question is one of appearances.)

Develop a mental model of Council's operation, and check it with actual Council members. One thing that I learnt only quite late was the complex dynamic between the Registrary's Office and Council. The Registrary's Office has huge power over the University, since it does much of the “real work” behind business that is ostensibly done by the Council. For example, as I understand it, responses to Discussion Remarks are drafted by the Registrary's Office, as are parts of the Annual Reports of the Council and General Board. The influence of that office is huge. On certain occasions there is tension between members of Council and that Office; usually the “official story” communicated to the University, such as in the Reporter, reflects the Registrary's Office's side of any such contended matter.

Understand the HE landscape beyond the institution. Initiatives in the Old Schools often have their origins in sector-wide movements, either directly provoked by government policy or indirectly emerging from ongoing cultural shifts. A good example is the “100% FEC” issue: the claim that the University loses money on research. If I had read this article in the Times Higher Ed, or these slides, I might have pressed certain people harder when this line came out of their mouths.

Understand personal dynamics, and compensate. Like any committee, not everyone on the Board will be equally engaged or equally effective at any given time (and for various good reasons, too). These need not be in proportion to the loudness of their voice or their apparent authority in the meeting room. Compensating for this is part of the Chair's job; but everyone can help.

See membership of the Board as both an individual and a collective matter. The Board will never be of one mind. Signing off on the same Report is the basic standard of consensus. Beyond that you should not expect necessarily to have the same priorities or same position as your fellow Board members. At the risk of displeasing the Chairs I have served under, who did not hold with this policy, I believe individual Board members should be free to exercise their right to see documents—as long as this is with a valid concern in mind. I don't see why any member of the Board could not be trusted to use this privilege reasonably. Seeing this privilege as only a collective Board matter seems to me to close off the opportunity for individuals to follow their noses. I must admit it is not clear to me to what extent there was a statutory intention to empower Board members as individuals rather than just collectively. But at the least, I perceive that the individual dimension is recognised in the case of Council.

Don't be collectively timid. During my time, I felt we were too timid on certain occasions. As one example of a potential timidness that Council and/or the Registrary's Office may play to: the Board should never accept redacted documents, unless the redacted content is “irrelevant” (the latter being the only statutory grounds for withholding information from the Board without the VC's written permission).

Don't be ashamed to have issues you care about. Obviously there are some relevance criteria. Actions of the Board should be in the interests of the University, within the Board's remit, and should allocate attention proportionate to an issue's importance. But if you believe something is important by those criteria, it probably is. Don't be afraid to raise it even if other Board members appear uninterested. Put differently: the intention of having a twelve-member Board is to obtain a diverse cross-section of the Regent House, and different Board members will naturally care about different issues. Conversely, this means allowing time for other Board members' concerns even if they don't resonate with one's own experience.

Develop a habit of contributing to the Board's ongoing recruitment and succession. The Board is continually struggling to recruit. Many people seem not to care about whether their University is governed democratically and accountably. Many do care, but haven't yet realised that the Board might be a good way to act on that. Every Board member should take care to notice anyone who seems to be an Interested Sort of Individual. Find them and encourage them to serve. Note that filling the spaces is not enough; the Board will only be in good health when elections are contested. Having a choice of membership and some effective choice of officers (Chair and Secretary) are important. (I write that as a Secretary who felt somewhat roped into it, on the grounds that “nobody else will do it”. In hindsight I think it was a positive experience, but it was unwelcome at the time, and I don't think I'm being uncharitable to myself in saying I was not the ideal candidate.)

Don't believe that everybody knows what they're doing, or that you do. When I started on the Board, things were done a certain way; it had its benefits but it also had several drawbacks. In my second year on the Board we changed a lot of things, generally to the Board's benefit. I suspect several logical continuations of those changes will come about in the next year. If I'd known earlier how much flexibility there was in how the Board conducts its duties, I might have helped push some of those changes along sooner.

Doubt people, charitably. This extends to oneself, one's fellow Board members and also to the many senior figures whose work the Board scrutinises—who are grappling with very difficult and demanding jobs. However formidable they are as individuals, they will invariably be grappling imperfectly and may even be happy to admit as much. Among its many other modes of operation, the Board has the potential to help people out, by nudging the gears into better alignment. (That is not to say that raising the red flag or kicking backsides are never appropriate modes of operation....)

Be cooperative where possible, but don't be a pushover. One of the trickiest and most controversial aspects of the Board's operation is the question of how much it should be a savage bulldog, and how much a conciliatory force. At different times, both can be necessary; striking that balance is difficult and requires good taste.

Believe in the Board's importance, so that others will. I was quite surprised at my first Board meetings to detect hints of nerves on the part of some interviewees—people who were very senior figures in the University. But also, in those early days, we had one instance of a senior figure “shutting us out” owing to accumulated bad relations. The Board's power may be limited to giving voice to issues and arguments, but that power is still considerable. There is much latent capability for doing good, but this is slippery; harnessing that capability effectively is difficult. If the Board acts and acquits itself with fitting diligence and seriousness, including a sense of realism about what options are available to Council, it can be taken seriously by others. It stops being effective when its demands are impossible, or when it seeks to direct attention where it is ill spent. In an era when both Council and ordinary Regents are ever more disempowered by central managerialism, a strong Board is essential to elevate the University's supposedly democratic business from mere theatre to the actual rule of good sense. During my time, the Board was an effective body only intermittently. Hopefully it can be made more consistently effective; the University needs it more than ever.

[/highered] permanent link contact

