Update: I updated this post at 1:35 on Monday 27 February to emphasise that the screen shot posted below is work-in-progress.
I haven’t posted much about the Key Information Set. In my last role I was given responsibility for collecting the data, but I left that role before HEFCE published any useful guidance. In my current role I have – hitherto – escaped responsibility but my boss has now asked me to take on the data collation and submission so I have spent a couple of days reviewing the guidance and attending the HESA KIS training event last week at SOAS. Now I have begun to get my head around KIS again.
My overall view of KIS remains pretty negative. I don’t mind doing useless work because I work for money: useless and useful pay the same rate. But I think the work that goes into KIS will be actively harmful. Look at this screen shot and you’ll see that the KIS data will be presented to potential students in a way that makes it virtually impossible to tell the sample sizes from which different data are drawn, so even someone with strong statistical knowledge will find it hard to tell whether the difference between 75% and 80% student satisfaction on two different courses is really meaningful. This is still work in progress, so perhaps the final version will be clearer on this issue, but it will be difficult to do much within the constraints of the form.
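To illustrate why the sample size matters so much, here is a minimal sketch – nothing to do with the KIS system itself, and the cohort sizes are invented – showing how the same 80% versus 75% satisfaction gap can be either noise or a real difference depending on how many students were actually surveyed.

```python
# A rough illustration only: invented cohort sizes, normal approximation.
from math import sqrt

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% confidence interval for the difference between two proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

# Two courses, 80% vs 75% satisfaction, with small survey cohorts:
print(diff_ci(0.80, 40, 0.75, 35))
# roughly (-0.14, 0.24): the interval spans zero, so the 5-point gap tells you very little

# The same percentages with very large cohorts:
print(diff_ci(0.80, 4000, 0.75, 3500))
# roughly (0.03, 0.07): now the gap is probably meaningful
```

Without the cohort sizes on the page, a prospective student has no way of telling which of these two situations they are looking at.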
Moreover, for reasons I’ll discuss further below, the data presented on each ‘course’ will actually be drawn from a range of different sources that don’t marry up. The employment data, for instance, may describe outcomes for a group of students who took the course some years ago, when learning, teaching and assessment methods were different, or it may relate to a larger group of courses because of the way the different data sources are matched up. The KIS will misinform potential applicants by directing their attention to fluctuations in statistical data which are probably meaningless, and away from sources of information (like what they see and feel on their campus visits) that are far more meaningful.
As I say, I don’t mind being useless, but if I wanted to do socially destructive work for money I could be in the arms trade, or big tobacco or some other more lucrative place.
Now let’s look at the data matching issues in more detail. KIS provides data on ‘courses’, and HESA defines a course for this purpose as:
[A] programme of study that a student can apply to either through UCAS or directly to the institution. Thus, if students can apply separately to courses in Physics, Chemistry and Biology each would require a separate KIS, whereas if students could only apply to a course in Science and later choose to specialise, then only a single KIS needs to be produced. The same guidance would apply where courses have optional exit points, if it is possible for a student to apply solely for the lower award then a separate KIS would need to be produced.
You will immediately see that the distinction between separate ‘courses’ in Physics, Chemistry and Biology and a Science course with specialist pathways is unlikely to be matched in institutions’ administrative data. Universities usually have two main definitions of ‘courses’. On the one hand there are the approved courses (more usually the word ‘programmes’ is used, probably because it’s longer) that have been validated in accordance with the university’s validation procedures. On the other there is a set of course entities on the student record database that have been created for administrative reasons, so we can identify particular cohorts of students. There is likely to be a one-to-many relationship here. For instance, a Foundation Degree for Teaching Assistants may be run at a number of partner colleges; in validation terms it is the same programme wherever it is run, but there may well be separate codes on the student record database for each college, for administrative convenience. Often, universities will market their validated programmes in different ways. So a single validated programme with a single code on the student record system may be marketed and recruited to under more than one UCAS code. An example might be HESA’s Physics, Chemistry and Biology pathways on a single programme, which might well be offered for separate application but with the students then managed on a single course code.
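To make the tangle concrete, here is a hypothetical sketch of the relationships just described. Every code and title in it is invented, and real student record systems will of course differ.

```python
# Invented codes and names, purely for illustration.
from collections import Counter

validated_programmes = {
    "FD-TA": "Foundation Degree for Teaching Assistants",
    "BA-SCI": "BA Science (with Physics/Chemistry/Biology pathways)",
}

# One validated programme, several student-record course codes (e.g. one per partner college):
student_record_courses = {
    "FDTA-COLL1": "FD-TA",
    "FDTA-COLL2": "FD-TA",
    "BASCI-MAIN": "BA-SCI",
}

# One student-record course, several UCAS codes (one per marketed pathway):
ucas_codes = {
    "F300": "BASCI-MAIN",  # Physics
    "F100": "BASCI-MAIN",  # Chemistry
    "C100": "BASCI-MAIN",  # Biology
}

# A KIS 'course' is the thing an applicant can apply to, so the single BA Science
# record here would need three KISes, while the FD-TA might need one per college
# code if applicants apply to each college separately.
print(Counter(ucas_codes.values()))  # Counter({'BASCI-MAIN': 3})
```

None of these three layers lines up neatly with the others, which is exactly the problem the KIS definition of ‘course’ runs into.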
So: much scope for confusion. But it gets worse. The KIS relies on a further ‘course’ entity, the HESA Course returned in the HESA Student Record. In the Student Record, HESA defines a course quite differently from the way it is defined in KIS.
A combination of subject and qualification that defines what a student is aiming for.
So two quite different validated programmes of an institution might constitute the same ‘course’ for HESA Student Record purposes. For example BA Fashion Design (a programme with a lot of making in it) and BA Fashion Futures (a programme with very little making, aimed at students with no design background) have the same qualification aim – BA – and in JACS terms the same subject (W230 – clothing/fashion), so they could be represented by the same HESA course. Some institutions will have returned them as a single HESA course, but others will have forced the creation of two (or even more) HESA Courses in order to structure their HESA Course entities as closely as possible to their own understanding of the course structures in their own administrative data. This obviously helps them make sense of their HESA data if they use it themselves. So in some cases there will be a one-to-many relationship between institutional programmes and HESA courses, but in others it might be the other way around.
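A hedged sketch of that collapse, assuming a HESA Course key of qualification aim plus JACS subject (the programme titles follow the example above; everything else is illustrative):

```python
# Illustrative only: a HESA Course keyed on (qualification aim, JACS subject).
from collections import defaultdict

programmes = [
    {"title": "BA Fashion Design",  "qual_aim": "BA", "jacs": "W230"},
    {"title": "BA Fashion Futures", "qual_aim": "BA", "jacs": "W230"},
]

hesa_courses = defaultdict(list)
for p in programmes:
    hesa_courses[(p["qual_aim"], p["jacs"])].append(p["title"])

print(dict(hesa_courses))
# {('BA', 'W230'): ['BA Fashion Design', 'BA Fashion Futures']}
# An institution that instead created one HESA Course per validated programme would
# end up with two keys here, so the programme-to-HESA-Course mapping varies by institution.
```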
At every stage where these ‘course’ entities need to be linked up, there are ambiguities and scope for different outcomes based on the way individual institutions have set up their student record systems years ago, and returned data to HESA in the last couple of years. It seems to me that to call the results ‘guesswork’ is an insult to guesses everywhere.
The frantic speed at which the KIS is being implemented helps to ensure that these issues aren’t fully thought-through and resolved. The current timeline is that HEFCE will publish final guidance on 29 March, the very day that the submission system opens and we are supposed to start providing our data. Full functionality in the submission system is expected on an uncertain date in July, as only then will the data garnered from HESA Courses (employment outcomes and National Student Survey results) become available. All data must be complete, checked, signed off and submitted by 22 August so that HEFCE can publish it on 17 September (for institutional preview) and it can go live on 22 September. HEFCE expect the relevant widgets to go live on institutional websites on this same date, although they have provided until 31 October before they start punishing laggards. Linking the UCAS data requires us to return a field called UCASPROGID (UCAS Programme Identifier), which my admissions colleagues tell me has yet to be published or even specified by UCAS. At the moment I don’t know how many UCASPROGIDs I will have to return, or what provision each one will cover. This means, of course, that I can’t yet quite be sure how many KIS Courses I will need to return.
So you can see that I am not glad to have gained this particular job. It looks like a lot of work, poorly specified and to a timetable with too many unknowns in it. The end result will probably be worse than useless for prospective students. Did I also mention that it is rather high profile and therefore likely to be noticed if it all goes wrong?
But this blog post is not only an extended whinge. KIS illustrates how the past – HESA definitions used previously for different purposes, student record systems installed by institutions with other objectives in mind – sharply constrains the regulatory framework that the Government can put in place in the here and now. To get a common definition of ‘course’ into all these systems would literally take years and Ministers rarely have the patience for such things. There is a lesson about strategy to be had here, I think.
Hi
This blog mentions some issues that are discussed in the report JISC funded HESA to produce on the definition of a ‘course’, “What is a course?”. The report is published on the HESA website: www.hesa.ac.uk/content/view/2353/393/
JISC are also currently running a programme about Course Data: making the most of course information. Information about this is available at: http://www.jisc.ac.uk/whatwedo/programmes/elearning/coursedata.aspx
Please contact me, Ruth Drysdale, at R.Drysdale@JISC.AC.UK if you would like to know any more about this programme.
Ruth,
The HESA report draws no conclusions, and the course data project is simply a back door to force the xcri-cap data set on an already information-saturated sector.
Thanks for both your comments. I know nothing directly about xcri-cap so won't comment on that, but I agree that the HESA report on what a course is essentially comes to the same conclusion as I did: we just don't have any common, workable definition.
Thanks for an interesting post, Andrew. A couple of thoughts...
Firstly, I think a course (or at least an individual student's experience of a course) can change significantly as they progress through the HE lifecycle, so the thing that students apply to can, quite rightly, be quite different from the thing that they end up graduating from. It will always be a significant challenge to knit together these different data perspectives.
The other issue is the fact that there is no standardisation at all between institutions' use of the C-word, so organisations like HESA and UCAS, who have to define data structures that represent this activity, are building those structures on sand... but would a call for institutions to standardise their course structures and lexicon (i.e. academic regulations?) be met with cries of "institutional autonomy"?
Andy Youell
Andy
Thanks for this comment. As I said in the post, institutions are inconsistent within themselves, never mind between each other: I don't have a solution to this problem either. My concern is that, having built this structure on sand, we are inviting prospective students to rely on it. We all know how that parable ends.