
Sunday, April 17, 2005

York Report Part 2

Chapter Six

The Social Workers’ Experience of Using the ICS.

Introduction


The focus groups highlighted many of the issues raised by ICS. They could not, however, provide a quantitative measure of how far these issues were similar in different authorities. Nor could they show how views of the system varied with the workers’ roles or the training they received. To explore such questions we needed a different approach and undertook a questionnaire survey (also part of the Audit Study). This was carried out at the end of the evaluation, in April 2006, to allow the maximum time for practitioners to have used the system, and to contrast with the audit study focus groups described in the previous chapter, which took place early in the evaluation.

This part of the study aimed to:

  • Describe social workers’ and team leaders’ views of ICS
  • Relate these views to other factors such as experience, training, local authority and role at work.

This chapter draws on the survey to describe the respondents’ views of particular aspects of ICS (e.g. its aims), and the way their overall views of the system varied between groups. We also highlight the particular points on which the respondents were most agreed and their views on how ICS could be improved.

Method


The survey was developed on the basis of the material from the focus groups. Broadly it covered the characteristics of the workers and their views of ICS and the different exemplars it contains.

We devised the survey so that it could be completed within 20 minutes. Most of it consisted of closed questions which the respondents ticked or circled. However we also included four ‘open ended’ questions at the end which allowed them to express their views of ICS in their own words.

The survey was piloted in an authority that was using ICS but was not involved in our study. It was then introduced at meetings in the authorities and left with the teams to arrange completion.

Three authorities were involved in the survey: authority 1 (4 teams), authority 3 (4 teams) and authority 4 (1 team). A fourth authority had not yet fully implemented ICS at the time of the survey and was excluded on the advice of the advisory group.

The sampling frame was provided by the authorities. One authority in particular wished to include all those who had any experience of ICS. In practice we received responses from four groups of workers – team leaders, senior social workers/practitioners, social workers and ‘assistants’. We obtained lists of those believed to hold these posts. We had responses from 35 qualified social workers, 5 senior social workers and 12 unqualified workers. This gave us a response rate of 56 per cent.

Limitations and strengths of the data


There are three main difficulties with our data.

First, our response rates varied markedly by team, which may bias the sample. Three teams in three different authorities provided us with a 100 per cent response rate. By contrast one team, while carrying out a precisely similar role to one of these high responders, provided only one response out of a possible 12.

As we will see later there is evidence that attitudes to ICS also differed between different teams. These attitudes were not necessarily associated with response rates – one team with a spectacularly low response rate provided one very negative reply and one team with a 100 per cent response rate was similarly negative.

The lack of apparent correlation between attitude and response eases our problem but does not eliminate it. To give a simple example, we will compare the experience of respondents in different authorities. As we have seen, however, one authority provided virtually no responses from one of its teams. Depending on the nature of the team ‘omitted’ our data may thus yield an undeservedly good or bad picture of attitudes in the authority as a whole.

Second, our numbers are low for some purposes. This makes some forms of analysis risky and renders proportions uncertain guides to what would happen with bigger numbers. It is also likely to mean that we ‘miss’ findings that would have been statistically significant with a bigger sample. We have tried to avoid inappropriate statistical techniques. Given reasonable precautions the numbers do not, in themselves, invalidate the statistically significant findings we do report.

Third, we use the quantitative findings from the study to report areas of consensus or disagreement among the respondents. For example, we report that a large majority agreed that ‘ICS should be drastically simplified’. On the face of it this is simply a straight report of a ‘fact’. Such results are, however, much influenced by the precise form of words chosen. For example, we could probably have altered the percentage by omitting ‘drastically’.

These difficulties could mean that we should avoid reporting areas of consensus or disagreement at all. We think, however, that this would be too cautious. Particularly when combined with qualitative data the quantitative data do provide a sense of ‘where the respondents are coming from’.

On the positive side the survey builds on and complements the focus groups. It does so in three ways.

First, it represents experience at a later date. The audit focus groups were conducted soon after implementation. The survey can be used to check whether experience remains the same after the system has had time to bed down.

Second, it provides what is in some ways a more nuanced picture. Groups tend to arrive at a group view. The survey provides more opportunity for minority views to be expressed. It is also somewhat less liable to the charge that it might be unduly influenced by a small number of individuals with strong views.

Third, the survey can explore the quantitative strength of relationships. For example, the focus groups suggested that training was a very important part of the implementation process. The survey can explore whether the experience of individuals with more training differed from that of those who had less.

For all these reasons we think that the survey is an important source of evidence in this study, albeit one which must be used with caution. In the long run, much depends on the degree to which our different kinds of evidence ‘cohere’.

Results: Aims of the System


We asked the respondents what they wanted the system to do. We gave them a list of ten possible aims and asked them to rate their importance on a scale of 1 to 6.

    Table 6.1 Possible Aims by Priority Score and Achieved Score

Possible Aim                                                            ‘Aim’ Score  ‘Achieved’ Score
A convenient way of recording practical details                            5.17          3.88
A way of recording that improves communication with other agencies         5.00          2.74
Management information for planning                                        4.55          3.11
Time-saving ways of completing forms and letters                           4.67          2.56
Records that promote client involvement (e.g. are user friendly,
  prompt client contributions)                                             4.71          1.94
‘An expert system’ that promotes social worker analysis                    4.35          2.60
A useful tool for supervision                                              4.22          2.81
A way of checking for the recurrence of suspicious names                   4.57          2.96
A management method of monitoring performance                              4.14          3.28
A structured way of recording information for social workers’ own use      4.94          3.41

In general all the possible aims were given high priority, with the highest rating going to ‘a convenient way of recording practical details’ and the lowest to monitoring performance and aiding supervision.

We also asked the respondents to rate how far the system achieved these aims using a six point scale, with 1 for ‘worse than useless’ and 6 meaning that the system achieved the aim ‘outstandingly well’. As can be seen, this second score was always lower than the first.

The lowest ‘achieved’ score went to records that promote client involvement (‘user friendliness’). Half those replying considered the system ‘worse than useless’ in this respect. This finding echoes the qualitative data, the findings of the Disability Sub-study and the statistical data we report below.


Requirements for the System


We also provided the respondents with a list of twelve possible requirements. These elicited an even higher degree of agreement than our listed aims. Once again we asked the social workers to rate the degree to which the requirement was met. Unsurprisingly the achievement ratings were lower than those for the importance of the requirements. There was, however, quite a variation in the degree to which this was so (see Table 6.2). The system was given quite high ratings for its security and for keeping all key information electronically. It was generally not given such high marks for accuracy, user friendliness, the use of its records in court, or the ease of detecting its inaccuracies.

Table 6.2 Ratings of Importance and Achievement of Twelve Requirements

Requirement                                                               Importance  Achievement
Is user friendly for social workers                                          5.52        2.63
Is robust (does not crash)                                                   5.52        2.90
Keeps all the key information on a case electronically                       5.35        4.14
Is secure                                                                    5.61        4.49
Allows essence of a case to be grasped quickly                               5.67        3.02
Produces records that can be submitted in court                              5.39        2.66
Produces records that clients can see, read easily and, if needed, sign      5.37        2.66
Produces records that can be emailed to others                               5.04        3.30
Makes it easy to detect inaccuracies                                         5.27        2.92
Allows enough space for free text                                            5.52        3.10
Avoids need to retype duplicate information                                  5.54        3.33
Has a spell check                                                            5.34        3.90

There were no differences between the authorities in the importance attached to these requirements. There were, however, some differences over the degree to which they were achieved. Authority C most commonly scored best on achievement, and in the case of being ‘user friendly’ for social workers the difference was significant (p=.01). Other differences between authorities that approached significance (p<.1) were ‘is robust (does not crash)’, ‘produces records that can be emailed to others’, ‘avoids need to retype duplicate information’ and ‘has a spell check’.

The order of authorities on these variables was not invariably the same. For example, the most responsive authority did significantly worse than the other main authority in the sample in terms of the perceived ability to email records.

Overall, therefore, it seems that these are aspects of ICS that are important to social workers and that authorities can influence.

Working Conditions


We asked a number of questions primarily concerned with the details of systems operations.

Table 6.3 Thinking about your use of the system how far do you agree with the following statements?


N % Agree
I can easily get to a printer 52 94
Access to a working computer is easy 52 98
I can easily find information on ICS 52 62
Once done, exemplars are quickly ‘signed off’ 47 60
I am confident in using the computer 52 98
It is/would be better for admin to do ICS entries 47 38
All key information is in the computer 50 64
The system helpfully alerts me to urgent tasks 51 22
I like the way I don’t have to retype details 47 64
The data in the system is inaccurate 45 29
Any inaccuracy is quickly picked up 44 64
It’s easy to correct inaccuracy in the system 48 44
My team uses the information in looking at its overall performance and way of working 39 44
I can easily see a list of my cases on screen 43 81
It’s easy to get a printed chronology 42 45
I can easily make a case summary out of text in the system 46 52
I can easily print ICS text 49 80
The computer easily locates any of my cases 50 88
I can easily bring up the latest assessment, plan or review 50 76
The ICS screens are easy to read 49 55
You have to be an expert typist to use ICS 49 27
The mixture of paper and computer files is difficult to manage 48 63


In general these answers suggest that the basic ‘hardware’ requirements for ICS were met. Those responding had easy access to computers and printers and could use them. There was more uncertainty about the software. So more than half said that it was not easy to correct inaccuracies or get access to a printed chronology. Around half also found it hard to use the system to make a case summary or read the screens.

There were significant differences on four of these indicators between authority A and authority C. Workers in authority C were significantly more likely to say that information on the system was accurate, that they could easily see a list of their cases, that they could easily bring up the latest assessment, plan or review, and that the ICS screens were easy to read. Once again it seems that these are important aspects of ICS that authorities can influence.

Views of particular exemplars


The 52 respondents were asked to rate exemplars they had used on a ten point scale ranging from 1 (useless) to 10 (extremely helpful).

Table 6.4 sets out the results. The averages in the right hand column suggest that some exemplars are more popular than others. Apart from this it is hard to know what to make of them. It is perhaps of more interest that authority C, which modified its exemplars, scored better than authority A on two of the exemplars it had modified:
  • Contact exemplar (p<.001)
  • Child protection conference report (p=.054)

There were others where the difference was in the same direction but in no case was this significant.

In keeping with other findings reported later, qualified workers had a generally lower opinion of the exemplars than others. In the case of the child or young person’s plan, the care plan, the child or young person in need review and the closure record the differences were significant. Social workers differed from the rest of the sample in having a lower opinion of the child or young person’s plan and the child or young person in need review.


Table 6.4 Experience of particular exemplars


Exemplar n Mean
Contact Record 35 6.26
Referral and Information Record 39 6.21
Initial Assessment Record 45 6.53
CP1 Strategy- Record of Strategy Discussion 19 6.16
CP2 - Record of Outcome of s47 enquiries 27 5.78
CP3 - Initial Child Protection Conference Report 25 5.56
Core Assessment Record – Pre-birth to Child Aged 12 Months 20 4.55
Core Assessment Record - Child Aged 1 - 2 years 26 4.65
Core Assessment Record - Child aged 3 - 4 years 26 4.88
Core Assessment Record - Child aged 5-10 years 35 4.97
Core Assessment Record - Young person aged 11-15 years 35 4.91
Core Assessment Record - Young person aged 16 years and over 26 4.73
Chronology 33 4.18
Child or Young Person's Plan 45 5.00
Placement Information Record 39 5.41
Child or Young Person's Care Plan 41 4.93
Child or Young Person in Need Review 29 5.66
Child or Young Person's Child Protection Review 26 5.42
Child or Young Person's Looked After Review 29 5.14
Assessment and Progress Record for looked after children - 1 and 2 years 7 4.43
Assessment and Progress Record for looked after children - 3 and 4 years 7 4.29
Assessment and Progress record for looked after children - 5 to 10 years 10 4.90
Assessment and Progress record for looked after children and young people - 11 to 15 years 13 4.54
Child or Young Person's Adoption Plan 9 4.23
Pathway Plan 3 4.33
Closure Record 37 5.27

Ratings are from 1 (useless) to 10 (extremely helpful)

Perceptions of effects of ICS and the use of time


The focus group members (Chapter Five) suggested that ICS took up too much time. We asked the respondents about this, distinguishing between collecting the information, recording and inputting it, and finding it on the system.

In all these areas the general perception was that ICS demanded more time.

  • Seven out of eight respondents thought that it required more time on recording
  • One in three thought it required more time on collecting information (most (54%) thought it made no difference)
  • Just over half (55%) thought it required more time to find information (but nearly a third (31%) thought it made no difference here)

Although we had expected that greater experience with the system would decrease the time needed to find information on it, there was no evidence that this was so. Qualified workers also reported a greater degree of increased demands than did unqualified ones (p<.05).

Which groups have favourable views of ICS?


The findings discussed above concern particular features of ICS. They also suggest that overall views may vary by authority and role. The following sections examine whether this was so along with related questions.

We measured the respondents’ overall attitudes by asking them to mark the system out of 100. We told them that 1 was ‘useless’, 50 was ‘average’ and 100 was ‘excellent’, and we asked them to ‘mark’ in the light of their experience of other systems. Overall the most popular response (given by 29 per cent of respondents) was around average. Forty per cent gave a lower mark and 30 per cent a higher one. The average was 46, implying, on this admittedly crude measure, that the respondents did not think ICS a particularly good system.

We also created a number of summary scores. These were:

  • Time Score – the mean score given to the effect of ICS on the time available
  • Aims achievement Score – the mean score given for the achievement of different aims
  • System requirements score – the mean score for the degree to which the system met various requirements
  • Working conditions score – the mean score for the variables we have described as ‘working conditions’
  • Exemplar score – the mean rating for the different exemplars
  • Overall view score – a score based on responses to the positive and negative statements about ICS discussed below.

Unsurprisingly the various scores were strongly associated with each other with correlations varying from .38 to .65. Their correlations with the score based on marks out of 100 similarly varied from .42 to .72. We cannot tell the direction of these effects. For example, respondents may have seen the exemplars in a favourable light because they liked the system. Alternatively they may have liked the system because they liked the exemplars. What the correlations do suggest is that our simple marking system is tapping the respondents’ general views of ICS.
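The kind of pairwise association described above can be illustrated with a short Pearson correlation sketch. This is purely illustrative: the function is a standard product-moment correlation, and the per-respondent scores below are hypothetical, not the study’s data.

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-respondent summary scores (illustrative only):
exemplar_score = [4.2, 5.1, 3.0, 6.0, 2.5, 4.8]     # mean exemplar rating
marks_out_of_100 = [40, 55, 30, 70, 25, 50]         # overall mark for the system
r = pearson(exemplar_score, marks_out_of_100)       # close to +1 when scores move together
```

A correlation of this kind says nothing about direction of influence, which is exactly the caveat the report makes: liking the exemplars and liking the system overall could each drive the other.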

Did views differ by authority, team or role?


In keeping with our impression, the system in authority C was seen more favourably than was the case elsewhere (p=.01). The lowest scoring council (D) was the one which had most recently implemented the system and was represented by only one team. There were no significant differences between the authorities in our summary scores.

There were also large differences between teams in the marks given to the system (p<.002). It is possible that these differences had to do with the role of the team. For example, the two assessment teams had comparatively favourable attitudes. However, it was also possible for teams with very similar roles (e.g. two ‘disability teams’) to have very different attitudes, as we see from the Disability Sub-study. It is possible, therefore, that teams develop shared views of ICS for different reasons, for example because of the views of their team leader, or that the exemplars fit the tasks of some teams better than others. Once again the teams did not differ significantly on our other summary scores.

These differences remained significant if we took account of differences in role. They were, however, most pronounced among respondents who were not social workers. In general social workers gave low marks to the system in all authorities.

Figure 6.5 gives the average marks for the various groups’ experience of ICS. There is a significant difference between groups (Kruskal-Wallis p=.01) with social workers having the most negative attitude and social work assistants the most positive one.

    Figure 6.5 Marks for System by Work Role


Most social workers (53%) gave the system a lower mark than 50. Just under one in three (29%) gave the system an average mark and only one in five thought it was better than average. The average mark was 36. As social workers are the main users of the system, their predominantly negative attitudes present a serious problem.
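The Kruskal-Wallis test used for the group comparison above can be sketched as follows. This is a minimal pure-Python version of the H statistic (using midranks for ties, without the tie-correction factor), run on hypothetical marks by work role rather than the study’s actual data.

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (midranks for ties, no tie correction)."""
    data = list(chain.from_iterable(groups))
    n_total = len(data)
    # Assign ranks 1..N to the pooled data; tied values share their mean rank.
    sorted_vals = sorted(data)
    rank_of = {}
    i = 0
    while i < n_total:
        j = i
        while j < n_total and sorted_vals[j] == sorted_vals[i]:
            j += 1
        rank_of[sorted_vals[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    # H = 12 / (N(N+1)) * sum(R_g^2 / n_g) - 3(N+1)
    total = 0.0
    for g in groups:
        rank_sum = sum(rank_of[v] for v in g)
        total += rank_sum * rank_sum / len(g)
    return 12.0 / (n_total * (n_total + 1)) * total - 3 * (n_total + 1)

# Hypothetical marks out of 100 by work role (illustrative only):
marks = {
    "social workers": [30, 35, 40, 25, 45],
    "senior workers": [45, 50, 40, 55],
    "assistants":     [60, 65, 55, 70],
}
h = kruskal_wallis_h(*marks.values())  # larger H = stronger evidence the groups differ
```

The Kruskal-Wallis test is the natural choice here because the marks are ordinal judgements rather than measurements with a known distribution; it compares rank sums across the groups instead of means.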


Do views vary with training, time using ICS or computer literacy?

It is sometimes said that negative views of ICS arise because social workers are inadequately trained, new to the system, or lacking in computer literacy. If this is so, one would expect that workers who did not have these characteristics would have more favourable attitudes towards the system. We found no evidence that this was the case:


  • Qualified workers (those with Dip.SW or CQSW) had significantly more negative views of the system than others
  • Dissatisfaction with ICS (as measured by the total score) tended to be higher among those who had worked with it longer (although the association was not significant)
  • Satisfaction with ICS tended to be higher (but not significantly higher) among those who were more used to computers (as measured by a series of questions about their use of computers outside work)
  • Social workers who had used computers at work prior to their experience of ICS were less satisfied with ICS than those who had not had this experience

We asked the respondents about ‘in house’ and external training and distinguished between training focused primarily on the professional aspects of ICS and training primarily concerned with IT. Training that focused primarily on the professional aspects of ICS was associated with satisfaction (tau-b = .26, p<.02). This, however, was not the case if we confined the analysis to social workers. None of the other measures of training was related to satisfaction.
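The tau-b statistic reported above measures association between two ordinal variables. The sketch below is an illustrative pure-Python implementation of Kendall’s tau-b (with the standard tie correction in the denominator); the training and satisfaction values are hypothetical, not the survey’s data.

```python
def kendall_tau_b(x, y):
    """Kendall's tau-b for two equal-length ordinal sequences, with tie correction."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[i] - x[j]
            dy = y[i] - y[j]
            if dx == 0:
                ties_x += 1
            if dy == 0:
                ties_y += 1
            if dx == 0 or dy == 0:
                continue  # tied pairs count as neither concordant nor discordant
            if dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    n0 = n * (n - 1) // 2
    # tau-b = (C - D) / sqrt((n0 - ties_x) * (n0 - ties_y))
    return (concordant - discordant) / ((n0 - ties_x) * (n0 - ties_y)) ** 0.5

# Hypothetical ordinal data: amount of professional training (1-3)
# against satisfaction with ICS (1-4), illustrative only:
training = [1, 1, 2, 2, 3, 3]
satisfaction = [1, 2, 2, 3, 3, 4]
tau = kendall_tau_b(training, satisfaction)  # positive when more training goes with more satisfaction
```

Tau-b is preferred over a plain Pearson correlation for data like these because both variables are ordered categories with many ties, which tau-b handles explicitly.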

So what do social workers like about ICS?


A major section of the questionnaire was devoted to a set of 30 statements describing ICS. Social workers were asked to respond to these statements on a four point scale ranging from 1 (strongly agree) to 4 (strongly disagree). They were also allowed to tick ‘don’t know’. This latter option was rarely used, except for statements referring to disabled or black and minority ethnic groups, where some workers had not had the relevant experience.

Some statements seemed to elicit more or less universal agreement. Others appeared to be more contentious. There were three statements where more than two thirds of the respondents expressed a positive opinion of ICS. (Figures in brackets represent the proportion agreeing strongly or otherwise after omitting the ‘don’t knows’)

  • ICS asks for most of the essential information (84%)
  • ICS will in time lead to major improvements (71%)
  • It’s good social workers are now using computers (89%)

Social workers were significantly less likely than others to think that ICS asks for most of the essential information. Even so, three quarters of them thought that it did. Basically the great majority of respondents are pleased that there is a computerised system and feel that in time this will bring considerable benefit.

The positive view of computers was illustrated by the qualitative material. Respondents liked the system’s ability to hold a lot of information in one place, the ease of access, and the ease with which a worker could move from one file to another. Computerised information was also seen as more secure than information in paper files. The ability of the system to ‘self-populate’ was also seen as a virtue, albeit as seen below, one which had drawbacks.

Areas of Contention


The comments in the questionnaire contained only limited praise of the professional virtues of the system. A minority of workers praised the consistency of the format, the way the system tracked a process, and the holistic picture this could provide of the case. Senior social workers appreciated the opportunities for monitoring social work that this gave. In general, however, these endorsements were matched by others, complaining, for example, that the system did not promote a holistic picture.

In keeping with these contradictions there were a number of statements where fewer than two thirds agreed and fewer than two thirds disagreed. We give these below. As before, the figures in brackets represent the percentage agreeing with the statement.

  • ICS helps with analysis and assessment (50%)
  • ICS makes it easy to review plans (52%)
  • ICS undermines social workers’ discretion (44%)
  • ICS structures the social worker’s task helpfully (36%)
  • ICS is appropriate for ethnic minority groups (61%)
  • ICS records are not appropriate for assessing risk (57%)
  • ICS makes it easier to work jointly on cases (47%)
  • ICS is an improvement on earlier paper systems (65%)
  • ICS emphasis on objectives distorts practice (57%)
  • ICS underemphasises events and evidence (60%)
  • ICS often asks for too much information (55%)

As usual social workers and qualified workers had more negative attitudes. Social workers were less likely than others to agree that ICS made it easier to co-work cases and much more likely to agree that ICS underemphasised events and evidence. Qualified workers also differed from others in a similar way. They were also more likely than others to think that ICS undermined social workers’ discretion (a slight majority of them thought that it did).

Overall the differences between workers on these statements suggest that in these respects the ICS is a ‘curate’s egg’. For example, in some respects it may be seen as better than previous systems – so it is true that pieces of paper do not go missing from computers. In other respects it may be seen as worse than previous paper systems – paper files do not ‘crash’. Different workers may emphasise different aspects of these differences.

Shared negative opinions of ICS


A number of statements elicited shared negative opinions of ICS, in that at least two thirds of the respondents agreed with the negative statements below or, for the positively worded statements (e.g. ‘ICS saves a lot of time’), disagreed with them. As before, the figures in brackets give the percentage agreeing. These were:

  • ICS loses the family perspective (66%)
  • I have had less training on ICS than I need (67%)
  • The exemplars do not replace the need for reports (91%)
  • ICS saves a lot of time (17%)
  • ICS separates pieces of information that should be kept together to get the whole story (73%)
  • ICS makes it easy to get a picture of a case (31%)
  • Introducing ICS has led to delay and inefficiency (74%)
  • ICS cuts time available for seeing clients (83%)
  • ICS is user friendly for clients (11%)
  • ICS is not appropriate for disabled children (72%)
  • ICS often forces social workers to complete irrelevant tasks (85%)
  • ICS turns social workers into clerks (83%)
  • ICS helps creative flexible work (17%)
  • ICS asks for a lot of unnecessary duplication (83%)

In general qualified workers and social workers answered these questions in a rather more negative way than others. For example, qualified workers were significantly more likely to feel that ICS lost the family perspective. Most of the differences, however, were small and not significant. Authority C also appeared to have reduced criticism of ICS. For example, workers from that authority were significantly less likely than those in authority A to feel that they needed more training or that the introduction of ICS had led to delay and inefficiency. Even in this authority, however, the majority of respondents felt both these things.

These criticisms echoed those made in the focus groups. They were also repeated in the comments in the questionnaires. In the main these focused on the professional aspects of the system. In summary the respondents felt that ICS was overly prescriptive; made too little allowance for different situations and clients, particularly those who were disabled; was repetitive and time-consuming, thus removing social workers from their core tasks of seeing clients; often requested irrelevant information; lost the overall picture; failed to clarify priorities; failed to provide a logical coherent structure for justifying action; and did not provide user friendly outputs for clients.

One social worker put the case forcefully.

    I apologise for my negativity. I’ve been a social worker for a long time. I can’t see that ICS has any good points. It is excessively time consuming and over complicates information gathering. It is not user friendly for social workers and completely useless for clients. It fails to paint a picture of a case. It doesn’t show any sequence of events or link information. It is repetitive and I find I can spend an inordinate amount of time sitting in front of the screen wondering what on earth I’m supposed to input because I can’t work out what’s being asked. All [the forms] are equally frustrating and time consuming. Time I could spend with clients doing

    [There should be] Less domains. They don’t always apply. [The system should be] much simpler, more basic exemplars with more space for real information rather than waffle for the sake of filling a space.

The respondents also criticised a number of features resulting from the programming of the system. Particular features that were disliked included: information that was out of date but nevertheless ‘populating’ output screens; difficulty of searching for names that might have been misspelt; lack of space to write own assessment; difficulty in finding some information; need to search for individuals separately and not as part of a family; continuing duplication between paper and IT screens; frequent system crashes; the size of particular screens and the need to continually open and close screens.

Views of the Way ahead


It is, at first sight, paradoxical that a system subject to such damaging criticism should also be seen by almost two thirds of the respondents as an improvement on the earlier paper systems. One reason for this may be the poor quality of those earlier systems. Another may be the optimism of the respondents. They were largely agreed on the potential of ICS. They also agreed on some of the steps needed to achieve this. In more detail there was broad agreement that:

  • ICS will in time lead to major improvements (71%)
  • ICS should be drastically simplified (85%)
  • ICS should have fewer exemplars (85%)

As can be seen, most respondents are optimistic about the system. They do, however, think that it needs considerable change.

The comments in the questionnaires suggested the kind of improvements the respondents had in mind. In general they wanted the system to be simpler, less prescriptive, and better able to produce ‘outputs’ that would be acceptable to clients and other agencies. One practitioner helpfully summarised this view:

    On a simple level the system should be easy to use, not easy to change names etc in case someone does that by mistake – so it should be “idiot proof”, user friendly and not full of jargon and endless categories of “need” in which same information is given in 16 different ways. It should have special “extras” for disabled children or asylum seekers, ethnic minorities, etc. who do not easily fit into the limited space given to explain their individual circumstances. [There should be:]

  • Facility to give a “pen picture” of child and family incorporating disability, culture, extended family and family relationships.
  • “Pen picture” to be printed on all forms (maybe in summary version) to inform new social workers, duty workers, new professionals involved, etc.
  • “Pathways” so that irrelevant [questions] can be passed or a short version done.
  • Not so many tick boxes and complicated jargon-filled exemplars – keep it simple – print it big.
  • Stuff we can print out and share with families – child centred plans written in child-friendly language – many parents also have learning difficulties and cannot understand our paperwork.
  • Easy to understand, easy to navigate, easy on the eye.

Some supplementary points concentrated on the need to improve the system’s ability to email information to other agencies, and the ability of other agencies to input their ‘bit’. The need for improvements in particular exemplars – such as the chronology – was also raised, as was the need for improvements in layout and programming. Suggestions included improving the input screens, allowing more free text, providing remote access and using better software. These features should be supplemented by better training and opportunities for consultation.

Conclusion


There was evidence that authorities might be able to address some of the criticisms of ICS (for example, by changing the exemplars) and that the technology worked better in some areas than others. Teams also varied in their attitude to ICS, while qualified social workers were more negative about it than other workers.

In general, however, there was considerable similarity in many of the views expressed. On average these views of ICS were not particularly favourable, did not appear to improve with time and did not differ greatly by team or by authority. Social workers, the main users of the system, were consistently more critical of ICS than other workers, seeing it as over-complicated, prescriptive and time-consuming. They did, however, value its ability to keep information, and some felt that it provided a useful framework for describing the course of a case over time.

Overall the respondents agreed on three key points: ICS is an advance on the paper systems that preceded it; it has very serious problems; and it also has the potential to bring major benefits. They also agreed that if this potential is to be realised the system must be drastically simplified, and made more ‘user friendly’. They made a number of suggestions as to how this could be done.











SECTION D









HOW THE ICS IS USED IN CSSRs


Chapter Seven

The ICS and Aggregate Statistics: The Download Study

Introduction


One of the aims of the ICS is to enable social work to be more accountable. Social workers should be able to use the system to account for work they do on individual cases. Authorities and units within them should be able to describe and account for work on groups of cases – for example, those who have particular characteristics or who are served by particular teams.

As we have seen, these purposes were well understood by the social workers. Some justified ICS on the grounds that it enabled more accountability. Others saw the same features of ICS as further evidence of a system based on a lack of trust and a wish to control the details of what they did. This chapter needs to be read against this background.

The chapter is based on data that the authorities were able to download from their system. This part of the audit study was also left as late as possible, being undertaken in the spring of 2006. It examines how far the records:

  • Followed the structure prescribed by ICS
  • Provided the information requested
  • Were consistently completed.

Our conclusion looks at implications of our findings for accountability at the level of both the individual case and groups of cases (e.g. through performance indicators).

Method


Only two authorities (A and C) were able to provide us with any computerised data. At the beginning of 2006 we asked them for data that we felt should have been available from ICS. They were not able to supply all of this but did provide what they could.

Authority A gave us limited data on:

  • All the clients on its books on a particular date (n=590)
  • All new contacts/referrals with the department over 6 months (n=6408)
  • Initial assessments over six months (n=348)
  • Core assessments over six months (n=75)
  • All current and previous care plans for those with a current care plan on a particular date (n=620)

Authority C provided us with much fuller data on:

  • Initial contacts with children referred on or before the 15th of February 2006 (n=3266)6
  • Initial assessments over six months (n=461)

The differences between the authorities meant that the data sets could not be analysed together. We were, however, able to pursue similar questions in each authority and see how far similar issues arose.

How far did the records follow the structure of ICS?


The ICS focuses on individual children and particular pieces of work undertaken with them. These ‘pieces of work’ should have a start date and an end date and should occur in a logical sequence. So the ICS looks at the processes of referral, initial assessment, core assessment and review. These characteristics of ICS provide what we can conveniently call its structure.

In broad outline the records did indeed follow this structure. So the authorities were able to give us separate sets of data dealing with some or all of the key processes. These sets of data included information on the children who were being referred or assessed and on the dates on which the process started or finished.

There were three problems with the data we were given.

First, some of the data was duplicated in the sense that two or more lines referred to the same process starting on the same date for the same child. These duplicates made up:

  • 22 per cent of the initial referrals in authority C
  • 24 per cent of the initial assessments in authority C
  • 16 per cent of the initial referrals in authority A
  • 5 per cent of the core assessments in authority A
  • 2 per cent of the initial assessments in authority A.
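Duplicate rates of this kind can, in principle, be computed directly from the downloaded lines by counting rows that repeat the same (child, process, start date) key. The sketch below uses hypothetical rows and an illustrative helper of our own (`duplicate_rate`); it is not taken from either authority's actual extract.

```python
from collections import Counter

# Hypothetical downloaded lines: (child_id, process, start_date) per row.
rows = [
    ("C001", "initial_assessment", "2006-01-10"),
    ("C001", "initial_assessment", "2006-01-10"),  # duplicate line
    ("C002", "initial_assessment", "2006-01-12"),
    ("C003", "referral", "2006-01-15"),
]

def duplicate_rate(rows):
    """Share of lines that repeat an identical (child, process, date) key."""
    counts = Counter(rows)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates / len(rows)

print(round(duplicate_rate(rows), 2))  # one of the four lines is a repeat -> 0.25
```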

Second, processes did not always have a start date and an end date.

  • 100 per cent of the referrals to authority C had a start date but less than 2 per cent were ‘signed off’ on a particular date by the social worker
  • 22 per cent of the initial assessments in authority C had no start date, 19 per cent had no completion date and 29% lacked one or other of these dates7
  • 39 per cent of the initial assessments in authority A had no end date
  • 32 per cent of core assessments in authority A had no start date and 65 per cent had no end date8

As can be seen from the figures and associated footnotes many of the processes seem to have run on for much longer than was expected by those designing ICS. An example was provided by the initial referrals in authority A. Forty-four per cent of those with a finishing date were closed within one day of entry and 70 per cent in no more than nine. This, however, still left 30 per cent with closure dates from ten to 180 days after starting. Moreover, sixteen per cent of the cases were not given an end date at all. Presumably these cases were either ongoing or had been effectively closed without this being recorded. If one assumes that they were in some sense ‘on-going’ one would estimate that about a quarter of cases would still be open after 50 days. (See figure 7.1, where the horizontal axis gives days after initial contact and the vertical axis gives the proportion of cases open at that point.)

Figure 7.1 Estimated Proportion of Cases Open at a Given Period after Contact
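The estimate of this kind rests on treating cases with no recorded end date as still open at every horizon, rather than dropping them. A minimal sketch of that calculation follows, with invented durations; `proportion_open` is our own illustrative helper, not part of ICS or of any authority's system.

```python
def proportion_open(durations, t):
    """Estimate the share of cases still open t days after initial contact.

    durations: list of (days_open, closed) pairs. closed=False means the case
    has no recorded end date, so it is counted as open at every horizon
    (the assumption behind the estimate in the text).
    """
    open_at_t = sum(1 for days, closed in durations
                    if not closed or days > t)
    return open_at_t / len(durations)

# Invented example: four closed cases plus one with no end date.
cases = [(1, True), (1, True), (9, True), (60, True), (None, False)]
print(proportion_open(cases, 50))  # 2 of 5 still open -> 0.4
```

A design point worth noting: dropping the cases with missing end dates instead would understate the proportion still open, which is why the text treats them as ongoing.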


A third, and possibly related, problem was that the sequence of activities did not always follow the sequence expected by the model. The main example of this came from the initial assessments in authority C. These should have led to a decision over whether a child was indeed in need. In practice less than half of the assessments (45%) resulted in a decision over whether this was so. In some cases it seemed that the enquiry resulted in a decision that the case was less serious than had been thought, so that the child was dealt with as if he or she was an initial referral. In other cases the situation seemed to have been seen as urgent, so that the child was dealt with through processes associated with child protection.

How far did the records provide the content required by ICS?


We looked separately at different data sets to see how fully the records had been completed. In each set we tried to understand why some questions in the records were more likely to be answered than others.

Referrals to Authority C

Table 7.2 gives the proportion of cases in which there was some kind of information (even if this was ‘don’t know’) given for each of the fields listed. (By ‘field’ we mean essentially the ‘blank’ or ‘slot’ which a social worker fills in when answering a particular question.) We suspected that the ‘duplicate’ referrals might have been more hastily recorded. The table therefore distinguishes between duplicate referrals and others.

As can be seen from table 7.2, duplicate cases generally contain less information than others, though this is not always the case – they have slightly more information on whether the child is referred as at risk. These differences, however, are not great. Undoubtedly the most striking feature of table 7.2 is the contrast in missing information between the different fields. There are three broad groups:

  • Fields that have some information on over 90 per cent of referrals
  • Fields that have information on between 13 and 65 per cent of referrals
  • Fields that have information on between 0 and 8 per cent of referrals

These differences are important. A field that has information in 90 per cent of the cases does potentially generate useful aggregate information. By contrast it is very difficult to know what to make of a field that is completed for less than one in ten referrals. It may be useful at the level of the case. It is unlikely to be useful for statistical purposes. It is therefore important to understand why some fields were more fully completed than others.
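Completion rates of this kind are straightforward to derive once each referral is represented as a set of fields. The sketch below uses invented records and field names; following the text's convention, any value (even ‘don’t know’) counts as information, while a blank does not.

```python
def completion_rates(records, fields):
    """Percentage of records with any value per field.

    A value of 'don't know' still counts as information, matching the
    convention used for table 7.2; only empty/missing entries do not.
    """
    rates = {}
    for field in fields:
        filled = sum(1 for r in records if r.get(field) not in (None, ""))
        rates[field] = 100.0 * filled / len(records)
    return rates

# Invented records purely for illustration.
records = [
    {"referral_date": "2006-01-03", "gp_name": "Dr A", "religion": "unknown"},
    {"referral_date": "2006-01-05", "gp_name": "", "religion": "don't know"},
]
rates = completion_rates(records, ["referral_date", "gp_name", "religion"])
print(rates)  # referral_date 100.0, gp_name 50.0, religion 100.0
```

In practice one would compute these rates separately for duplicate and non-duplicate referrals, as in the table.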

Fields with high completion rates are those concerned with the dates of referrals, the name and role of the referrer and the language, religion and ethnicity of the client. These fields were effectively required by the system, even if the social worker did not have the information. For example, the social workers only noted the client’s actual religion in four cases in a thousand. By contrast the social workers did have information on the referral dates and the names of referrers and duly recorded them.

The fields that have ‘intermediate’ completion rates (13% to 65%) include some which seem important for accountability. These include the name of the person taking the referral, absent in just over a third (35%) of non-duplicates, and the reason for the referral, absent in just under four in ten (38%) of non-duplicates. Other fields in this group include some which are useful to social workers (for example, the name of the GP) but which may not be available at the time of a referral. (For example, the police are unlikely to include this information in a routine referral.)

Finally there are a large number of fields that are completed in a very small proportion of cases. Examples of those with less than 1 per cent completion rates include: the health visitor’s address, the category of a child’s registration on the child protection register, and the date a child ceased to be looked after. The reasons here seem often to have to do with the relevance of the information, its availability and perhaps the willingness of social workers to spend time on the system. So there is very little information on whether the child has a health visitor at least partly because this is rarely relevant to older children. By contrast the low figure for re-referrals is almost certainly the product of lack of information or a reluctance to complete the form. In this sample alone 38 per cent of the cases appear to have had a ‘non-duplicate’ re-referral over the period of the study, yet only 2.3 per cent of the referrals have a recorded date for a previous referral9.


Table 7.2 Completeness by Type of Referral

Name of Variable  Duplicated Cases (%)  Unduplicated Cases (%)

Referral Date 100.0 100.0
Signed off by team leader 1.4 1.5
Referred as child in need 1.4 22.0
Referred as in need of protection 47.4 41.3
Client aware of referral 53.6 67.5
Client’s language 99.6 99.1
Ethnicity 99.3 99.3
Religion 95.3 93.4
Nationality .3 .6
Name of referrer 100.0 99.5
Agency/role of referrer 99.6 99.0
Telephone number of referrer 5.0 13.1
Name of social worker taking referral 51.3 64.8
Reason for referral 51.3 62.1
GP’s name 25.2 31.8
GP’s address 33.2 39.3
GP’s telephone number 21.5 28.0
Parental consent to contact GP .3 1.4
Parental consent to contact school .4 .8
Name of school 31.8 39.5
School address 28.3 35.6
School Telephone number 19.5 23.5
Responsible authority 3.3 8.3
Date referral recorded 100.0 100
Date of previous referral 1.3 2.3
Whether child disabled 3.7 7.6
Whether child registered disabled 2.6 4.3
Whether on CSSR register 2.8 3.4
Category of registration .4 .7
Date of registration .3 .5
Whether looked after by CSSR 2.0 3.8
Date started to be looked after 0 .1
Whether previously looked after 1.5 2.9
Date ceased to be looked after 0.0 .2
Health Visitors name 4.7 5.4
Health Visitor’s address 0.0 .2
Health Visitor’s telephone number 4.7 4.8


From the point of view of analysis this raises the problem that it is not possible to know which of these reasons – lack of relevance, lack of knowledge, or bureaucratic reluctance – applies. So in the case of young children the absence of information on a health visitor may occur because the child does not have one, because the social worker knows nothing about her/him, or simply because of a lack of time for putting in data. A particular case of this problem applies to questions about information given to the client. Questions about whether the client knows that the social worker may contact the school or GP are almost never answered. We do not know whether this is because social workers do not attend to this part of their work or whether they are so busy doing so that they do not have time to fill in the form.

Table 7.3 deals with questions about what the social worker did about the referral and the decisions that were taken. Information on this was required by the form but hardly ever supplied. Indeed the general impression from the recording system was that the social workers did almost nothing, hardly ever signed off their work, and were supported in this by their seniors who never signed it off either. This impression would be false. As we have seen from the record study, all records were signed off on paper. Equally it is unbelievable that only one case in a thousand received a social work visit. The only conclusion we can draw is that the social workers hardly ever used this part of the form to describe or account for this part of their work.

In our view the frequency of questions that attract hardly any responses raises serious issues for the system. Forms containing these unanswered questions give an unfair impression of a lack of diligence. They also make it harder to pick out the questions that have been answered. Finally, social workers may feel that as they have been presented with so many questions there are no additional ones they should ask. So the existence of a form that is too prescriptive may ‘dumb down’ the process of assessment.


Table 7.3 Actions taken in response to contact/referral

Name of Variable  Duplicated Cases (%)  Unduplicated Cases (%)

Provided information and advice 0 .4
Referred for Initial assessment 0 .5
Referred for Core assessment .1 .2
Meets requirements of being CIN 0 0.0
Receive services under Part III 0 .3
Referred for case conference .1 .1
Referred for strategy meeting 0 .1
Refer to other agency .1 .8
No further action 0 .2
Visit 0 .1
Allocated a visit 0 0.0
Duty checks 0 .1
Letter 0 .1
Team leader decision .1 .6
Date of team leader decision 0 .1
Referrer informed of action .5 1.3
Parent informed of action .6 1.2
Child informed of action .3 .7
Police informed of allegation 0 0
Date police informed 0 0
Signature of Social Worker 1.8 2
Date signed by social worker 1.5 1.6
Signature of Team Leader 1.7 1.5

Initial Assessments in Authority C

The most striking feature of the data was the amount of blank space. The input forms being used allowed for the entry of 9 interview dates, 11 named persons who conducted the interviews, 10 contributing agencies and the names of 10 other workers who contributed. There was no information on any of these fields. The explanation we were given for the lack of any mention of other agencies was that councils need to complete assessments within seven days. If they involved other agencies they lost control of the time span. For this reason they were reluctant to do so.

We examined the 350 lines of ‘unduplicated assessments’ to assess the extent of missing data in the fields that were not completely blank.

  • 100 per cent of the assessments contained the child’s date of birth
  • 80 per cent stated what further action should be taken, although in over half of these (44% of the total) the conclusion was that nothing should be done.
  • 78 per cent contained the date on which the initial assessment started
  • 67 per cent had the name of the worker completing the initial review
  • 45 per cent stated whether a child in need
  • 26 per cent had the date on which the team manager ‘signed off’ the assessment.
  • 19 per cent had the name of the social worker to whom the case was allocated
  • 17 per cent had the date on which the case was allocated to a social worker
  • 13 per cent explained why the assessment was not completed within seven working days
  • 11 per cent stated whether a vulnerable child
  • 10 per cent stated whether in need of health and developmental services
  • 13 per cent gave the date of the next planned review.


It would also seem that the computerised record is not particularly tightly tied in to the supervisory system. For example, there were only 15 records out of 350 where there was information on when the case started, when it finished and which social worker was initially allocated to it.

Information on Clients: Authority A

Authority A provided us with a number of different sets of data, in each case much sparser than that from the two data sets provided by authority C. Authority A also allowed less discretion to those putting in the data. The focus groups suggested that this aspect of its information system was particularly unpopular with social workers. It may, however, make it more popular with statisticians.

In practice Authority A provided us with data that were effectively demanded by its system. Unlike the data set given by Authority C, which contained much missing data, the data from Authority A were largely complete. Despite this some fields were more fully completed than others.

One of the data sets from authority A provided four pieces of personal information on each of the current clients.

  • 98 per cent had a date of birth
  • 97 per cent were recorded as male or female
  • 66 per cent had a recorded ethnicity
  • 27 per cent had a recorded legal status

The high rate of complete information on dates of birth and on sex of child is encouraging, showing that systems of this sort can capture simple information. The low rate of completion for ‘legal status’ is unsurprising. Data from authority C suggested that social workers often only answer ‘fixed choice’ questions where they seem relevant. It is likely that a high proportion of those with a care order have a recorded legal status. That said, it could be that in some cases the social worker simply forgot to enter the information. These cases would not be distinguished from the much larger group where there were no legal provisions of any kind.

The fact that there was no information on ethnicity in 34 per cent of cases may partly reflect a tendency for social workers not to record this when the client was ‘White British’. In other cases the social workers may not know what the child’s ethnicity is and may be unwilling to guess.

Ignorance is obviously a more plausible explanation earlier in the life of a case than later. For this reason we looked at the relationship between length of time on the books and the presence of data on ethnicity. The clients in this set of data had mostly been ‘on the books’ for some time. Seventy per cent had had their case open for at least six months, 55 per cent for a year or more and 27 per cent for two years or more. Our analysis showed that information on ‘ethnic identity’ was much more likely to be recorded for children who had not been recently referred. Two thirds (64%) of those referred within the last six months did not have information on ethnicity. The same was true of only about one in a hundred of those referred two or more years previously.

Table 7.4 Time since last referral by information available on Ethnicity
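A cross-tabulation of this kind simply compares the missing-ethnicity rate across bands of time since referral. The sketch below uses invented cases and our own illustrative helper (`ethnicity_missing_rate`); the band edges are assumptions for the example, not the bands used in the study.

```python
def ethnicity_missing_rate(cases, min_months, max_months):
    """Share of cases in the band [min_months, max_months) with no
    recorded ethnicity.

    cases: list of (months_since_referral, ethnicity_recorded) pairs.
    """
    band = [(m, rec) for m, rec in cases if min_months <= m < max_months]
    missing = sum(1 for _, rec in band if not rec)
    return missing / len(band)

# Invented cases: recent referrals tend to lack ethnicity information.
cases = [(2, False), (4, False), (5, True), (30, True), (36, True)]
print(ethnicity_missing_rate(cases, 0, 6))    # recent cases: 2 of 3 missing
print(ethnicity_missing_rate(cases, 24, 99))  # older cases: none missing
```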

Initial Assessments: Authority A


We were given limited data on the initial assessments. It covered the name of the previous stage (in every case referral to children’s services), the action taken as a result, the planned and actual start and end dates of the assessment, the work group involved with the assessment and whether the assessment was cancelled or postponed.

Very little data was missing. All the cases had a planned start date. All of them were allocated to a work group. Nearly four out of ten assessments were either cancelled (4%) or – much more commonly – postponed (35%). It was not explicitly stated that the remainder were neither postponed nor cancelled but it is probably a fair assumption that this was the case. The only major source of missing data was incomplete assessments. As already discussed 39 per cent of the assessments had no end date and in keeping with this 39 per cent had no record of what action was taken.

Core Assessments: Authority A

Authority A gave us data on the type of core assessment, the work group, the planned start date, the planned end date, the actual start date, the actual end date, and whether the assessment was cancelled or postponed.

Once again there was little missing data in the information we had. All the lines of data had records of the type of core assessment, the work group carrying it out, the planned start date and the planned date for completion. Fourteen per cent of the assessments were postponed and 11 per cent were cancelled. In 75 per cent of the cases there was no information in this column. We assume that this was because the assessment went ahead. As already discussed two thirds of these assessments had no end date.

Care Plans: Authority A


The data from Authority A on the care plans included past plans as well as current ones. They covered:

  • type (initial, child protection, child in need, looked after, short break)
  • the main domain (e.g. parenting capacity) to which the care plan is relevant
  • the status of the plan (e.g. historic, complete, just started)
  • need
  • family strengths
  • the action taken
  • start date
  • end date (if any)
  • person or group responsible
  • code for the organisation responsible
  • planned outcome
  • actual outcome.

As already explained this authority's IT system was highly prescriptive, using drop down lists and refusing to allow social workers to move on to the next item before completing the one in hand. This somewhat Draconian system was reasonably successful in securing high completion rates. The proportions of completed (as against missing) data were as follows:

  • 100 per cent for type, status, main domain, organisation responsible
  • 80 to 90 per cent for need, actions to be taken, person responsible
  • 70 to 80 per cent for ‘strengths’, planned outcomes
  • 46 per cent for the start dates
  • 27 per cent for the actual outcomes
  • 18 per cent for the end dates

As can be seen the fields that are fully completed have to do with intentions, justifications and responsibilities. They are about how the social worker thinks about the case. The fields that are usually not completed have to do with ‘reality’, when the plan starts, what happens as a result of it and when it stops.

In part these difficulties may have had to do with the newness of the system. Social workers may not have known when some plans started. ‘Historic’ and ‘completed’ plans were indeed less likely to have start dates than approved ones (see table 7.5). However, the pattern was confusing. Child protection plans had a start date in 69 per cent of cases, whereas this was true for only 31 per cent of initial plans. Some plans that were merely ‘proposed’ appeared to have already started, a finding that raises the question of when a plan actually does start – when a social worker thinks of it, when the senior approves it, or when something actually starts to happen (see table 7.5).

Table 7.5 Status of care plan by whether it has a recorded start date


One reason for ‘non-completion’ may have had to do with social workers’ understanding of the exemplars. Less than one per cent of the care plans ‘for children in need’ but nearly half (49%) of the care plans for child protection lacked information on need. In the one case social workers felt they had to refer to ‘developmental requirements’ to justify their intervention, whereas in the other they could rely on an appeal to ‘risk’.

There were similar contrasts over outcomes. Less than a third (31%) of plans for child protection had a recorded intended outcome whereas this was true of 90 per cent of plans for short breaks. Examination of the fields concerned with outcomes suggested that this concept was variously understood. In some cases there was a clear logical relationship between what was proposed and what was said to have happened – for example, the aim was to provide a break and the outcome was a sitting service. In other cases the outcomes described appeared to use a different frame of reference from that used for the proposed outcome. For example, the aim might be ‘family support’ and the actual outcome might be said to be ‘ongoing’. It seemed to be rare for an actual outcome to be described as the reduction of a need. Instead it was commonly an aspiration (‘Mrs X to be healthy’) or an activity or programme (e.g. behaviour management).


Comparability of Data


Statistical data require standard definitions and careful, standardised collection. It may be difficult for ICS to meet these requirements. Reasons for this include:

  • The freedom given to different authorities to use different computer systems and different forms to collect the data
  • The different policies in which the system was embedded (For example, Authority A allowed its workers to cancel or postpone initial assessments but Authority C did not)
  • The possibility of interpreting the concepts involved in ICS in different ways (for example, the different meanings that can be given to ‘outcome’, ‘referral’, ‘start date’ and so on)
  • The number of different fields and the freedom given to those putting in the information over whether or not to complete these fields
  • The number of different teams and individuals involved in entering the data
  • The turnover of staff which means that new staff or agency staff may not be aware of the conventions applied in an authority


We looked for differences between authorities, teams and individuals in the way the forms were completed. Our ability to find these differences was limited. The two authorities had given us such different sets of data that comparisons between them were generally impossible. Both authorities, however, had provided data on what we have called the ‘structural’ variables in ICS and we could compare them on these.

We found that:

  • Only 2% of the initial assessments in authority A as against 24% in authority C were duplicates10
  • None of the initial assessments in authority A but 22 per cent of those in Authority C lacked a start date
  • Three teams in Authority A had duplicate rates for referrals of around one in five, whereas the other two had rates of one in eight and one in twenty
  • Workers apparently performing the same role in Authority C generated varying proportions of duplicates (e.g. one worker recorded 130 initial assessments, half of which were duplicates, whereas other workers generated no duplicates)

The existence of these particular variations is clearly a difficulty for the system. In other cases variations may illustrate its potential. For example, it was obvious that some workers were much more likely than others to describe a child as ‘in need’11. This difference between workers could arise from differences in workload, in the definition of need, or in the children with whom the worker was dealing. Whatever the explanation, it is a difference which the authority might be expected to find interesting.


Conclusion


ICS is meant to enable accountability at the level of the individual and the group. Our data illustrate some of the potential problems and strengths of ICS in these respects.

Only Authority C provided us with enough detailed data to allow some assessment of the use of the system for accounting for individual practice. The results were not very encouraging. Hardly any of the records contained a basic set of information about the social worker involved, the dates the assessment started and finished, and the nature of the decision reached. This suggests that on its own the computerised system was not being used to account for what was done.

Accountability at the level of the group requires high quality recording and standard definitions. Without this it is impossible to be sure that like is being compared with like. Here too there seemed to be problems involving:

  • Duplicate records – Both authorities had a problem with duplicate entries, although the extent of the problem was much greater in Authority C

  • Lack of key dates – Starting dates for initial assessments were often missing in Authority C. Starting dates for core assessments were often missing in Authority A.

  • Incomplete records – Much of the data apparently required by the system in Authority C was not provided at all and much of it was rarely completed. The same problem was also found in Authority A, although again to a lesser extent in the limited data provided

  • Variable recording practice – There were differences between social workers and work groups in the ways in which they entered the data

These problems suggest a need for caution in accepting counts of assessments or performance indicators based on the time taken for assessment. Any description of groups of cases based on information that is usually missing should also be distrusted.

The evidence also suggests that many of these problems could be overcome. The system in Authority A ensured that social workers entered dates which in Authority C they often left out. In theory the system could ensure that duplicate records were queried, key dates always entered and information supplied on all key variables.

This ‘solution’ would also have its problems. It does not deal with the problem that key concepts might be interpreted in different ways. It would not ensure that the data were accurate; as we have seen, the information the social workers supplied on ‘repeat referrals’ was almost certainly inaccurate. Moreover, social workers in Authority A resented the number of mandatory fields and the time they took to complete.

So from a statistical point of view there may be a need to concentrate on a small number of key fields which are ‘mandatory’. The concepts associated with these fields would need to be clear to all. So social workers would need to know whether an assessment starts when a child is referred, when enquiries begin, when the assessment enters the system, or when it is authorised by a senior. They would also need to understand why this information is required and be sure that it is useful and used.

In selecting these fields it would be important to ensure that they were always relevant to the case and that the information would always be available to the social worker without distorting their work or requiring them to guess12. Certain dates, the age and sex of the child, the reason for involvement and the decision reached arguably meet these criteria. Much that is currently required by ICS does not. As a result social workers may complete such fields by ticking a box that conveys too little information for practice but is too infrequently checked to provide information for statistics.

In practice very simple information would allow a number of useful analyses on the type of demands placed on an authority, the rate of ‘bombardment’ at different times and in different areas, the rate at which cases were ‘processed’, the number and identities of children who were referred on numerous occasions and so on. The potential of the system to provide such data must be one of its major advantages. It may well depend on the ‘drastic simplification’ for which the social workers called.









Chapter Eight

The Social Workers’ Use of ICS Exemplars: the Record Study


The exemplars are at the heart of the Integrated Children’s System. They are the means by which social work staff record work undertaken with active cases through the core ICS stages of Assessment, Planning, Intervention, Review and Evaluation. They are also a core element of the Children and Families’ electronic social care record.

The exemplar records were designed to:

  • Support practice and management.
  • Support the monitoring of a child’s developmental progress over time.
  • Demonstrate how a single data entry system would operate.
  • Demonstrate how different reports can be generated for a range of specific purposes.
  • Facilitate information collation and analysis both at an individual and an aggregate level.
  • Provide a summary of activity at different stages in the course of work with children and families.
  • Provide a tool for use in supervision and for the management of individual cases.

The exemplars provide a framework for the gathering and production of information. They are designed to work within an electronic information system which supports single data entry of information. The core information set out in the data and process models (DOH, 2001a; 2001b) was revised in 2003 for the ICS (DfES, 2003). The exemplars have been produced as outputs from the system within which information can be electronically transferred between records, accessed on screen or on electronically generated paper reports.

The exemplars, which are used to record information gathered at each stage of the social work process, comprise:

  • Information records, comprising the Contact Record, the Referral and Information Record, the Placement Information Record, the Chronology and the Closure Record.
  • Assessment records, comprising the Initial Assessment Record, the Core Assessment Record, aligned with the Assessment and Progress Records, the Record of Strategy Discussion, the Record of Outcomes of s47 Enquiries, the Initial Child Protection Conference Report.
  • Planning records, comprising the Initial Plan for the Child and Outline Child Protection Plan, the Child Plan, the Care Plan, the Adoption Plan and the Pathway Plan.
  • Review records, comprising the Review Record.


Aims and Methods of the Record Study

Three parts of the research were proposed to enable an exploration of the use of the exemplars: the record study, the download study described in Chapter 7, and parts of the disability sub study (see Chapter 12). In this chapter we look in detail at the record study, exploring the ways in which social workers used the exemplars to record information, and their facility for providing information about individual children as well as for management purposes.

The aims were:

  • to determine the standard of record keeping (e.g. the degree to which the different records which should have been completed are completed);
  • to assess the ‘internal coherence’ of the process, that is, to determine how far there is an apparent logical connection between the information, assessments, plans and reviews;
  • to assess the validity and reliability of the records.

To achieve these aims we analysed all the exemplars provided to us that had been completed for the 32 cases in the Process and Disability studies where we had interviewed the service users. This part of the audit study took place in March, 2006, to ensure maximum use of the system within our time frame.

Framework for analysis.

ICS exemplars are complex documents. The challenge that emerged from examining the early exemplars we were provided with was to devise criteria that reflected this complexity yet could also be applied where a record was less than full but nonetheless contained useful information and some analysis of it.

We developed a framework for analysing the exemplars which comprised seven aspects (see below), devised to reflect the essential purposes of any recording system, the purposes of the ICS and the above aims. The aspects, expanded from the original aims and taking the form of questions, were as follows:

    • Are the exemplars signed and dated?
    • How are service users’ views expressed and do the exemplars contain their signatures?
    • Is there a coherent narrative which produces a holistic picture of the child, the family and the environment?
    • Is there evidence of analysis by the workers completing the exemplars in relation to decisions about assessment, planning, intervention and review?
    • Are the decisions made from assessment through to review coherent?
    • Is evidence cited which supports the decisions made?
    • Are the exemplars effective as professional tools?


The Table below (8.1) defines the criteria and identifies the range of evidence used in the examination of the exemplars to make the judgments.


Table 8.1. The Judgment Criteria.

    1. Are exemplars signed and dated?
       Definition: the record is electronically signed (or not) by the social worker; the service user has signed the exemplar appropriately; are the pilot authorities using electronic signatures?
       Comments: this was designed to be a basic test of identification of the social worker and acknowledgement by the family.

    2. Are service users’ views expressed in the exemplar?
       Definition: views are clearly expressed in text. This has two dimensions: attitudes, and what users/carers express as their ‘wants’.
       Comments: this criterion was used because a number of the exemplars have space for personal comment by users.

    3. Is a holistic picture of the family and child presented?
       Definition: is there a comprehensive statement (does a text box have an entry, and what is its quality; is it evidenced)? Can the reader ‘see what is happening’ in this family? Are there descriptions of family structure?
       Comments: telling the story is an essential part of the social work process; this criterion operated at two levels: (1) evidence of entries; (2) quality of the content of the entries.

    4. Does the exemplar demonstrate analytical thought?
       Definition: there is evidence that the information recorded is analysed and coherent, demonstrating professional judgment and assessment, and is not purely descriptive.
       Comments: the ICS has as an aspiration the greater use of analysis by workers.

    5. Is there coherence of decision making?
       Definition: any decision made needs to be coherent, with the information and analysis on which the decision is based clearly identified, and the process transparent from assessment to review.
       Comments: this linked detail in the assessment and/or description of the case to the final decision.

    6. Is evidence used/cited to substantiate opinions, care planning and decisions?
       Definition: the exemplars lead to decisions. Is the information generated and collected clearly used as evidence for decisions and planning? Are the intended outcomes clear and informed by the cogent use of data collected through use of the exemplar? Is the evidence appropriate to the box/question in the exemplar, and is evidential work used across all of the assessment domains?
       Comments: completeness has an evidential dimension as well.

    7. Is the ICS exemplar an effective professional tool?
       Definition: to be effective as a tool the record needs to be coherent, detailed and evidenced. Additional issues include: (1) is the social worker using the whole record to think about the assessment of, and interventions in, the family? (2) does it provide a clear story if others, e.g. other social workers, picked up the case? (3) does the social worker use it with the family? (4) can it be used for management purposes?
       Comments: our attempt to begin to discern the various elements of judging whether the exemplars are effective professional tools.



The Rating Scale

Initially the proposal was to judge each aspect to produce ratings of completion and quality of recording on a scale of 1 to 5 where 1 was poor and 5 was excellent. However, a ‘scan read’ of the pilot exemplars indicated a wide range of ‘completeness’ from very full to so limited as to be blank or all-but blank, with the exception of the self-populating sections. Equally there was a wide range of both quantity and of detail in each section of the individual exemplars. We therefore decided to reduce the scale from 5 to 3 points and to adopt the following range for each aspect.

Good / Intermediate / Poor

This allowed us to rate incomplete exemplars where some of the information was nonetheless useful. It also enabled us to rate each aspect separately, so that a case might, for example, contain information of some use as a professional tool yet cite no evidence in support of decisions. Thus each exemplar was judged against the seven aspects on the three point scale of Good, Intermediate and Poor.

  • Poor: the exemplar had either nil or only basic information.
  • Intermediate (or limited): the exemplar was incomplete but contained some useful information.
  • Good: the exemplar contained both a substantial quantity and a good quality of information.

This rating was applied to all the exemplars provided for the cases in the Process and Disability Sub studies where we had interviewed a service user. The exemplars were read by a member of the research team who was an experienced Local Authority Social Services/Children’s Services practitioner and manager with experience in conducting file audits and examinations for a variety of purposes. A sample of exemplars was read by a second member of the team with similar experience to validate the ratings.


The Sample

The sample comprised all the exemplars appertaining to the children and families interviewed in the Disability (22) and Process (10) studies. This gave us a sample of 153 exemplars from the 32 cases, mainly from sites A and C, where we had conducted the interviews (see Table 8.2).

Table 8.2 Cases by Authority

AUTHORITY   PROCESS   DISABILITY   TOTALS
A           5         8            13
B           0         6            6
C           5         6            11
D           0         2            2
TOTALS      10        22           32



As illustrated by Table 8.3, the largest group of cases were children in need (13); 9 were looked after, 4 were in respite care and 6 derived from child protection. The disability sample included children in long term care, short break care, children in specialist residential placements and children in need living at home, from Authorities A, C and D; the Process Study sample included children in need (sections 17 and 47) and looked after children from Authorities A and C.


Table 8.3 Types of cases by Authority.


AUTHORITY   STUDY        CP   LAC   CIN   CIN RESPITE   TOTALS
A           Process      3    0     2     0             5
A           Disability   0    5     0     3             8
C           Process      1    3     1     0             5
C           Disability   0    1     4     1             6
D           Disability   0    0     2     0             2
B           Disability   2    0     4     0             6
TOTALS                   6    9     13    4             32


Reflecting the number of cases drawn from the Disability study where children were looked after, the majority of the exemplars comprised Social Workers’ Reports to LAC Review (26), CYP Care Plans (23), CYP Review Chair’s Reports (21), Referral and Information Records (20) and Initial Assessment Records (18). The fewest were the Record of Strategy Discussion (1), the Transfer Record (1) and the Closure Record (1). Not included at all were the Contact Record, the Assessment and Progress Record, the Initial CPC Report, the Initial Plan for the Child, the Outline Child Protection Plan, the Adoption Plan and the Pathway Plan. The sample includes more Information and Assessment exemplars than Planning ones (Table 8.4).


Table 8.4. Numbers and Types of Exemplars by Authority.


EXEMPLAR                                          A P(1)  A D(2)  B P(4)  B D  C P  C D  D P(5)  D D  TOTALS
Referral and Information Record                   8       0       0       0    3    9    0       0    20
Placement Information Record                      0       0       0       0    6    1    0       0    7
Record of Strategy Discussion                     0       0       0       0    1    0    0       0    1
Outcome of Section 47 Enquiries                   3       0       0       0    1    0    0       0    4
Initial Assessment                                8       0       0       2    1    5    0       2    18
CYP Care Plan                                     1       8       0       0    8    6    0       0    23
Core Assessment                                   1       0       0       0    2    2    0       0    5
Essential Information for Provider Services (3)   1       8       0       0    0    0    0       0    9
CYP Care Plan Part 2                              0       8       0       0    0    0    0       0    8
CYP Review: Chair’s Report                        0       16      0       0    4    1    0       0    21
Social Workers’ Reports to LAC Review             0       17      0       4    4    1    0       0    26
CIN Review Social Worker’s Report                 0       0       0       0    1    0    0       0    1
Transfer Record                                   0       0       0       0    1    0    0       0    1
Chronology                                        0       8       0       0    0    0    0       0    8
Closure Record                                    0       0       0       0    0    1    0       0    1
TOTALS                                            22      65      0       6*   32   26   0       2    153

(1) P refers to the Process study sample.

(2) D refers to the Disability study sample.

(3) This exemplar is specific to Site A. It was devised with additional information to the Placement Information Record. Since it is a completely different exemplar it has been recorded separately; it is only relevant to Site A and, in the Disability sample, was largely used instead of the Placement Information Record.

(4), (5) * Read but not included in the analysis.

Limitations

These findings need to be read in the context of the delays and problems that the pilot sites experienced during the implementation of the system, especially the reduction in the size and scope of the Process Study sample (25% of that planned). This meant both that the Child Protection exemplars are under-represented and that transfer cases (i.e. those active prior to the introduction of ICS and continuing after it) were not included. The minimal contribution from two of our four sites is another unfortunate limitation. Our evaluation therefore has to be read bearing in mind that the range of exemplars, the size of the sample and the representation of different authorities’ use are not as planned. Nonetheless, some useful themes emerge.

The Findings

Table 8.5 summarises, across three authorities and all exemplars, the effectiveness of the ICS as judged on the above aspects and the three-point rating scale.

Table 8.5. Rating Scores on all aspects by Authority.

RATING         A           C           D     TOTALS
Poor           169 (36%)   138 (36%)   0     307 (35.7%)
Intermediate   158 (33%)   93 (25%)    7     258 (30.0%)
Good           140 (31%)   149 (39%)   5     294 (34.3%)
TOTALS         467         380         12    859
This crude rating demonstrates that, in terms of the quantity and quality of information recorded, approximately one third (34.3%) of the aspects across all cases and all exemplars were judged as good; in just under a third (30%) the rating was limited, or intermediate; and in the remainder (35.7%) it was poor.

Good exemplars were those where the sections contained detailed and relevant information about the child, the family and significant others, where the involvement and role of other agencies was clearly outlined, where need, risk and capacity were defined, and where the information was analysed with the outcomes to be achieved made explicit. Here is an example of a good overall plan from a core assessment of a 14 year old boy.

    “The overall aim of the plan is to support Mr and Mrs B in their parenting of S, who is a young man with autism, severe learning disabilities, epilepsy and dyspraxia. S frequently displays challenging behaviour directed mostly at his parents who, despite great efforts to reduce the incidence of this behaviour through programmes from ISS and Family Support, find the frequency and severity of the outbursts continues to increase.

    The plan aims to provide opportunities to enhance S’s potential to live independently following the transition to adulthood by developing his sense of self, teaching him daily living skills and how to tolerate a society he will never really understand.

    The plan needs to be in place by Nov. 2002 so that parents are appropriately supported through the summer holidays, an area of high stress for parents and for S, who doesn’t understand the purpose of school holidays and cannot cope with the traditional style of support offered through the summer playscheme”.

At the other end of the scale, a third of the exemplars were poor, often containing a number of blank sections and little detail or analysis. It proved impossible to get a picture of the case or to follow the story. As the previous chapter on the download study reported, a number of ‘completed’ exemplars also had many blank sections. In most cases (e.g. Initial and Core Assessments) it was unclear whether the sections were blank because they were irrelevant to the case, inappropriate, or had simply been overlooked. This poses a problem for the reader: there is no way of knowing whether blankness or incompleteness signifies ‘not applicable’, ‘irrelevant’ or ‘entered elsewhere’.

There are a number of reasons why there may be only partial information in the records. In some cases, for example, the information domain was not relevant to the situation or case: for example, health. In other cases it could be that the social worker lacked the ability or skill to complete the record effectively. Some social workers are more thorough than others, some more able, some may have better IT skills. But for the reader and for aggregating statistics, the end result is that the record is incomplete.

In part this is a design issue. For example, a ‘not applicable’ button could indicate why a section had not been filled in. At a late site visit Authority D indicated that it was going to add an N/A option. Another explanation could be that this is a transitional problem: as familiarity with the software increases, and sustained training is embedded in the agency, the quality and quantity of recording could improve.
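The design point can be illustrated with a small sketch (an assumption-laden illustration, not the ICS implementation): a record section that explicitly distinguishes ‘marked not applicable’ from ‘simply left blank’ removes the ambiguity for both readers and aggregate statistics.

```python
# Sketch (not the ICS implementation) of a record section that distinguishes
# a section deliberately marked N/A from one simply left blank.
NOT_APPLICABLE = object()   # sentinel marking a deliberately skipped section

class Section:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value   # None/"" = left blank; NOT_APPLICABLE = N/A

    def status(self):
        if self.value is NOT_APPLICABLE:
            return "not applicable"
        if self.value in (None, ""):
            return "blank"   # still ambiguous: overlooked, or entered elsewhere?
        return "completed"
```

With such a scheme, only genuinely blank sections remain ambiguous, and those could be queried before a record is counted as ‘complete’.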

This study did not set out to compare the quality of recording under previous systems with recording in the ICS, and we know from previous reports (Laming, 2003) that there are longstanding problems with the quality of records. However, a further question arises here: is there an inherent problem in the design of this system, in that the splitting of narrative and analysis into discrete sections and tick boxes inhibits the development of a coherent picture and allows for partial and incomplete recording? And, equally, does recording electronically influence and/or improve practice? There may be other explanations which the scope of the record study did not allow us to pursue further. However, data from other aspects of this evaluation provide a consistent picture and raise important design issues.
